"Alcoholic" doesn't pay well anyway. I know.
That's why you (not you specifically) have to be a functional alcoholic.
I've had a goal to drink my first alcohol this year, and I decided that I would take the plunge while in my hotel.
I chose a tiny bottle of Jacob's Creek and retired to my room. I had two sips and had to stop.
Seriously vile.
"Alcoholic" doesn't pay well anyway. I know.
That's why you (not you specifically) have to be a functional alcoholic.
http://www.torontosun.com/2011/09/29/the-end-of-infection.
There are all kinds of developments right on the edge of emerging.
This is one, metatron.
I think it might be better phrased as "completely unguided evolution/natural selection is long over". Just like any other living organism, we will respond to environmental pressures outside of our control, but there are a lot of things we respond to that we do control.
Just...the ones under our control, I think we should take a few minutes or decades and think about the downstream effects of making changes we don't fully understand.
I'm cautious like that.
I apologize if this sounds like a commercial, but I would like to share some info about a fantastic program that I've been using on my laptop for the last five years or so.
I've never had one that lasted this long; they usually would go in the garbage after two or three years.
From just normal usage, PCs get fragmented registries, misaligned files, and all sorts of errors, accompanied by ever-increasing blue screens of death.
anony mous, you have sufficiently impressed me with your geek-fu on file systems for laptops and desktops. However, with all that you have written, you miss the fundamental answer to virtually every technical question. The answer is "it depends". Do Linux-based file systems need to be defragged? It depends.
With regard to the 4K buffer space at the end of a file, your point is well taken that it won't be missed; the flip side is that 4K won't be noticed, either. It's not enough space to do anything with to prevent fragmentation.
Fragmentation ALWAYS occurs unless your file system is constantly doing moves or, worse yet, copy+delete actions.
Pre-fetch algorithms can work, but unless your workload is highly sequential, you may see a lot more read misses than hits. I am fully aware of how cache in storage arrays and servers work (years ago I wrote a white paper on x86 based virtual memory managers). It depends.
Also, saying that applications only use about 550 MB of physical memory...I can't go along with that. Right now, out of my 4 GB in this laptop, I am sitting with 1587 MB of physical memory in use. Of course, I also don't use a page file. YMMV. It depends.
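For anyone who wants to check their own machine rather than take either number on faith, here is a minimal sketch; it assumes the third-party psutil package is installed (it is not part of the standard library).

```python
# Minimal sketch: report physical memory actually in use on this machine.
# Assumes the third-party "psutil" package is installed (pip install psutil).
import psutil

mem = psutil.virtual_memory()
print(f"Total:     {mem.total / 2**20:.0f} MB")
print(f"In use:    {mem.used / 2**20:.0f} MB")
print(f"Available: {mem.available / 2**20:.0f} MB")
```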
And, 50TB with 20 simultaneous users... not to be disrespectful, but that's not a lot. I am currently looking at distributed systems for scalable storage around 3 PB in size.
Point is, making blanket statements like "Linux file systems never need to be defragged" is not smart. A better answer is "it depends", and then go ask a ton of questions to find out what you need to do to tune for performance, in my humble opinion.
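If you actually want to measure it rather than argue about it, here is a rough sketch of counting how many extents a file occupies on Linux. It assumes the filefrag utility from e2fsprogs is installed and that you have permission to read the files; a file sitting in more than one extent is, by definition, fragmented.

```python
# Rough sketch: count how many extents a file occupies on a Linux file system.
# Assumes the "filefrag" utility (part of e2fsprogs) is installed.
import subprocess
import sys

def extent_count(path: str) -> int:
    # filefrag prints a line like: "/var/log/syslog: 7 extents found"
    out = subprocess.run(["filefrag", path], capture_output=True, text=True, check=True)
    return int(out.stdout.rsplit(":", 1)[1].split()[0])

if __name__ == "__main__":
    for path in sys.argv[1:]:
        n = extent_count(path)
        print(f"{path}: {n} extent(s){' (fragmented)' if n > 1 else ''}")
```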
I actually gave that presentation to a customer earlier today. He, of course, was under an NDA :)
If there is something specific you would like to know, I can speak to it in generalities. I really can't get into details on a public forum about something specific to my employer.
Having said that, if you are asking about enterprise file systems, I would suggest you think about policy-based data tiering, thin provisioning, wide data striping, that sort of thing.
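As a toy illustration of the thin provisioning idea (at the file level, not how a storage array actually implements it): on file systems that support sparse files, a file can advertise a large logical size while only consuming blocks that have actually been written.

```python
# Toy illustration of thin provisioning using a sparse file:
# the file claims 1 GB of logical size, but only the written region consumes blocks
# (on file systems that support sparse files).
import os

path = "thin_demo.img"
with open(path, "wb") as f:
    f.truncate(1 * 1024**3)        # logical size: 1 GB, no blocks allocated yet
    f.seek(4096)
    f.write(b"\xff" * 4096)        # actually allocate a small region

st = os.stat(path)
print(f"Logical size : {st.st_size / 2**20:.0f} MB")
print(f"Allocated    : {st.st_blocks * 512 / 2**10:.0f} KB")  # st_blocks is in 512-byte units
os.remove(path)
```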
Are you a Redhat instructor?
Oh god no. I work for Dell :)
And yeah, I tend to be more focused on enterprise applications, large scale compute clusters, distributed namespaces, analytical applications, transactional databases, that sort of thing.
Yeah, you did say "nearly as much" when talking about Linux. Fair enough. It just amuses me when people say that, particularly given my experience. Usually there are multiple ways to implement file systems, each with its own peculiarities, pros, and cons. Granted, my experience is far beyond the norm, far far far beyond.
I used to be a smart engineer; now I spend my time talking about theory, the future of file systems, roadmaps, deduplication, the value of data and tiering it between storage platforms, etc.
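For anyone curious what deduplication boils down to, here is a toy fixed-size-chunk sketch. Real products use variable-size chunking, persistent fingerprint indexes, and so on; this only shows the core idea that identical chunks get stored once.

```python
# Toy illustration of block-level deduplication with fixed-size chunks.
# Identical chunks hash to the same fingerprint and would be stored only once.
import hashlib
import sys

CHUNK = 4096  # 4 KB chunks, an arbitrary choice for this demo

def dedup_ratio(path: str) -> float:
    seen = set()   # unique chunk fingerprints
    total = 0      # total chunks read
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += 1
            seen.add(hashlib.sha256(chunk).hexdigest())
    return total / len(seen) if seen else 1.0

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: dedup ratio ~{dedup_ratio(path):.2f}x")
```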
First, why did you say Mr. Gates?
Second, the article was a succinct description of the two methods of writing files into a file system, but it also made a LOT of assumptions. It didn't provide any boundary testing data (when it did talk about boundaries, it didn't provide any supporting information), it talked about large files but didn't define what large or small meant, and it completely ignored the downstream performance effects of having to rearrange non-fragmented files to allow new large files to be written when no contiguous free space is left on the file system, etc.
It was amateurish.
I don't work for Microsoft (although I used to, and dealt extensively with file systems, among other things). I just taught a class on distributed global file systems two weeks ago, including what large vs. small inodes do for you, cache coherency algorithms, data read and write algorithms with multiple nodes accessing a file system, reading inodes and metadata into cache using large translation lookaside buffers, ensuring protection of data and metadata across clustered nodes, methods of distributing files across multiple nodes and drives in storage arrays without the use of a distributed lock manager, etc.
All of this runs on a file system in linux, and fragmentation is important. It just amuses me when I hear "Linux never needs defragmenting".
I might know a little something about file systems :)
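If anyone wants to see free-space fragmentation for themselves, here is a rough sketch. It assumes the e2freefrag utility from e2fsprogs is installed, that you run it as root, and that the filesystem is ext2/3/4; the default device path below is just a placeholder.

```python
# Sketch: inspect free-space fragmentation on an ext2/3/4 volume.
# Assumes the "e2freefrag" utility (e2fsprogs) is installed and this runs as root.
import subprocess
import sys

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda1"  # placeholder device path
result = subprocess.run(["e2freefrag", device], capture_output=True, text=True, check=True)
# e2freefrag reports the number of free extents and a histogram of their sizes;
# many small free extents means new large files will end up fragmented.
print(result.stdout)
```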
So, Linux file systems intentionally leave extra space after each file? That seems inefficient, space-wise. And how much is a "bit" of space? A true bit? A byte? What if the file doubles in size? How do the bit of space and modern adaptive caching play into that? Am I to understand that my PC running Linux takes part of my memory on an adaptive basis and reserves it for the file system? What if I need it for application use? Can I get it back?
Other than the "bit of space at the end of each file" part, none of that told me why Linux file systems don't need defragmenting, and that part was woefully incomplete.
I am asking in all sincerity. What happens on a busy Linux file system to prevent fragmentation? A bit of space at the end of each file is not by any means sufficient to prevent fragmentation.
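For the caching part of the question, one concrete way to look at it is /proc/meminfo (a Linux-only sketch): memory used for the page cache shows up under "Cached", but it is still counted in "MemAvailable", because the kernel reclaims it when applications ask for memory.

```python
# Sketch: show how much memory the Linux page cache is using and how much memory
# is still considered available to applications. Reads /proc/meminfo (Linux only).

def meminfo() -> dict[str, int]:
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.split()[0])  # values are reported in kB
    return values

m = meminfo()
print(f"MemTotal:     {m['MemTotal'] // 1024} MB")
print(f"Cached:       {m['Cached'] // 1024} MB   (page cache, mostly reclaimable)")
print(f"MemAvailable: {m['MemAvailable'] // 1024} MB   (what applications can still claim)")
```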
We are still evolving, and the viral and bacterial DNA in us plays a part in that. They do play a major part in our lives, human lifespan notwithstanding.
Yeah, that site was such adorable fluff. It still didn't answer my specific question.
Why, exactly, would you not have to worry about defrag with Linux?