On Tue, 2012-04-03 at 13:57 +1000, Marghanita da Cruz wrote:
Depends on the underlying filesystem.
depends a lot on the underlying file system
On XFS it's as many filenames as you can fit into an 8 Exabyte file!
Thanks - that sounds a lot.
You may want to think about directory structure here.
Name searches (on older file systems, at least) are linear, so if you
have 1,000,000 files in a directory, it will take 1,000,000 times as
long to figure out that the file of interest isn't there as it would
in a directory with one file in it.
One way of limiting this O(n) search time is to introduce directory
levels (aa/bb/cc instead of aabbcc); another is to use a file system
with O(log n) lookups, like reiserfs. And, just maybe, what you
actually need is a database, not a flat directory structure, which
may give O(1) lookups depending on the database engine.
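A minimal sketch of that aa/bb/cc scheme (bash assumed; the
shard_path helper name is mine, not from anyone's setup):

```shell
# Derive a two-level subdirectory from the first four characters of a
# file name, so no single directory grows without bound.
shard_path() {
    name=$1
    # aabbcc -> aa/bb/aabbcc
    printf '%s/%s/%s\n' "${name:0:2}" "${name:2:2}" "$name"
}

# Usage (creates the shard directory, then files the name under it):
#   mkdir -p "$(dirname "$(shard_path aabbcc)")"
#   mv aabbcc "$(shard_path aabbcc)"
```

Hashing the name first spreads entries more evenly if your names
share common prefixes.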
What constitutes a filename?
the bit between the /slashes/
You can't create a new link in that directory.
open(2) will report ENOSPC, because the directory data can't be made
any bigger once the disk is full.
Some file systems also have a limit on the total number of inodes
(files) on the disk, independent of which directory they are in. You
will get an ENOSPC error for this one too, yet df(1) stubbornly
insists there are available data blocks... data blocks aren't what
ran out.
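You can tell the two ENOSPC causes apart by comparing df(1)'s block
and inode views (the -i flag is in GNU and BSD df; the path here is
just an example):

```shell
# Diagnose which resource ENOSPC is really complaining about.
df -h /var/spool   # data-block usage
df -i /var/spool   # inode usage: 100% IUse% while the line above
                   # still shows free space means the inode table,
                   # not the disk, is full
```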