SLUG Mailing List Archives
Re: [SLUG] is there a maximum file size under Linux?
- To: slug@xxxxxxxxxxx
- Subject: Re: [SLUG] is there a maximum file size under Linux?
- From: Malcolm Tredinnick <malcolm@xxxxxxxxxxxxxxxxx>
- Date: Mon Nov 5 17:24:02 2001
- User-agent: Mutt/1.2.5i
On Mon, Nov 05, 2001 at 03:17:14PM +1100, Greg Wright wrote:
> On 5/11/2001 at 12:35 PM Andre Pang <ozone@xxxxxxxxxxxxxxxx> wrote:
> >If you have just the one gigantic file, it might be an idea to put
> >it on its own disk, or partition. (i.e. don't even bother with a
> >filesystem -- if you're using a database, a filesystem will
> >probably just slow things down). You should be able to use
> >/dev/hda or /dev/hda1 directly. Point your database package to
> >look at that file.
> I am interested in what you say here, but how do you use a disk with no
> FS? How would you know your file has not become corrupt?
To name one example, Oracle already does this in its big database
systems. The logic is that the database application knows best how to
schedule the I/O sequencing and syncing and all that jazz to get maximum
throughput for itself. This is because nobody _but_ the database gets to
use that partition, so there is no need to play nice with others, as a
normal filesystem has to do. They thereby save the overhead of passing
through the normal VFS layer.
As far as detecting corruption, etc, that has to be handled by the app
that is doing raw disk I/O, since it is taking on all the
responsibilities of the filesystem.
One of the things Oracle requested at the 2.5 kernel developers'
conference was better in-kernel support for things like raw disk I/O, so
that they could support their products on enterprise Linux systems.
--
If at first you don't succeed, destroy all evidence that you tried.