- To: slug@xxxxxxxxxxx
- Subject: Re: [SLUG] Increasing RAM
- From: Daniel Pittman <daniel@xxxxxxxxxxxx>
- Date: Sun, 19 Apr 2009 13:21:14 +1000
- Organization: I know I put it down here, somewhere.
- User-agent: Gnus/5.110006 (No Gnus v0.6) Emacs/23.0.60 (gnu/linux)
jam <jam@xxxxxxxxx> writes:
> On Sunday 19 April 2009 10:00:03 slug-request@xxxxxxxxxxx wrote:
>> On Sunday 19 April 2009 00:16:35 slug-request@xxxxxxxxxxx wrote:
>> >> I've decided to increase the RAM on my home CentOS server. As best I
>> >> can recall, the accepted wisdom is to have SWAP approx. 2 x RAM. Or
>> >> was that approx. 50% of RAM?
>> >> Can someone point me in the direction of an explicit tutorial on how
>> >> I might go about increasing SWAP without destroying data on my other
>> >> partitions please?
>> >> Or if I'm actually upping the RAM, should I just not worry about it?
>> >> Info I'm guessing would be relevant;
>> > Of course this is cockamamie, urban myth, etc., and typically you
>> > increase RAM and need even less swap than before.
>> Actually, back in the day this was a good and solid guide, both for
>> performance and safety reasons. Today, less so, but I don't think it
>> is quite as laughable or untrue as you suggest.
> From the days of my first system (PDP11, 100K RAM, 15MB disk) till
> today I cannot see why this opinion is held. I first encountered it as
> a RedHat recommendation. Pray wax lyrical ...
Sure. First of all, some of this wisdom comes from other Unix kernels,
under which virtual memory management was handled quite differently, and
in which it was necessary for the system to preallocate backing store in
swap for all pages in use.
Second, it also comes from back in the 2.0 through 2.4 era, before the
big MM rewrite, under which allocating sufficient swap was significantly
helpful to various operations — it meant that the kernel could manage
memory more efficiently.
Finally, back when a machine with 32MB of RAM was big it was quite
conceivable that you would run multiple activities with a combined
working set much larger than available memory.
Allowing those to swap deeply, when they were not concurrent, meant that
you could handle a good deal more context with reasonable performance.
Sure, an ideal system wouldn't swap, but a common one would — especially
at the hobbyist end of the market.
Oh, and it is still possible today to set a strict overcommit mode under
Linux, ensuring that every page of allocated anonymous memory has
allocated a page of backing store in swap — this works well for ensuring
that applications never receive a late "out of memory" error due to
other applications stealing the overcommitted pages from them, at the
cost of working poorly with applications that allocate vast chunks of
unused virtual memory.
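For anyone who wants to experiment with that, strict overcommit is
controlled through a pair of sysctls; the ratio value below is just the
kernel default, not a recommendation:

```shell
# Enable strict overcommit accounting (mode 2): every anonymous
# allocation must be covered by swap plus a fraction of RAM, so
# malloc() fails up front rather than the OOM killer firing later.
sysctl -w vm.overcommit_memory=2

# In mode 2 the commit limit is swap + overcommit_ratio% of RAM.
# 50 is the default; tune to taste.
sysctl -w vm.overcommit_ratio=50

# To make it persistent, add to /etc/sysctl.conf:
#   vm.overcommit_memory = 2
#   vm.overcommit_ratio = 50
```

Note this is exactly the mode that punishes applications which reserve
huge sparse address ranges they never touch, as mentioned above.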
>> > EXCEPT for 1 tragic circumstance: Never *suspend* unless you have as
>> > much SWAP as RAM.
>> You mean suspend to disk, not suspend to RAM, right? Swap is
>> irrelevant to the latter, and the amount you need varies with which of
>> the three implementations of the former you choose.
>> However, all of them require as much swap as you have *active memory*,
>> not as much as RAM — although, obviously, if you have no discardable
>> pages then you need the two to be equal.
> If your active RAM is not equal to physical RAM then the system is
> not doing it right (your definition of active RAM?)
Pages that are not discardable, mostly consisting of anonymous memory.
Unmodified page cache, for example, is valuable but not "active" in this
sense. Perhaps not the best choice of term on my part, though. :)
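As a rough sanity check, you can compare that non-discardable memory
against free swap from /proc/meminfo; AnonPages is only an
approximation of what the suspend code actually counts, but it gives
the right idea:

```shell
# Rough check: will the non-discardable ("active" in the sense above)
# memory fit in free swap? AnonPages approximates pages that cannot
# simply be dropped; clean page cache can mostly be discarded.
anon=$(awk '/^AnonPages:/ {print $2}' /proc/meminfo)
swapfree=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
echo "anonymous: ${anon} kB, free swap: ${swapfree} kB"
if [ "${anon}" -gt "${swapfree}" ]; then
    echo "suspend to disk would likely fail: not enough swap"
fi
```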
>> > Suspend writes all RAM starting at the beginning of swap and over
>> > everything along the way.
>> No, it doesn't. It uses the swap storage space just like the normal
>> kernel, except for adding some private accounting information and a
>> different header to make it possible to detect that it was used to
>> suspend to disk.
> OK it starts SOMEWHERE in swap then writes over everything. In any
> event I lost my home partition (root swap home)
You found an amazingly serious bug; which variant of suspend was this
with (or which distribution and release, so I can derive that)?
Anyway, the suspend code is *supposed* to use the standard interface to
the swap space, or just the swap space directly, and will not run
outside of that space in normal use.
>> If it behaved as you describe then it would corrupt memory on the
>> way through as it overwrote swapped data (and, then, no one would
>> ever report a successful suspend to disk. :)
> I never use any suspend and clearly don't appreciate the fine detail,
> but how would this ever work
> 1G RAM
> 2G swap
> VastExtravagantApp using 1.9G of swap, then suspend-to-disk?
... think, think, think, fail the suspend process with an error because
sufficient swap space cannot be found.
In other words: this is an error condition, under which normal error
reporting should happen and the suspend will not run.
The general algorithm of all the suspend implementations is to discard
pages until suspend can happen — any discardable pages without concern
for swsusp and uswsusp, and as few as possible for TuxOnIce.
Then, write what remains to swap, then shut down. If we can't fit
everything in swap, handle the error gracefully.
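Incidentally, on the original question of growing swap without touching
the other partitions: a swap file sidesteps the partition table
entirely. A sketch, run as root, with the size and path purely
illustrative:

```shell
# Create a 1 GiB file of zeros to use as swap (size and path are
# illustrative; pick what suits your disk).
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile

# Format it as swap space and enable it immediately.
mkswap /swapfile
swapon /swapfile

# Make it permanent with a line in /etc/fstab:
#   /swapfile  none  swap  sw  0  0

# Verify it is in use.
swapon -s
```

A swap file is marginally slower than a dedicated partition on old
kernels, but for occasional deep swapping the difference rarely
matters.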