SLUG Mailing List Archives
Re: [SLUG] Opinions sought: Exim vs Sendmail
- To: <lukekendall@xxxxxxxxxxxxxxxx>
- Subject: Re: [SLUG] Opinions sought: Exim vs Sendmail
- From: "Oscar Plameras" <oscarp@xxxxxxxxxxx>
- Date: Mon, 30 Jun 2003 14:52:59 +1000
- Cc: Sydney Linux Users Group <slug@xxxxxxxxxxx>
> On 29 Jun, Oscar Plameras wrote:
> > Ideally, one would want the whole list to be stored in local memory,
> > but we know this is impossible, and with the Internet growing in leaps
> > and bounds the list is growing bigger and faster by the day. Also, you
> > would want DNS software that predicts the information that will be
> > requested just in time, when it is required. Again, this is a mammoth
> > task that our technical friends out there have been trying to solve.
> Well, I can't see *any* difference between this problem and the
> classical caching problem. Your traffic typically has some coherency
> simply because communications tend to be between people who are in some
> kind of dialogue.
> It seems to me that the cost of storing an IP address as a string, plus
> a word for the decimal IP address, should cost roughly 50 bytes. I.e.
> I'd guess you should be able to cache about 20,000 addresses / Mb. I'd
> be surprised if any but very large organisations would receive email
> from more than that number of *domains* per day.
The first reason it is impossible to store the entire list in local memory
concurrently is the physical limitation of the hardware under the current
state of technology.
The arithmetic is as follows:

Number of IPv4 addresses = 256^4 = 4,294,967,296
Memory required          = 4,294,967,296 * 50 bytes (your allocation)
                         ~= 215 GB

Number of IPv6 addresses = 2^128; we can only imagine this number.
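As a quick sanity check, the arithmetic above can be reproduced in a few
lines of Python (the 50-bytes-per-entry figure is the estimate from the
quoted message, not a measured value):

```python
# Back-of-the-envelope memory cost of caching every IPv4 address.
BYTES_PER_ENTRY = 50                # Luke's estimate from the quoted message

ipv4_addresses = 256 ** 4           # 2^32 = 4,294,967,296 addresses
total_bytes = ipv4_addresses * BYTES_PER_ENTRY

print(f"{ipv4_addresses:,} addresses")
print(f"{total_bytes / 2**30:.0f} GiB required")   # 200 GiB (about 215 GB)
```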
Even if you had such a list, imagine the amount of CPU time required to
search it every time an address has to be looked up.
This is one reason why BIND adopted its caching methodology and strategy:
it prevents the list from growing into a huge list without any way to
control it. The methodology and strategy are a compromise, and the
sysadmin decides how much to compromise by adjusting the configuration.
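For example (a sketch only; the exact option names and defaults vary
between BIND versions, so check the manual for the release you run), the
cache trade-off can be adjusted in the options block of named.conf:

```
options {
    // Cap the memory the resolver may spend on cached records.
    max-cache-size 32M;

    // Drop positive answers after at most a day,
    // even if the zone's own TTL is longer.
    max-cache-ttl 86400;

    // Cache "no such name" (negative) answers for at most three hours.
    max-ncache-ttl 10800;
};
```

Smaller caps mean more queries go back out to authoritative servers;
larger caps mean more memory spent and staler data served.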
Another reason for this limitation is that the complete list is scattered
among DNS servers all across the Internet at any given time; the list
changes every minute (names change, addresses change, addresses are
removed, addresses are added, and so on); and a local DNS server only
knows about those addresses previously queried for which it and its
authoritative servers are answerable. If an address was never queried,
it will not be included in the cache.
A single name change will instantaneously make a local list inconsistent
with reality. And there are hundreds, perhaps thousands of changes,
additions, and removals every minute.
Incidentally, this is the reason why, when you stop and restart a DNS
server, it takes a while for network throughput to return to normal (the
cache starts out cold), depending on the number of clients on the network.
The DNS cache, local or authoritative, is periodically refreshed and
periodically expired: addresses that have been cached for longer than
their allotted time are dropped, so the cache never has the chance to
retain the entire list.
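The expiry behaviour described above can be sketched in a few lines of
Python (a toy illustration of TTL-based caching, not how BIND is actually
implemented; the class and method names are invented for this example):

```python
import time

class TTLCache:
    """Toy DNS-style cache: entries are dropped once their TTL elapses."""

    def __init__(self):
        self._store = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl_seconds):
        self._store[name] = (address, time.monotonic() + ttl_seconds)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None                    # never queried: not in the cache
        address, expires = entry
        if time.monotonic() >= expires:
            del self._store[name]          # TTL elapsed: drop the entry
            return None
        return address

cache = TTLCache()
cache.put("example.com", "93.184.216.34", ttl_seconds=0.05)
print(cache.get("example.com"))   # still fresh: prints the address
time.sleep(0.06)
print(cache.get("example.com"))   # expired and dropped: prints None
```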
> If one cache entry saves you thousands or even just tens of
> milliseconds, then setting aside some space would give a speed-up of
> at least 3 orders of magnitude.
One can tune named up to a point. Tuning, as you know, is a compromise;
you win some and you lose some, and there is no one-way advantage.