SLUG Mailing List Archives

Re: [SLUG] How to Use

Hi Matthew et al:

Matthew Palmer <mjp16@xxxxxxxxxxxxxxx> wrote:
On Sat, 24 May 2003, Louis Selvon wrote:

> wget http://www.domain.com/dir/*
> wget http://www.domain.com/dir/*.*
> wget -r http://www.domain.com/dir/*.*
> From the man page of "wget", I can't find a switch to meet 
> my requirements.

> You're just moving over from Windows, aren't you?  The reliance on *.* kinda
> gives you away...

Louis> I am not sure what you mean. Both domains reside on a
Linux OS.

> Anyway, you got 95% of the way there.  wget -r http://www.domain.com/dir/
> will go all the way.  The way it does it is that if you don't give a
> filename, the web server provides an index (generally - you don't want to
> know the ugly exactitudes).  This is either a list of files in the
> directory, or the contents of index.{html,htm,shtml,php,etc} if one of those
> exists.  Either way, wget will troll through whatever the webserver gives
> back, looking for links to other files to retrieve.  Eventually it will run
> out of new files to get, and all your problems will be over.  <g>
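For reference, a recursive fetch along the lines Matthew describes might look like this (flags as documented in the wget man page; the domain here is a placeholder, same as in the thread):

```shell
# Recursively crawl the directory index, following links the server returns.
#   -r      recursive retrieval
#   -np     never ascend to the parent directory (stay inside /dir/)
#   -l inf  no recursion depth limit (wget's default depth is 5)
wget -r -np -l inf http://www.domain.com/dir/
```

If only index.html comes back, the server's index page simply may not contain links to the other files for wget to follow.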

Louis> I just tried the "wget -r http://www.domain.com/dir/" suggestion, and the
following happened:

1. Created a dir called "www.domain.com/dir".
2. In "www.domain.com/dir", I only see the index.html file.

The others were not downloaded.