[TriLUG] wget, curl ???
Robert Dale
robdale at gmail.com
Mon Jul 21 08:02:06 EDT 2008
On Mon, Jul 21, 2008 at 7:52 AM, James Jones <jc.jones at tuftux.com> wrote:
> All,
>
> I want to capture a list of files on a website. I don't want to
> download the files. I thought at first that wget would be the best for
> this, but it appears that it will download the files.
>
> What would be the simplest way to achieve my goal?
If you don't use the recursive option, neither tool will download the
entire site; each fetches only the single URL you give it.
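If what you want is just the list of URLs, wget's spider mode can crawl
without keeping anything on disk -- it still fetches each page to find
the links in it, but discards them rather than saving them. A rough
sketch (example.com and the depth limit are placeholders, and the exact
log format varies between wget versions, so the grep pattern may need
adjusting):

    # Crawl one level deep, save nothing, and pull the URLs out of the log
    wget --spider -r -l1 -nv http://example.com/ 2>&1 \
      | grep -o 'http://[^ ]*' | sort -u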
wget http://slashdot.org - will save the web page to a file 'index.html'
curl http://slashdot.org - will print the web page to stdout
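Since curl writes the page to stdout, you can pipe it straight into grep
to pull out the link targets. A minimal sketch, assuming the server
presents a plain directory index (example.com/files/ is a placeholder):

    # Fetch the index page and extract the href targets
    curl -s http://example.com/files/ \
      | grep -o 'href="[^"]*"' | cut -d'"' -f2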
--
Robert Dale