[TriLUG] Scheduling file transfers
Joel Ebel
jbebel at ncsu.edu
Thu Apr 7 09:57:00 EDT 2005
I can imagine a few possibilities here. Most depend on ssh access. Are
you planning on deleting the files as soon as you download them? If so,
I think you should first check the modification times and only download
files that haven't been modified in the past several minutes; if a file
has been modified more recently than that, it's potentially still in
the middle of an upload. If you have ssh access, you can run find with
-mmin +n, where n is the number of minutes you want the file to have
been stable.
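A minimal sketch of the -mmin check, assuming a 10-minute stability window (the host, user, and upload path would be yours; over ssh it becomes `ssh user@host 'find /path/to/uploads -type f -mmin +10'`). Demonstrated locally with GNU find and touch:

```shell
# Create a scratch directory with one "stable" file and one that was
# just written, then list only files untouched for 10+ minutes.
dir=$(mktemp -d)
touch -d '20 minutes ago' "$dir/finished.dat"   # upload finished a while ago
touch "$dir/inprogress.dat"                     # still being written
find "$dir" -type f -mmin +10                   # lists only finished.dat
```

Your cron job would download (and optionally delete) only the paths this find prints.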
The other option that comes to mind is to use rsync, if you can. If a
file isn't complete, rsync will pick up the rest of it the next time it
runs. You could combine this with a cron job that again uses find to
delete files that are old. As long as you don't run rsync with
--delete, it won't remove your local copy when a file disappears from
the server.
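A sketch of what that crontab might look like (host, user, and paths are placeholders, and the schedule is an assumption; the trailing slash on the rsync source copies the directory's contents rather than the directory itself):

```shell
# Every 10 minutes: mirror the upload directory locally.
# No --delete, so files removed on the server stay in /srv/incoming.
*/10 * * * * rsync -az user@ftp.example.com:/home/user/uploads/ /srv/incoming/

# Once a day: clean up server-side files more than a day old,
# after the downloads have had plenty of time to complete.
30 2 * * * ssh user@ftp.example.com 'find /home/user/uploads -type f -mtime +1 -delete'
```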
If you don't have rsync, then ncftpget, as John Turner mentioned, is
probably a good option.
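For the ncftpget route, a hedged one-liner sketch (user, password, host, and paths are placeholders; per the ncftpget man page, -DD deletes each remote file after it downloads successfully, which keeps the remote directory from accumulating):

```shell
# Fetch everything in the remote uploads directory into /srv/incoming,
# deleting each remote file once its download succeeds.
ncftpget -u myuser -p mypass -DD ftp.example.com /srv/incoming '/uploads/*'
```

You'd still want the mtime check first, since plain FTP has no way to tell you a file is mid-upload.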
Joel
Mark Freeze wrote:
> A year or so ago I had a problem downloading a file via ftp onto a
> Windows box with WS_FTP. The file was about 100MB and I started
> downloading the file while my customer was still uploading, so I only
> got about half of the file. WS_FTP allowed me to do this with no
> error. (Which I thought was kinda crazy.)
>
> Now I have an offsite ftp spot that my customers use to send me files
> at random times during the day. I want to automatically download and
> process these files onto my box as soon as they appear on the site so
> I was thinking that I would schedule a cron job to look for these
> files every 10 min. When I do this am I going to have the problem of
> seeing the file and trying to get it as they are uploading? Some of
> these files are over 100MB and might take my customer a while to
> upload. Someone told me to make sure that I have exclusive access to
> the file before I download it, but since I have no control over the
> ftp server I'm not sure on how to accomplish that task.
>
> Any help will be greatly appreciated.
>
> Regards,
> Mark.