[TriLUG] bash help
rwshep2000.2725323 at bloglines.com
Thu Oct 28 10:11:47 EDT 2004
Hi,
I have a server with a shared repository for files. I plan to devote
70GB of an 80GB HD (a single data partition) to the files. The files are
uploaded and placed in the repository via a web application. Here is what
I'd like to accomplish:
When the directory size exceeds 70GB, delete files First-In-First-Out until the repository is pared back to 70GB.
The best-case scenario would be to pare back the files each time a new file is added. However, I am hoping to do this without adding web application logic, which could cause additional latency for the user. Although it risks exceeding the size limit between runs, I am thinking of using a bash script scheduled with cron. To guard against exceeding the limit, I'm leaving 10GB of the 80GB as a buffer. I know this is imperfect, but my humble intellect can't think of another approach.
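For the cron half, I picture something like an hourly crontab entry, which would bound how far the directory can overshoot between runs (the script name and path here are just placeholders):

```crontab
# Run the pare-back script at the top of every hour.
0 * * * * /usr/local/bin/pare-repo.sh
```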
So I'm looking for comments on two things:
1. How to make a bash script look at total directory size, then proceed to delete files FIFO until a target size is reached;
2. Whether there is a better alternative to putting this script on cron.
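For (1), here is a minimal sketch of the kind of loop I have in mind — untested, assumes GNU find/sort/du, and would break on filenames containing embedded newlines (the function name and sizes are just for illustration):

```shell
#!/bin/bash
# Delete the oldest regular files under $1 until `du` reports the
# directory at or below $2 (a size in 1K blocks).
pare_repo() {
    local dir=$1 limit_kb=$2 file used
    # List files oldest-first by modification time, strip the timestamp,
    # then delete one at a time until we drop under the limit.
    find "$dir" -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
    while IFS= read -r file; do
        used=$(du -sk "$dir" | cut -f1)
        [ "$used" -le "$limit_kb" ] && break
        rm -f -- "$file"
    done
}

# Example: pare a 70GB repository (70 * 1024 * 1024 1K blocks).
# pare_repo /srv/repository $((70 * 1024 * 1024))
```

Calling `du` on every iteration is wasteful for a big tree; subtracting each deleted file's size from a running total would avoid rescanning, at the cost of a little more script.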
Thanks!
Bob Shepherd