[TriLUG] bash help

Brian Henning brian at strutmasters.com
Thu Oct 28 10:29:30 EDT 2004


Here's at least a partial answer for you...

$ du --max-depth=0 /path/to/directory/of/interest
293805812    /path/to/directory/of/interest

That's an easy (though not necessarily quick...du recursively scans the 
entire directory structure under that path to tally the usage) way to find 
the total space occupied by a directory.  Note that GNU du reports in 1K 
blocks by default, and -s is shorthand for --max-depth=0.  I believe there 
are also ways to exclude subdirectories and perhaps suppress the path 
output; consult your local manpage for details.
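
If you want just the number for use in a script, something like this should 
do it (a sketch, assuming GNU du and cut; the path is whatever you care 
about):

$ size_kb=$(du -s /path/to/directory/of/interest | cut -f1)
$ echo $size_kb
293805812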

ls can be instructed to sort by date: -t sorts newest-first, and -rt puts 
the oldest files at the top of the list (again, see your manpage).  Capture 
its output and start whacking the files one by one at the old end of the 
list.  Have ls give you the file sizes (-l or -s); that way you can start 
with du's output and subtract each file's size as you whack it and not have 
to re-scan the directory each time.  I used a similar trick with ls to lock 
aging files to read-only during a cleanup project.  Unfortunately that was 
at my last place of employment and I no longer have access to the scripts, 
or I'd give you more detail about it.
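
Putting the pieces together, something along these lines could run from 
cron.  This is only a sketch, not anything I've tested: the repository path 
and the 70GB target are placeholders for your actual setup, and it assumes 
a flat directory of plain files with no spaces or newlines in the names 
(usually true of web-upload schemes, but check yours):

#!/bin/bash
# Pare a repository back to a target size, deleting oldest files first.
# Sketch only -- untested.  REPO and LIMIT are assumptions; adjust to taste.
REPO=/path/to/repository
LIMIT=$((70 * 1024 * 1024))   # 70GB expressed in 1K blocks, du's default unit

used=$(du -s "$REPO" | cut -f1)

# ls -rt lists oldest files first; stop as soon as we're under the target.
for f in $(ls -rt "$REPO"); do
    [ "$used" -le "$LIMIT" ] && break
    size=$(du -s "$REPO/$f" | cut -f1)
    rm -f "$REPO/$f"
    used=$((used - size))
done

Schedule it every ten minutes or so with a crontab entry along the lines of 
"*/10 * * * * /usr/local/bin/pare-repo.sh" (the script name is made up, 
obviously) and your 10GB buffer should absorb whatever gets uploaded 
between runs.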

Good luck!

HTH,
~Brian

----- Original Message ----- 
From: <rwshep2000.2725323 at bloglines.com>
To: <trilug at trilug.org>
Sent: Thursday, October 28, 2004 10:11 AM
Subject: [TriLUG] bash help


> Hi,
>
> I have a server with a shared repository for files.  I plan to devote
> 70GB of an 80GB HD (a single data partition) to the files.  The files are
> uploaded and placed in the repository via a web application.  Here is what
> I'd like to accomplish:
>
> When directory size exceeds 70GB, delete files, First-In-First-Out,
> until the repository is pared back to 70GB.
>
> The best case scenario would be to pare back the files each time a new
> file is added.  However, I am hoping to do this without adding web
> application logic, which could cause additional latency for the user.
> Although it risks possibly exceeding the size limit, I am thinking of
> using a bash script scheduled with cron.  To ensure against exceeding the
> limit, I'm leaving 10GB of the 80GB as buffer.  I know this is imperfect
> but my humble intellect can't think of another approach.
>
>
> So I'm looking for comments on two things:
>
> 1.  How to make a bash script look at total directory size, then proceed
> to delete files FIFO until a target size is reached;
>
> 2.  Whether there is a better alternative than putting this script on
> cron.
>
> Thanks!
>
> Bob Shepherd
