I received several useful replies to my summary.
RESULT: newfs the filesystem with smaller blocks (e.g. -b 4096, though sun4u
can currently only mount filesystems with 8192-byte blocks) and a smaller
fragment size
I haven't tested it myself (because the user removed most of his files in
the meantime), but the information should end up in an archive - shouldn't
it? I have also included Casper's reply (with example newfs options) and
Karl's (on Berkeley DB) below ...
Thanks again to
Casper Dik <casper@holland.Sun.Com>
firstname.lastname@example.org (Roland Grefer)
David Thorburn-Gundlach <email@example.com>
"Karl E. Vogel" <firstname.lastname@example.org>
email@example.com (David Mitchell)
"Burelbach, Jonathan" <JBurelbach@feddata.com>
From: Casper Dik <casper@holland.Sun.COM>
> When you have many small files, fragmentation is a problem, but not
> one that's fixable using dump/restore.
> The best way to deal with it is either changing the storage format,
> or dump and then *newfs* with a smaller fragment and smaller block
> size. (1K fragments/8K blocks are the default; you could use
> 512-byte fragments/4K blocks. Unfortunately, such filesystems are
> not mountable on Ultras, something Sun should fix.)
> Use fastfs when restoring such a filesystem or it will take forever.
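The dump / newfs / restore cycle Casper describes might look roughly like
this. The device names, mount point, and dump file are placeholders of my
own, not from his reply - adjust them to your system before trying anything:

```shell
# Back up the filesystem first:
ufsdump 0f /backup/home.dump /dev/rdsk/c0t1d0s6

# Rebuild it with 4K blocks and 512-byte fragments instead of the
# default 8K blocks / 1K fragments (note: not mountable on Ultras):
newfs -b 4096 -f 512 /dev/rdsk/c0t1d0s6

# Mount it, switch on fastfs (the unsupported Sun delayed-I/O tool)
# for the restore, then switch back to safe synchronous mode:
mount /dev/dsk/c0t1d0s6 /export/home
fastfs /export/home fast
cd /export/home && ufsrestore rf /backup/home.dump
fastfs /export/home safe
```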
From "Karl E. Vogel" <firstname.lastname@example.org>
> You might be able to use the Berkeley DB routines to set up
> fixed-length record files in such a way as to avoid the fragmentation.
> For example:
> Any file shorter than 128 bytes --> pad to 128 bytes and append to
> one container file;
> Any file between 129-256 bytes --> pad to 256 bytes and append to
> another; and so on.
> For a large enough collection of files, you would be dropping the
> effective fragmentation size of the filesystem from 1K down to
> approximately 128 bytes, as well as freeing up a bunch of inodes.
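Karl's padding idea can be sketched in plain shell. Everything here
(the function name, the REC record size, the archive filename) is my own
illustration of the scheme, not his actual code:

```shell
#!/bin/sh
# Sketch: append each small file to one container file, zero-padded up
# to the next multiple of REC bytes, so the filesystem stores one big
# file instead of thousands of sub-1K fragments.
REC=128
ARCHIVE=${ARCHIVE:-archive.pad}

pad_append() {
    for f in "$@"; do
        size=`wc -c < "$f"`
        # round the size up to the next multiple of REC
        padded=$(( (size + REC - 1) / REC * REC ))
        cat "$f" >> "$ARCHIVE"
        # write NUL bytes to fill out the last record
        dd if=/dev/zero bs=1 count=$((padded - size)) >> "$ARCHIVE" 2>/dev/null
    done
}
```

Retrieval then needs an index (offset and real length per file), which is
where the Berkeley DB recno/fixed-length routines would come in.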
--
[ email@example.com | firstname.lastname@example.org  IAKS Uni KA ]
[ University of Karlsruhe, Markus Weber, Parkstr. 17, 76131 Karlsruhe, Germany ]