
"The Linux Gazette...making Linux just a little more fun!"


(?) The Answer Guy (!)


By James T. Dennis, tag@lists.linuxgazette.net
LinuxCare, http://www.linuxcare.com/


(?) Out of Space....or Inodes? All Sparsity Lost?

From Derek Wyatt on Fri, 11 Jun 1999

Hi James,

I know this question has been asked before (I've read the 'stuff' in the previous columns) but this one has an interesting wrinkle which I can't answer. I hope you can :)

I was copying a new Slackware 4.0 installation from one disk to another. Incidentally, I used two methods: tar, and find | afio, etc. It was the right way. I've done it many, many times before.

(!) You might not have preserved allocation "holes" (the "sparsity") of the files as you transferred them.
When a program opens a file in write mode and does a seek() or lseek() to some point that is more than a block past the end of the file, the Linux native filesystems (ext2, minix, etc.) will leave the intervening blocks unallocated. This is possible in inode-based filesystems (but not in FAT/MS-DOS formatted filesystems).
These filesystems treat reads into such unallocated regions of a file as blocks of NULs (ASCII zero characters).
So, if you use normal read and write commands in sequence (as 'cp' and 'cat' do to copy files) then you'll expand any such "holes" in the allocation map (the inode's list of blocks) into literal blocks of NULs, and the file will take more space than it used to.
One possibility is that you used to have such "sparse" files and that your method of copying them failed to preserve those "holes." You could use the GNU 'cp --sparse=always' option to restore the "holes" in selected files (or create new ones wherever there are blocks of NULs in the data).
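You can see the effect for yourself. Here's a minimal sketch (the file names are just examples) that creates a sparse file with 'dd', expands it with a naive copy, and restores the holes with 'cp':

dd if=/dev/zero of=sparse.img bs=1k seek=1024 count=1   # ~1MB file, almost all "hole"
du -k sparse.img                        # blocks actually allocated: only a few KB
ls -l sparse.img                        # apparent size: a bit over 1MB
cat sparse.img > naive.img              # sequential read/write expands the holes
du -k naive.img                         # now the full ~1MB is really allocated
cp --sparse=always naive.img back.img   # holes re-created wherever the data is NULs
du -k back.img                          # small again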
Most files are not sparse --- in fact there are only a couple of old dbm style libraries that used to create them in normal system use (the sendmail newaliases command used to be a prime example).
I don't think this accounts for your whole problem (i.e. it's not wholly a "holey" problem).

(?) Now, the problem is this: after the copy was complete, I used the Slackware bootdisk and rootdisk to reboot things nice and clean to test the disk, and every copy I tried to do (including running lilo) resulted in a "file too large" error message. A 'df' reported that the disk had lots of space on it, as did 'du' (as did basic common sense :) ). The disk became completely unusable until I destroyed it and reinstalled Slackware from scratch.

(!) Perhaps you should look at the output of the 'df -i' command.
Your Linux filesystems actually have a couple of resources that are depleted at different rates from one another. If you have lots of small files then you are using up inodes faster than data blocks. The normal 'df' command output reports on your free data space; 'df -i' reports on inode utilization.
So, it's possible that you ran out of inodes even though you had plenty of disk space.
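For example (the mount point and device name here are made up for illustration):

df -i /home                # reports Inodes, IUsed, IFree and %IUsed for that fs
mke2fs -i 2048 /dev/hda2   # next time, make the fs with one inode per 2048 bytes
                           # (this creates a new, empty filesystem, of course)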

(?) Now, considering that the disk was just a 'raw' disk with data on it (i.e., it wasn't the root partition at this point) I have no idea why it would behave like this. I tried eliminating /proc/* just for the heck of it, but to no avail.

(!) It is very easy to accidentally copy/archive your /proc (which, in itself, does no harm). The problem is that you can then restore that copy to your new root fs and mount a real /proc over it; the restored snapshot of your proc fs' state at backup time then sits hidden under the mount point, consuming real disk space.
I recommend that you use find's -xdev or -mount options to prevent your find from crossing filesystem boundaries.
Let's say you have /, /usr, /var, /usr/local, and /home as local filesystems. To use 'cpio' to back them up you could use a command like:
find / /usr /var /usr/local /home -mount -depth | cpio ....
... to feed only the file names that are on that list of filesystems to cpio.
When using 'cpio' you can preserve sparsity while COPYING IN your data using the --sparse option.
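For concreteness, one plausible complete pipeline (the tape device /dev/st0 is only an assumption, borrowed from the tar example below) might be:

find / /usr /var /usr/local /home -mount -depth -print | cpio -o -H crc > /dev/st0
cpio -i -d --sparse < /dev/st0   # at restore: -d makes directories, --sparse re-creates holes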
Of course 'tar' works differently from 'cpio' in just about every way that you could think of. You have to use something a bit more like:
tar cSlf /dev/st0 / /usr /var /usr/local /home
... where the -S preserves sparsity (during archive creation; and apparently NOT during restoration unless the archive was correctly created). Personally I think that this is a bug in GNU tar.
[ I suppose forcing someone to use -S (or --sparse) when restoring offers the ability to desparsify the file, on a new filesystem which has room for it. Why it should be the default to not come out as it went in, though, I've no idea. -- Heather ]
The tar -l option instructs 'tar' not to cross fs boundaries.
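Assuming the archive really was created with the S as above, a matching restore might be (though whether the S is strictly required on extraction seems to depend on your version of GNU tar, per the discussion above):

tar xSpf /dev/st0   # x = extract, S = handle sparse members, p = preserve permissions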
The key general point here is that you might have mounted /proc or any other filesystem over a non-empty mount point. I personally think that the distribution maintainers should modify the default rc* scripts so that they won't mount filesystems over non-empty directories. I'd modify them to uniquely rename and remake any non-empty directory before mounting something over it (and post a nastygram to syslog, of course).
[ I disagree; I often touch a file named THIS_IS_THE_UNDERLYING_MOUNTPOINT for mount points, and I've actually had occasional administrative use for a few files to sit in the underlying drive in case that fs doesn't mount. Usually, notes about what should have been there, although I suppose that could be the content of the commentary filename above. -- Heather ]
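To check whether anything is hiding under a mount point, a minimal sketch (run from a rescue or boot disk):

umount /proc 2>/dev/null   # ignore the error if it wasn't mounted
ls -A /proc                # anything listed here is occupying real disk space
                           # (the same check works for /usr, /var, and friends)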

(?) I hope I've given you enough information here. I've been using Linux for years and have never come across something like this.

(!) I really don't know if I've given you the answer that will help for your situation. I've just tried to explain the couple of most common "out of space" situations that I've seen and heard about --- in the hope that your situation isn't more bizarre.
If your space problems persist through a reboot then you don't have the old "open anonymous file" problem (a deleted file held open by a running process, which I've described on other occasions). It's also a very good idea to run fsck (do a filesystem check) when you can't account for your missing space.
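On an ext2 filesystem that check might look like this (the device name is just an example; run it only on an unmounted, or read-only mounted, filesystem):

e2fsck -f /dev/hda2   # -f forces a full check even if the fs is marked clean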

(?) Thanks a lot! And keep up the good work :)

Sincerely,
Derek Quinn Wyatt

(!) I hope it helps.

(?) Need to learn details. Any suggestions?

From Derek Wyatt on Fri, 11 Jun 1999

Jim, thanks a lot for your quick reply.

I don't think this applies in my situation but there are a few things here that are news to me. It's good to know. If you were here, I'm sure you could figure it out :) But you're not. I simply need to learn more, to solve something like this myself.

My knowledge of Linux --- how to use it and administer it --- is at the upper intermediate level, I think. In order to get it higher, I need to learn about the details of the filesystem, the kernel, processes, etc. How would you recommend going about this sort of thing? Are there some online documents, or some books you would recommend? How about some source code to pore over?

Thanks again
:)
Derek

(!) Most Linux distributions, certainly all the large ones, contain the option to install the source code.
The Linux-Kernel mailing list (http://www.tux.org/lkml/) has archives mirrored in a few places, and several of the documents in the Linux Documentation Project (http://metalab.unc.edu/LDP/) are more rather than less technical.


Copyright © 1999, James T. Dennis
Published in The Linux Gazette Issue 43 July 1999
HTML transformation by Heather Stern of Starshine Technical Services, http://www.starshine.org/

