LINUX GAZETTE
...making Linux just a little more fun!
More 2-Cent Tips

See also: The Answer Gang's Knowledge Base and the LG Search Engine


aptfetch with rate limiting (to 5K/s)

Sat, 15 Mar 2003 12:54:17 -0800
Jim Dennis (The LG Answer Guy)

Here you go folks. This is a script to fetch a few things that apt is going to want to get - but at a bandwidth-limited rate.

See attached aptfetch.bash.txt
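
The attached script isn't reproduced here, but the gist is easy to sketch: ask apt for the URIs it would download (apt-get --print-uris) and hand them to wget with its --limit-rate option. Something along these lines -- a rough illustration only, run as root so the packages land in apt's cache:

#!/bin/bash
# rough sketch, not the attached aptfetch.bash.txt
# fetch whatever apt wants for the next dist-upgrade, throttled to about 5K/s
set -e
cd /var/cache/apt/archives

apt-get -qq -y --print-uris dist-upgrade |
while read uri filename size checksum; do
    url=${uri//\'/}          # --print-uris wraps the URL in single quotes
    wget --limit-rate=5k -c -O "$filename" "$url"
done

A later apt-get dist-upgrade then finds everything already downloaded.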


download s/w ?

Thu, 10 Jul 2003 13:07:00 +0530
J. BAKSHI (cave_man from hotpop.com)
Answer by several members of The Gang

Hi all, could anyone please suggest a good download manager under Linux?

Thanks in advance.

[Jason] wget
:-)
Probably not what you meant.
[Dan Wilder] Yes, if you could say a little more about what a "download manager" might look like. What would such a program do?
[Ashwin] I think he is looking for a program that can stop and continue download operations if the internet connection is cut and then restored. (These noisy phone lines in India :-)

Yes Ashwin, that is one function of a download manager, but a download manager also helps to download a file (like a CD image of Debian) from an FTP server a bit more quickly. I have come to know that Prozilla is such a download manager.

thanks.

[Les Barron] d4x is an excellent program for the desktop. It supports drag and drop, FTP and HTTP, as well as resuming downloads; it is also called nt, which is the name used to invoke the program from an xterm. There are also several graphical FTP programs: gftp for GNOME, kbear for KDE, and others as well.
[Dan] Sounds sort of like my noisy phone lines in Seattle. In a neighborhood where DSL will be available "not this year" according to the local phone company.
I make a lot of use of the "wget" command-line utility which handles both ftp and http connections. From the man page:
Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.
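For instance, to pull a large file over a flaky link, resuming where it stopped and retrying indefinitely (the URL below is just a placeholder):

wget -c --tries=0 --waitretry=30 ftp://ftp.example.org/pub/images/debian-cd.iso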
Rsync is also your friend. Surprising how many places you can find an unpublicised rsync server parallel to a public FTP server, often at the same url. To find out:
rsync some.domain.tld::
should return an rsync package list if there's an anon rsync server sitting there, a "failed to connect" message if not.
[JimD] Note that rsync services are considerably more computationally intensive than HTTP, FTP, etc. Popular (read high volume) archive sites generally can't allow anonymous rsync (thus the emergence of BitTorrent for tremendously popular free files):
http://bitconjurer.org/BitTorrent
[Dan] The big advantage to rsync is its ability to re-download changed portions of files without downloading the whole thing. This can be an enormous boon in maintaining a mirror of a site over a slow or unreliable connection.
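For example, to refresh a local mirror over a bad link, keeping partial files so interrupted transfers can be resumed ("module" and the paths here are placeholders; the server's rsync listing shows the real module names):

rsync -av --partial some.domain.tld::module/dists/stable/ /local/mirror/dists/stable/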
[JimD] You can also consider ckermit (Columbia Kermit package for UNIX); which does work over TCP sessions, can act as a telnet client, can work over ssh connections, does very robust file transfers, and includes its own scripting language.
However, in honesty I prefer ssh with rsync. However, I don't know just how bad these connections are.
The real question is: what protocols do the far end(s) of these connections support, and which of those are supported by a utility or front end that the querent finds reasonable?


how to download Suse Linux

Sat, 12 Jul 2003 21:34:56 -0700 (PDT)
Ken Robbins (gatliffe from yahoo.com)
Answer by Neil and Chris of The Answer Gang

How do I download SuSE Linux? I went to the site, but there are a lot of files there and I don't know which one I need. I have a 20 GB hard drive as a slave that I'm not using, and I want to put Linux on it. I have high-speed Internet.

[Neil Youngman] It's all in ftp://ftp.suse.com/pub/suse/i386/current/README.FTP
What's not clear?
[Chris G.] I bet Ken wants the ISO images. Do you think that's the case?
[Neil] It does say
[Chris G.] Hmmm. I guess that the instructions are kind of clear. I have not done the ISO thing yet, so that's kind of new to me. I still use dialup at home. I just looked at a few sites (www.linuxiso.org, ftp.suse.com, etc.) They are quite clear about the installation. I noticed that SuSE provides a live CD too.
At my work (Motorola), they keep iso images of Linux, too. I was surprised that they have all of the disks for SuSE 7.x (yea - older stuff), as well as other distributions. That certainly would deal with my slow dialup. Our machines at work (the ones on the Internet) have CD writing capability too.
Check the TAG Knowledgebase and you'll find more on burning CDs, as well... including under mswin, if that's where you're presently stuck. -- Heather


GIMP vs Photoshop - CMYK

Tue, 24 Jun 2003 10:20:17 +0200 (CEST)
Karl-Heinz Herrmann (The Answer Gang)
Answer by Ben Okopnik

Photoshop can't even compete, although they've made some nice improvements in recent years.

Photoshop has all these cool extra filter thingies you can buy in the store. I'm not sure that Kai's Power Tools is the only package. Photoshop's strengths are rather different from the GIMP's, but I wouldn't say it "can't compete". GIMP began aiming in Photoshop's direction, but the people who really use it took it to other places. So if Kai starts selling Kai's Power GIMP Fu, then we'll be winning the Oscar. -- Heather

[K.-H.] A friend of mine is in print graphics, and one major difference between Photoshop and GIMP is the use of the CMYK (cyan, magenta, yellow, key/black) color space instead of RGB. RGB and CMYK cannot be converted into each other easily -- there are corners of RGB which simply have no printable CMYK equivalent (e.g. bright orange).

[Ben] The answer would seem to be "don't use bright orange." :) I haven't done anything with CMYK except when I was doing my own photo enlargement and printing, ages ago, but it seems to me that if it doesn't have some of the capabilities of RGB, that makes it a subset. Don't use what you don't need, and it'll all work - no?

[K.-H.] Hmm... it seems Photoshop can show you all the critical colors -- it's not just orange; IIRC all corners of RGB space are a problem. Orange just stuck in my mind because a rather harmless-looking bright orange is not printable in four-color mode -- you need special colors for that.

Photoshop also has plenty of little tools explicitly for print purposes, e.g. for special (spot) color printing, where you have to enlarge a lower layer a little so you don't get white if the printing machine shifts the two print colors slightly. In this case of custom print colors (not regular four-color printing), Photoshop can separate colors according to these defined extra colors instead of the regular CMYK.

[Ben] Oh, I'm sure that Photoshop has features which are not available in the GIMP. However, the converse is also true, and I'm sure that there are people working in GIMP who would be unable to switch to Photoshop.

[K.-H.] Another one is color separation into "films", i.e. the four color channels which go on transparent film and will then be copied on the metal printing plates.

[Ben] Image -> Mode -> Decompose -> CMYK. It's that simple.

[K.-H.] You never stop finding new things in GIMP -- so I'm not convinced this covers all of Photoshop's abilities.

Mostly this is done in a "higher" layout program (QuarkXPress, FreeHand), but Photoshop supports it too.

The basic filter set and the Script-Fu stuff in GIMP is quite competitive. For print graphics, the nonexistent CMYK mode is a clear "can't use GIMP".

[Ben] It's true that there's no "direct" CMYK mode for initial images; however, you can still work with CMYK images as above. GIMP has surprising depth to it.

[K.-H.] yes it has :-)


There Goes the Neighbourhood: arpd to the Rescue

Sun, 27 Jul 2003 11:32:01 +0300
Chapko Dmitrij (dima from tts.lt)
Answer by Jim Dennis

I read http://tldp.org/LDP/LG/issue59/lg_answer59.html#tag/2

I have one network which now has 1400 devices on it. While there were fewer than 1024 of them I used a static table; now it's dynamic, and I periodically get the message "Neighbour table overflow". Is it possible to fix something in the kernel?

If I'm reading this correctly: you have a LAN segment with about 1400 (ethernet) devices on it. When you surpassed 1024 devices on the segment you started noticing errors regarding the Neighbour table overflow.

The solution to this is to move ARP (address resolution protocol) handling out of the kernel and into user space. This involves two steps. First, reconfigure your kernel with CONFIG_ARPD=y. (You'll have to enable the option to "Prompt for experimental features/drivers" near the top of your make menuconfig or make xconfig.)

Under: Code maturity level options --->

   [*] Prompt for development and/or incomplete code/drivers

Then under: Networking options --->

   [*]   IP: ARP daemon support (EXPERIMENTAL) (NEW)

Then from the help text thereunder:

...............

Normally, the kernel maintains an internal cache which maps IP addresses to hardware addresses on the local network, so that Ethernet/Token Ring/ etc. frames are sent to the proper address on the physical networking layer. For small networks having a few hundred directly connected hosts or less, keeping this address resolution (ARP) cache inside the kernel works well. However, maintaining an internal ARP cache does not work well for very large switched networks, and will use a lot of kernel memory if TCP/IP connections are made to many machines on the network.

If you say Y here, the kernel's internal ARP cache will never grow to more than 256 entries (the oldest entries are expired in a LIFO manner) and communication will be attempted with the user space ARP daemon arpd. Arpd then answers the address resolution request either from its own cache or by asking the net.

...............

Then you have to go fetch and install an ARP daemon. Under Debian that would be as simple as: apt-get -f install arpd
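
Once installed, running it can be as simple as this (the flags are those of the arpd shipped with iproute2; other implementations may differ):

arpd -b /var/lib/arpd/arpd.db eth0   # keep the ARP cache in a database and answer for the busy segment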


Out of Space and Other Errors

Fri, 11 Jul 2003 15:27:34 +0800
Kamal Syah b. Mohd Sharif (kamal from centurysoftware.com.my)
Answer by Jim Dennis and Dan Wilder

I'm having a problem where, when I try to view a file, I get this error message:

E303: Unable to open swap file for "/tmp/ERRLOG", recovery impossible.
[Dan Wilder] How did you try to view the file?
[JimD] Sounds like a vi/vim error message --- it's trying to create a backup or recovery copy of the file.

I'm also having a problem whereby I always get an error telling me that there's no space left on the device ... but when I look at my filesystems there is actually lots of space available.

Regards

[Dan] What's the output from:
df
...look like? How about:
ls -ld /tmp
??
Please post the actual text of the error message, and tell us what you were doing when you encountered the error.
[JimD] Also check 'df -i' --- check the inode utilization. Basically it's possible for a filesystem to be completely out of inodes even when there's plenty of disk space available. That would happen on filesystems with a very large number of tiny files (USENet news spools, qmail-style maildir, and MH are examples of applications that generate these sort of things).
Other possible causes:
Some filesystems are set to remount in read-only mode if the kernel (filesystem driver) detects errors while the system is up and running. The other tune2fs error-behavior settings are "panic" and "continue"; there are also mount (/etc/fstab) options that relate to this "on-error" behavior.
Check to see if you have quotas enabled and whether the user in question has hit them. Also check the reserved space settings reported by tune2fs, since it's possible (though extremely unlikely) that someone set the filesystem up to reserve more than the usual 5%, or configured it to reserve space for some user or group other than root. Other filesystems may have alternatives to tune2fs (but tune2fs also works on ext3, of course).
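
A few commands to check the above (the device name is only an example):

df -i                                                    # inode usage per filesystem
tune2fs -l /dev/hda1 | grep -i -e reserved -e behavior   # reserved blocks and on-error behavior
tune2fs -m 5 /dev/hda1                                   # put the reserved percentage back to the usual 5%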


filename.tar failing to untar

Fri, 18 Jul 2003 11:05:52 -0700
Steven (steven from poiema.org)
Answer by Faber Fedor

Hello

I've been searching high and low for any information that might help me restore from a backup tar file that is being difficult for some reason.

The file is just your basic tar file without any compression.

[Faber Fedor] Then that means the files that are in the tarball are 'simply' concatenated (with some header information in between).

Here is the command I'm typing:

tar xvf 2003-07-17.tar

And here is the last few lines from the result:

/DP/
/DP/PDEF.DP000000
/DP/PDEF.DP010000
/DP/RDEF.DP010000
tar: Skipping to next header
tar: Error exit delayed from previous errors
[root@lucia root]#

Here is the version of tar we are running:

tar (GNU tar) 1.13.25

The filesize of the backup file is consistent with the other files that have worked fine.

Does anyone know what options I have? Is there some way to look into the file to see what may be wrong?

Thanks so much in advance,

Steven

[Faber] You don't say whether the files are binary or not; I assume so. Either way, you can use hexedit to view/edit the file, or maybe just vi/less to view (NOT edit) it, then compare this file to one that worked.
Good luck!
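Two quick starting points from the command line: list the archive without extracting, to see where tar first complains, then look at the raw bytes near that spot (a healthy tar header begins with the member's file name and has the string "ustar" at offset 257 of the header block):

tar tvf 2003-07-17.tar        # the listing stops (or skips) where the damage starts
hexdump -C 2003-07-17.tar | less   # search for the last good file name and eyeball the bytes after it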


LJWNN Tech Tips

Mon, 27 Jan 2003 15:41:22 -0800
LJWNN (Linux Journal Weekly News Notes)


Wireless but Wary - Print Safely

If your main home network is a wireless network, you don't want to wake up in the morning and find some joker has printed many pages of stuff to your networked printer. Put the printer on a wired, private network segment, and print to it with ssh.

To do this, install a small wrapper script as lpr on your wirelessly connected laptop, along the lines of the sketch below.
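
The script from the original tip isn't reproduced here; a minimal version, assuming the wired print server answers to the name "printserver" and accepts your ssh key:

#!/bin/sh
# hand the job (stdin or file arguments) to lpr on the wired print server
cat "$@" | ssh printserver lpr

Any print job that goes through this fake lpr ends up on the wired segment, out of reach of wireless neighbours.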

Who Got Your Vacation Message?

The vacation autoresponder replies to mail that arrives while you're away from your e-mail. You can see who received your message with:

vacation -l | cut -d ' ' -f 1 - > people_who_got_vacation_message


Spring Cleaning For Continuous Upgrades

If you have an easy-to-upgrade Linux system, you end up with a system that's been upgraded many times instead of backed up and reinstalled.

To get rid of all the unused libraries from your Debian system, try the deborphan utility: http://www.tribe.eu.org/deborphan

or, of course:

apt-get install deborphan

It finds all the libraries that no longer have anything depending on them.

To purge unused libraries, simply do this:

deborphan | sudo xargs apt-get -y --purge remove


Faster Web Service? Use that CPU

Want to make your web server faster without getting a faster connection? All common browsers will transparently download content with gzip compression, but your out-of-the-box Apache probably doesn't have mod_gzip installed and turned on. Get the source from: http://www.schroepl.net/projekte/mod_gzip

...and add the following lines to your httpd.conf to turn it on:

LoadModule gzip_module /usr/lib/apache/1.3/mod_gzip.so

mod_gzip_on                 Yes
mod_gzip_maximum_file_size  0
mod_gzip_keep_workfiles     No
mod_gzip_temp_dir           /tmp
mod_gzip_item_include       mime ^text/.*

We don't use it for images, which are already compressed, but it compresses most of the HTML pages on one test server by 50 to 80 percent.



Cure Num Lock Madness

When you boot Linux, the kernel turns off Num Lock by default. This isn't a problem if, for you, the numeric keypad is the no-man's-land between the cursor keys and the mouse. But if you're an accountant, or setting up a system for an accountant, you probably don't want to turn it on every single time.

Here's the easy way, if you're using KDE. Go to K --> Preferences --> Peripherals --> Keyboard and select the Advanced tab. Select the radio button of your choice under NumLock on KDE startup and click OK.

If you only run KDE and want Num Lock on when you start a KDE session, you're done. Otherwise, read on.

To set Num Lock on in a virtual console, use:

setleds +num

If you choose to put this in a .bashrc file to set Num Lock when you log in, make it:

setleds +num &> /dev/null

...to suppress the error message you'll get if you try it in an xterm or over an SSH connection.
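
If you want Num Lock on for every virtual console at boot time rather than per login, one common variation (run from a boot script such as /etc/rc.local) is:

for tty in /dev/tty[1-6]; do
    setleds -D +num < "$tty"    # -D also makes it the default for that console
done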

Finally, here's the way to hit this problem with a big hammer--make the numeric keypad always work as a numeric keypad in X, no matter what Num Lock says. This will make them never work as cursor keys, but you're fine with that because you have cursor keys, right? Create a file called .Xmodmap in your home directory, and insert these lines:

(from a Usenet post by Yvan Loranger: http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BFD087F.2000300%40iquebec.com&rnum=3+)
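
The lines from that post aren't reproduced above; the usual recipe looks roughly like this (keycodes are typical for a PC keyboard under XFree86 -- check yours with xev, and load the file with xmodmap ~/.Xmodmap if your X session doesn't do it for you):

! keypad always produces digits, no matter what Num Lock says
keycode 79 = 7
keycode 80 = 8
keycode 81 = 9
keycode 83 = 4
keycode 84 = 5
keycode 85 = 6
keycode 87 = 1
keycode 88 = 2
keycode 89 = 3
keycode 90 = 0
keycode 91 = period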

A Dedicated Key for ssh Port Forwarding

Dramatis personae:


dmarti: example user name
bilbo: your desktop system
frodo: host running sshd
linuxjournal.com: some web site

Port forwarding also is called tunneling, so I'll call the key "tunnel". cd to your .ssh directory and create the key:

dmarti@bilbo:~/.ssh$ ssh-keygen -t dsa -f tunnel
Generating public/private dsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in tunnel.
Your public key has been saved in tunnel.pub.
The key fingerprint is:
77:b4:02:d9:32:c2:cc:18:58:c3:23:0a:13:46:a7:fa dmarti@capsicum

Now edit tunnel.pub and add the following options to the beginning of the line:

command="/bin/false",no-X11-forwarding,no-agent-forwarding,no-pty

That means this key is no longer any good for anything but port forwarding, because the only command it will run is /bin/false, and it won't forward X or agent commands.

sshd understands the options only when reading the key from authorized_keys, but if you put the options into the original .pub file, they'll stay with the key wherever it goes.
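
The result should be a single line in tunnel.pub (and later in authorized_keys) that looks roughly like this, with the key material abbreviated here:

command="/bin/false",no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAAB3Nza...= dmarti@bilbo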

Now copy tunnel.pub to the end of your .ssh/authorized_keys at all the hosts to which you want to tunnel, and try it:

dmarti@bilbo:~$ ssh -i ~/.ssh/tunnel frodo
Connection to frodo closed.

No errors, nothing runs; that's what you want. If you get errors, you may have mangled the authorized_keys file on the server end; if you get a shell you need to check and fix the options.

Another possibility is that if you're running with ssh-agent and have the SSH_AUTH_SOCK environment variable set, you could be using a key provided by ssh-agent instead of the one on the command line. Put env -u SSH_AUTH_SOCK in front of the command line to be sure not to use the agent.

Tunnel time! Let's use the long-suffering linuxjournal.com web server as a guinea pig and make a tunnel:

dmarti@bilbo:~$ ssh -i ~/.ssh/tunnel -N -L 8000:linuxjournal.com:80 frodo

To review that command line: -i ~/.ssh/tunnel uses the restricted tunnel key, -N tells ssh not to run a remote command (just forward), and -L 8000:linuxjournal.com:80 forwards local port 8000, via frodo, to port 80 on linuxjournal.com. Point a browser at http://localhost:8000/ and you're reading linuxjournal.com through the tunnel.


Snip those extra quotes with vim

It's always inconsiderate to quote more of someone's posting than you have to in a mailing list. Here's how to bind a key in Vim to delete any remaining quoted lines after the cursor:

map . j{!}grep -v ^\>^M}

...where . is whatever key you want to bind, and ^M is a literal carriage return (typed in Vim as Ctrl-V followed by Enter).



Train your anti-spam tools

If you want to train a Bayesian spam filter on your mail, don't delete non-spam mail that you're done with. Put it in a "non-spam trash" folder and let the filter train on it. Then, delete only the mail that's been used for training. Do the same thing with spam.

It's especially important to train your filter on mail that it misclassified the first time. Be sure to move spam from your inbox to your spam folder instead of merely deleting it.

To do the training, edit your crontab with crontab -e and add lines like this:

6 1 * * * /bin/mv -fv $HOME/Maildir/nonspam-trash/new/* $HOME/Maildir/nonspam-trash/cur/ && /usr/local/bin/mboxtrain.py -d $HOME/.hammiedb -g $HOME/Maildir/nonspam-trash

6 1 * * * /bin/mv -fv $HOME/Maildir/spam/new/* $HOME/Maildir/spam/cur/ && /usr/local/bin/mboxtrain.py -d $HOME/.hammiedb -s $HOME/Maildir/spam

Finally, you can remove mail in a trash mailbox that the Bayesian filter has already seen:

2 2 * * * grep -rl X-Spambayes-Trained $HOME/Maildir/nonspam-trash | xargs rm -v

2 2 * * * grep -rl X-Spambayes-Trained $HOME/Maildir/spam | xargs rm -v

Look for more information on Spambayes and the math behind spam filtering in the March issue of Linux Journal.



Who knows what time it really is?

It's easy to see what timeserver your Linux box is using with this command:

ntptrace localhost

But what would happen to the time on your system if that timeserver failed? Use

ntpq -p

to see a chart of all the timeservers with which your NTP daemon is communicating. An * indicates the timeserver you currently are using, and a + indicates a good fall-back connection. You should always have one *; one or two + entries mean you have a backup timeserver as well.



Tell cd how to get there

In bash, you can make the cd command a little smarter by setting the CDPATH environment variable. If you cd to a directory, and there's no directory by that name in the current directory, bash will look for it under the directories in CDPATH. This is great if you have to deal with long directory names, such as those that tend to build up on production web sites. Now, instead of typing:

cd /var/www/sites/backhoe/docroot/support

...you can add this to your .bash_login:

export CDPATH="$CDPATH:/var/www/sites/backhoe/docroot"

...and type only:

cd support

This tip is based on the bash section of Rob Flickenger's Linux Server Hacks.



Make the most of Mozilla

In order to store persistent preferences in Mozilla, make a separate file called user.js in the same directory under .mozilla where your prefs.js file lives.

You can make your web experience seem slower or faster by changing the value of the nglayout.initialpaint.delay preference. For example, to have Mozilla start rendering the page as soon as it receives any data, add this line to your user.js file:

user_pref("nglayout.initialpaint.delay", 0);

Depending on the speed of your network connection and the size of the page, this might make Mozilla seem faster.



To each their own - window features in Sawfish

If you use the Sawfish window manager, you can set window properties for each X program, such as whether it has a title bar, whether it is skipped when you Alt-Tab from window to window and whether it always appears maximized. You even can set the frame style to be different for windows from different hosts.

First, start the program whose window properties you want to customize. Then run the Sawfish configurator, sawfish-ui. In the Sawfish configurator, select Matched Windows and then the Add button.

 


Copyright © 2003. Copying license http://www.linuxgazette.net/copying.html
Published in Issue 93 of Linux Gazette, August 2003
