
"Linux Gazette...making Linux just a little more fun!"


CHAOS: CHeap Array of Obsolete Systems

By Alex Vrenios


Introduction

If you are anything like me, you're probably not exactly sure how fast the latest processor is. You probably didn't wait in line to buy the latest Windows upgrade, and the machine you use to get your work done probably doesn't look too good next to even the $1000 specials. Maybe, like me, you need a little spice in your computing time slice.

This article describes a year-long project to create a network of old PCs - a loosely coupled multi-processor, if you will - all for the cost of a reasonably priced PC, a lot of my personal time, and a little bit of luck.

Last year our vintage 1988 Deskpro 386s reached the limits of their upgradeability, and they still couldn't run all the applications I use at work. It was time to buy new. My wife got the first new one this time, and I decided to wait a little while longer. The two old machines were top of the line in their day. I still have all the manuals, the original maintenance diskettes, and a few spare parts. I'll be sorry to see them go.


It Begins

With the new PC up and running I found myself reorganizing things. I went to my favorite computer store for a cable. Being one to avoid paying for a new cable whenever I can, I went into the back room - the salvage area - very much like a high tech junk yard. There, near the corner, on a bottom shelf, were three Deskpro 386s, just like my old ones at home!

I moved closer. (Didn't want to cause a scene, you know.) They were priced from $100 to $150, and the stickers were yellow. The big sign on the wall said yellow meant I could take another 20% off. I did some quick mental math and decided to offer him $300 for all three of them.

"Excuse me," I said. "The old Deskpro 386s in the corner?" "Twenty bucks apiece," he said. I put my credit card on the counter and the PCs in my trunk. He even threw in three AC cords. After the deed was done, I heard myself asking about others like them because I had to build a home network. He even agreed to give them to me at the same rate!

I took them home, took them all apart, and blew out some nasty dust. The cases cleaned up like new with a little spray cleaner. (Okay, a lot of spray cleaner.) They all had at least 40 MB hard drives and standard floppy drives, and some even had extra memory. Every one of them booted, and all the hard drives reformatted properly. This was surely an omen.

The cheapest network cards I could find were NE2000 compatible 10Base2 at $29 each. I got commercially made coax cables because I know what I can do to a BNC connector with a soldering iron. Where was I going to put all this stuff?


The Plan

I have a desk, credenza, and a side table in my little office area at home. The side table happens to be wide enough for three PCs to sit side-by-side under it, on floor pads. I cut a shelf to fit under it and got two sliding keyboard drawers for the top. Two on top, with keyboards and monitors, three on the shelf, and three on the floor makes eight - that's a nice sized network. I got a pair of 1x4 data switches to connect the pair of VGA monitors and keyboards to each set of four machines. Mice do not switch well, so only the top two machines have them. For what I wanted to build, a lot of mice were not necessary anyway.

I found three more matching 386s and a very clean Deskpro 486 that I just couldn't pass up. (It even had a CD-ROM drive!) My final configuration uses the 486 as the "build" machine, seven 386s as the multi-processor test bed, and the eighth 386 as a spare. The two monitors, keyboards, and mice look good up top. The matching PCs underneath look very natural. The rat's nest of wires is tucked out of sight.

The Red Hat Linux version 4.2 box said it would work in character mode on a 40 MB hard drive, but required 8 MB of RAM to run. I did some quick combinatorics and bought the minimum number of memory chips that would bring every machine up to that standard. Time to saddle up.

I used a DOS boot diskette to bring up each machine, establish the type codes for the hard drives, and initialize the network cards. Each card came with a tee connector, and the coaxial cables went together quickly.

I got a small label maker and named the 386s after the seven deadly sins. The 486 was named omission. A local sysadmin friend said 192.64.9.1 through 192.64.9.8 would do fine for my IP addresses. This was starting to look pretty good.
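Every machine would eventually carry the same handful of lines in its /etc/hosts file, something like the following. (Which sin got which address is illustrative here; any assignment works on an isolated segment like this one.)

    127.0.0.1     localhost
    192.64.9.1    pride
    192.64.9.2    envy
    192.64.9.3    gluttony
    192.64.9.4    lust
    192.64.9.5    anger
    192.64.9.6    greed
    192.64.9.7    sloth
    192.64.9.8    omission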


The Installation

I've done my share of software installations, including a few operating systems. Red Hat tries to make things as easy as possible for the reasonably experienced person, so I expected an easy time of it. Not true.

In hindsight I guess it all makes perfect sense, but there were a few dark moments. Asking for a "Default Gateway" and a "Primary, Secondary, and Tertiary Nameserver" was a bit over my head. (I've got eight machines on a private network. I don't need no stinkin' nameserver... Do I?) And a friend had to set me straight on how many partitions I really needed, explaining how a single partition containing a swap "file" works fine under Linux. Oddly, the installation program doesn't ask for NFS mounts if only one partition exists. (It seems to me that's when you need them most.) I had to add this information manually to the /etc/fstab file after the installation was complete. While I was at it, I updated the /etc/hosts file and switched both accounts to use the C shell.

I still haven't a clue how to create a Linux boot disk. Nor do I understand the "rescue" mode on the installation boot floppies. When the network card "autoprobe" actually recognized my NE2000 compatible, however, I knew this was all going to work out fine. And when the second machine started reading the CD-ROM drive in the first one, I got a little smug.

When I got to one of the machines with a 40 MB hard drive, I discovered that a 40 MB set of installation files doesn't fit on it. After frantic posts to the newsgroups and mailing lists, I learned that I could de-select some of the software components I didn't need and whittle the installation set down to 35 MB, which fit nicely. With /home and /usr mounted through NFS from the big 486, I had no fears of running out of work space. In addition to the root account for maintenance, I created one user account for myself so I could do the ordinary stuff.
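For anyone wiring up something similar, the NFS plumbing amounts to a couple of lines on each side: the 486 lists what it shares in /etc/exports, and each 386 mounts those directories at boot from its /etc/fstab. The options below are a reasonable guess at a working setup rather than a transcript of mine (sharing /usr read-only, for instance, is just the cautious choice):

    # On the 486 (omission), in /etc/exports:
    /home   (rw)
    /usr    (ro)

    # On each 386, two lines added to /etc/fstab:
    omission:/home   /home   nfs   defaults   0 0
    omission:/usr    /usr    nfs   ro         0 0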


The Network

With the evidence mounting, I still didn't really believe it all worked until I actually switched between systems and pinged back and forth. When I compiled a simple client/server pair of test programs, started the server on one machine, and the client on another, I was convinced. This is good.
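In case it helps anyone reach that same smug moment, the test pair was nothing more elaborate than the classic UDP echo exercise. What follows is a sketch of the idea rather than the exact code I ran; the port number 5050 and the file names are arbitrary.

    /* echod.c -- a bare-bones UDP echo server: whatever arrives, send it
     * straight back to whoever sent it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in me, peer;
        socklen_t len;
        char buf[256];
        int n;

        memset(&me, 0, sizeof(me));
        me.sin_family = AF_INET;
        me.sin_addr.s_addr = htonl(INADDR_ANY);
        me.sin_port = htons(5050);
        if (s < 0 || bind(s, (struct sockaddr *)&me, sizeof(me)) < 0) {
            perror("echod");
            exit(1);
        }
        for (;;) {
            len = sizeof(peer);
            n = recvfrom(s, buf, sizeof(buf), 0, (struct sockaddr *)&peer, &len);
            if (n > 0)
                sendto(s, buf, n, 0, (struct sockaddr *)&peer, len);
        }
    }

    /* echoc.c -- the matching client: send one word to the server at the
     * address given on the command line and print whatever comes back. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(int argc, char *argv[])
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in srv;
        char buf[256];
        int n;

        if (argc != 3) {
            fprintf(stderr, "usage: echoc server-address word\n");
            exit(1);
        }
        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_addr.s_addr = inet_addr(argv[1]);   /* e.g. 192.64.9.2 */
        srv.sin_port = htons(5050);

        sendto(s, argv[2], strlen(argv[2]), 0, (struct sockaddr *)&srv, sizeof(srv));
        n = recvfrom(s, buf, sizeof(buf) - 1, 0, NULL, NULL);
        if (n < 0) {
            perror("echoc");
            exit(1);
        }
        buf[n] = '\0';
        printf("got back: %s\n", buf);
        return 0;
    }

Start echod on one machine, run "echoc 192.64.9.2 hello" from another, and if "hello" comes back you know the sockets, the cards, and the coax are all doing their jobs.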

So what, you might ask, am I going to do with an 8-PC network?

I've taken a few graduate courses in distributed and fault tolerant systems, and I read a lot. There is something I find fascinating about a distributed algorithm: locally each of the individual processes obeys the same set of rules, but globally the "system" exhibits an emergent behavior. All these individual processes look like a single machine to the casual user.

With sophisticated software running on each of the seven machines, they can band together to form a single computer that runs application software: it takes advantage of the parallelism inherent in most algorithms by running a piece of the whole on each machine, then collecting and combining results as each piece completes. The "sophisticated software" is called a distributed operating system, and the application it runs has to be modified by hand in order to realize any performance improvement. The January 1998 issue of Linux Journal is dedicated to such systems. Beowulf clusters, discussed in that issue, are within my reach now that Red Hat has released its Extreme Linux CD, with the associated NASA code and documentation.

Beyond number-crunching clusters, there are database server clusters. The many machines are used to distribute the client transaction load so that no one machine crashes from overwork. If a process fails, an associated monitor process might restart it on the same machine or on some other one. And when one machine gets bogged down for whatever reason, some of its processes might be intentionally stopped and restarted elsewhere just to redistribute the overall load. This is leading-edge fault-tolerance research material.

Finally, there are dozens of simple distributed algorithms along with dozens of variations on each. Without any add-on sophisticated software, one may use a C compiler and some UDP socket programming to first imitate what has been done, then perhaps improve on it. I expect this will be what I work on first. The seven 386s can each run a copy of the algorithm under test, instrumented to write behavior trace records, and the 486 can monitor these traces, displaying the global behavior in some way that makes sense to me.
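To make that concrete, the instrumentation on each 386 can be as simple as a little routine that fires a fixed-size record at the monitor over UDP whenever something interesting happens. The record layout, the port number, and the function name below are my own inventions, shown only to suggest the shape of the thing:

    /* trace.c -- sketch of per-node instrumentation: send a small trace
     * record to the monitor (the 486, omission, at 192.64.9.8) over UDP.
     * The record layout and port 5151 are assumptions for illustration. */
    #include <string.h>
    #include <time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define MONITOR_IP   "192.64.9.8"
    #define MONITOR_PORT 5151

    struct trace_rec {
        long when;        /* when the event happened, in seconds */
        int  node;        /* which of the seven 386s is reporting */
        char event[32];   /* a short tag: "request", "reply", "token", ... */
    };

    void trace(int node, const char *event)
    {
        static int s = -1;
        struct sockaddr_in mon;
        struct trace_rec r;

        if (s < 0)
            s = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&mon, 0, sizeof(mon));
        mon.sin_family = AF_INET;
        mon.sin_addr.s_addr = inet_addr(MONITOR_IP);
        mon.sin_port = htons(MONITOR_PORT);

        memset(&r, 0, sizeof(r));
        r.when = (long)time(NULL);
        r.node = node;
        strncpy(r.event, event, sizeof(r.event) - 1);

        /* one datagram per event; the monitor collects and displays them */
        sendto(s, &r, sizeof(r), 0, (struct sockaddr *)&mon, sizeof(mon));
    }

    int main(void)
    {
        trace(3, "startup");    /* e.g., node 3 announcing itself */
        return 0;
    }

Since every node is the same i386 architecture, shipping the raw structure is safe enough; a more careful version would format each record as a line of text before sending it.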


Conclusion

Whatever your computing interests, a hardware architecture must come first. The current glut of high-performance PCs means yesterday's machines can be had for next to nothing, giving us the opportunity to build a system that fits our needs without spending too much money. The Linux operating system provides a substrate upon which an interesting software project may grow. I recognized that a small network of PCs would give me a platform that fits well with what I think is fun. I hope my experience will encourage you to pursue your own.

My next step is to define and construct a framework for my 486 to become the monitor, sampling and reporting the behavior of some distributed algorithm running on the other machines. Maybe that will be the subject of my next article here.


Copyright © 1998, Alex Vrenios
Published in Issue 30 of Linux Gazette, July 1998

