August 2002, Issue 81       Published by Linux Journal

Front Page  |  Back Issues  |  FAQ  |  Mirrors
The Answer Gang knowledge base (your Linux questions here!)

Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm],
This page maintained by the Editor of Linux Gazette,

Copyright © 1996-2002 Specialized Systems Consultants, Inc.

The Mailbag

HELP WANTED : Article Ideas

Send tech-support questions, Tips, answers and article ideas to The Answer Gang <>. Other mail (including questions or comments about the Gazette itself) should go to <>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.

Unanswered questions might appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content. There is no guarantee that questions will ever be answered, especially if not related to Linux.

Before asking a question, please check the Linux Gazette FAQ (for questions about the Gazette) or The Answer Gang Knowledge Base (for questions about Linux) to see if it has been answered there.

Linux terminal services server can't connect to internet via network

Tue, 02 Jul 2002 12:22:11 -0500
pat ring (pat_ring from

Great eZine. I think the typos, editorial asides and comments, and rough edits are an endearing and "personalizing" experience for Linux enthusiasts. Your nitpicking detractors are obviously ignorant of the fact that LG is a labor of love in what seems to be the spirit of the open source environment. I know this is a terribly long-winded question, so I apologize in advance.

Thanks, Pat. As for long questions, it's okay. We like that you actually made some effort ahead of time. In fact since we didn't reply to your detailed request I have to assume we're stumped, so I'm letting the readers take a crack at it. -- Heather

I have a stumper that I can't seem to get answered. I suspect this is more of a two-NIC network question than a LTSP or K12LTSP question.

I have been testing terminal services. I couldn't really get the actual LTSP working properly (something wrong with X on the client that I couldn't figure out), so I downloaded and installed the K12LTSP version of Red Hat 7.2.

This is a great version that offers LTSP as an install option, and it works great right out of the box. My clients log right in and can use terminal services perfectly. However, on my normal installations of Red Hat, I can assign a static IP to the Linux PC and use my Win2K gateway to surf the internet. But when I install the LTSP'ized version with two NICs, I can ping the gateway and the gateway can ping the LTSP server, but I can't surf the internet. I think I've tried just about everything to get the gateway working for internet access. If I can get the LTSP server on the internet via the gateway, then I believe the LTSP clients will fall into place as well.

Some details.

My network "server" is actually a Win2k PC with internet connection sharing.

I use VNC to connect to the gateway to open and close dial-in connections to the internet. I have to use Win2k because I need an "internet answering machine" to answer the phone when I am online, and there is no Linux support in this area (living in the sticks, as I do, also makes separate lines very much cost-prohibitive for dial-in access to the internet).

The terminal services PC has two NICs. ETH0 attached to the terminal services clients via a 3com switch. ETH1 is attached via an additional switch to my network.

I might have a problem with the way the subnets are setup:

ETH0 is assigned by the K12LTSP default install to and serves the LTSP clients .100 to .253.

ETH1 also gets its 192.168.0.x IP address either manually or through DHCP from the network. It shouldn't matter whether I manually assign the IP or let DHCP handle the assignment, but I have known for years that if I let DHCP handle the assignment, I can't surf, so I just use . This may be because the DHCP services via Windows Internet Connection Sharing aren't really full DHCP.

My win2k gateway PC is and I always enter this address as the DNS server.

I tried to manually change the LTSP subnet on ETH0 to, etc., but I'm not sure this is the problem. Does the fact that the two subnets are using the same subnet scheme create the problem? I could see if the clients couldn't surf, then that may be the case, but the LTSP gateway can't surf.

After about 30 installs, different configurations, etc., I'm not sure where to go further with this issue. Can I provide some conf files that might give you an idea of where I need to go? Is this a DNS or a route problem? Can the same IP address scheme be used because the subnets are on different NICs, or is this the problem? Can you push me in the right direction of where to get some help?

Thanks for your help.
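One thing worth checking first (a sketch only; the addresses below are the Windows ICS defaults, not Pat's elided values): the eth1 side needs a default route and DNS pointing at the ICS box, and the two NICs must sit on different subnets. On a Red Hat box that looks roughly like:

```text
# /etc/sysconfig/network-scripts/ifcfg-eth1   (example addresses)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.0.2        # same subnet as the Win2k ICS box
NETMASK=255.255.255.0

# /etc/sysconfig/network
GATEWAY=192.168.0.1       # default route via the ICS box (ICS always takes .1)

# /etc/resolv.conf
nameserver 192.168.0.1    # ICS proxies DNS on the host address

# eth0 (the LTSP side) must then use a DIFFERENT subnet, e.g. 192.168.1.x.
# If both NICs claim 192.168.0.x addresses, the kernel can't decide which
# interface a 192.168.0.x destination lives on, and routing quietly breaks.
```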

Pls Help (Squid/2000)

Tue, 16 Jul 2002 17:03:21 +0530
Vikas Kanodia (vikas from

Hello ,

I've installed Squid-2.5.PRE8 and Samba 2.2.5 on Red Hat Linux 7.1. I wanted to authenticate Windows 2000 users in Squid, so I installed Winbind and configured it as per the documentation available on the net; the link is attached, please see the Authentication tab. <>

After doing all the things successfully... when I run Squid it gives messages like this...

[root@gnspl-prx bin]# ./squid
2002/07/15 10:46:23| Parsing Config File: Unknown authentication scheme 'ntlm'.
2002/07/15 10:46:23| Parsing Config File: Unknown authentication scheme 'ntlm'.
2002/07/15 10:46:23| Parsing Config File: Unknown authentication scheme 'ntlm'.
2002/07/15 10:46:23| Parsing Config File: Unknown authentication scheme 'ntlm'.
2002/07/15 10:46:23| squid.conf line 1746: http_access allow manager localhost localh
2002/07/15 10:46:23| aclParseAccessLine: ACL name 'localh' not found.
2002/07/15 10:46:23| aclParseAclLine: IGNORING: Proxy Auth ACL 'acl AuthorizedUsers
proxy_auth REQUIRED' because no authentication schemes are fully configured.
2002/07/15 10:46:23| aclParseAclLine: IGNORING invalid ACL: acl AuthorizedUsers
proxy_auth REQUIRED
2002/07/15 10:46:23| squid.conf line 1751: http_access allow all AuthorizedUsers
2002/07/15 10:46:23| aclParseAccessLine: ACL name 'AuthorizedUsers' not found.
2002/07/15 10:46:23| Squid is already running!  Process ID 9957
[root@gnspl-prx bin]#

Pls guide me...


Vikas Kanodia

This is a bit more complicated than the stuff Thomas' "Weekend Mechanic" column covered in issue 78 ( -- anybody care to help him out?
Some articles on living the life of a Windows server when you're really a Linux box would be cool, too. -- Heather
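For what it's worth (a sketch, not tested against Vikas's setup): "Unknown authentication scheme 'ntlm'" usually means the Squid binary was built without NTLM support, so it has to be rebuilt with the NTLM pieces enabled and squid.conf must then name the helper. The helper name and paths below are the ones shipped in the Squid 2.5 source tree; adjust to where yours installed:

```text
# at build time (Squid 2.5 source)
./configure --enable-auth="ntlm,basic" --enable-ntlm-auth-helpers="winbind"

# in squid.conf
auth_param ntlm program /usr/local/squid/libexec/wb_ntlmauth
auth_param ntlm children 5
```

Two other things visible in that log: "ACL name 'localh' not found" looks like a wrapped or truncated line around squid.conf line 1746 (probably "localhost" split in two), and "Squid is already running! Process ID 9957" means the old instance must be stopped (squid -k shutdown) before the new config will take effect.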

suggestion for link...

28 Jul 2002 23:10:53 -0400
D. Goel (deego from


I went to and tried to find a 'subscribe to paper version' link to send to a coworker, but could not find one.

If you could, please let me know of such a site, and include a link to it on the main page. :)


Maybe we should put a link on the mirrors page, pointing to the FAQ entry about the formats LG isn't available in, since it describes how to make quality printouts. -- Heather
Attention publishers: there continues to be high demand for a print version of LG .
LG is not available in printed format. Since it's freely redistributable, anybody has the right to offer this service. Since nobody has done this in the six years LG has been in existence, even though there have been numerous requests, one has to consider why. It costs money to print and deliver a paper version, and the subscription rate would be higher than most people would be willing to pay. Those outside the publisher's own country or region can forget it; the mailing cost alone would be prohibitively high. Plus there's the labor-intensive world of "subscription fulfillment": taking down names and addresses, processing payments, updating addresses, etc. It can't all be automated, unless you can somehow wave a wand and get everybody to fill out the forms perfectly every time.
Commercial magazines can justify all these costs by building a business around selling advertising space, but LG does not accept advertisements. Consumer Reports doesn't accept advertising either, but again, it has built a whole business around that model. One can't see the incentive for building such a business around Linux Gazette , especially since Linux print magazines are already available. (Unashamed plug for Linux Journal .) -- Mike

Finding a Windows user's home directory from Linux

Wed, 10 Jul 2002 11:13:39 -0600
Dee Abson (dee.abson from


I've decided to try to integrate a Red Hat 7.3 computer into our Windows NT domain-based network, going for that brass ring of single sign-on and integrating the Windows necessities: access to Windows print queues and Windows file servers.

I have successfully implemented winbind (and samba, natch) under Red Hat 7.3 and am now able to log on using a Windows domain-based user name and password. Through a little more research and such, I have Linux configured so the user directory is set up automatically when the Windows user logs in for the first time, printconf makes it easy to connect to an SMB-based print queue and LinNeighborhood helps locate and mount SMB file shares. The only missing piece of the puzzle, as far as I'm concerned at the moment, is mapping the Windows user's home directory (which is a share on an SMB server) to a subfolder under their Linux home directory. I'm certain that I can accomplish the automatic mapping using the PAM module pam_mount (available at if anyone's interested in a look); it's retrieving the user's Windows home directory that eludes me.

Thus my question is this: How can I retrieve the Windows user's home directory, that elusive little string that will complete my puzzle, from my Red Hat system?

Many thanks,
Dee Abson, MCSE

Okay, this question has two parts. As an MCSE he may already know where MSwin keeps this valuable information stored; what he needs to know is how to make Linux properly ask for it, or dig it up across the shares.
It wouldn't be as easy as running 'grep' against some plaintext file, or maybe in a pipeline combined with 'strings'... would it? If it would, is that a security problem?
p.s. Don't attach HTML along with the plaintext. It's so messy, and it sends 3 times the text for the exact same message. -- Heather

article idea - making the minidistro

Mon, 8 Jul 2002 14:25:24 -0400
Tony Tonchev (tony from


This article idea may sound silly. I don't even know how to describe the topic, but here it goes...

For some time now, I've been thinking of developing a minimal/modular Linux distribution designed to allow small businesses to use Linux for their server needs rather than the M$ solutions. This idea is inspired partially by the PizzaBox file server that Kyzo ( made available a few years ago, but their product is crippled and not Open Source. The same is partially true for and their excellent product.

Anyway, my problem is that I don't know where to start. I've looked at "Linux From Scratch" and "BYO Linux", but the most helpful information came from "Building Tiny Linux Systems with Busybox" Parts 1 through 3, published in ELJ. The three articles did help me understand some fundamentals and allowed me to actually plan my next step more intelligently.

Imagine having a modular Linux-based server that consists of a core and modules. The core will contain the basic services (kernel, security, networking, DHCP, etc.). Web-based administration of all services should be available, as well as equivalent console-based administration. Typical modules would be a Web Server module, Workgroup File Server module, Mail Server module, Firewall module, FTP module, etc. All modules should be independent of each other and include their respective web- and console-based administration components.

In other words if I want just a file server, then I install the core and file server module only. If I want a file and mail server then I install the core, file and mail modules and that's it.

Here is yet another requirement: the core and all modules must have the smallest memory footprint reasonably possible. I like uClibc, BusyBox and TinyLogin because they all fit on a floppy. Why can't the core and each installable module fit on one or two installation floppies? That would be easy to download and install, unlike a 600 MB ISO.

As you can probably tell, I know where I want to go, but don't know how to get there. Maybe my whole idea is flawed due to my lack of knowledge. An article or articles on how to build that unique Linux mini-distribution will be great.


Thanks for the time

Tony Tonchev

Hmm, let me see if I have this right. You want to be able to do all these cool things, where maybe the real core fits on one floppy, and maybe each "module" as you put it (not to be confused with kernel modules) fits on a floppy of its own. Load up enough of them and you have the dream server, which fits in your lunchbox or purse.
I note that a 196 MB CD-ROM fits in the same space as one floppy (except that it's slimmer). But you're right - watching someone take us through this process of development would make a great article.
You may want to keep an eye on current development in the LNX-BBC project. Nope, it has nothing to do with Britain's prime television station. It's what happens when you use cloop compression to cram a fairly usable Linux setup on a 50 MB "bootable business card" . Think LNX = squished LiNuX. Since you're interested in rolling your own, I recommend reading about the new GAR setup and, quite literally, checking it out. (
There are piles of specialized "mini distros" out there. This request clearly aims towards the general use setup. A making-of article for any of the minis might be fun to see, though. -- Heather



Mon, 1 Jul 2002 08:37:14 -0400
Scott Sharkey (ssharkey from

Hi Heather,

Just read your TAG about IMAP. You're right that Courier-IMAP is the best.... run it with Postfix instead of sendmail and you'll be even happier. Then mix in Sqwebmail (from Courier's author) and you'll be REALLY spoiled.

Just for grins, I mixed in OpenLDAP, and now have a server with no Unix accounts and full IMAP/POP/WebMail capability that is very easy to maintain.

I use sylpheed as a mail client so far -- gotta try Evolution sometime. The OpenLDAP handles the address book too.


LG #80: add to `Red Hat and USB devices'

Mon, 01 Jul 2002 14:39:17 +0100
Daniel Baumann (danielbaumann from

hi lg team,

i have a little addition to the article `Red Hat and USB devices' in your current issue.

the missing kernel config files for the different redhat default kernels are located in /usr/src/linux-*/configs.

greetings, daniel

Normally I don't leave sig blocks in, but since we occasionally get requests asking us about free ISPs who cater to linux users... this isn't specifically an endorsement, but you're all welcome to go look. -- Heather

Get your free email from

Using debug to write fresh MBR

Wed, 3 Jul 2002 18:53:23 -0400
Ben Okopnik (the LG Answer Gang)

A recent follow-up to my MBR-rewriting article: a guy who had an E: drive (yup, Wind*ws) that he wanted to blow off contacted me - seems that Partition Magic wouldn't touch it as it was. He either didn't want to or didn't know how to open up the machine and swap cables, so I tweaked that debug program for him:

mov dx,9000
mov es,dx
xor bx,bx
mov cx,0001
mov dx,0080
mov ax,0301
int 13
int 20

Change the numbers in "mov dx,0080" for the appropriate drive:

hda	C:	0080
hdb	D:	0081
hdc	E:	0082

Worked like a charm, according to the very happy fella.


Sat, 06 Jul 2002 00:45:22 +0000
Daniel Young (alandanielyoung from

A question. Do you pay your Mirrors?

They don't pay us, either.
-- Dan Wilder
No. The mirrors are run by people who want to host a mirror.
You didn't ask, but none of the LG staff is paid either; we're all volunteers. I'm the only one who's "paid", but paid in the sense that SSC donates some of my work time to LG. (I normally do web application and sysadmin stuff for Linux Journal.)
-- Mike Orr, Editor, Linux Gazette

Re: [LG 76] mailbag #1 cybercoffee shop

Wed, 10 Jul 2002 13:43:00 -0700
sandra (sfg from

I just want to make a small mention of our own little cybercafe... we're not gurus but we're definitely geeks here. :)

Sandra Guzdek (waving hi to Heather Stern)
Sip N Surf Cybercafe
Eugene, OR

Hi Sandra! (Sandra is the webmaster at one of my client sites.) Thanks to Sandra I also found a really cool search engine specific to hunting up internet coffeeshops and kiosks - - which may be a little spotty since it relies on visitor reports, but at least it's international in scope. I was kind of amused when I looked up San Jose and had to pick through the entries, checking that I was finding places in California. -- Heather


Wed, 17 Jul 2002 19:59:29 -0500
Tim Chase (gumnos from

As a long-time self-taught user of Linux/Unix/Ultrix (and several other flavours), I've become addicted to such handy tools as vi, grep, sed, awk, ctags, and the bazillion other little utilities that can be so artistically chained together to produce the desired results. I've stumbled across your LG archives, and all I can say is "WOAH!" I'm going to have to find myself a text-to-speech translator so I can read/listen-to all of this good stuff whilst at work, because there's just so much in here. Thanks for such a fabulous (and fun!) resource...

-tim chase

On behalf of everybody here, THANKS! BTW, I've heard Festival ( is pretty nice. Lots of things at Freshmeat that are supposed to use speech really use either it or ViaVoice under the hood. -- Heather

Ideas, huh?

Fri, 19 Jul 2002 22:12:14 -0700
The Gaijin (blades from

Home-brew hardware plans! Generic GPL motherboard designs, SCSI cards, video, audio, PCI modems, NICs... everything Microsoft is trying to corner the market on. Some people feel Linux has only ten good years left if the current trend continues.

Some people believe that the Moon is made of green cheese and that big-bellied Santa Claus (with a sack of presents, no less) comes down a foot-wide chimney. "Other people are/think/do" is a very poor reason for doing something; I prefer to believe that people are _not_ sheep. -- Ben

Since the anti-trust suit, Microsoft's political contribution budget has gone from $100,000 per year to over $6.1 million, and now they're trying to get manufacturers to implement Microsoft-specific anti-piracy security measures directly at the hardware level (called "Palladium").

And those who do will end up in the same toilet as the winmodem/ winprinter manufacturers: the domain of the ignorant. I think that lesson has been well ingrained. There's a small market out there that sells to the gullible, but the whole world certainly isn't about to switch en masse. -- Ben

The only true solution I can see is to go back to the days of bread-boarding our own hardware in Dad's garage...public domain circuit designs from electronic hobbyist magazines and soldering irons. We've "de-marketized" software. Why not the hardware, too? If we can create the greatest operating system on the planet, imagine what Linux users can do with computers themselves. It would be nice to have something no organization or agency can legally touch or ruin for a buck. A collection of Linux-friendly hardware diagrams in the public domain that anyone can produce for the cost of parts alone. Our own hardware would completely end our dependency on third-party drivers and vulnerability to corporate rail-roading. I think creating our own hardware database would be the best move we could ever make.


I believe that you're seriously underestimating the difficulty and the complexity of what you propose. Even if Joe Average did have the necessary soldering, etc. skills (and I assure you that soldering multi-layer PC boards _is_ a skill, one that takes time and patience to acquire), where would he get the boards themselves? The average mainboard today is at least a six- or seven-layer type; there's no way for the average experimenter to make one of those. Besides all that, there's the troubleshooting of the finished board - I can assure you that this will be required in most cases. How many people are capable of it? How many of them will burn a trace just as they're about to wrap up the project (i.e., after they've sunk hours into it)? How many have an oscilloscope, which is what's necessary for troubleshooting high-speed digital electronics?
I suggest that mainboard manufacture is the province of highly skilled, highly knowledgeable people - not something that can be retailed to Joe Average. I suggest that a much better tactic would be to create a Linux certification authority, someone who can brand hardware "100% Linux-compatible" in bright red ink; a goal that manufacturers could strive for and easily achieve, given how much hardware support already exists in Linux. -- Ben
There is a thing called "open hardware". AFAIR they've got open PCI, AGP, bridges and such. For a short time they even had an open processor (an ARM clone), but that was pulled when ARM pissed them off. So the designs are there, but who is going to build the stuff? Writing 0.18um structures in your kitchen isn't that easy ;-)
I think that the problem lies not with us Linux users; we KNOW that M$ is up to something "bad". But what about those Windows DAUs who simply stick to Windows "because it's all so easy"? Do you think they will go through much trouble to make their own computer? No. If the thing is cheap and easy (like sharing your whole hdd with other KaZaA users ;-) they'll even let the government spy on them and allow MS to know what DVD they watch and how often.
When such M$ hardware with the Fritz chip arrives, these people will buy it (in large numbers), so that it will be hard to get hardware that does not feature these chips. But I think there will be a small market (for us Linux users and some intelligent Windows users), and where there is a market there will be a seller.
Let's hope for the best
-- Robos
While I'm a big fan of the make-it-yourself philosophy, remember that the widespread presence of all the good toys ... cars, and computers themselves come to mind ... came not from individual skilled craftsmen, but from the assembly line. I find it far easier to maintain an old 386 for ten years past its expected lifespan than to figure out how I'd compose a replacement out of loose copper wire and transistors. Given that I'm among those whom Ben describes as able to wield a soldering iron and knowing what an oscilloscope is (I don't own one, but I know where to borrow a few), I just don't think garage-made P7-oids are going to happen real soon.
The buzzword you're looking for is "economy of scale". We haven't "de-marketized" software ... we've shown there's a growing market for a much greater variety of software.
Speaking of "so easy" ... the ease is mostly an illusion, fostered by all those strong-arm OEM deals that resulted in nearly all systems being preloaded with MSwin. Now that Linux, and occasionally others, are also being pre-loaded, you'll see that particular bubble pop. It's mostly flat already, since reinstalling MSwin after it crashes too many times is so painful.
In countries where someone cannot simply wander into a department store and buy a few new couch pillows, tortilla chips and salsa, and a box of the latest rev of MSwin on special, buying into an expensive foreign standard probably won't happen either. Indeed, here's looking to a long and profitable time for companies that don't buy into the "Palladium" chip game. Can you say "sink the Clipper chip"? Knew you could. -- Heather
A better solution might be to join the struggle to give some of the power back to the people through the establishment of public campaign financing. It should help to fight many more problems than just M$ taking over.
Some URLs to check for more info about this are:
-- John Karns

HTML script maintained by Heather Stern of Starshine Technical Services,

More 2¢ Tips!

Send Linux Tips and Tricks to

Spam comments

4 Jul 2002 15:52:02 -0400
Karl Vogel (vogelke from
This is in reply to the LG issue 80 TAG blurb.

In LG 80, Heather was rumored to have said:

Almost the only spam that escapes Dan's traps anymore are those dratted conman scams telling me about how their late uncle / business partner / revered general or whatever left them a quadzillion dollars / francs or whatever and they can't get at any of it unless you as a friend / distant relative / confidant / conveniently uninvolved sucker open your bank account to help them launder it.

Do you use "ifile"? That nails just about all the spam I get, including those stupid laundering schemes. The best part is that it gets smarter with time; the more spam you feed it, the better it weeds out crap.


Mailing list:

Some tips plus a nice procmail setup and ifile database:

My .procmailrc is below.

-- Karl Vogel

See attached vogel.procmailrc.txt

Playing CD Music Digital Output

Tue, 2 Jul 2002 11:17:04 -0400 (VET)
Ernesto Hernandez-Novich (emhn from
This is in reply to the LG issue 79, help wanted #2.


Regarding Bill Parks' question in the June issue, as to how to play CD audio without the analog cable that usually connects CD-ROMs to sound cards: a similar situation occurs if you have one of the latest iBooks. There is no way to tweak the sound driver to do what he wants, but XMMS can help. He should try using the "CD Audio Player" Input Plugin (select it via Preferences -> Audio I/O Plugins) and configure it accordingly, say with /dev/hdc (the "real" CD-ROM device, not /dev/cdrom, which is usually a symlink) and /cdrom. Then put in the audio CD and open a "Playlist" in XMMS, but instead of selecting a File, select the /cdrom directory; he'll see the audio tracks there and be able to play and listen to them.

That's right, the system will be doing CDDA extraction from the CD into XMMS, which then plays it through OSS/ESD/ARTS. Ugly, but works.

Ernesto Hernández-Novich
GPG Key Fingerprint = 438C 49A2 A8C7 E7D7 1500 C507 96D6 A3D6 2F4C 85E3

Getchar and loops...

Mon, 8 Jul 2002 08:34:35 -0500 (CDT)
Jay R. Ashworth, Pradeep (the LG Answer Gang)
Question by Zaikxtox (

Hello. I'm trying to write a very simple C program that needs to watch for user input without blocking in a loop. I have programmed many times in Pascal, and there the code would be something like:

  while not keypressed do
    writeln('hello! i''m still alive');

Well... when I use C code I try the getchar function, but it waits until a key is pressed, blocking the program.

How can I know if there is a key in the buffer without blocking the execution of my programs?

Thanks in advance :) Zaikxtox

[jra] Well, you can, but it's not exactly trivial, and how you do it depends on which environment you're coding: raw-C for the glass-tty, curses/termcap, X, KDE, Gnome, etc.
This is more generic C stuff than Linux stuff; I'd recommend you look into books like The Unix Programming Environment, by (I think) Kernighan and Pike, and the Stevens books.
[pradeep] As the other poster mentioned, it depends on where you want this behaviour. Assuming that you want to do this on a console, ncurses is a great library to use. It gives you the right abstraction.
Read my howto at
Particularly the function halfdelay() should help you for non-blocking key input.


Sun, 30 Jun 2002 02:22:29 -0700
Heather Stern (Linux Gazette Technical Editor)

Recently one of the gang mentioned renaming an rpm file to a much higher version number before running alien, so that the Debian package system would not want to overwrite the result.

The key to doing that "the right way" is a value that the Debian maintainers call the epoch.

Of course people are used to seeing package versions like 1.2 or even 1.4.3p1.

In the Debian world that might be 1.4.3p1-2, meaning that this is the second time the Debian maintainer had to build the same version. Probably he or she has patches in it.

But to handle programs whose version numbers don't go constantly up like time goes forward ... a certain typesetting package comes to mind ...

Must have been some other package. According to its FAQ, TeX's version number asymptotically approaches pi, growing digits along the way. -- Heather

... they invented an epoch. Epochs start at the invisible "0" and simply count upward from there.

So a version:


Would be 98 epochs ahead of a mere:


and the same number of epochs ahead of:


If you want your package and the Debian one to live together in harmony, then rename yours to something before the version number that does not overlap:


Of course that's safest if the files inside their file list don't overlap either!

That was the problem, of course; the filesets were exactly the same. -- Ben

Using either of these methods is safer than setting a hold on the package, which is sometimes recommended, but which I've seen fail before.
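To make the ordering concrete (these version strings are invented for illustration; the original examples were lost from this page), dpkg compares the epoch before anything else, so a big upstream version never beats a bigger epoch:

```text
1.4.3p1-2      no epoch written: the invisible default
1:0.9-1        newer than 999.9-1, because any epoch outranks none
2:0.5-1        newer than 1:99.0-1, because 2 > 1
```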

crypt undefined

Tue, 2 Jul 2002 16:48:03 +0200
Chris Niekel (chris from
This is in reply to the LG issue 80, 2c Tips #8.

g++ -lcrypt server.c Error: 'crypt' undefined

The order of the arguments matter. You should try:

g++ server.c -lcrypt

The linker links from left to right and is a bit dumb. After compiling server.c, the crypt call is undefined. Then libcrypt.a is tried, and crypt is defined in there. So it will be resolved.

In your case, libcrypt.a doesn't match any undefined symbols (YET!), so it is not linked into the executable. Then server.o is linked, and that has an unresolved symbol (crypt). The linker isn't smart enough to go back to libcrypt.a.

The answerer of the question talks about name mangling. If you mix C and C++ code, you have to tell the compiler what is C. That is usually done by writing:

extern "C" void foo(int);

This tells the compiler that function foo takes an int, returns nothing and is a C function. But all standard libraries already do that for you, so it's very safe to call crypt() from C++ code.


Chris Niekel


Mon, 15 Jul 2002 14:07:38 -0400
LF11 (lf11 from
This is in reply to the LG issue 80, 2c Tips #10.

I've mainly been connecting to the internet using diald, but I've noticed that I'm only getting about 3.5 KBps, whereas on W98 I get about 5 KBps. A little experimentation shows that dialling with kppp gives about 5 KBps as well.

kppp seems to use an initialisation string of ATM1L1, but changing MODEM_INIT to "ATM1L1" in /etc/diald/connect, didn't improve the performance.

MODEM_INIT started out as "ATZ&C1&D2%C0". I changed "%C0" to "%C3" to ensure that compression was enabled, but this made no difference. I can't find an option in diald to log exactly what's sent to the modem and I can't see any conflicting options in the configuration for pppd.

Any suggestions for how to track down why kppp gets better performance than diald would be appreciated.

The modem is an MRI 56K internal modem.

Check the port speeds. It's likely that diald is using a port speed of 28.8 or 56 kbps. Try to use something well above the actual speed of the modem, as the data coming from the modem may be substantially higher in volume than the modem's line rate (due to hardware compression).

The only exception to this is with a USR 56k Faxmodem I have when used with WvDial; it must be at 56k, and I don't know why. If the computer port speed is set higher than that, what comes across the line from the modem seems to be escaped characters of some sort, along the lines of

f [18] f [18] `[1e]~[1e]~[1e][06][1e]x[1e][18]x

And pppd says "LCP timeout sending Config-Requests" in syslog. Just thought I'd let you know about this problem in case you have it.

HTH, -cj

[Neil] Beware, it doesn't read /etc/diald/diald.conf. According to the man page "diald reads options first from /etc/diald/diald.defs, then from /etc/diald/diald.options".
Putting speed 115200 in diald.options gave me a throughput of 4.9KBps downloading Mozilla 1.1 alpha.
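For reference, a minimal sketch of the relevant /etc/diald/diald.options entries (the device path is an example; adjust it for your modem):

# /etc/diald/diald.options (sketch -- device path is an example)
device /dev/ttyS1
speed 115200       # DTE speed: keep it well above the modem's line rate
modem
crtscts            # hardware flow control, needed at high port speeds

Remember that diald.options, not diald.conf, is what diald actually reads by default.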

Killing GUI applications under KDE

04 Jul 2002 08:17:43 +0530
Ashwin N (ashwin_n from

Here's a quick way of killing a GUI application that has hung or is not quitting (or that you just want to kill for fun :-). Press Ctrl-Alt-Esc and your mouse pointer turns into a skull-and-crossbones. Now, click on the offending application to kill it. This works only under KDE.

Of course, the "xkill" command does the same thing, but this is much easier and faster to use.


[Ben] Good tip, Ashwin! Under IceWM, I have "xkill" tied to "Alt-Ctrl-K" for the same functionality:
(from "~/.icewm/keys")

key "Alt+Ctrl+k" /usr/bin/X11/xkill

GRUB - Windows XP cannot load

Fri, 28 Jun 2002 16:43:08 +0100
Neil Youngman (n.youngman from
Question by Soufian Widjaja (

I found some info online saying that you can overwrite the boot loader and then install the Windows boot loader by running "fdisk /MBR" from Windows. If this is the way, how can I do that? And what happens to my Linux once we overwrite the MBR?

I think what's needed is to experiment with the GRUB command line mode. When the menu comes up, press 'c' to go to command line mode and try a few variations on the command sequence you've got in /boot/grub/menu.lst. When you come up with a command sequence that works, edit your GRUB config to match.

Two things to try are:

1. After the rootnoverify command, add the command makeactive.
2. Try varying the partition numbers in the rootnoverify command.
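Put together, a chainload sequence for Windows on the first partition of the first disk might look like this at the grub> prompt ((hd0,0) is only an example; substitute whichever partition works for you):

grub> rootnoverify (hd0,0)
grub> makeactive
grub> chainloader +1
grub> boot

Once that boots Windows, copy the same commands (minus "boot") into the Windows stanza of your GRUB config.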

There's lots of handy info in Linux Journal #85, see

Hope That Helps


Wed, 3 Jul 2002 20:04:57 GMT
Chirag Wazir (wazir from
Question by Octavio Aguilar (
This is in reply to the LG issue 80, Help Wanted #1.

Does anybody know how to run a program that's compiled in Kylix, but without having the Kylix environment around at runtime?

If you want to run a compiled Kylix program outside the IDE you need to run

source /usr/local/kylix2/bin/kylixpath

first, or add it to your /etc/profile

I had the same problem initially - so I presume that's what the question is about - my Spanish is non-existent.

The alternative interpretation could be about making a distribution package to run on machines where Kylix isn't installed - I haven't tried that yet.

Chirag Wazir

use an .rpm without installing it

Sat, 6 Jul 2002 13:40:26 -0500 (COT)
RE Otta (obob from
Previous Tip by Ashwin M (
This is in reply to the LG issue 80, 2c Tip #18.

It is simpler to use Midnight Commander. Click on the rpm file as you would a directory and traverse the rpm as you would a branch of the directory tree. Locate the file or files and copy them to an actual directory with the copy button. Simple and effective!

[John Karns] I've found that some mc versions changed the rpm handling behavior. I had grown quite accustomed to viewing rpm contents and copying parts via mc; then, after installing SuSE 7.1 on my laptop, I was no longer able to view more than a partial list of the files in the rpm - specifically the rpm headers (description, etc.). I was able to correct the problem by finding the mc scripts used for rpm handling and changing one to agree with a previous mc version's script.
One other point is that for very large rpm files (over 2 or 3 MB), the process can be very slow. When dealing with rpm files containing large tarballs of source code, I usually just "install" the rpm, which copies the desired file to /usr/src/packages/SOURCES.

Linux Journal Weekly News Notes tech tips

Watching multiple log files at once

Recent versions of the GNU tail command let you tail multiple files with the same command. Combined with the -f option, you can watch multiple log files. For example:

tail -f /var/log/httpd/access_log /var/log/httpd/error_log

will monitor the Apache access and error logs.

Switching to Maildir format mailboxes

If you're moving from old-style mailboxes to Maildir directories for your mail, you can force Mutt to create Maildir directories by default with:

set mbox_type=Maildir

in your .muttrc file.

To get Procmail to deliver to directories as Maildir and not MH folders, put a / after the directory name in your recipes, like this:

# Dump mail from Microsoft viruses into a trash Maildir
# (the trailing slash on the destination makes procmail deliver in
# Maildir format; "trash/" is an example path relative to $MAILDIR)
:0 B
* Content-Type: application/octet-stream;
trash/

Running screen-oriented programs directly

To run a screen-based program such as top remotely with one ssh command, use the -t (terminal) option to ssh, like this:

ssh -t myserver top

Your running processes

For an easy-to-understand, compact view of what's running on your system now, try the pstree command. A handy option is -u, which shows the name of the user running each process. Option -p shows the process ID, so if you want to memorize only one option combination, try:

pstree -pu

(No pun intended.)

pstree is a good way to make sure that privilege separation is working in your upgraded ssh install--you did upgrade sshd, didn't you?

This page edited and maintained by the Editors of Linux Gazette Copyright © 2002
Published in issue 81 of Linux Gazette August 2002
HTML script maintained by Heather Stern of Starshine Technical Services,


(?) The Answer Gang (!)

By Jim Dennis, Ben Okopnik, Dan Wilder, Breen, Chris, and... (meet the Gang) ... the Editors of Linux Gazette... and You!
Send questions (or interesting answers) to The Answer Gang for possible publication (but read the guidelines first)


¶: Greetings From Heather Stern
(?)Can't See Boot Messages Even Though RedHat 7.2 Boots OK --or--
Shedding Light On A Monitor's Troubles
We're still in the dark, here.
(?)ide-scsi emulation for IDE IOMEGA ZIP 250MB
(?)Version incompatibility

(¶) Greetings from Heather Stern

It's been a slow month here at the Gazette, with some days actually being so light that some gang members piped up with "did I fall off the list?" Not so slow that we didn't get tips and threads, though. And not quite so slow that I'm publishing all the threads we got or anything like that.
Readers are being very helpful to each other and I'm glad that the Help Wanteds are popular. Andy Fore tells us that with so much fuss over the Alcatel Speedtouch and its gory details, it now has a HOWTO of its very own (although not by him). Here it is:
I'm glad to see that with enough people's work on tiny bits here and there, we all grow richer for it.
The Peeve Of The Month this time around is some fellow who, having gotten nothing more than raw guesses from The Answer Gang and then figured it out himself, decided not to chime in with the answer but to tell us off for how stupid and flatfooted we all are - not to mention how dare we ask him for the answer. I'll save you all the grumpy replies. Let's just say that we never promised you our rose garden wouldn't have thorns or bugs, or that we'd have an instant gauze bandage (brand-name or otherwise) handy if you get bit. We try our best, when we've got a few moments free. That's all we can really do.
Which brings me to the topic of my babbling this month. I certainly say it often enough face to face...

When Did Your Important Data Become Important To You?

With the sub-thought... and perhaps you should decide what really is important, instead of discovering it in emergency. Take a good look at your own day for a week or two, and notice the things and people that you need the most.
Unfortunately the financial world continues to be slow too. While Linux increasingly creeps onto people's desktops and has pretty much taken root in their LAN closets - especially in places still small enough to use closets instead of glass houses and cardkey setups - there persists this patently false notion that those free software zealots have no interest in spending money.
Not true in the slightest. If that were true, there wouldn't be all these shows on our topic, like this month's LinuxWorldExpo in San Francisco ( We just want to get our money's worth when we do.
Take this example. Just last week the news that Linux Weekly News ( was almost out of money - again - resulted in another heartening rush of help for them from readers eager to keep getting their dose of Linux events... enough for them to consider that a web based subscription model might not fall plop on its rear, after all.
Free projects are good, but many of the finer ones have their commercial support avenues too, and not always with the obvious product name placed in their URLs, either. Berkeley DB has Sleepycat (, and so on. Without noting anybody in particular - I'm sure you all have different things you really use your computers for out there - I'd like to encourage everyone to continue to put their money where their mouth is. Pick up a distro at an installfest, and decide it doesn't drive you as batty as some other distro or OS you tried before? Buy their next version. Like an incredibly cool free software project that never asks a dime and says "we're doing this 'cuz it works, not 'cuz we need the bucks"? Send something as a thank-you to one of the organizations that defend making it easy to pass these things around. The tops on my list are the Debian project (, the Free Software Foundation (, and the EFF ( but you probably already guessed that, and I'm sure there are others.
Don't be afraid to throw kudos in package authors' directions either - that's at least some pay in "The Coin of The Realm" ( If your life with a package isn't quite perfect but you like it anyway, then pitch in with some elbow grease: be willing to fuss with ltrace or strace, or waste a bit of disk space on some more verbose logging, so your bug reports can be more useful; and, even more importantly, be willing to try the new code when they think they've got some sort of fix for you.
Err, don't forget to turn off all the traces and debug stacks when you're done, or you'll find yourself buying terabyte storage to go with it.
If you have terabyte storage and nothing better to do with it, or at least don't mind, consider mirroring some projects that your site uses and enjoys the benefits from. You get a local download, crosslinks, and that's one less chance the project might disappear on you just because some poor fellow loses his job or completes his college curriculum and has to move... or that some poor company who was primary-hosting it will go the way of the dodo and the dotbomb.
And what people usually mean when they say that fun little phrase:
  1. don't forget to make backups!
  2. They aren't bloody well helpful if they don't work, so check the restore procedures once in a while. Go ahead. Grab a spare machine and try it. Figure out if it really only takes you an hour and a half or all afternoon to bring the mail server or accounting department back up if that last power surge fries the UPS and a computer with it.
    (You laugh now, but I have seen a UPS blow out and take the machine with it. I can still smell the burning plastic and hear that horrible squeal. Thank goodness we cut off the real power before a real fire started. Ugh!)
  3. Make sure that you've rescued the human-generated work that goes into a system, not just the grubby details that make it able to boot up. People are limited in what they can regenerate, and people stressed from losing a lot of work, even more so.
Did I mention that we just passed Sysadmins Appreciation Day? It's the fourth Friday in July. ( From the website: "Let's face it, System Administrators get no respect 364 days a year..."
It's a tough world out there, folks. We've got to stand together these days. If we can't all be heroes, we can at least put our own sense of what heroism is to good use.
Have a great August, folks. Now, enjoy the threads :)


"Linux Gazette...making Linux just a little more fun!"

News Bytes


Selected and formatted by Michael Conry and Mike Orr

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to

 August 2002 Linux Journal -- 100th issue!

[issue 100 cover image] The August issue of Linux Journal is on newsstands now; this is LJ's 100th issue. Click here to view the table of contents, or here to subscribe.

All articles through December 2001 are available for public reading at Recent articles are available on-line for subscribers only at

Legislation and More Legislation


It emerged during the past month, to the dismay of everyone interested in open file formats and free software, that the JPEG image compression scheme may be subject to patent royalties. As reported by The Register, Forgent Networks have recently come into possession of a patent which they claim covers the transmission of JPEG images, and have even managed to claim royalties from two companies. If this patent proves to be enforceable, the ISO have said that they will withdraw JPEG as a standard (the licencing terms being enforced by Forgent are not compatible with ISO regulations for standards). Hopefully the patent will not stand up. To make sure of this, the JPEG committee is seeking examples of prior art which would render the patent null and void. If the worst comes to the worst, it appears that the patent will expire in 2004 in any case.

The Software Patent Working Group of the FFII has pointed out that there are also European patents in existence which could be used to put a lien on JPEG compression. A small step can be taken against EU software patents by signing the Petition for a Software Patent Free Europe.

A webpage bringing together many links on this story is the new Burn All .JPEGs! website. Forgent's website also has a list of recent appearances of the company in the news, which has a couple of links to stories regarding the JPEG patent.

 Perens and DMCA

Bruce Perens generated some publicity this month by threatening to violate the DMCA live on stage at the O'Reilly Open Source Convention. The plan was to demonstrate how to remove the region-code control built into a DVD player. However, as reported by Dan Gillmor and by Slashdot, Bruce backed down from openly breaking the law following a request from his employer, HP.

There are several good links regarding this story on the O'Reilly Open Source Convention Conference Coverage page.

 More DRM (sigh)

A bill proposed by US Senator Biden would make certain kinds of Digital Rights Management circumvention a felony. ZDNet coverage states that the bill was originally intended to combat large-scale piracy (e.g., fake Windows holograms), but was quietly rewritten to include DRM. (Courtesy Slashdot)

 Open source: EU, US, Peru and Pakistan

It was reported in various locations (in The Register, on Slashdot and in ZDNet), that a recent EU report has called for wider open-source adoption, in order to have greater exchange of software between different administrative branches, and also between countries. The Slashdot story has links to the original EU report in various formats.

On a not unrelated theme, Sam Williams at O' has taken a look at the impact of open source software in government--both inside and outside the U.S.

Wired reports that the US Ambassador to Peru has come out against Peruvian Congressman Villanueva's bill advocating usage of open-source software in government computers. Also, Bill Gates personally delivered to Peru's president Alejandro Toledo a donation estimated at $550,000 for the national school system. Not surprisingly, the money is to go to the same schools Villanueva's bill targets. Villanueva said he believes Microsoft isn't worried so much about losing the small Peruvian market as about the cascading effect that might happen if other Latin American countries follow suit. Similar bills are pending in Argentina, Mexico and Brazil, and Spain's Extremadura region has already adopted Linux as the official operating system of its public schools and offices.

Pakistan is getting into the Open Source game too. 50,000 Pentium IIs running GNU/Linux are being installed in schools and colleges all over Pakistan, at a cost of less than US$100 each. "Proprietary software for these PCs would cost a small fortune. Surely more than what the computers cost!"

Linux Links

The Register have an excellent report (originally from NewsForge) by Grant Gross on a public workshop on digital rights management. It would appear that "fair use" advocates got less than a warm reception from Hollywood and Dept. of Commerce representatives.

Marcelo Tosatti, maintainer of the stable kernel branch, in an interview with ZDNet.

From LWN, come links to reports in CNET and in ZDNet of the Netherlands' NAH6 plans to release a Secure Notebook incorporating a program that encrypts files transparently. The user runs applications on Windows, which is installed in a VMWare virtual machine. VMWare is run on Debian GNU/Linux, which keeps files encrypted in case the laptop is stolen or mislaid.

A few links from Linux Journal which might be of interest:

Some interesting links from The Register over the past month: are publishing the winning essays from their wIndependence Day contest.

Privacy International have an FAQ and other information on proposals to introduce ID cards to the UK. Probably of quite wide interest.

A couple of links from Slashdot which might interest you:

How Marty Roesch made the journey from obsessive gamer to successful open source developer and entrepreneur.

The binary nature of freedom, at Advogato (with talkbacks at NewsForge).

NewsForge have a report on the various instant messaging options available to Linux users.

Howard Wen at O'Reilly takes a look at Sony's upcoming Linux distribution kit for the PlayStation 2.

Some interesting links from Linux Today

NewsForge article on the game theory of open code.

Timo Hannay of Nature compares [O'Reilly] the scientific method to the mechanics of open source development.

Upcoming conferences and events

Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.

USENIX Securty Symposium (USENIX)
August 5-9, 2002
San Francisco, CA

LinuxWorld Conference & Expo (IDG)
August 12-15, 2002
San Francisco, CA

LinuxWorld Conference & Expo Australia (IDG)
August 14 - 16, 2002

Communications Design Conference (CMP)
September 23-26, 2002
San Jose, California

IBM eServer pSeries (RS/6000) and Linux Technical University
October 14-18, 2002
Dallas, TX

Software Development Conference & Expo, East (CMP)
November 18-22, 2002
Boston, MA

News in General

 Linux Journal articles available for documentation

Linux Journal has changed its author contract to clarify that any author may include his/her articles as freely-redistributable documentation in a free software project or free documentation project after the article has been published. Authors have always had this right since the founding of LJ, but some did not realize they had it. Motivations for doing so are to make the information available to all users of a program, in a convenient location, and so that the project can use the article as primary documentation if desired, updating it as the program evolves.

 Linux Weekly News

It looked like the end of the road for Linux Weekly News earlier this month, when they announced that the August 1st edition would be the last ever. The basic cause for this decision was lack of money, and the absence of any plan which could generate money. Following the announcement, many disappointed readers put their money on the table and contributed to LWN's donation scheme. This quickly raised $12000, leading to a rethink of LWN's future. A final decision on the magazine's fate has not been made.

 Mandrake at Walmart

Following last month's launch [NewsForge] by Wal-Mart of PCs with Lindows pre-installed comes a new announcement of the availability of Mandrake-equipped versions. The Wal-Mart catalogue contains full details and prices of both the Lindows and Mandrake PC product lines.

NewsForge have reported on this story, as has The Register. Hopefully the Mandrake version of this product will prove more satisfactory than the earlier Lindows offering, which received a very lukewarm review from NewsForge.

 Ogg and Real

Congratulations to the folk behind the Ogg Vorbis project, who have released version 1.0. As linked from LWN, there are currently various news items related to the 1.0 release on Ogg Vorbis News. This story was also reported by The Register and by CNET.

Ogg Vorbis and have also been in the news this month due to the links being forged between the open source format and the new Helix software of RealNetworks. This development should see some parts of RealNetworks' software being released under "a community and open source licence". Inclusion of the Ogg Vorbis codec into RealNetworks products should follow.

Bruce Perens has written an in-depth account of the issues surrounding the RealNetworks-Xiph link-up, and has criticised many features of the deal, such as the fact that Real's codecs will remain proprietary, and the use of community licencing (rather than open source) for parts of their software. Rob Lanphier of RealNetworks has replied to Bruce on Slashdot, and asked for good will to be shown to the company's open source contribution. The Register has also reported on Real's open source experiment, as has CNET. The Helix Community website should report future developments in the open source development of the RealNetworks Helix project, and also contains copies of the licences the software will be released under (comments are invited).

 GNU Scientific Library (GSL) 1.2 is Released

Version 1.2 of the GNU Scientific Library is now available. The GNU Scientific Library is a collection of routines for numerical computing in C. This release is backwards compatible with previous 1.x releases. The project home page is at Information for developers is at

Distro News


Big news in the Debian world this month: Debian GNU/Linux 3.0 (Woody) has been released! Debian GNU/Linux now supports a total of eleven processor architectures, includes KDE and GNOME desktop environments, features cryptographic software, is compatible with the FHS v2.2 and supports software developed for the LSB. This was also reported by The Register. As reported by Debian Weekly News, the new testing distribution will be called "sarge".

A new revision of Potato, 2.2r7, was also released. Main changes were security updates, and a couple of corrections.

Debian Weekly News reported that the patent claims being made against the JPEG image compression scheme could require the movement of libjpeg62 and everything compiled against it into non-free.

 Gentoo have published an interview with Daniel Robbins of Gentoo Linux. There are a few talkbacks on NewsForge.

LinuxPlanet have a recent review of Gentoo Linux 1.2.


Redflag Software Technologies Co., Ltd and Opera Software have made a strategic announcement, and are looking forward to working together on embedded browser solutions for the Chinese market. RedFlag will seek to join as an Opera reseller, with joint development and market efforts to tailor Opera for the Chinese embedded market.

With Opera included, RedFlag will be able to offer original equipment manufacturers (OEMs) and hardware manufacturers Web-enabled solutions customised to fit with Red Flag's current product line.


SuSE Linux has announced the availability of the SuSE Linux eMail Server 3.1 with expanded system functionalities. SuSE's e-mail solution, which assists in managing appointments, tasks, and resources, is aimed specifically at small and medium-scale enterprises as well as workgroups and public administrations.

SuSE Linux has also announced its participation in TSANet - the "Technical Support Alliance Network". TSANet is a global support platform that hosts more than 150 hardware and software providers. Within the scope of TSANet, various manufacturers cooperate in providing solutions for problems their enterprise customers encounter in connection with their applications.

For detailed information on the support offer of SuSE, please check

Software and Product News


Opera Software has announced the release of Opera 6.02 for Linux. The new version includes important fixes to the document and user interface, with special emphasis on the display of Asian characters, making this a useful upgrade for Linux users all over the world. Opera 6 opened up Asian markets to Opera because of its added ability to display non-Western characters, and the Linux version has proved to be especially popular in this region.

Opera 6.02 for Linux is available for free in an ad-supported version at Users can purchase a banner-free version for USD 39. Discounts apply.

Opera Software ASA has also announced that SuSE will distribute the popular Opera for Linux Web browser in their Linux distribution. The deal is Opera's first major Linux distribution agreement. Opera is available in SuSE Linux 8.0.

 Random Factory

The Random Factory have a range of scientific software for Linux, covering subjects such as astronomy, chemistry, and biotechnology. Also available are Linux workstations, preloaded with a choice of Random Factory products.

 Magic Software

Magic Software Enterprises, a provider of application development technology and business solutions, announced today the introduction of Magic eDeveloper into the Chinese market. Magic Software support Linux on some of their product lines.

 Other software

VariCAD has announced the release of a new VariCAD update for both Windows and Linux operating systems. This mechanical 3D/2D CAD package offers tools for 3D modelling, 2D drafting, libraries of mechanical components, calculations, BOMs, and many others. It is priced at $399. Free trial versions for Windows 98/NT/2000/XP and Linux (RedHat, Mandrake, SuSE) are available for downloading at

Linux Game Publishing is looking for beta testers for Mindrover 1.07b. You can register your interest at the betas website. Successful applicants will be notified by e-mail.

Copyright © 2002, Michael Conry and the Editors of Linux Gazette.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more lovable!"

[picture of mechanic]

The Weekend Mechanic

By Thomas Adam

Welcome to the July edition

Well Howdy. Glad you could all make it. How are you all??? Still keeping up the pace with the rest of the LG thus far? I hope so, 'cos I can't see that this article is going to be any different :-)

News for this month?? Well, I have installed myself in my house now. When I get the chance, I'll get some pictures together for you all to have a sneak preview of the isolated but pretty corner of Somerset that I now reside in - when I am not at University, that is.

I also have a new job!! I work in a small factory, which produces eight different types of luxury dessert for a chain-store called Waitrose. For those of you who don't know who this company is, Waitrose is part of the John Lewis Partnership, plc. They specialise in nice, high quality food. For the really curious among you, here is a list of the desserts I make:

I start at 6:00am :-) That's the only drawback. However, it does mean that I finish around 2-4 in the afternoon.

That's about as exciting as my life gets really, I think it is time to move on to some proper material. Linux.....

A Brief Introduction: Quotas

What is Quota?

Way, way back in issue 13, Jim Dennis wrote a small article about how to set up your Linux machine so that it would tell you if you were going to run out of disk space (SLEW). I read this article, and decided that you can make sure that your users do not run amok on disk space by enforcing a set of rules, specifying either the number of inodes or the number of blocks that a particular user cannot exceed.

Quota is handled on a per-user (or per-group) basis, and is configured per file system. Thus, if a user has access to more than one file system and you wish to enforce quotas on each of them, you must set each one up separately.

So in short, quota is a way of setting the maximum disk space that a user can consume at any one time.


As of kernel version 2.0, Quota support has been bundled in with the kernel, so if you come from the dark ages and have a kernel version older than 2.0, then obtain the latest source ( NOW!!

And as for the rest of the GNU/Linux planet, you should find that you already have quota support enabled by default in the kernel anyway. If you think you have not, then download the latest stable release and re-compile. It can't hurt.....much :-). For instructions on how to do this, please refer to the INSTALL file, under the source directory.

Incidentally, for those users running a nice shiny SuSE Box, Quota automatically comes compiled into the kernel :-)

But the fun and games continue, since Quota is not directly runnable from the kernel itself (i.e. it is not a self-contained module). You have to install either an RPM or a source file.

The RPM file (should you be using a distribution that uses this system of package handling) in question is:


And the tarball file is called:


Both of which are available from the following FTP repository:

To install the RPM file:

Issue the command:

su - -c 'rpm -i /path/to/quota-1.70-263.rpm'

To install the source file

1. su -
2. cd /path/to/tarball/
3. tar xzvfm ./all.tar.gz
4. ./configure
[ Allow for configure script to run ]
5. make && make install
6. logout
[ To exit out of root's su'ed account ]

That's all there is to it :-) Now the real fun begins

Setting Quotas

The first step in configuring this, is to have a logical idea in your head as to how you are going to organise this. Quota gives you the option of either specifying a single user, or a group (which has been assigned to specific users), or both. If you are on a large network, then perhaps a mixture of the two is preferable. Think about it :-)

The group version is usually good if you assign all users to that specific group. Makes life easier, n'est-ce pas?

But the first actual step is to make some system-wide changes. For this, log in as user root. Please though, do not simply "su" in, as this only changes your effective UID and does nothing about environment variables, etc.

We must first modify "/etc/fstab" so that the kernel knows that the filesystem mount point will make use of the quota support. A typical "/etc/fstab" file looks like the following:

/dev/hda1	/boot	       ext2	defaults 1 2
/dev/hda2	swap	       swap	defaults 0 2
/dev/hda3	/	       ext2	defaults 1 1
/dev/cdrom	/cdrom	       auto	ro,noauto,user,exec 0 0
/dev/fd0	/floppy	       auto	noauto,user 0 0
proc	        /proc	       proc	defaults 0 0
usbdevfs	/proc/bus/usb  usbdevfs	defaults 0 0
devpts	        /dev/pts       devpts	defaults 0 0

#NFS clients....
#Updated: Thomas Adam, Tuesday 03:45am??? -- Can't remember.
server:/etc	    /mnt/etc    nfs    rw,user,rsize=1024,wsize=1024,hard,intr 0  0
server:/home	    /mnt/home   nfs    rw,user,rsize=1024,wsize=1024,hard,intr 0  0
server:/usr/doc/lg/lg/lwm	    /mnt/lwm	nfs    rw,user,hard,intr 0  0
#server:/usr	    /mnt/usr    nfs    rw,user,hard,intr 0  0
server:/cdrom       /cdrom      nfs    ro,user,rsize=1024,wsize=1024,hard,intr 0  0
server:/dev	    /mnt/dev	nfs    ro,user,rsize=1024,wsize=1024,hard,intr 0  0

What we are concerned with is not the last part of the file [ ** although quota can be used with NFS-exported file systems -- see "man rquota" ** ], but which mount point is to be given quota support. This will depend upon where your users' $HOME directories are located. Unless you have a separate partition or drive for this, the mount point you will want to use is typically either "/" or "/usr" (if /home is a symlink to "/usr/local/home/" and "/usr" is on a separate drive or partition).

Now I come back to the question I posed at the beginning of this section: how are the users going to be managed? If you have decided to do it on a user-by-user basis, then add usrquota to your fstab file. If you are going to do it by group, then add grpquota. If you are going to use a mixture of the two, then add them both.

Thus, we are now concerned with adding the following to the fourth field:

/dev/hda3	/	       ext2	defaults,usrquota,grpquota 1 1

Change as appropriate for your version of fstab. If you are unsure as to which quota to use, I recommend that you include both in the fstab file, since it means that should you need to swap, you'll already have it set up. Now save the file.

OK. The next thing we have to do is to make sure that, for whichever option you chose (namely usrquota or grpquota), you create the necessary file(s) on the root of the partition that you changed in the fstab file. To do this, enter the following commands (still as user root):

touch /top/of/partition/quota.user && chmod 600 /top/of/partition/quota.user

touch /top/of/partition/quota.group && chmod 600 /top/of/partition/quota.group

Lastly, you have to ensure that quotas are enabled when your system boots up. Those of you who installed Quota from an RPM, .DEB, etc. should find that you already have a script named "quota" or something similar in "/etc/init.d/". If you installed from source, however, this might not be the case, which means that you will have to add the following script to your main init-script AFTER the mounting of all filesystems in "/etc/fstab" has taken place.

(text version)

#Check quotas
[ -x /usr/sbin/quotacheck ] && {
  echo "Checking Quotas (please wait)..."
  /usr/sbin/quotacheck -avug
  echo "Done."
} || {
  echo "Checking Quotas FAILED"
}

[ -x /usr/sbin/quotaon ] && {
  echo "Initialising Quotas..."
  /usr/sbin/quotaon -avug
  echo " Done."
} || {
  echo "Turning Quotas On: FAILED"
}

What the above does is run a test on the named file with the "-x" flag, which checks that the file is executable before the rest of the script is processed. It checks to see which quotas are defined (if any), and then goes on to enable them.

Once you have done that, issue:

init 6

And wait for your computer to reboot.

Caveat Emptor: If you did have to recompile your kernel, and you are using LILO as your boot-loader, ensure that you run:

lilo

BEFORE you reboot, so that it knows about your new kernel-image :-)

An Example

Right. Your machine should now acknowledge the fact that we are going to use Quota. What we haven't done yet is the most important bit: deciding which users or groups the quota rules will apply to.

What I have decided to do is to use the example of a user, and show you how to set up a quota limit for him. We shall call the user lg.

Assuming lg is already on your system, what we must do, depending on which format you are using, is edit the appropriate file. For the purposes of this example, I shall do this on a per-user basis (i.e. I shall be using the usrquota format, although everything I explain here is exactly the same for the grpquota option, if you have decided to use that).

The command that we shall be using is called "edquota". We edit a quota for user lg by issuing the command:

edquota -u lg

What this does is launch an editor and open the quota record. If you haven't set the environment variable EDITOR to "/usr/bin/jed" or some equivalent editor, then this command will not work. To set up this variable, add this to your "~/.bash_profile":

EDITOR="/usr/bin/jed"; export EDITOR

Change the program as you see fit, e.g. vi, jed, joe, emacs, etc. Then, to make the changes active, source the file by typing:

source ~/.bash_profile

What you should find, is that for user lg you get something similar to the following:

Quotas for user lg:
/dev/hdb2: blocks in use 0, limits (soft = 0, hard = 0)
      	   inodes in use: 356, limits (soft = 0, hard = 0)

Now you're thinking: "err...." :-) Don't worry. It is much simpler than it looks.

Blocks indicate the total number of blocks that a user has used on a partition (measured in Kilobytes, KB).

Inodes indicate the total number of files that a user has on the partition. N.B. You cannot change the "in use" values themselves.

What we are concerned with, is the bit in brackets, right at the end of each line. This is the key to setting the entire quota. You'll notice that there are two options, one for soft and one for hard.

Soft limits indicate the maximum amount of space (in kilobytes) that lg is allowed to use. The soft limit acts as a boundary which, when set along with a grace period, warns user lg that he is exceeding his limit.

A grace period is a period of time before the soft limit is enforced. It can be expressed in units from seconds through minutes, hours, days, weeks and months, and is set by issuing the command:

edquota -t

You'll see that you should get the following:

Time units may be: days, hours, minutes, or seconds
Grace period before enforcing soft limits for users:
/dev/hdb2: block grace period: 0 days, file grace period: 0 days

Change both values for block and file to whatever you see fit. I recommend 14 days (2 weeks) for both. But then, I am generous :-)

A hard limit indicates the absolute maximum amount of space a user may occupy; it can never be exceeded. This only works when a grace period has been set.
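Putting the soft and hard limits together, the edited edquota entry for lg might then read as follows (the figures here are invented for illustration: roughly a 5 MB soft limit and a 6 MB hard limit):

```
Quotas for user lg:
/dev/hdb2: blocks in use 0, limits (soft = 5000, hard = 6000)
      	   inodes in use: 356, limits (soft = 0, hard = 0)
```

Save and quit the editor, and the limits take effect for that user.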

That's all there is to it. Now, you are probably wondering how the hell you are supposed to assign the same quota to every user on your system. Well, having just followed the example for lg, what you can do, is to use user lg as a template, and issue the command:

awk -F: '$3 >= 500 {print $1}' /etc/passwd

What this does is print to the screen a list of all users whose UID is 500 or greater (i.e. 500 onwards). If this set of users looks OK, then we can use the above in conjunction with edquota, as shown below:
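Before touching the real /etc/passwd, you can satisfy yourself of what the awk filter selects by trying it on a throwaway file (the entries below are invented):

```shell
# A throwaway passwd-style file (entries invented for illustration)
cat > /tmp/sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
lg:x:500:500:Linux Gazette:/home/lg:/bin/bash
alice:x:501:501:Alice:/home/alice:/bin/bash
EOF
# Field 3 is the UID; print field 1 (the login name) when UID >= 500
awk -F: '$3 >= 500 {print $1}' /tmp/sample_passwd
```

Here only lg and alice are printed; root and daemon fall below the cut-off.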

edquota -p lg $(awk -F: '$3 > 499 {print $1}' /etc/passwd)

This uses the quota we have already enabled for lg as a template to assign it to the string of users that the awk script produces for us.

That's all there is to it :-). I have found quota to be an excellent tool in keeping users at bay. I use it for my non-root account, as it stops me from going wild in my home directory, and thus forces me to clean it out once in a while.

A Brief Introduction: DansGuardian

What is DansGuardian?

Those of you who followed my last miniature introduction to the world of Linux proxying, with Squid and SquidGuard, will remember that I showed you how to filter certain webpages that matched a certain regex. What DansGuardian does is take the concept of filtering and stretch it so that you can filter webpages based on content! DansGuardian also allows you to filter out MIME types and block file extensions, meaning that should your users have the unfortunate punishment of using an M$-Windows machine, you can block files such as .exe, .com, .dll, .zip .... etc.


Dansguardian can be obtained from the following:

You can download either an RPM or a tar.gz file from his site. If you're a budding Debian GNU/Linux user, then you can always use the alien package to convert the RPM file to a DEB file :-). To actually install the files, follow the instructions as in the Quota section.

It is also worth noting that DansGuardian requires the nb++ library. There is a link to a download site on the main DansGuardian site. This library is used to look at the content of webpages, and is thus essential to the operation of DansGuardian.

On install, DansGuardian's main program ends up as "/usr/sbin/dansguardian". What you must do is add the following line to either "/etc/init.d/rc.local" OR "/etc/init.d/boot.local" (depending on which distribution you are using):

/usr/sbin/dansguardian

so that DansGuardian is loaded up on init.


There really is not too much to configure when it comes to DansGuardian. What takes all the work is the various regexes that you may want to build for really accurate content filtering.

It should be pointed out that DansGuardian can be used in conjunction with SquidGuard so that you don't have to replace any existing filters that you may already have in place :-) Good, eh?

So, the first thing we should do is check where the package has put the configuration files. It should be no surprise that they have been put in "/etc/dansguardian", and it is the files contained in this directory that we shall concentrate on. We shall begin by looking at the configuration file /etc/dansguardian/dansguardian.conf.

This file holds all the settings that DansGuardian requires. Typically, the only options that I have had to change are listed below:

#DansGuardian config file

reportinglevel = 1 #  0 = just say 'Access Denied'
                   #  1 = report why but not what denied phrase
                   #  2 = report fully

[Network Settings]
filterport = 8080   # the port that DansGuardian listens to
proxyip = # loop back address to access squid locally
proxyport = 3128    # the port DansGuardian connects to squid on
accessdeniedaddress = "http://grangedairy.laptop/cgi-bin/"

[Logging] # 0 = none  1 = just denied  2 = all text based  3 = all requests
loglevel = 2

[Content Filtering]
bannedphraselist = "/etc/dansguardian/bannedphraselist"
bannedextensionlist = "/etc/dansguardian/bannedextensionlist"
bannedmimetypelist = "/etc/dansguardian/bannedmimetypelist"
exceptionsitelist = "/etc/dansguardian/exceptionsitelist"
exceptioniplist = "/etc/dansguardian/exceptioniplist"

[Phrase Scanning] # 0 = normal  1 = intelligent
scanningmode = 1
# normal does a phrase check on the raw HTML
# intelligent does a normal check as well as removing HTML tags and
#  multiple blank spaces, tabs, etc - then does 2nd check

[ ** Many other options elided ** ]

The only things I changed here were the filterport, the proxyport and the accessdeniedaddress tags, to reflect the configuration I used in "/etc/squid.conf". Having changed your options accordingly, you can save the file and ignore it :-)

OK, moving on. In the same directory, you should notice files with the following filenames:

I shall take each file in turn, and explain what each one does. Where appropriate, I shall list small portions of the file.


This file contains explicit words, and as such I shall not list its contents here. Suffice it to say, this is the file that holds keywords which are blocked if found anywhere in the HTML page.

As you will see, each word is enclosed within < > signs, as in:

< sex >

These angled brackets are important, since without them, the word would not be blocked.

You will also notice, throughout the file, that some words have a space on both sides within the angle brackets, and some have a space at only one end. This is important, since it tells DansGuardian how to match the word.

< sex >

Indicates that the word sex (and only the word sex) should be blocked when it is found, nothing more.


<sex>

Indicates that the word sex should be blocked regardless of where it is found in a sentence or phrase, i.e. if it is found in hellosexyhowareyou? then it will be blocked.

< sex>

Means that the word is blocked when anything appears to the left of it.

<sex >

Is the converse of the above.

As you look down the file, you'll see a number of different words being blocked. :-) You will also notice that there are a number of comments about example words or hyphenated phrases which need not be blocked, because you have already blocked part of that word. For example:


Need not be blocked, since the phrase:


is already blocking any other word that contains the word sex. That is an important fact to remember if you are going to be adding to the list at a later date.
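To summarise the four forms described above in one place (the stand-in word and the annotations are mine; the annotations are not part of the file format):

```
< sex >   blocks the exact word only
<sex>     blocks the word wherever it appears, even inside longer words
< sex>    blocks the word when anything precedes it
<sex >    blocks the word when anything follows it
```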


Simply contains a list of file extensions that will be blocked by dansguardian, thus:

#Banned extension list


This is pretty much self-explanatory!

# banned MIME types


MIME types are used to identify the different multimedia portions of applications, and as such are particularly useful when sending e-mail; however, MIME has its uses in HTML too :-)

Again, I would add some other options here.


Lists those sites which are allowed to be viewed, even though they would ordinarily be blocked by the rules defined in any of the other files, thus:

#Sites in exception list
#Don't bother with the www. or
#the http://

You can obviously add more sites as you are going along :-)

#IP addresses to ignore and just
#pass straight through.
#These would be servers which
#need unfiltered access for
#updates.  Also administrator
#workstations which need to
#download programs and check
#out blocked sites should be
#put here.
#Only put IP addresses here,
#not host names

#these are examples above
#delete them and put your own
#in if you want

The comments in this file pretty much say it all :-). Obviously, I would say be careful as to which machines you allow override access to :-)

And that rather short explanation is how DansGuardian works. You may well find, as I did, that it is very frustrating at first, since it really does block what you tell it to, but once you have put a whole load of domain names into the exceptionsitelist, things should not be too bad at all! :-)

Touchrec: Recursively touches files in a directory

In issue66, there appeared a thread about how to recursively touch files in a given directory. Since the GNU version of touch does not (yet) support such an option, a few suggestions were offered using GNU find.

Indeed, I was intrigued by this idea. I have often been in a situation where having a recursive way to touch all files and/or directories was a necessity. One example is when I had forgotten to add the "m" flag to the tarball I was un-tarring, and as a result had a whole load of future modification dates on my files (see issue79). Deleting the entire directory would have been a pain, since it took ages to untar. Thus, I decided to write the following shell-script :-)
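The find-based suggestions from that thread boil down to a one-liner, which you can try safely on a scratch tree first (the paths below are invented; touch -d needs GNU touch):

```shell
# Build a scratch tree with one file dated in the future, mimicking
# a tarball extracted without tar's "m" flag
rm -rf /tmp/trdemo && mkdir -p /tmp/trdemo/sub
touch -d '2030-01-01' /tmp/trdemo/sub/future.txt
# Recursively reset every file's and directory's timestamp to "now"
find /tmp/trdemo -depth -exec touch -c {} \;
```

The script below wraps this same idea up with some convenience switches.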

(tar.gz file)


#!/bin/bash
#touchrec -- Recursively "touches" files in a directory  #
#Ackn:       Written for TAG (Linux Gazette) :-)         #
#Version:    Version 1.0 (first draft)                   #
#Author:     Created by Thomas Adam                      #
#Date:       Saturday 15 June 2002, 16:58pm BST          #
#Contact:                     #

#Declare Variables
bname=$(basename $0)   #Basename of program (path stripped)
curr_dir=$(pwd)	       #Current dir
dironly=0              #-d off
filesonly=0   	       #-f off
quiet=0       	       #-q off
toplevel=    	       #-t off
redir=$(tty)  	       #verbosity redirection tag
version="$bname: Created by Thomas Adam, Saturday 15 June 2002, 16:58pm BST,
Version 1.0"

#Start Procedures

#Help Procedure
help_user ()
{
  echo "
$bname usage: [-s directory path] [-q] [-d] [-f] [-t] [-h] [-v]

-s (optional starting directory, default is 'pwd')
-q (quiet mode -- suppresses verbosity)
-d (only touch directories)
-f (only touch files)
-t (touches the top-level directory, i.e. '.')
-h (prints this message)
-v (version of program)

Issue command \"man 1 touchrec\" for full documentation
"
  exit 0
}

run_default ()
{
  for lists in $(find ${curr_dir} ${toplevel} -depth 2> /dev/null); do
    #If it's a directory....
    [ -d $lists ] && {
      #Just the files wanted (-f)? -- skip to the next loop instance
      [ $filesonly = 1 ] && continue
      echo "touching dir $lists/" >$redir && touch -c $lists
    }
    #This time check for files...
    [ -f $lists ] && {
      #Only directories wanted (-d)? -- skip files
      [ $dironly = 1 ] && continue
      #As a result of no flags passed at run-time, this executes :-)
      echo "touching files $lists" >$redir && touch -c $lists
    }
  done
}

#Check for presence of command-line switches
if [ "$#" = 0 ]; then
  echo "No command-line args given"
else
  while getopts ":hqfvdts:" opts; do
    case $opts in
      d )
      	#Only Check for Directories
      	dironly=1
      	;;
      q )
      	#Quiet -- suppresses verbosity to console
      	quiet=1
      	redir=/dev/null
      	;;
      f )
      	#Only check for files, no directories
      	filesonly=1
      	;;
      t )
      	#Only process the top-level directory "."
      	toplevel="-maxdepth 1"
	#echo $toplevel  #for debugging purposes
      	;;
      s )
	#Get path as specified
	#If $OPTARG is blank, print help_user()
	[ "$OPTARG" = "" ] && {
	  echo "No Parameter Given"
	  help_user
	} || curr_dir=${OPTARG}
	;;
      h )
      	#Print help message
      	help_user
      	;;
      v )
      	#Prints the version
	echo $version
      	exit 0
      	;;
      * )
        #Any other options -- ignore
        ;;
    esac
  done
fi

#Process optional commands...
shift $(($OPTIND - 1))

#Start main procedure -- once all options processed.
run_default


For those of you who are completely new to BASH programming, please refer back to Ben Okopnik's excellent tutorial earlier in the LG series (issue52 onwards). Experienced programmers will notice that I have used the "[ ..... ] && {} ||" construct rather than the more traditional "if...then...else...fi" method, since the former gives more control over exit status :-), and I prefer coding like this anyway. Perl supports this short-circuit style too :-)

The script in itself is quite simple. Basically what happens is that I initialise all of my variables first. Now, BASH does not require this, but I find it much easier to know what's going on if I do.

I set up various variables, most of which are just switch identifiers so that I can tell whether or not any command-line switches (and which ones) have been issued. I set up another variable bname which returns the name of the program, with the PATH stripped, which I used in my help_user() function.

The other variable I defined is the redir variable. This is initially set to whichever tty you invoke the script from, so that if you did not specify the "-q" option, you will get messages to your screen. I think I have been quite clever since, whenever a file/directory is found, I issue a command thus:

echo "touching $lists/" >$redir

which, as I say, is sent to whatever tty you invoked it from (e.g. /dev/tty1). But if you specified the "-q" flag, $redir equals "/dev/null", so that no messages appear.

With regards to command-line switching, I have made good use of the getopts command. Refer to man 1 touchrec for more information on that.

I was so taken by this, that I even wrote a manual page :-). Simply download the tar.gz file, untar it, and run ""

For more information about how to use this script, please refer to "man touchrec". Hope you get good use out of it :-)

GNU Find: Evaluating its effectiveness

How many of you have been guilty of using mc (Midnight Commander), hitting the key sequence "<ALT><SHIFT><?>", and then filling out that nice dialog box to find the file that you require? Don't lie, we've all done it (Hi Ben :-)). And why? All that dialog box is, is a front end to the command find(1) anyway. This article will try to help wean you off pretty command-line dialog boxes. While there is nothing wrong with using mc's searching feature, it does not give you the full flexibility of complex searches. GNU find is a very powerful tool indeed.

Finding one particular file

The most common use of find is discovering where a certain file is. Usually, if it is a binary file, you would most likely use the commands which and whereis to find it :-), but what if you were looking for that file called etherh.c? You'd issue the command:

cd /
find / -name etherh.c -print 2>/dev/null

Now don't panic :-) The syntax of the find command is thus:

find {/path/} [name_of_file/expression] [options...]

So, what the command does is: beginning at "/" (the root directory of the filesystem), search for the file called etherh.c and, when it is found, -print it to stdout. In this case, 2>/dev/null redirects any errors to oblivion -- used here because, not being user root, I obviously have permission problems looking at certain files under "/" that I don't care to know about!
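You can see the pieces of that command in isolation on a scratch tree (the path below is invented):

```shell
# A scratch tree containing the file we are hunting for
rm -rf /tmp/fdemo && mkdir -p /tmp/fdemo/drivers/net
touch /tmp/fdemo/drivers/net/etherh.c
# Matches go to stdout; any "Permission denied" noise goes to /dev/null
find /tmp/fdemo -name etherh.c -print 2>/dev/null
```

The full path of the match, /tmp/fdemo/drivers/net/etherh.c, is printed.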

The -name flag above has more than just the one use shown here. It is in fact a flag which allows you to pass shell metacharacters in the filename that you are trying to look for, as we will discuss in the next section.

If you have your kernel sources installed, you should find that the file is at:


Finding filenames using shell metacharacters

It is all very well knowing the exact name of the file that you are trying to search for. That makes life easier. But what if you didn't know the exact name of the file, what then?

Well, in that situation, you would obviously have to use wildcards, or more specifically shell metacharacters. These are characters such as:

*  ?  []  {} -- [ although these last have their uses as we shall see later on ]

Quite simply then, we can try something like the following:

find /usr/bin -name 'xa*' -print

Which should return:


The sharp-eyed among you will have noticed that I enclosed the pattern in single quotes, so that the shell passes 'xa*' to find unexpanded instead of globbing it against the current directory first.
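The point about quoting can be demonstrated in a scratch directory (the filenames below are invented):

```shell
rm -rf /tmp/xdemo && mkdir -p /tmp/xdemo
touch /tmp/xdemo/xanim /tmp/xdemo/xarchie /tmp/xdemo/xterm
# The single quotes stop the shell globbing xa* against the current
# directory; find matches the pattern against each name it visits
find /tmp/xdemo -name 'xa*' -print | sort
```

Only xanim and xarchie are listed; xterm does not match the pattern.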

Find involving actions

You can also tell find to run a program on the file(s) that it finds. This is an extremely useful feature, and you will be surprised at just how often you have cause to use it.

Suppose you have a bunch of files, say in $HOME, and you wanted to look for a regular expression in each of them, e.g. "#!/bin/bash". You can do the following:

find $HOME -name '*' -print -depth -exec egrep -n '#!/bin/bash' {} \;

The syntax of the last part may seem strange, but what is happening is that the -exec flag

accepts first the command, and then any additional options (in this case, a regular expression), followed by the two brackets {}, which, when run, are expanded to the current filename returned by the find expression (be it a regular expression or a specific filename -- in this case all files (*)). The escaped semicolon (\;) terminates the command.

Therefore, in short, the syntax is:

find {/path/} -name {pattern/regex} -exec {name_of_program} [options] {} \;

You can then apply this principle to use any command that you can see a use for :-)
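As a dry run of the -exec idea, here is the same pattern exercised in a scratch directory (file names and contents invented; grep -l lists the names of matching files):

```shell
# Scratch tree with two scripts, only one of which is a bash script
rm -rf /tmp/execdemo && mkdir -p /tmp/execdemo/sub
printf '#!/bin/bash\necho hi\n' > /tmp/execdemo/sub/a.sh
printf '#!/bin/sh\necho hi\n' > /tmp/execdemo/b.sh
# {} expands to each filename find produces; \; terminates the command
find /tmp/execdemo -type f -exec grep -l '#!/bin/bash' {} \;
```

Only /tmp/execdemo/sub/a.sh is reported, since only it contains the bash shebang.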

Finding particular types

Once again, find makes our life even easier by allowing us to look for specific file types. While you might well think that you could use a combination of ls, test and find to do the same thing, don't re-invent the wheel :-). Here are some examples:

find / -name '*' -print -depth -type d

-- which prints only directories

find / -name '*' -print -depth -type f

-- which finds only files

find / -name '*' -print -depth -type l

-- finds symbolic links only

If you only want to search say, on the top-level directory, and not traverse any lower, then you can use the:

-maxdepth {number}

switch. For example, if you only wanted to search for directories in your $(pwd) (current working directory), you can do:

find / -name '*' -type d -maxdepth 1 -print

Which on my computer (laptop) returns the following:


The number indicates the number of subdirectories that you wish to descend during your search.
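A scratch tree makes the depth cut-off visible (the paths below are invented):

```shell
# One directory at depth 1 and another nested at depth 2
rm -rf /tmp/depthdemo && mkdir -p /tmp/depthdemo/top/inner
# -maxdepth 1 lists the start directory and its immediate subdirectories
find /tmp/depthdemo -maxdepth 1 -type d -print | sort
```

The nested inner directory is never listed, because it sits below the depth limit.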

But the fun doesn't stop just with this mini-article. find has a whole host of other options many of which I cannot ever see the need for, but that's only because I have a limited use for it.....

Definitely check out the command:

man find

Interview: John M. Fisk

[ Yep! It is the same John Fisk that started off this magazine, and the Weekend Mechanic series. I was really pleased when John sent me an e-mail out of the blue saying he had read my article :-) So, I post a transcript of our conversation here -- Thomas Adam ]

Dear Thomas,

I want to say thanks for keeping up the "Weekend Mechanic" column in the 
LG.  I have to admit that I've had little time for pleasure reading (and 
much less for writing) these past several years.  On a whim, I started 
reading the latest LG (after seeing an announcement for it on the 
site) and noticed the WM column was still there.  I'm absolutely delighted 
that you're keeping it going and wish you the very best.

Trust your end-of-semester exams go well.  Have a great summer 'linuxing.



p.s., kudos for providing "install from source" instructions for squid.  I 
suppose that a growing number of users are simply dependent on rpm or deb 
binaries (and there are good reasons for using these) but I still tend to 
"roll my own" from source and make a package out of it when I'm not feeling 
so lazy :-)
Hello. I must say that I feel *very* honoured to have
received an e-mail from you -- especially as you
founded the Linux Gazette :-) You have no idea just
how much I valued your efforts way back in 1996. Had
you any idea that the LG would be as popular as it is

Absolutely not.  I started getting interested in Unix/Linux during the 
summer of 1994.  I had just switched from being a general surgery resident 
at a very busy tertiary care hospital to working as a research assistant in 
the lab of one of the hepatobiliary surgeons.  I managed to get a dial-up 
account on the university's VAX machine (2400 baud :-) and started using 
gopher.  Somehow, I ran across information on Linux and decided to give it 
a try since I was interested in doing a medical informatics fellowship.

It took countless days to download the installation floppy set from TAMU 
(Texas A&M Univ.).  I had a 2 MB limit on my shell account so I would ftp 
an image to the account and then turn around and transfer it to my local 
box via kermit.  TAMU was the first distribution I ever tried.  Almost 
immediately, I decided to give Slackware a try -- it was the "new kid on 
the block" and was supposed to be so much better than SLS.  That was 
August, 1994.

After playing with Linux for a year or so I decided that I wanted to learn 
how to write HTML and so the Linux Gazette was born out of similar 
"itches":  learning Linux and learning HTML.  I absolutely loved Linux, and 
do so to this day, so it was easy to come up with content.  I just wrote 
about the things I loved and found exciting.  I had no earthly notion that 
it would be that popular.  I had a good deal more time on my hands back 
then and so I was able to do almost everything -- write columns, coordinate 
inclusions by others, format the entire batch so that the HTML was at least 
reasonably "legal", etc.

About a year later (issue 8) I was in over my head and was now back in 
school full time working on a computer science degree (actually, I was a 
non-degree seeking graduate student, but I took the entire computer science 
curriculum and enough math courses for a math minor).  Phil Hughes from the 
Linux Journal got in touch with me.  He was interested in an online 
magazine of this type and offered to take over the administrative 
work.  So, I was terribly relieved to turn the reins over to him and 
continue writing.
I started reviving your column several months ago.
Although I started submitting articles once a
month, they have been more intermittent of late, due
to University work getting in the way :-(

Brother, I know that feeling... :-)
Incidentally, I hope you didn't mind me, re-using your
article name, and images. I had tried to get in
contact with you, to ask your permission, but it seems
that you don't mind :-)

Not in the least.  I'm absolutely delighted that you've done this and wish 
you the very best.
If time permits, you should start reading the LG. It
would be nice, if you could send an e-mail to the
editor :-) Just to say, hi. I'm sure that'll cause 
quite a stir.....especially with Jim Dennis :-) 

I'll do that.
What are you doing these days??? Your last ever
article said that you'd finished your degree
(congratulations) and was going to work for a medical
centre?? Is this still the case?? How is your wife??

At the moment, I'm within a couple years of finally finishing up my medical 
training!  I went on to do a brief medical informatics fellowship under 
Drs. Randolph Miller and William Stead at the Vanderbilt University Medical 
Center and then decided to finish my formal medical training in 
Pathology.  I matched at Yale (here in New Haven, Connecticut) and have 
completed 2 years of Anatomic Pathology training.  This year, I was able to 
take a year off to do some medical informatics research with Dr. Prakash 
Nadkarni.  We've just finished writing our first paper (in information 
retrieval) and I'm working on two additional projects.  I start back "on 
the wards" as a Clinical Pathology (AKA, Laboratory Medicine) resident on 
July 1.

Life has gotten rather busy of late.  My wife and I adopted a little girl 
from China in August, 2000.  She's a cutie pie, but definitely takes a good 
deal of what little "free time" we had left :-).  Any more, I try to keep 
up with things in the Linux Community but I've had no time to write.

What distribution are you using?
I'm using SuSE 7.1, soon to switch back to using Debian

I'm still using Slackware :-).  You mentioned that you've been using Linux 
for around six years.  That's long enough that you've probably given a 
number of distributions a whirl.  I have to say that I really like 
Mandrake, and I've run Debian for short time.  Eventually, however, 
managing the *.deb's and *.rpm's becomes a headache and I start fighting 
with the package manager.  In time, I just get disgusted and go back to 
Slackware.  It's stable, Patrick V. does a nice job of keeping current 
without pushing too far toward the bleeding edge.  And I still compile 
nearly everything from scratch.
Thanks again John :-)
Keep in touch,

You too.  Best wishes,


John M. Fisk, M.D.
Postdoctoral Research Associate, The Yale Center for Medical Informatics
Yale University School of Medicine, 333 Cedar Street, PO Box 208009, New 
Haven, CT  06520-8009
phone: (203) 764-8132

Closing Time

Oh well, until next month -- take care.

Send Your Comments

Any comments, suggestions, ideas, etc can be mailed to me by clicking the e-mail address link below:

Thomas Adam

My name is Thomas Adam. I am 18, and am currently studying for A-Levels (=university entrance exam). I live on a small farm, in the county of Dorset in England. I am a massive Linux enthusiast, and help with linux proxy issues while I am at school. I have been using Linux now for about six years. When not using Linux, I play the piano, and enjoy walking and cycling.

Copyright © 2002, Thomas Adam.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Introduction to Programming Ada

By Thomas Adam


I'm quite old-fashioned when it comes to computers. I am one of those people who prefer working at a command-line driven interface rather than a GUI. So it should not come as a shock to you that many of the programming languages I have experimented with are also based on textual input/output. Ada, along with Perl, Bash, Sed, Awk, C, etc., is no exception.

Over the years, there have been quite a few programming languages mentioned in the Linux Gazette. Ben Okopnik has done two very good tutorials, on Perl and on Bash, and other people have contributed descriptions of other languages, such as Smalltalk, C++ and Python. Over the next couple of months, I shall be writing a series of articles on programming in Ada.

What is Ada?

Glad you asked :-) Ada was originally a programming language developed for the US government (DoD). The standard was originally known as Ada83, but this is now obsolete, as it was "overhauled" and re-born as Ada95. This is now the preferred standard and implementation of the Ada programming language.

In 1983, Ada was standardised by ANSI. Thus it went through all the official motions, and in that year the first edition was released. Four years later, in 1987, ISO released an equivalent standard. At that time, though, the idea of so-called OOP (Object-Oriented Programming) was a concept that had not really been considered.

Ada, however, was not designed by a committee. The original design came from Jean Ichbiah, who won a language-design competition. Then, in 1995, Tucker Taft led a small group of developers and Ada95 was born. Unlike the previous version (Ada83), the implementation of Ada95 (or Ada9X, as it was known during development) underwent a public "benchmark" test, whereby testers of the language gave their feedback and suggestions to make the syntactical and lexicographical layout more efficient.

The name Ada honours Ada Lovelace (1815-1852), who is considered to be the world's first programmer. Ada is used in all sorts of situations, and since it is a concurrent programming language, it is most commonly used in embedded systems. Ada has been used in some of the following:

The list however is by no means exhaustive :-)

Ada Compilers

Unlike scripting languages (Perl, Bash, Python, tcsh, etc.), Ada, like C, is compiled rather than interpreted. This means that the person who is going to run the program does not need an interpreter installed to use it. Ada programs are therefore standalone, without requiring any Ada packages to be installed. [Unless you have used pragmas to interface with other languages, like C, in which case you might have libraries, but more on that later -- TA]

The Ada compiler that I recommend is called GNAT, which stands for GNU NYU (New York University) Ada Translator. It is free (GNU license :-), and there is a wealth of information on it. It is based on the gcc compiler, with the Ada front-end bundled in.

It is available from the following website, which then has a link to the GNAT compiler:

A word of caution here. I recommend that you download a pre-compiled binary version of GNAT, and use the alien package if need be to convert it to .deb, .rpm, .tgz, etc. The reason I say this is that you will need an Ada-capable version of gcc (often called gnatcc) to bootstrap the compiler. If this is the first time you are installing GNAT, then compilation from source code will not be possible.
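As a quick sanity check before installing, you can test whether a GNAT toolchain is already on your PATH. A minimal sketch (the package filename is the one from this article; the rpm and alien steps are shown commented out, since both need root):

```shell
# Check for an existing GNAT toolchain first:
if command -v gnatmake >/dev/null 2>&1; then
    gnat_status="already installed"
else
    gnat_status="not found -- install the pre-compiled binary first"
fi
echo "GNAT: $gnat_status"

# On an RPM-based system (needs root):
#   rpm -Uvh GNAT-3.13p-7.rpm
# On a Debian system, convert the RPM with alien first (needs root):
#   alien --to-deb GNAT-3.13p-7.rpm
```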

That said, you can then go ahead and install the package once you have downloaded it. With the RPM version of GNAT, there should be one single RPM, "GNAT-3.13p-7.rpm", which contains everything you need to start programming in this wonderful language :-).

Ada IDEs

Before we start our first program in Ada, I thought it would be good to make you aware of some of the IDEs (Integrated Development Environments). These are programs which help you to program in a given language by offering features such as:

The two that I would recommend to you are:-

TIA (TIny Ada) -- a console-based IDE, written in Ada, built around the use of GNAT

GRASP -- an X11 IDE which supports among other languages, Ada!

For all you Emacs fans, there is an extension called Glade which is installed as part of the main GNAT distribution. This is an Emacs extension which supports the GNAT compiler, syntax highlighting, etc. More information on that can be found at the GNU Ada website.

You do not have to use an IDE at all to be able to use GNAT. I actually don't bother, and instead use jed if I am at the console (although this does not yet support Ada syntax highlighting) and NEdit, which does support Ada syntax highlighting, if I am in X11 :-)

The Features of Ada

Ada95 has been greatly enhanced over its predecessor, Ada83. The biggest improvement has been object orientation, which many people will find useful. Some of the features that Ada has are:

In addition to the above, there are also:

And many more....

Hello World!

Now it's time to write our first Ada program. In time-honoured tradition, we are going to start by writing a Hello World example. Open up a text editor, and type in the following:

  with text_io;
  use text_io;

  procedure hello_world is
  begin

    put("Hello World!");

  end hello_world;

Easy, isn't it :-). Before we can run the program, we have to save it, and it has to be saved with the correct suffix. GNU/Linux doesn't require any suffix (file extension) as a rule, but it is essential when programming in Ada and using the GNAT compiler. The valid extensions are:

.adb -- Ada body files (programs)
.ads -- Ada specification files (packages)
When writing anything other than a package (which we won't be doing for some time yet -- I can assure you) :-) you should append a ".adb" extension to your filename. This is so that the compiler knows that the file it is compiling is a program and not a package specification!

So, save your file as hello_world.adb

Now we are ready to start to compile / build the program. This has to be done so that we can run it. You cannot run an Ada program until it has been compiled and built.

Change to the directory in which you have just saved the file, and issue the command:

gnatmake hello_world.adb

This will compile, bind and link your Ada code into an executable program.
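The whole edit/compile cycle can be captured in a short shell session. This sketch recreates the source file with a heredoc; the gnatmake and run steps are commented out because they require GNAT to be installed:

```shell
# Recreate hello_world.adb, then build and run it with GNAT.
cat > hello_world.adb <<'EOF'
with text_io;
use text_io;

procedure hello_world is
begin
   put("Hello World!");
end hello_world;
EOF

# These two steps need GNAT on the PATH:
#   gnatmake hello_world.adb    # compiles, binds and links
#   ./hello_world               # prints: Hello World!
```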

Now if you type in:

./hello_world
the response:

Hello World!

is output to the screen, and the program exits.

You should also have noticed that as you issued the command, the following output was produced:

gnatgcc -c hello_world.adb
gnatbind -x hello_world.ali
gnatlink hello_world.ali

You could, if you wished, type each of the above commands in turn to compile, bind and link your program, respectively. Luckily gnatmake provides a nice automation of this :-). If you now look in your directory, along with the main program, you'll find that GNAT has created other files too, namely:

.ali files are GNAT link files that contain information about debugging and linking for the main program

.o files are object files which can be used in conjunction with the program-debugger: gdb.

In short, unless you plan to debug your program, you can delete these files.

Explanation: Hello World

In Perl, you can issue a command such as print("Hello"); and that can be the only line in your program (excluding the she-bang line), and it will run.

Ada, however, has to be told exactly which packages it is to use before it can perform even the simplest of commands, like echoing statements to the VDU. A package is a collection of functions and procedures that perform specific tasks. If you do not explicitly declare these at the start of the program, GNAT will bomb out immediately when it comes to compile your program.

Therefore, if we wish to read and write I/O (Input/Output) to a screen terminal, this has to be stated. All I/O functions are found within the package text_io, and the first two lines within our hello_world example are crucial:

  with text_io;
  use text_io;

The with statement in Ada indicates that we will be requiring the use of the named package, in this case text_io. If more than one package is required, each package name is separated by a comma (,). When we have finished, we must append a semi-colon (;) to the end of the line, similar to Perl. The with statement is mandatory and must ALWAYS be present at the start of your program in order for it to work.

The package text_io, as I have already stated, provides I/O functions and procedures. This involves printing messages to the screen, allowing user input to be entered, etc. It is a package that is used in virtually every program you will ever write in Ada.

The use statement must come only after the with statement has been made. It allows unqualified references to be made to procedures and functions from other packages. Without it, each procedure or function call must be prefixed with the name of the package it belongs to, followed by a period (full stop). For example, below is what the hello_world program would look like without the use statement.

with text_io;
  procedure hello_world is
  begin

    text_io.put("Hello World!");

  end hello_world;

You can see how much more typing is required without the use statement. When more than one package is used that might have the same procedure or function names, the compiler can usually tell which package you are referring to, based on the parameters passed.

The third line:

procedure hello_world is  

declares that we are writing a new procedure with the name hello_world. The keyword is tells us that we are about to start the declarative section of the procedure; more on that later.

The keyword

begin
then tells us that we are going to start the executable part of the procedure -- i.e. where all the statements will appear and be executed, which in this case is:

put("Hello World!");

which calls the procedure put from the package text_io to print the message Hello World! on the screen.

The last line:

end hello_world;

simply ends the named procedure.

In short, the basic structure for an Ada program looks like the following:

with text_io;
use text_io;

procedure program_name is 

      [ declarative part here ]

begin

      [ executable section here ]

end program_name;

Also within the package text_io are commands such as:

put
put_line
new_line
Plus many others...

put does what we have already seen.
put_line does the same as put, but then moves to a new line afterwards.
new_line is a command issued on its own, which starts a new line. If you use it, make sure that you put a semicolon at the end of it, like:

new_line;
In fact, that statement about the semicolon (;) goes for each command that you make in Ada.
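To see the three procedures side by side, here is a sketch (hello2 is just a made-up name; building it requires GNAT, so the compile step is commented out):

```shell
# Write a small program exercising put, put_line and new_line:
cat > hello2.adb <<'EOF'
with text_io;
use text_io;

procedure hello2 is
begin
   put_line("Hello World!");  -- prints the text, then moves to a new line
   new_line;                  -- outputs a blank line
   put("Goodbye!");           -- prints with no newline at all
end hello2;
EOF

# Build and run (requires GNAT):
#   gnatmake hello2.adb && ./hello2
```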

Next month, we will be looking at:


Well, that is all for this month. I'm sorry if it seems like I'm not explaining many things all in one go, but trying to explain anything more at this point would, I think, be overload. So, I am going to leave you with a few exercises to try.....

1. Print your name on the screen

2. Print your address on the screen, using only put and new_line

3. Repeat exercise 2, this time with put_line

If you submit them to me, I will print them in my next installment of this article!

As with all of my articles, if you have any questions, suggestions, rants or raves (hopefully not complaints :-) drop me a line!!

with text_io, ada.integer_text_io;
use text_io, ada.integer_text_io;

procedure happy_programming is

loop_number : integer := 0;

begin

  while loop_number /= 10 loop
    loop_number := loop_number + 1;
    put("Happy Programming in Ada");
  end loop;

end happy_programming; 

Thomas Adam

My name is Thomas Adam. I am 18, and am currently studying for A-Levels (=university entrance exam). I live on a small farm, in the county of Dorset in England. I am a massive Linux enthusiast, and help with linux proxy issues while I am at school. I have been using Linux now for about six years. When not using Linux, I play the piano, and enjoy walking and cycling.

Copyright © 2002, Thomas Adam.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Office Linux: Ideas for a Desktop Distribution

By Matthias Arndt


I remember one of the meetings of my LUG a few weeks ago. We argued about Linux and its readiness for the desktop. We all had the same opinion that Linux is ready for the desktop, at least where software is concerned. We discussed other related things but this is the thing that made me think about distributions.

In this article I want to propose the creation of a special desktop distribution for end users, especially those who sit and work in an office all day long, like secretaries.

Why another distribution of GNU/Linux?

To summarize my thoughts:

Office Linux Manifest

Office Linux should not be one of those bloated six-CD distributions, but a simple distribution that fits on one CD and brings all the applications and tools needed to create a productive environment using GNU/Linux.

Office Linux should

Office Linux could consist only of free software but this is not a requirement.

The bare system

Office Linux should come with only a proven and stable version of the Linux kernel. The kernel should be compiled to run on standard hardware out of the box, supporting typical office hardware such as networking and printing. Multimedia support would be nice but is not required.

The standard set of GNU tools, like Bash, sed, awk and find, should come with Office Linux. However, Office Linux should not present the user or admin with a huge list of tools to be installed; installing a standard subset should be enough.

As Office Linux puts its emphasis on secretaries and other office personnel, it should not come with many applications for the console. One or two proven editors should be enough.

Desktop environment

Office Linux should be easy to use. Therefore a proven, stable and reasonably fast desktop environment is required. The K Desktop Environment (KDE) could meet this requirement, although it is not the fastest possible solution.

Pro KDE:
  • easy to use
  • known and well supported in the GNU/Linux community
  • can be configured to feel like M$ Windows ™
  • desktop environment with file manager and panel
  • easy to configure by the end user
  • comes fully internationalized

Contra KDE:
  • needs considerable time to launch
  • huge memory footprint, both in RAM and on hard disk

Personally I do not like KDE that much but I recommend it for Office Linux.

Office productivity

This is a very important field, and Office Linux should concentrate on it, as its name suggests. A reliable and commonly accepted office suite like StarOffice or OpenOffice should come with it.

Compatibility with M$ Office ™ is required to allow the intended user audience to import and reuse their old files. This compatibility should be achieved through the office suite itself rather than through external tools -- not only to provide a GUI, but to make it easier to use. A worst-case scenario might wrap command-line tools in a GUI shell.

I do not recommend KOffice for Office Linux, simply because it will meet more resistance from the intended audience than suites that resemble M$ Office ™.

The distribution should provide reliable PDF readers and converters. Perhaps an installable PDF printer would be a nice idea; users could then print PDFs from any application.

The printing subsystem should be able to communicate with existing network printers of any kind, including SAMBA printers and standard Unix printers. The subsystem should be easy to install and use, and it should follow Unix convention by resembling the BSD printing system. CUPS would be a fine solution, and I suggest using it in Office Linux.


A standard compliant Internet suite is another main part of Office Linux.

Although there are many fine programs out there, Office Linux should provide only one of them, preconfigured and working: a stable Mozilla release in a complete install, with all needed plugins such as Macromedia Flash and a Java VM.

A security tweaked default configuration should be included.

Help System

To be easy to use Office Linux has to include a help system that is easy to use and navigate.

The help system should provide

Markup in HTML is recommended for the Help System.


I think the creation of a distribution based on these ideas is entirely possible. It will require some work and patience, but it shouldn't be impossible.

A distribution providing only a few but proven components might be as easy to use as M$ Windows ™. And then GNU/Linux might be ready for the desktop. It is a matter of time, hard work and patience but it is possible.

Matthias Arndt

I'm a Linux enthusiast from northern Germany. I like plain old fifties rock'n'roll music, writing stories and publishing in the Linux Gazette, of course. Currently I'm studying computer science in conjunction with economics.

Copyright © 2002, Matthias Arndt.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Playing DVDs on Linux

By Tunji Durodola

Hello dear readers.

My name is Tunji Durodola and I write from Nigeria, West Africa, the largest collection of black peoples on the face of the planet.

The purpose of this article is to give an insight into how to get Linux to play DVDs using one or more of the now readily available tools on the web. You should have basic DVD playing in a matter of minutes; a more detailed section will follow later.


The key to watching DVDs lies in the ability of hardware or software to decode and read encrypted movies. DVDs are encrypted with a special algorithm called the Content Scrambling System, or CSS, to prevent illegal copying of the material contained on the disc. The algorithm is not a secret, but to get a copy of it to put in your device (hardware or software), you have to pay a license fee and sign a mean set of agreements to prevent you from disclosing the algorithm to anyone.

Each DVD has its own key, rather akin to each door having a separate key to unlock it. In Windows players, the key itself is kept secret.

All commercial Windows DVD players have the algorithm built in, but they have paid for it, and as such charge for their software -- or the cost is embedded in the price of the DVD drive you purchase, so in effect you are paying a fee for the "bundled" software player.

The whole philosophy of Linux is freedom, which would be defeated if you have to pay for a Linux DVD player. Some chaps tried to get the algorithm from the owners, but were told they had to go through the same process as the Windows people.

For those earthlings who haven't got a clue as to what DeCSS is, I'll give a brief summary.

A few years ago, a young lad who wanted to watch his legally purchased DVDs in Linux thought to develop a player for Linux, since none existed at the time. He stumbled on a flaw in a now-defunct Windows player called Xing, which had the unfortunate habit of leaving the key in the program itself. He then used his knowledge of maths to reverse-engineer the code and recover the algorithm.

The software he wrote to do that job was called DeCSS. He then teamed up with a few friends collectively called Masters Of Reverse Engineering (MORE) to develop a DVD ripper on Windows, and a small set of Linux-based utilities to view the un-encrypted files.

No fee was charged; the software was posted on the 'net for anyone with a similar desire to view their DVDs in Linux. The MPAA found out and subsequently obtained a court order forbidding any US site from hosting DeCSS. That of course sparked worldwide interest in Linux-based DVD players. The case is still in court in the Land of the Free. For more info please click here.

Today, there are other software decryptors available for Linux which do not use the original DeCSS code, but do the same job, and are not subject to any litigation. We shall focus on these.

The Goods!

Just to get you warm, I'll tell you what system I've got in my crib.



CPU: Pentium III 750 (old, I know, I'm planning for an Athlon XP 1900+)

RAM: 1GB PC 133 SDRAM (hey, ram was cheeeep when I bought)

BOARD: MSI BX Master, 4 IDE Slots (2 on an on-board Promise Controller)

Case: ATX Extended Tower with 9 5.25 Slots

Sound: SoundBlaster Live! 5.1 Platinum (lovely card!)


HDD: 2x WD400 7200 RPM, 40GB drives, 2MB Cache (I'm showing off here)

Speakers: Front: 80W Kenwood speakers, driven by a Technics 80W Power Amp connected directly to the card

Rear: Some mid-budget 20W RMS computer speakers

Center: As Above

Sub: A no name 40W Sub in a wooden enclosure

Monitor: 18" NEC TFT Flat Panel



OS: SuSE Linux 8.0 Professional

Sound: ALSA 0.9.0rc2, running the emu10k1 SoundBlaster driver. This is the only audio driver for Linux capable of using the Surround capabilities of the SB Live! 5.1. Even the Windows drivers and software don't have half the features of this driver. The Linux driver can handle up to 8 such cards on one system, whereas Windows can't handle two (don't bother, I've tried it). Hats off to the ALSA team!


1.     Xine 0.9.12 (Complete with its plugin capabilities makes Xine hard to beat)

2.     Ogle 0.8.2 (Fast and quick DVD-only player that supports DVD menus)

3.     Mplayer 0.90 (Mainly a console-based player with an unusual assortment of options. Mplayer will play almost any type of file format available today, including VOB, VIVO, ASF/WMV, QT/MOV, Xanim, AVI, DiVX, VCD, SVCD and of course DVDs. It has a GUI option with skins.)

Both Xine and MPlayer now offer FULL multi-channel (5.1) surround audio.

To compile mplayer, you will need one of the following combinations:

libdvdread 0.8 and libcss (not libdvdcss)

or

libdvdread 0.9 and libdvdcss 0.0.3 (not libcss, NOR libdvdcss 1.0)

all may be obtained at

The libdvdcss is used to decrypt the DVD and libdvdread to read its contents, and for chapter support.

I recommend you use ALSA 0.9.0rc2 for audio if you have a modern sound card, such as the SoundBlaster Live! 5.1 series. The Audigy range may work, but alas, I don't have one :-(

Please read the INSTALL and README files in all packages.

Step 1



compile and install it with "make && make install && ldconfig"

Step 2


compile and install as above

Step 3

mplayer 0.90

./configure --help

make && make install

It should then install itself in /usr/local/bin as mplayer

Step 4

If /dev/hdc is your DVD drive, make a link: ln -s /dev/hdc /dev/dvd

(only needs to be done once)
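If you want to see what the link does before touching /dev, here is the same idea rehearsed in a scratch directory (creating the real /dev/dvd symlink requires root):

```shell
# Rehearse the symlink in a scratch directory instead of /dev:
mkdir -p fakedev
touch fakedev/hdc        # stands in for the real DVD device node

# Point 'dvd' at 'hdc', just as 'ln -s /dev/hdc /dev/dvd' would:
ln -sf hdc fakedev/dvd
readlink fakedev/dvd     # shows the link target: hdc
```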

type mplayer -dvd 1 -ao oss

The software should give some info such as the encryption key for the DVD, and then start to play the "encrypted" movie.

There are a gazillion options available, too numerous to dig into here, but multichannel audio is possible with -channels x, where x is 2,4 or 6 speakers. Remember, it is pointless if you have a basic 2-channel card. These multichannel cards are affordable these days so spoil yourself and get one!

Other useful options:

-title x        select the DVD title

-chapter y      select a chapter within the title specified above

-ss hh:mm:ss    jump to a specific time point

-vcd x          play a VCD track

-channels 4     play through 4 discrete channels (front & rear)

On-screen display is also available, but not regular DVD subtitles.

Mplayer has rapidly become the most widely downloaded Linux software by a wide margin (see if you don't believe me), but it is not as easy to set up as Xine if you don't like compiling apps.

Here is how to get Xine up and running in 5 minutes flat.

Step 1

download the latest xine releases from

You will need the following RPMs if you do not feel like compiling. x86 refers to your type of Pentium processor; i686 for Pentium III or higher, i586 for Pentium and AMD K6

There are others, but these are the bare essentials.

Step 2

Copy all the RPMs into an empty folder and from there, logged in as root, run the following:

rpm -Uvh xine*.rpm

If you are averse to using the console, call up kpackage or gnorpm and install them in the GUI instead.

Step 3

In the GUI, open up a console (purely to see the output from the player; once you are comfortable with the settings, you won't need the console), and type the following (mind the case sensitivity of each letter): xine -pq -A oss -V xv -u0 dvdnav://

It may look cryptic, but it is easy to explain. The purpose of the switches is to set defaults for audio and video in the config file, which is stored in ".xine/config" in your home folder.

-pq play immediately, and quit when done

-A oss use oss as the audio driver

-V xv use xv as the video driver

-u0 select the first subtitle (usually English, u1 refers to French, etc.)

dvdnav:// is the optional plugin that actually plays the DVD. It also has menu functionality and allows you to jump from chapter to chapter with 9/3 on the numeric keypad.

Type "xine --help" or man xine for full details.

As stated earlier, the skin may be changed in the menu. All settings are also possible in the menu including multichannel audio.

Xine plays a whole range of media: DVDs, VCDs, CDs, ogg, mp3, wav, DiVX... on and on and on.







xine dvdnav plugin (to decrypt DVDs, with DVD menus):

I hope to keep you posted with a more detailed paper sometime soon, with tips and tricks.



Tunji Durodola

Tunji is a Lagos-based computer consultant specialising in Linux solutions.

Copyright © 2002, Tunji Durodola.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Is Your Memory Not What It Used To Be?

By Madhu M Kurup


The intent of this article is to provide an understanding of the memory leak detection and profiling tools currently available. It also aims to give you enough information to choose between the different tools for your needs.

Leaks and Corruption

We are talking software here, not plumbing. And yes, any fairly large, non-trivial program is bound to have a problem with memory corruption and/or leaks.

Where do problems occur?

First, leaks and such memory problems do not occur in some languages. These languages take the position that memory management is so important that it should never be handled by the users of the language; it is better handled by the language designers. Examples of such languages are Perl, Java and so on.

However, in some other languages (notably C and C++) the language designers have felt that memory management is so important that it can only be taken care of by the users of the language. A leak is said to occur when you dynamically allocate memory and then forget to return it. In addition to leaks, other memory problems such as buffer overflows and dangling pointers also occur when programmers manage memory themselves. These problems are caused where there is a mismatch between what the program (and, by extension, the programmer) believes the state of memory to be, as opposed to what it really is.

What are the problems?

In order for programs to be able to deal with data whose size is not known at compile time, the program may need to request memory from the runtime environment (operating system). However, having obtained a chunk of memory, it may be possible that the program does not return it to the environment after use. An even more severe condition results when the address of the block that was obtained is lost, which means that it is no longer possible to identify that allocated memory. Other problems include trying to access memory after it has been returned (dangling pointers), and trying to access more memory than was originally requested (buffer overflow).
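A minimal C sketch of the leak case. The snippet below just writes the program to a file; compile it with any C compiler and you have a program that repeatedly requests memory and never returns it:

```shell
# A deliberately leaky C program: 100 allocations, zero frees.
cat > leak.c <<'EOF'
#include <stdlib.h>

int main(void)
{
    int i;
    for (i = 0; i < 100; i++) {
        char *buf = malloc(1024);  /* request memory from the runtime */
        if (buf == NULL)
            return 1;
        buf[0] = 'x';              /* use it...                        */
    }                              /* ...but never free() it: a leak   */
    return 0;
}
EOF

# cc leak.c -o leak && ./leak   # runs fine, yet every byte is lost
```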

Why should these problems bother me?

Leaks may not be a problem for short-lived programs that finish their work quickly. Unfortunately, many programs are designed to run without termination for a long period. A good example would be the Apache webserver that is currently serving you this web page. In such a situation, a malfunctioning leaky program could keep requesting memory from the system and not return it. Eventually this would lead to the system running out of memory, and all programs running on that machine would suffer. This is obviously not a good thing. In addition to making a program require more memory, leaks can also make a program sluggish: the speed at which the program is context-switched in and out can decrease as the memory load increases. While not as severe as causing the machine to crash, an excessive memory load could cause the machine to thrash, swapping data back and forth.

Dangling pointers can result in subtle corruption and bugs that are extremely unusual, obscure and hard to solve. Buffer overflows are probably the most dangerous of the three forms of memory problems; they lead to most of the security exploits that you read about [SEC]. In addition to the problems described above, the same memory chunk may be returned to the system multiple times, which also indicates a programming error. A programmer may also wish to see how memory requests are made over the lifetime of a program in order to find and fix bugs.

Combating these problems

There are some run-time mechanisms to combat memory problems. Leaks can be worked around by periodically stopping and restarting the offending program [OOM]. Dangling pointer bugs can be made repeatable by zeroing out all memory returned to the operating system. Buffer overflows have a variety of solutions, some of which are described in more detail here.

Typically, the overhead of combating these problems at runtime or late in the development cycle is so high that finding and fixing them at the program level is often the more optimal solution.

Open Source

GCC-based alternatives

The gcc toolset now includes a garbage collector which facilitates the easy detection and elimination of many memory problems. Note that while this can be used to detect leaks, the primary reason for creating this was to implement a good garbage collector[GC]. This work is currently being led by Hans-J. Boehm at HP. 


The technology used here is the Boehm-Demers-Weiser technique for keeping track of allocated memory. Memory is allocated using the collector's versions of the standard memory allocation functions. The program is then compiled with these functions, and when it is executed the collector can analyze the behavior of the program. This algorithm is fairly well known and well understood. It should not cause any problems and/or interfere with programs. It can be made thread-safe and can even scale onto a multiprocessor system.


Good performance, with a reduction in speed in line with expectations. The code is extremely portable and is also available directly with gcc. The version shipped with gcc is slightly older, but can be upgraded.

There is no interface, however; it is difficult to use and requires much effort to be useful. Existing systems may not have this compiler configuration and may require some additional work to get it going. In addition, in order for the calls to be trapped, all memory calls (such as malloc() and free()) have to be replaced with equivalents provided by the garbage collector. One could use a macro, but that is still not very flexible. This approach also implicitly requires source code for all pieces that require memory profiling, with the ability to shift from the real functions to those provided.
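As a sketch of what switching allocators looks like in practice (the file name prog.c and the macro trick are illustrative only; GC_MALLOC is the allocation entry point declared in the collector's gc.h header):

```shell
# Hypothetical build against the Boehm-Demers-Weiser collector,
# assuming the libgc headers and library are installed:
#
#   cc prog.c -lgc -o prog
#
# Inside prog.c, allocations go through the collector instead of malloc():
#
#   #include <gc.h>
#   char *buf = GC_MALLOC(1024);   /* tracked; free() is unnecessary */
#
# A crude macro can redirect existing code, though it is inflexible:
#
#   #define malloc(n) GC_MALLOC(n)
```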


If you need a solution across multiple platforms (architectures, operating systems) where you have control over all relevant source, this could be it.


Memprof is an attractive, easy-to-use package created by Owen Taylor of Red Hat. This tool is a nice, clean GNOME front-end to the Boehm-Demers-Weiser garbage collector.


At the heart of the profiling, memprof is no different from the toolset described above. However, it implements this functionality by trapping all memory requests from the program and redirecting them at runtime to the garbage collector. While not as capable as the gcc alternative with threads and multiprocessors, the program can be asked to follow forks as they happen.


The performance of this tool is pretty good. The GUI is well designed, responsive and informative. The tool works directly with executables, without any changes needed to the source. It also graphically displays the memory profile as the program executes, which helps in understanding the memory requirements of the program during its lifetime.

This tool is currently available only for the x86 and PPC architectures on Linux; if you need help on other platforms, you will need to look elsewhere. It is not a plain GTK application: it needs the full-blown GNOME environment, which may not be feasible everywhere. Finally, development on this tool appears to have stalled (it has been at version 0.4.1 for a while). While it is possible that it does what it is required to do well, it does not seem that this tool will do anything more than leak detection.


If you like GUI tools and don't mind GNOME and Linux, this is a tool for you.


Valgrind is a program that attempts to solve a whole slew of memory problems, leaks being just one of them. This tool is the product of Julian Seward (of bzip2 and cacheprof fame). It terms itself "an open source memory debugger for x86 linux" and it certainly fits that bill. In addition, it can profile the usage of the CPU cache, something that is fairly unusual.


The technology used in this program is fairly complex and well documented. Each byte of memory allocated by the program is tracked by nine status bits, which are then used for housekeeping purposes to identify what is going on. At the cost of tremendously increasing the memory load of an executing program, this tool enables a much greater set of checks. As all reads and writes are intercepted, the CPU's various cache levels can also be profiled.


The tool was the slowest of the three detailed here, for obvious reasons. However, in exchange for the reduction in speed, this tool provides a wealth of information and is probably the most detailed of the three. In addition to the usual suspects, it can identify a variety of other memory problems and even some POSIX pthread issues. Cache information is probably overkill for most applications, but it is an interesting way to look at the performance of an application. The biggest plus for Valgrind is that it is under rapid development, with a pro-active developer and an active community. In fact, the web page of Valgrind proclaims the following from the author: "If you have problems with Valgrind, don't suffer in silence. Mail me."
    The tool, however, is very x86 specific; portability is limited to x86 Linux. The interface is purely command-line driven and, while usable, sometimes gives you more information than is useful. The tool also works directly with binaries, so while recompiles are not required, it takes diligence to go through its output to find what you are looking for. You can suppress memory profiling for various system libraries by creating suppression files, but writing these files is not easy. In addition, threading support is not complete, although the tool has been used on Mozilla, OpenOffice and other large threaded programs. If this tool had a GUI front end, it would win hands down.


If you are on x86, know your code well and do not mind a CLI interface, this program will take you to another level.

Other Open Source tools

Before I get sent to the stake for not having mentioned your favorite memory tool, I must confess that few compare in completeness to these three in terms of the data that they provide. A more comprehensive list of leak detection tools is available here.


These tools are mentioned here only for completeness.


Purify, the big daddy of memory tools, does not work on Linux, so you can stop asking that question.


A latecomer to this arena, Geodesic is best known in the Linux community for their Mozilla demo, in which they use their tools to help find memory problems in the Mozilla codebase. How much use this has been to the Mozilla team is yet to be quantified, but their open-source friendliness can't hurt. Their product works for Solaris and Linux, with a fully functional trial, and it works on Windows as well.


A C++-specific but still fairly well-known tool, Parasoft's Insure++ is a fairly complete memory profiling / leak detection tool. In addition, it can find some C++-specific errors as well, so that can't hurt. It works with a variety of compilers and operating systems, and a free trial version is available too.

Miscellaneous Notes:

Secure Programming

Secure programming involves many components, but probably the most significant is the careful use of memory. More details are available here.

OOM killer

Some of the newer Linux kernels employ an algorithm known as the Out Of Memory (OOM) killer. This code is invoked when the kernel completely runs out of memory, at which point active programs / processes are chosen to be executed (as in killed, end_of_the_road, happy hunting grounds, etc.). More details are available here.

Garbage Collectors

One of the other reasons why garbage collection is not always the preferred solution is that it is genuinely hard to implement well. In particular, simple reference-counting collectors have severe problems with self-referential structures (i.e., structures that link back to themselves), as aptly described here.

Madhu Kurup

I'm a CS engineer from Bangalore, India and formerly of the ILUG Bangalore. I've been working and playing with Linux for a while; while programming is my first love, Linux comes a close second. I work in the Data Mining group at Yahoo! Inc., on algorithms, scalability and APIs. I moonlight on the Linux messenger client and dabble in various software projects when (if ever) I can find any free time.

And yes, if you want to know, I use C++, vi, mutt, Windowmaker and Mandrake; let the flame wars begin :)

Copyright © 2002, Madhu M Kurup.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Exploring Perl Modules - Part 1: On-The-Fly Graphics with GD

By Pradeep Padala

Welcome To "Exploring Perl Modules" !!!

Perl modules are considered one of the strongest reasons for Perl's success. They contain a lot of re-usable code and, of course, are free. This series is an attempt to tap that treasure trove. There are lots of tutorials and even books written on popular modules like CGI, DBI etc. For the less popular modules, users are left with documentation that is cryptic and sometimes incomplete.

I am starting a series of articles that will attempt to explain some of the less popular but useful modules. Over the last year, I came across and programmed with numerous Perl modules. I will explain these modules with useful examples drawn from my experience. We will take one module at a time and explore its various uses.

Who should be reading these

Well, you should know Perl. We won't be delving much into the basics of Perl; there is plenty of documentation, articles and books on it. Learning Perl is often recommended for beginners. Once you gain experience, you can try Programming Perl.

If you are an average Perl programmer and haven't used a lot of modules, this is the right place. Modules provide a great way to re-use code and write efficient and compact applications. In each article we will graduate from simple examples to complex ones, ending in a real-world application where appropriate.

Introduction to Modules

Modules provide an effective mechanism to import code and use it. The following line imports a module and makes its functions accessible.
    use module;
For example if you want to use GD, you would write
    use GD;

Finding and Installing Modules

Before we plunge into the details of programming, here are some instructions for finding and installing modules. We will be using various modules, and most of them are not installed by default. Some modules require libraries which may or may not have been installed. I will mention the things required whenever appropriate. Here are generic instructions for downloading and installing modules.

An easy way to install the module is by using the CPAN module. Run CPAN in interactive mode as

    perl -MCPAN -e shell

Then you can do various tasks like downloading, decompressing and installing modules. For example, for installing GD you can use

    install GD

If you are like me and are accustomed to the configure, make, make install method, here are the steps to install a module.

  • Find the module in CPAN's list of all modules.
  • Download the latest version of the module. For example, the latest GD module can be downloaded from
  • Unzip the module
        tar zxvf GD-1.40.tar.gz
  • Build the module
        perl Makefile.PL 
        perl Makefile.PL PREFIX=/my/perl/directory 
            (if you want to install in /my/perl/directory)
        make test (optional)
  • Install the module
        make install

Ready to Go ...

So you have installed your favourite module and are raring to learn. In this article we will explore the Perl GD module, which provides an interface to the GD graphics library. We will also be using the CGI module for the web interface. You don't need to know a great deal of CGI to understand this article; I will explain things where necessary.

Graphics with GD

Let's start the wheels with a simple and effective example

Text version of the file can be found here.
#!/usr/local/bin/perl -w
# Change above line to path to your perl binary

use GD;

# Create a new image
$im = new GD::Image(100,100);

# Allocate some colors
$white = $im->colorAllocate(255,255,255);
$black = $im->colorAllocate(0,0,0);
$red = $im->colorAllocate(255,0,0);
$blue = $im->colorAllocate(0,0,255);

# Make the background transparent and interlaced

# Put a black frame around the picture

# Draw a blue oval

# And fill it with red

# Open a file for writing 
open(PICTURE, ">picture.png") or die("Cannot open file for writing");

# Make sure we are writing to a binary stream
binmode PICTURE;

# Convert the image to PNG and print it to the file PICTURE
print PICTURE $im->png;
close PICTURE;

This is the example given in the GD man page, with a few modifications. It produces a small rectangle containing a red oval with a blue border. Let's dissect the program.

One of the first things you do with the GD library is create an image handle to work with. The line

    $im = new GD::Image($width, $height)

creates an image with the specified width and height. You can also create an image from an existing image, which is useful for manipulating existing images. We will see an example of this later in the article.

Next we need to allocate some colors. As you can guess, the RGB intensities need to be specified for initializing colors. Since we will be using lots of colors, let's write a small function which will initialize a bunch of colors for use.

Text version of the file can be found here.
# Save this as 
# Other scripts call this function

sub InitColors {
    my($im) = $_[0];
    # Allocate colors
    $white = $im->colorAllocate(255,255,255);
    $black = $im->colorAllocate(0,0,0);
    $red = $im->colorAllocate(255,0,0);
    $blue = $im->colorAllocate(0,0,255);
    $green = $im->colorAllocate(0, 255, 0);

    $brown = $im->colorAllocate(255, 0x99, 0);
    $violet = $im->colorAllocate(255, 0, 255);
    $yellow = $im->colorAllocate(255, 255, 0);
}

I often refer to this page for some nice rgb combinations.

The next few lines are straightforward and pretty much self-explanatory. The last lines regarding the file creation require special mention. Since we will be writing an image to a file, we need to put the file handle in binary mode with

    binmode MYFILEHANDLE;

This actually is a no-op on most UNIX-like systems.

Then we write to the file with the usual print command. GD can print the image in various formats. For example if you want to print a jpeg image instead of png, all you need to do is

    print MYFILEHANDLE $im->jpeg;

Simple Drawing

GD offers some simple drawing primitives which can be combined to generate complex graphics. Examine the following script that gives a whirlwind tour of all the simple primitives.

Text version of the file can be found here.
#!/usr/local/bin/perl -w
# Change above line to path to your perl binary

use GD;
do "";

# Create a new image
$im = new GD::Image(640,400);

# Allocate some colors

# Make the background transparent and interlaced

$x1 = 10;
$y1 = 10;
$x2 = 200;
$y2 = 200;

# Draw a border
$im->rectangle(0, 0, 639, 399, $black);
# A line
# A Dashed Line
$im->dashedLine($x1 + 100, $y1, $x2, $y2, $blue);
# Draw a rectangle
$im->rectangle($x1 + 200, $y1, $x2 + 200, $y2, $green);
# A filled rectangle
$im->filledRectangle($x1 + 400, $y1, $x2 + 400, $y2, $brown);
# A circle
$im->arc($x1 + 100, $y1 + 200 + 100, 50, 50, 0, 360, $violet);

# A polygon
# Make the polygon
$poly = new GD::Polygon;
$poly->addPt($x1 + 200, $y1 + 200);
$poly->addPt($x1 + 250, $y1 + 230);
$poly->addPt($x1 + 300, $y1 + 310);
$poly->addPt($x1 + 400, $y1 + 300);
# Draw it
$im->polygon($poly, $yellow);

# Open a file for writing 
open(PICTURE, ">picture.png") or die("Cannot open file for writing");

# Make sure we are writing to a binary stream
binmode PICTURE;

# Convert the image to PNG and print it to the file PICTURE
print PICTURE $im->png;
close PICTURE;

The output looks like this.

The above script is mostly self-explanatory; only the polygon needs a little explanation. In order to draw a polygon, you first have to construct it and then draw it. Of course, a polygon must have at least three vertices.

Drawing Text

So what about text? You can draw text in some of the simple fonts provided by GD or use a True Type font available on your system. There are two simple functions available to draw text.

    # Draw the text
    $im->string($font, $x, $y, $string, $color);

    # Print text rotated 90 degrees
    $im->stringUp($font, $x, $y, $string, $color);

The following script shows various simple fonts provided by GD.
Text version of the file can be found here.
#!/usr/local/bin/perl -w
# Change above line to path to your perl binary

use GD;
do "";

# Create a new image
$im = new GD::Image(200, 80);

# Allocate some colors

# Make the background transparent and interlaced

# Create a Border around the image
$im->rectangle(0, 0, 199, 79, $black);
$x1 = 2;
$y1 = 2;

# Draw text in small font
$im->string(gdSmallFont, $x1, $y1, "Small font", $blue);
$im->string(gdMediumBoldFont, $x1, $y1 + 20, "Medium Bold Font", $green);
$im->string(gdLargeFont, $x1, $y1 + 40, "Large font", $red);
$im->string(gdGiantFont, $x1, $y1 + 60, "Giant font", $black);

# Open a file for writing 
open(PICTURE, ">picture.png") or die("Cannot open file for writing");

# Make sure we are writing to a binary stream
binmode PICTURE;

# Convert the image to PNG and print it to the file PICTURE
print PICTURE $im->png;
close PICTURE;

The output picture looks like this.
Output image of above script

As you can see, these fonts are quite limited and not particularly attractive. The following section shows the use of TrueType fonts with GD.

True Type Fonts

You can use the TrueType fonts available on your system to draw some nice text. The function stringFT is used to draw text in a TrueType font:

    # $fontname is an absolute or relative path to a TrueType font.
    $im->stringFT($color, $fontname, $ptsize, $angle, $x, $y, $string);

Here's an example showing the usage

Text version of the file can be found here.
#!/usr/local/bin/perl -w
# Change above line to path to your perl binary

use GD;
do "";

# Create a new image
$im = new GD::Image(270, 80);

# Allocate some colors

# Make the background transparent and interlaced

$im->rectangle(0, 0, 269, 79, $black);

$x1 = 10;
$y1 = 20;

# Draw text in a TTF font
$font = "/usr/X11R6/lib/X11/fonts/TTF/luxisri.ttf";
$im->stringFT($red, $font, 15, 0, $x1, $y1, "A TTF font");

$anotherfont = "/usr/share/fonts/default/TrueType/starbats.ttf";
$im->stringFT($blue, $anotherfont, 20, 0, $x1, $y1 + 40, "Another one here !!!");

# Open a file for writing 
open(PICTURE, ">picture.png") or die("Cannot open file for writing");

# Make sure we are writing to a binary stream
binmode PICTURE;

# Convert the image to PNG and print it to the file PICTURE
print PICTURE $im->png;
close PICTURE;

The output looks like this.
Output image for above script

Let's go Online

Now that we have seen some basic uses of GD, let's turn our attention to web graphics. So how do you output an image through CGI? Simple. Add the following lines to the scripts instead of printing to a file.

    # To disable buffering of image content.
    $| = 1;
    undef $/;

    print "Content-type: image/jpeg\n\n";
    print $im->jpeg(100);

This is all you need to know about CGI for now. If you already know CGI, you can enhance your code for handling complex web interaction. Let's write a small program which reads an image and displays a resized version of it. It might be useful for showing thumbnails.

Text version of the file can be found here.
#!/usr/local/bin/perl -wT
# Change above line to path to your perl binary

use CGI ':standard';
use GD;

# create a new image
$image_file = "images/surfing.jpg";
$im = GD::Image->newFromJpeg($image_file);
($width, $height) = $im->getBounds();
$newwidth = $width / 3;
$newheight = $height / 3;
$outim = new GD::Image($newwidth, $newheight);

# copy the source image into the new image, resizing it on the way
$outim->copyResized($im, 0, 0, 0, 0, $newwidth, $newheight, $width, $height);

# make sure we are writing to a binary stream
binmode STDOUT;
$| = 1;
undef $/;
print "Content-type: image/jpeg\n\n";
print $outim->jpeg();

In this example, the function newFromJpeg() reads a jpeg file. We then get its bounds and resize it accordingly. A demo of the resizing can be found here

A Photo Album

With this resizing knowledge we can create a small online photo album, using resized images as thumbnails and displaying the original image when the user clicks on a thumbnail.

Text version of the file can be found here.
#!/usr/local/bin/perl -wT
# Change above line to path to your perl binary

use CGI ':standard';
use GD;

$imnum = param('imnum');
if(!defined($imnum)) {
    $imnum = 0;
}

$orig = param('orig');
if(!defined($orig)) {
    $orig = 0;
}

$| = 1;

@images = ("surfing.jpg", "boat.jpg", "boston-view.jpg", "seashore.jpg");

print "Content-type: text/html\n\n";
print "<font color=green>Click on the image to make it bigger or smaller<br>
You can browse through the small images using the buttons or by clicking
on the numbers </font>\n";
print "<table><tr>\n";

if($imnum > 0 && $imnum < @images) {
    printf "<td><a href=album.cgi?imnum=%d><img src=images/prev.gif border=0></a>\n", $imnum-1;
}

if($imnum >= 0 && $imnum < @images - 1) {
    printf "<td><a href=album.cgi?imnum=%d><img src=images/next.gif border=0></a>\n", $imnum+1;
}

print "<td>";
for($i = 0; $i < @images; ++$i) {
    print "<a href=album.cgi?imnum=$i>$i|</a>\n";
}
print "</tr></table>\n";
if($imnum < 0 || $imnum >= @images) {
    print "<b>No such image</b>";
    exit;
}

if($orig) {
    print "<a href=album.cgi?imnum=$imnum><img src=images/$images[$imnum] border=0></img></a>\n";
}
else {
    # create a resized copy of the image
    $im = GD::Image->newFromJpeg("images/$images[$imnum]");
    ($width, $height) = $im->getBounds();
    $newwidth = 200;
    $newheight = 200;
    $outim = new GD::Image($newwidth, $newheight);

    $outim->copyResized($im, 0, 0, 0, 0, $newwidth, $newheight, $width, $height);
    $tmpfile = "images/tmp$imnum.jpg";
    if ($tmpfile =~ /^([-\@\w.\/]+)$/) {   # For the tainting stuff
        $tmpfile = $1;
    }
    else {
        print "Should never happen";
        exit; # Should never happen
    }
    open(TMP, ">$tmpfile") || die("Cannot open file");
    binmode TMP;
    print TMP $outim->jpeg(100);
    close(TMP);
    chmod(0644, $tmpfile);
    print "<a href=album.cgi?imnum=$imnum&orig=1><img src=$tmpfile border=0></a>";
}

This script uses a few CGI features. The function param returns the parameter value, if supplied. This value is used to display the proper image. If the user wants to see the original image, it is displayed. Otherwise a temporary resized image is created and displayed.

A demo of the album is here

A Graphical Hit Counter

Now let us turn our attention to another popular web application: the hit counter. There are many counter scripts available on the web. Here's our attempt to write one.

The counter works like this: every time a web page is accessed, the CGI script records the hit count and creates an image on-the-fly. So why wait? Let's write it.

Text version of the file can be found here.
#!/usr/local/bin/perl -wT
use CGI ':standard';
use GD;
use strict;

# flock() constants
use vars qw($LOCK_SH $LOCK_EX $LOCK_NB $LOCK_UN);
$LOCK_SH = 1;
$LOCK_EX = 2;
$LOCK_NB = 4;
$LOCK_UN = 8;

$| = 1;

&main;

sub main {
    my($id, $iformat, $counter_value);

    $id = param("id");
    $iformat = param("iformat");

    $counter_value = &update_counter_value($id);

    if($iformat eq "jpg" || $iformat eq "png") {
        &print_counter($iformat, $counter_value);
    }
    else {
        &print_error_image("Image format $iformat not supported");
    }
}

sub print_counter {
    my($iformat, $counter_value) = @_;
    my($COUNTER_SIZE) = 4;

    my($im) = GD::Image->new("${iformat}s/0.${iformat}");
    if(!defined($im)) {
        &print_error_image("\$im couldn't be initialized");
    }

    my($w, $h) = $im->getBounds();
    undef $im;

    my($printim) = GD::Image->new($w * $COUNTER_SIZE, $h);
    $printim->colorAllocate(255, 255, 255);

    my($pos, $l, $temp, $digit, $x, $srcim);
    $x = 0;
    for($pos = $COUNTER_SIZE - 1; $pos >= 0; $pos--) {
        if($pos > length($counter_value) - 1) {
            $digit = 0;
        }
        else {
            $l = length($counter_value);
            $temp = $l - $pos - 1;
            $digit = substr($counter_value, $temp, 1);
        }
        $srcim = GD::Image->new("${iformat}s/${digit}.${iformat}");
        $printim->copy($srcim, $x, 0, 0, 0, $w, $h);
        $x += $w;
        undef $srcim;
    }
    if($iformat eq "jpg") {
        print "Content-type: image/jpeg\n\n";
        print $printim->jpeg(100);
    }
    else {
        print "Content-type: image/png\n\n";
        print $printim->png;
    }
}

sub print_error_image {
    my $error_string = $_[0];
    my $im = new GD::Image(
        gdMediumBoldFont->width * length($error_string),
        gdMediumBoldFont->height);

    $im->colorAllocate(255, 255, 255);
    my $red = $im->colorAllocate(255, 0, 0);
    $im->string(gdMediumBoldFont, 0, 0, $error_string, $red);
    print "Content-type: image/jpeg\n\n";
    print $im->jpeg(100);
    exit;
}

sub update_counter_value {
    my($file_name, $counter_value);

    $file_name = "$_[0].counter";
    if ($file_name =~ /^([-\@\w.]+)$/) {   # For the tainting stuff
        $file_name = $1;
    }
    else {
        exit; # Should never happen
    }
    if(open(COUNTERFILE, "+<$file_name") == 0) {
        # Getting accessed for the first time
        open(COUNTERFILE, ">$file_name");
        flock(COUNTERFILE, $LOCK_EX);
        print COUNTERFILE "1";
        flock(COUNTERFILE, $LOCK_UN);
        close(COUNTERFILE);
        return 1;
    }

    flock(COUNTERFILE, $LOCK_EX);
    $counter_value = <COUNTERFILE>;
    $counter_value++;
    seek(COUNTERFILE, 0, 0);
    print COUNTERFILE $counter_value;
    flock(COUNTERFILE, $LOCK_UN);
    close(COUNTERFILE);

    return $counter_value;
}

This script can be used by adding a line like this in your web page.

    <img src=counter.cgi?id=my_html_file.html&iformat=jpg>

The id needs to be unique. A sample counter can be seen on my home page.

Now to the innards of the script. The counter script has three important functions.

update_counter_value: This function reads the hit count from a file named
                      html_file.counter and increments it. It creates the
                      counter file, if one doesn't already exist. It also
                      locks the file to avoid conflicts due to multiple
                      simultaneous accesses.

print_counter:        Prints the counter by attaching the counter digits in a new
                      image. The digits are read from an appropriate directory.

print_error_image:    This is a useful function to show error images. You
                      can use it in your programs, for reporting errors
                      through GD.

You need to have the digits (0-9) in jpg or png format. Sites like Counter Art dot Com provide free counter digits. In my next article, I'll discuss how to generate digits on the fly.

I developed a personal website statistics package woven around this counter concept. It provides much more than a simple counter. It logs the accesses, shows visitor statistics and much more. Check it out at pstats page.

You can also use the File::CounterFile module for managing the counter file.

Coming Up..

I hope you enjoyed reading this article. In the coming months, we will look at GD::Graph and PerlMagick modules. Send me comments at this address.

Have Fun !!!


My best friend ravi has become the official editor for all my writings. I am indebted to him for looking through all the gibberish I write and making sense out of it. Thanks ravi :-)

I thank Benjamin A. Okopnik for reviewing the article and pointing out some nice perl hacks.

Pradeep Padala

I am a master's student at the University of Florida. I love hacking and adore Linux. My interests include solving puzzles and playing board games. I can be reached through or my web site.

Copyright © 2002, Pradeep Padala.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Programming in Ruby, part 1

By Hiran Ramankutty


Ruby is an interpreted, purely object-oriented language designed by Yukihiro Matsumoto of Japan, where it is reported to be more popular than Python and Perl! This first part of the series is meant to be a tutorial introduction, with more advanced stuff in the pipeline.

Of course, I need not go through the ritual of advocating the `advantages of Ruby compared to languages X, Y and Z' - most people realize that each language has a unique flavour and character of its own - whether you choose Python or Ruby for your next open source project depends more on the peculiar affinity which you as an individual feel for one over the other, and the availability of standard library facilities, rather than on arcane technical issues. So let's enjoy that unique Ruby flavour!


I presume that your development environment is Linux and that you have Ruby installed on it. Ruby is free software, and there are no restrictions on its usage. You can get it from the Ruby Home Page.

Hello World

Let's start with the mandatory `Hello, World'.

% cat > hello.rb
print "Hello World\n"
% ruby hello.rb
Hello World


Variables

Ruby categorizes identifiers on the basis of the first character of the name:

                $                       global variable
                @                       instance variable
                a-z or '_'              local variable
                A-Z                     constant

The two `pseudo-variables' are exceptions to the above stated rule. They are `self' and `nil'.

  • self refers to the currently executing object
  • nil: meaningless or false; also the value assigned to uninitialized variables

Both are named as if they are local variables, but they are not! We will see their real meaning later on.

Global variables

A global variable has its name starting with $. As such, it can be referred to from anywhere in the program. Note that a global variable has the value 'nil' before initialization. You can test this out:

 % ruby
 print $foo,"\n"
 $foo = 5
 print $foo,"\n"

The interpreter responds


It is possible for us to `bind' procedures to global variables, the procedures being automatically invoked when the variable is changed. More about this later!

Some special kinds of global variables formed with a single character following a '$' sign are, as a collection, interpreted by Ruby as major system variables (made read-only). Some of them are given below along with their meanings.

  • $! latest error message
  • $@ location of error
  • $_ string last read by gets
  • $. line number last read by interpreter
  • $& string last matched by regexp
  • $~ the last regexp match, as an array of subexpressions
  • $n the nth subexpression in the last match (same as $~[n])
  • $= case-insensitivity flag
  • $0 the name of the ruby script file
  • $* the command line arguments
  • $$ interpreter's process ID
  • $? exit status of last executed child process

Local variables

A local variable has its name starting with a lower-case letter or an '_'. Unlike globals and instance variables, local variables do not assume the value 'nil'; instead they behave as shown below:

% ruby
print foo

You will get an error message:
      "undefined local variable or method 'foo' for #(object...)".

The scope of a local variable is confined to one of

  • proc {....}
  • loop {....}
  • def .... end
  • class .... end
  • module .... end
  • the entire program (unless one of the above applies)
If we initialize a local variable in any block (or procedure), then it is undefined after exiting from that block. For example:

def foo(n)
	k = 2 * n
	print "\n",k,"\n"
	print defined? k,"\n"
end

foo 3
print defined? k,"\n"

The output is:


In the above example, `defined?' is an operator which checks whether its argument is defined. The results "local-variable" and "nil" (to indicate false) should make this clear.


Constants

Any name beginning with an uppercase letter is treated as a constant. To avoid confusion, though, Ruby programmers conventionally use names in all uppercase letters; both 'Foo' and 'FOO' are constants. As with local variables, a constant is defined by an assignment; accessing an undefined constant, or altering a defined one, causes an error. Check for yourself.


Strings

Strings in Ruby can be single quoted ('...') or double quoted ("..."), and the two behave differently. Use double quotes if the string contains backslash-escaped characters; also, expressions enclosed in #{} within a double-quoted string are evaluated and the results embedded. See the examples:

print "\n"
print '\n'
print "\001","\n"
print '\001',"\n"
print "abcd #{5*3} efg","\n"
var = " abc "
print "1234#{var}567","\n"

abcd 15 efg

We will learn more about strings in the next section on arrays, since arrays and strings share several features.


Arrays

Arrays are written using '[]'. One of the features of Ruby is that arrays are heterogeneous:

a = [1,2,"3"]
print a,"\n"

Now, if you write a Ruby program to add all the elements of the array shown in the above program, you will get an error:

         Error!! String cannot be coerced into Fixnum

The `3' in the array is stored as a string. Now, if it is done like this:

a = [1,2,"3"]
b = a[0] + a[1] + a[2].to_i
print b,"\n"

The program will be executed without any errors. The suffix '.to_i' attached to a[2] converts the content of a[2] to an integer. You can also try '.to_s'.

Operations like concatenation and repetition can be done on arrays.

a = [1,2,"3"]
print a + ["foo","bar"]
print a * 2

We get


It's possible to `slice and dice' arrays. Here are some examples:

a = [1,2,"3","foo","bar"]
print a[0],"\n"
print a[0,2],"\n"
print a[0..3],"\n"
print a[-2..2],"\n"
print a[-3..-1],"\n"

Arrays and strings are inter-convertible. An array can be converted to a string with 'join', and a string is split up into an array with 'split'.

a = [1,2,3]
print a[2],"\n"
a = a.join(":")
print a[2],"\n"
print a,"\n"
a = a.split(":")
print a[2],"\n"
print a,"\n"

The Associative Array is another important data structure - it's also called a `hash' or a `dictionary'. It's basically a name-value mapping, as shown below:

h = {1 => 2, "2" => "4"}
print h,"\n"
print h[1],"\n"
print h["2"],"\n"
print h[5],"\n"

I hope the results are convincing!

Control structures

If - else

Let us write the factorial function. The mathematical definition is:

      n! = 1			(when n==0)
      n! = n * (n-1)!		(otherwise)
In Ruby this can be written as:

def fact(n)
	if n == 0
		1
	else
		n * fact(n-1)
	end
end
print fact(4),"\n"

You get 24.

Ruby has been called `Algol-like' because of the repeated occurrence of `end'. In this recursive call, you may notice the lack of a return statement. Use of return is permissible but unnecessary, because a Ruby function returns the last evaluated expression (does this sound a wee bit Lispish? If you insist, you sure can do Lisp in Ruby!)

The for loop

for i in 0..4
	print i,"\n"
end

Here i is the variable and 0..4 is the range. In the case of strings, you can very well write:

for i in "abc"
	print i,"\n"
end

The while loop

Try this out:

i = 0
while i < 10
	print i+=1,"\n"
end


The case statement

We use the case statement to test a sequence of conditions. Try this out:

i = 7
case i
when 1,2..5
	print "i in 2 to 5\n"
when 6..10
	print "i in 6 to 10\n"
end

You get

i in 6 to 10

2..5 means the range including 2 and 5. It checks whether i falls within that range.

This can be applied to strings as shown below.

case 'abcdef'
when 'aaa','bbb'
	print "contains aaa or bbb\n"
when /def/
	print "contains def\n"
end

You get:

contains def

Note the slash used with "def". It is used for quoting a regular expression. We shall see it later.

Modifications with control structures

The case statement mentioned just above actually tests for the range (i in 2..5) as

(2..5) === i 

The relationship operator '===' is used by case to check for several conditions at a time. '===' is interpreted by ruby suitably for the object that appears in the when condition.

Thus, in the string example above, the first when tests for string equality, while the second matches against a regular expression.

Now try using the operator '===' with the if structure (try implementing functions like isalnum(), isalpha(), isnum(), etc.).

Your code can be shortened when an if or while construct governs a single statement, as shown below:

i = 7
print "contained in 5..10\n" if (5..10) === i
print i-=1,"\n" while i > 0

The first line of output is:

contained in 5..10

You may at times want to negate the test conditions. An unless is a negated if, and an until is a negated while. This is left up to you to experiment with.

There are four ways to interrupt the processing of statements of a loop from inside. First, as in C, break means to escape from the loop entirely. Second, next skips to the beginning of the next iteration of the loop (it corresponds to the continue statement in C). Third, ruby has redo, which restarts the current iteration. The following is C code illustrating the meanings of break, next, and redo:

while (condition) {
    label_redo:
	goto label_next;	/* ruby's "next" */
	goto label_break;	/* ruby's "break" */
	goto label_redo;	/* ruby's "redo" */
    label_next: ;
}
label_break: ;

The return statement is actually the fourth way to get out of a loop from inside. In fact return causes escape not only from the loop but also from the method that contains the loop.
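A small sketch of that: here return abandons both the for loop and the method in one step.

```ruby
def first_multiple(n, limit)
  for i in 1..limit
    return i if i % n == 0   # escapes the loop and the method at once
  end
  0   # last evaluated expression if the loop finds nothing
end

print first_multiple(7, 100), "\n"   # 7
```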


We have examined some elementary language features - enough to get you started with a bit of `quick-and-dirty' coding. As I learn more about this `gem' of a language, I shall be sharing my experience with you through future articles. Good bye!

Hiran Ramankutty

I am a final year student of Computer Science at Government Engineering College, Trichur. Apart from Linux, I enjoy learning Physics.

Copyright © 2002, Hiran Ramankutty.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Process Tracing Using Ptrace

By Sandeep S

The ptrace system call is crucial to the working of debugger programs like gdb - yet its behaviour is not very well documented - unless you believe that the best documentation is kernel source itself! I shall attempt to demonstrate how ptrace can be used to implement some of the functionality available in tools like gdb.

1. Introduction

ptrace() is a system call that enables one process to control the execution of another. It also enables a process to change the core image of another process. The traced process behaves normally until a signal is caught. When that occurs, the process enters the stopped state and informs the tracing process via a wait() call. Then the tracing process decides how the traced process should respond. The only exception is SIGKILL, which kills the process regardless.

The traced process may also enter the stopped state in response to specific events during its course of execution. This happens only if the tracing process has set the corresponding event flags in the context of the traced process. The tracing process can even kill the traced one by setting its exit code. After tracing, the tracer process may kill the traced one or leave it to continue its execution.

Note: Ptrace() is highly dependent on the architecture of the underlying hardware. Applications using ptrace are not easily portable across different architectures and implementations.

2. More Details

The prototype of ptrace() is as follows.

        #include <sys/ptrace.h>
        long  int ptrace(enum __ptrace_request request, pid_t pid,
                void * addr, void * data)

Of the four arguments, the value of request decides what is to be done. pid is the ID of the process to be traced. addr is an offset in the user space of the traced process: for write requests, data is written there; for read requests, a word is read from there and returned as the result of the call.

The parent can fork a child process and trace it by calling ptrace with request as PTRACE_TRACEME. Parent can also trace an existing process using PTRACE_ATTACH. The different values of request are discussed below.

2.1 How does ptrace() work?

Whenever ptrace is called, it first locks the kernel, and it unlocks the kernel just before returning. Let's look at what happens in between for the different values of request.


PTRACE_TRACEME

This is called when the child is to be traced by the parent. As said above, any signal (except SIGKILL), whether delivered from outside or resulting from an exec call made by the process, causes it to stop and lets the parent decide how to proceed. Inside ptrace(), the only thing that is checked is whether the ptrace flag of the current process is set. If not, permission is granted and the flag is set. All the parameters other than request are ignored.


PTRACE_ATTACH

Here a process wants to control another. One thing to remember is that nobody is allowed to trace/control the init process. A process is not allowed to control itself. The current process (caller) becomes the parent of the process with process ID pid. But a getppid() by the child (the one being traced) returns the process ID of the real parent.

What goes behind the scenes is that when a call is made, the usual permission checks are made along with whether the process is init or current or it is already traced. If there is no problem, permission is given and the flag is set. Now the links of the child process are rearranged; e.g., the child is removed from the task queue and its parent process field is changed (the original parent remains the same). It is put to the queue again in such a position that init comes next to it. Finally a SIGSTOP signal is delivered to it. Here addr and data are ignored.


PTRACE_DETACH

Stop tracing a process. The tracer may decide whether the child should continue to live. This undoes all the effects of PTRACE_ATTACH/PTRACE_TRACEME. The parent sends the exit code for the child in data. The ptrace flag of the child is reset. Then the child is moved to its original position in the task queue. The pid of the real parent is written to the parent field. The single-step bit which might have been set is reset. Finally the child is woken up as if nothing had happened to it; addr is ignored.


PTRACE_PEEKTEXT, PTRACE_PEEKDATA, PTRACE_PEEKUSER

These options read data from the child's memory and user space. PTRACE_PEEKTEXT and PTRACE_PEEKDATA read from memory, and both have the same effect. PTRACE_PEEKUSER reads from the user space of the child. A word is read and placed into a temporary data structure, and with the help of put_user() (which copies data from the kernel's memory segment to the process's memory segment) the required word is written to location data; the call returns 0 on success.

In the case of PTRACE_PEEKTEXT/PTRACE_PEEKDATA, addr is the address of the location to be read from child's memory. In PTRACE_PEEKUSER addr is the offset of the word in child's user space; data is ignored.


PTRACE_POKETEXT, PTRACE_POKEDATA, PTRACE_POKEUSER

These options are analogous to the three explained above. The difference is that these are used to write data to the memory/user space of the process being traced. In PTRACE_POKETEXT and PTRACE_POKEDATA, the word in data is copied to the child's memory at location addr.

In PTRACE_POKEUSER we are trying to modify locations in the task_struct of the process. As the integrity of the kernel has to be maintained, we need to be very careful. After a lot of security checks made by ptrace, only certain portions of the task_struct are allowed to change. Here addr is the offset in the child's user area.


PTRACE_SYSCALL, PTRACE_CONT

Both of these wake up the stopped process. PTRACE_SYSCALL makes the child stop again after the next system call; PTRACE_CONT just allows the child to continue. In both, the exit code of the child process is set by ptrace(), where the exit code is contained in data. All this happens only if the signal/exit code is valid. ptrace() resets the single-step bit of the child, sets/resets the syscall-trace bit, and wakes up the process; addr is ignored.


PTRACE_SINGLESTEP

Does the same as PTRACE_SYSCALL except that the child is stopped after every instruction. The single-step bit of the child is set. As above, data contains the exit code for the child; addr is ignored.


PTRACE_KILL

When the child is to be terminated, PTRACE_KILL may be used. The murder occurs as follows: ptrace() checks whether the child is already dead. If alive, the exit code of the child is set to SIGKILL. The single-step bit of the child is reset. The child is then woken up, and when it starts to run it is killed as per the exit code.

2.2 More machine-dependent calls

The values of request discussed above were independent of the architecture and implementation of the system. The values discussed below are those that allow the tracing process to get/set (i.e., to read/write) the registers of the child process. These register fetching/setting options depend directly on the architecture of the system. The set of registers includes the general purpose registers, floating point registers and extended floating point registers. When these options are given, a direct interaction with the registers/segments of the system is required.


PTRACE_GETREGS, PTRACE_GETFPREGS, PTRACE_GETFPXREGS

These requests fetch the values of the general purpose, floating point, and extended floating point registers of the child process respectively. The registers are read into the location data in the parent. The usual access checks on the registers are made; then the register values are copied to the location specified by data with the help of the getreg() and __put_user() functions; addr is ignored.


PTRACE_SETREGS, PTRACE_SETFPREGS, PTRACE_SETFPXREGS

These are the values of request that allow the tracing process to set the general purpose, floating point, and extended floating point registers of the child respectively. There are some restrictions when setting registers; some are not allowed to be changed. The data to be copied to the registers is taken from location data in the parent. Here also addr is ignored.

2.3 Return values of ptrace()

A successful ptrace() returns zero. Errors make it return -1 and set errno. Since the return value of a successful PEEKDATA/PEEKTEXT may itself be -1, it is better to clear and then check errno. The possible errors are:

EPERM : The requested process couldn't be traced. Permission denied.

ESRCH : The requested process doesn't exist or is not currently being traced by the caller.

EIO : The request was invalid or read/write was made from/to invalid area of memory.

EFAULT: Read/write was made from/to memory which was not really mapped.

It is really hard to distinguish the reasons for EIO and EFAULT; they are returned for almost identical errors.

3. A small example.

If you found the parameter description to be a bit dry, don't despair. I shall not attempt anything of that sort again. I will try to write simple programs which illustrate many of the points discussed above.

Here is the first one. The parent process counts the number of instructions executed by the test program run by the child.

Here the test program is listing the entries of the current directory.

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <syscall.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <errno.h>

int main(void)
{
        long long counter = 0;  /*  machine instruction counter */
        int wait_val;           /*  child's return value        */
        int pid;                /*  child's process id          */

        puts("Please wait");

        switch (pid = fork()) {
        case -1:
                perror("fork");
                break;
        case 0: /*  child process starts        */
                ptrace(PTRACE_TRACEME, 0, 0, 0);
                /*
                 *  must be called in order to allow the
                 *  control over the child process
                 */
                execl("/bin/ls", "ls", NULL);
                /*
                 *  executes the program and causes
                 *  the child to stop and send a signal
                 *  to the parent, the parent can now
                 *  switch to PTRACE_SINGLESTEP
                 */
                break;
                /*  child process ends  */
        default:/*  parent process starts       */
                wait(&wait_val);
                /*
                 *   parent waits for child to stop at next
                 *   instruction (execl())
                 */
                while (wait_val == 1407) {
                        counter++;
                        if (ptrace(PTRACE_SINGLESTEP, pid, 0, 0) != 0)
                                perror("ptrace");
                        /*
                         *   switch to singlestep tracing and
                         *   release child
                         *   if unable, report the error.
                         */
                        wait(&wait_val);
                        /*   wait for next instruction to complete  */
                }
                /*
                 * continue to stop, wait and release until
                 * the child is finished; wait_val != 1407
                 * Low=0177L and High=05 (SIGTRAP)
                 */
        }
        printf("Number of machine instructions : %lld\n", counter);
        return 0;
}

Open your favourite editor and type in the program. Then compile and run it:

cc file.c
./a.out


You can see the number of instructions needed for listing of your current directory. cd to some other directory and run the program from there and see whether there is any difference. (note that it may take some time for the output to appear, if you are using a slow machine).

4. Conclusion

Ptrace() is heavily used for debugging. It is also used for system call tracing. The debugger forks and the child process created is traced by the parent. The program which is to be debugged is exec'd by the child (in the above program it was "ls") and after each instruction the parent can examine the register values of the program being run. I shall demonstrate programs which exploit ptrace's versatility in the next part of this series. Good bye till then.

Sandeep S

I am a final year student of Government Engineering College in Thrissur, Kerala, India. My areas of interests include FreeBSD, Networking and also Theoretical Computer Science.

Copyright © 2002, Sandeep S.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Secure and Robust Computer Systems for Primary and Secondary Schools

By Richard A Sevenich and Michael P Angelo

A wealthy school district will have the option of purchasing new software and hardware at some appropriate interval. It may also have the technical staff to install and maintain the hardware and software. Even in this idealized school district, the computer system environment is a harsh one, with its many student users, some of whom start as relatively computer illiterate and may not have acquired the discipline to follow administrative rules intended to ameliorate system virus infections and other external attacks on the system. The technical staff has the added burden of attempting to maintain system security in this nearly impossible environment, characterized by intermittent, unplanned intense work shifts in response to system security disasters. Providing the necessary level of technical expertise is quite expensive and consequently rather rare in our school systems. In fact, many of the primary and secondary schools in the USA have computer facilities that are in dire straits. Perhaps the major problems are these two:

  • The networks are plagued by viruses etc. and suffer significant down time. Further, loss of files and of attendant work time is routine.

  • Often the computers are a hodgepodge of donated and purchased computers with various versions of Microsoft operating systems and software.

Let's consider each of the two problems in more detail. Cleaning a network of virus infections is a time-consuming thankless job. Making a system in a school environment virus proof in practice is probably not possible currently. Other hostile system attacks (even internal) are quite likely. A teacher who depends on such a system to be consistently available will be routinely disappointed.

If we focus on the second problem, these are its consequences:

  • The various software versions are not always compatible with each other, so work cannot be dependably moved from one computer to another.

  • The original versions of the software and the corresponding licenses are sometimes missing. Microsoft is beginning to seize on this issue, requiring an expensive solution.

In this note we propose a straightforward solution. The idea came to us when we began playing with version 3.0 of Demo Linux ( It provides the start of a solution. When you boot a machine with Demo Linux, you end up with a machine running Linux from the CD. The network will be configured as will X Windows. The old Star Office 5.2 is also included. The hard drive may be mounted. We had remarkable success booting a variety of machines, including laptops, from the Demo Linux CD.

Forgetting about the hard drive for the moment, a school could have such CDs in all their computers and turn them on each morning to start with a virus-free environment, compatible software in all machines, and no licensing problems. Rather than requiring the constant application of security patches, the system is reborn each day. The solution is not expensive and is ultimately robust due to its simplicity. Well, it is almost that simple and convenient, but not quite. Here are three drawbacks:

  1. Some system configuration (e.g. network parameters) is needed at each boot, requiring that somebody knowledgeable make the appropriate entries. This is time intensive when the number of systems is large - assuming that a knowledgeable person is even available.

  2. The hard drive remains a virus target.

  3. Applications running from the CD will run relatively slowly; perhaps unacceptably slowly on some machines.

We next suggest solutions to each of these problems.

  1. Automate system configuration at boot. To implement this we would add a feature to a clone of Demo Linux. In particular, on the very first boot have the system configuration choices made by the appropriate sysadmin or technician and then have the system automatically hard code those choices by producing an ISO image to be burned onto a new boot CD, tailored to that specific machine. The new boot CD would automatically configure the system as desired. Boot CDs could be updated on whatever schedule the administration would deem appropriate (e.g. once a year in August).

  2. We'll assume that the machines at the school do not have an NFS/NIS file sharing setup. If that assumption were wrong, we would do things a different way. We'll further assume that when this new system is first installed, the hard drive is ours; i.e., any files stored on the hard drive have been archived by the owner. We'll propose a severe solution and insist that machine users either save their daily work on a floppy or transmit it (e.g. via scp) to a secure machine serving as a repository. The description of that secure repository machine is outside the scope of this discussion. Copying work to a floppy or transmitting it to a secure repository would be made reasonably convenient and intuitive, e.g. via some GUI. The CD boot process would clean all the prior day's files from the hard drive. This is the aforementioned severe solution; more involved and intelligent solutions might be contrived. However, this solution appears to guarantee a virus-free environment at each new boot, and it is simple. Note that the hard drive cleaning is not all that time consuming because it involves only those files created since the previous boot.

  3. Application speed can be enhanced by having the boot CD move the appropriate applications to the hard drive during the boot process, after the hard drive has been cleaned as described in the prior step.

It must be admitted that this approach is not going to produce a well performing system for very dated machines with limited resources. Open Office, for example, would not perform well. A small footprint Linux version and other resource-conserving software could prove viable. Such are available in the embedded Linux world and could be adapted to resource-limited machines. This may be too small a market to pursue, however.

We've explored the preceding ideas for feasibility, tailoring and burning some boot-up CDs and the like. However, we have various other commitments and cannot take the concept to full fruition as a polished, flexible product in a reasonable time frame, although we will continue to work on it. We see this as having the potential to:

  • save school districts a significant amount of money

  • obviate the necessity for occasional audits by Microsoft or other vendors

  • simplify the system administration task

  • make systems much more secure and robust

  • remove the need to respond with unplanned, intense work shifts to repair system security breaches

Pretending that such a product will actually be created, there is the one remaining hurdle - the initial deployment. School districts with technical personnel could easily handle the initial CD boot and the creation of the second, machine-specific boot CD. The cost of the initial installation would be amortized very quickly. Alternatively, the CD provider might supply on site initial installation services at a reasonable cost. Because of the open nature of Linux, other consultants would become available. Finally, financially hard-pressed school districts might get such services free from a nearby Linux User Group.

Teachers, already overburdened, will need to learn enough Linux to function. They will be resistant, because their time is precious. Those of us who switched to Linux at some point in the past had to travel a learning curve. However, Linux has progressed to the point where the learning curve is no longer significant. There are distributions that are configured to look and act rather like the Microsoft interface. Old Microsoft Office files can, in most cases, be imported into something like Open Office and so on. The direct benefits to the teachers should outweigh the slight pain of conversion.

We haven't seen this concept in this form in print before, although all its elements are out there. Hence, we wanted to put it before the Linux community. If the proposal bears up under scrutiny and appears viable, we hope some entity, such as the Demo Linux folks or a Linux distribution, with appropriate expertise and resources, adopts it as a project. We have posed it as a solution to certain difficult problems typically faced by school districts in the USA. Obviously, it could be applied in other areas. To some extent, time is of the essence - the need and opportunity are there now.

Copyright © 2002, Richard A Sevenich and Michael P Angelo.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

Creating Reusable Software Libraries

By Rob Tougher

1. Introduction
2. Making It Easy To Use
2.1 Keeping It Simple
2.2 Being Consistent
2.3 Making It Intuitive
3. Testing Thoroughly
4. Providing Detailed Error Information
5. Conclusion

1. Introduction

Software libraries provide functionality to application developers. They consist of reusable code that developers can utilize in their projects. Software libraries targeted for Linux are usually available in both binary and source code form.

A well-written software library:

  • is easy to use
  • works flawlessly
  • provides detailed error information

This article describes the above principles of library creation, and gives examples in C++.

Is This Article For You?

Create software libraries only when you have to. Ask yourself these questions before proceeding:

  • Will anyone (including you) need functionality X in future applications?
  • If so, does a library implementing functionality X already exist?

If no one will need the functionality you are developing, or a software library implementing it already exists, don't create a new library.

2. Making It Easy To Use

The first step in creating a software library is designing its interface. Interfaces written in procedural languages, like C, contain functions. Interfaces written in object-oriented languages, like C++ and Python, can contain both functions and classes.

Remember this motto when designing your interface:

  • The easier to use, the better

As a library designer, I am constantly faced with finding the right balance between functionality and ease of use. The above motto helps me resist adding too much functionality into my designs.

Stick with the following guidelines, and you'll be fine.

2.1 Keeping It Simple

The more complex a library, the harder it is to use.

  • Keep It Simple, Stupid

I recently encountered a C++ library that consisted of one class. This class contained 150 methods. 150 methods! The designer was most likely a C veteran using C++ - the class acted like a C module. Because this class was so complex, it was very difficult to learn.

Avoid complexity in your designs, and your interfaces will be cleaner and easier to understand.

2.2 Being Consistent

Users learn consistent interfaces more easily. After learning the rules once, they feel confident in applying those rules across all classes and methods, even if they haven't used those classes and methods before.

One example I am guilty of involves public accessors for private variables. I sometimes do the following:

class point
{
public:
  int get_x() { return m_x; }
  void set_x ( int x ) { m_x = x; }

  int y() { return m_y; }

private:
  int m_x, m_y;
};

Do you see the problem here? For the m_x member, the public accessor is "get_x()", but for the m_y member, the public accessor is "y()". This inconsistency generates more work for the users - they have to look up the definition of each accessor before using it.

Here's another example of an inconsistent interface:

class DataBase
{
public:
  recordset get_recordset ( const std::string sql );
  void RunSQLQuery ( std::string query, std::string connection );

  std::string connectionString() { return m_connection_string; }

  long m_sError;

private:
  std::string m_connection_string;
};

Can you spot its problems? I can think of at least these items:

  • Methods and variables are not named consistently
  • Two different terms, sql and query, are used to denote a SQL string
  • m_sError does not have a public accessor
  • get_recordset() does not have a connection in its argument list

Here is a revised version that solves these problems:

class database
{
public:
  recordset get_recordset ( const std::string sql );
  void run_sql_query ( std::string sql );

  std::string connection_string() { return m_connection_string; }
  long error() { return m_error; }

private:
  std::string m_connection_string;
  long m_error;
};

Keep your interfaces as consistent as possible - your users will find them much easier to learn.

2.3 Making It Intuitive

Design an interface how you would expect it to work from a user's point of view - don't design it with the internal implementation in mind.

I find that the easiest way to design an intuitive interface is to write code that will use the library before actually writing the library. This forces me to think about the library from the user's standpoint.

Let's look at an example. I was recently considering writing an encryption library based on OpenSSL. Before thinking about the library design, I wrote some code snippets:

crypto::message msg ( "My data" );
crypto::key k ( "my key" );

// blowfish algorithm
msg.encrypt ( k, crypto::blowfish );
msg.decrypt ( k, crypto::blowfish );

// rijndael algorithm
msg.encrypt ( k, crypto::rijndael );
msg.decrypt ( k, crypto::rijndael );

This code helped me think about how I should design the interface - it put me in the user's shoes. If I decide to implement this library, my design will flow from these initial ideas.

3. Testing Thoroughly

A software library should work flawlessly. Well, not flawlessly, but as close to flawless as possible. Users of a library need to know that it is performing its tasks correctly.

  • Why use a software library if it doesn't work correctly?

I test my software libraries using automated scripts. For each library, I create a corresponding application that exercises all features of the library.

For example, say I decided to develop the encryption library I introduced in the previous section. My test application would look like the following:

#include "crypto.hpp"

int main ( int argc, char* argv[] )
{
  // 1. Encrypt, decrypt, and check
  //    message data.
  crypto::message msg ( "Hello there" );
  crypto::key k ( "my key" );

  msg.encrypt ( k, crypto::blowfish );
  msg.decrypt ( k, crypto::blowfish );

  if ( != "Hello there" )
    {
      // Error!
    }

  // 2. Encrypt with one algorithm,
  //    decrypt with another, and check
  //    message data.

  // etc....

  return 0;
}

I would occasionally run this application to make sure that my software library did not have any major errors.

4. Providing Detailed Error Information

Users need to know when a software library cannot perform its tasks correctly.

  • Alert the user when there is a problem

Software libraries written in C++ use exceptions to pass error information to their users. Consider the following example:

#include <string>
#include <iostream>

class error
{
public:
  error ( std::string text ) : m_text ( text ) {}
  std::string text() { return m_text; }
private:
  std::string m_text;
};

class car
{
public:
  void accelerate() { throw error ( "Could not accelerate" ); }
};

int main ( int argc, char* argv[] )
{
  car my_car;

  try
    {
      my_car.accelerate();
    }
  catch ( error& e )
    {
      std::cout << e.text() << "\n";
    }

  return 0;
}
The car class uses the throw keyword to alert the caller to an erroneous situation. The caller catches this exception with the try and catch keywords, and deals with the problem.

5. Conclusion

In this article I explained the important principles of well-written software libraries. Hopefully I've explained everything clearly enough so that you can incorporate these principles into your own libraries.

Rob Tougher

Rob is a C++ software engineer in the New York City area.

Copyright © 2002, Rob Tougher.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

SysRq: The Process-nuke

By Vikas G P

So you thought you could always kill an offending program with kill -9? But what if it's your X server that has crashed, or that nifty svgalib program you wrote? That's where magic SysRq comes in.

What is it

Magic SysRq is a key combination directly intercepted by the kernel and can be used, among other things, to perform an emergency shutdown. It is described in Documentation/sysrq.txt and implemented in drivers/char/sysrq.c in the kernel source tree. It exists primarily for kernel hackers, but it can be useful to people in user-space also. Since it is implemented as a part of the keyboard driver, it is guaranteed to work most of the time, unless the kernel itself is dead.

A note: In the rest of this article, when I say "SysRq key" I mean the single key beside the Scroll lock key. But when I say "magic SysRq" I mean the combination < Alt+SysRq >.


To do the SysRq magic, your kernel needs to be compiled with CONFIG_MAGIC_SYSRQ. Most distributions enable it by default. If yours hasn't, you'll just have to recompile... :)

After everything is OK with the kernel, check if SysRq is enabled by default.

$ cat /proc/sys/kernel/sysrq
If it shows zero, it's not enabled. Writing any non-zero value to /proc/sys/kernel/sysrq will enable it.
$ echo "1" > /proc/sys/kernel/sysrq
If you want it to be always enabled, append these lines to one of your initialization scripts (preferably rc.local).
#Enable SysRq
echo -e "Enabling SysRq\n"
echo "1" > /proc/sys/kernel/sysrq

Alternatively, you might look for a file called /etc/sysctl or /etc/sysctl.conf, which some distributions have (mine, Red Hat, does). You can add a line like this to it, and SysRq will be enabled at boot-time.

kernel.sysrq = 1

The magic SysRq combination is a unique one. Every key on the keyboard sends a code when pressed or released, called the scan-code. The magic SysRq combination (Alt+SysRq), however, sends only one scan-code (0x54, decimal 84) even though two keys have been pressed. Check this out using showkey -s.

What can I do with it ?

Magic SysRq is invoked as < Alt+SysRq > + < command >. The SysRq key is also labelled as Print Screen. The commands are:

k: Secure access key - This kills all processes running on the current virtual console, so that no snoopy program can grab your keystrokes while you type your password.

u: Attempts to unmount the root device and remount it read-only. Besides its use in an emergency shutdown, this command also comes in handy if you have only one partition for Linux and need to do an fsck or low-level filesystem editing (for example, ext2 undeletion; see the Ext2fs Undeletion HOWTO).

s: This command syncs the kernel buffers to disk. You should do this before unmounting.

b: Does an immediate reboot, pretty much like pressing the reset button. For a safe shutdown, precede this with a sync (s) and an unmount (u).

p: Prints the contents of the CPU registers.

m: Shows memory information.

t: Shows information about the tasks currently running.

0-9: Sets the console log-level to the specified number.

e: Sends a SIGTERM to all processes, except init.

i: Sends a SIGKILL (sure kill) to all processes, except init.

l: Sends a SIGKILL to all processes, including init (you won't be able to do anything after this).
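
The safe-shutdown sequence described above (sync, remount read-only, reboot) can also be driven from a shell on kernels that provide /proc/sysrq-trigger, which accepts the same single-letter commands. A sketch, with a dry-run guard of my own: the real thing reboots the machine instantly, so set APPLY=1, as root, only if you mean it.

```shell
#!/bin/sh
# Emergency sync / remount-read-only / reboot via /proc/sysrq-trigger.
# DANGEROUS when APPLY=1: the final 'b' reboots immediately.
sysrq() {
    if [ "${APPLY:-0}" = "1" ]; then
        echo "$1" > /proc/sysrq-trigger
    else
        echo "would send SysRq command: $1"
    fi
}

sysrq s   # sync kernel buffers to disk
sysrq u   # remount filesystems read-only
sysrq b   # immediate reboot
```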

Getting out

How do you get out of SysRq mode? There is no definitive documentation about this in sysrq.txt. It talks about having to press the left and right Control, Alt, and Shift keys, but a simpler method worked for me: just press Alt+SysRq once again, and you'll be out of it.

The way I understand this is: The kernel remembers the state of the magic SysRq combination: it's either down or up. When you press the key for the first time, the state is changed to down. And when you press any other key while SysRq's state is down, the kernel interprets it as a command. If you press SysRq again, the state is changed to up, and further keystrokes are handed to whatever program requests it.
(Actually, it's not that simple; sometimes the above method doesn't work. I believe this is because the kernel uses a separate translation table while magic SysRq is down.)

The SysRq key originally meant, as you can guess, "System Request". It was used on early IBM terminals to get the attention of a central computer before submitting a command. The key has few uses now, except in the Linux kernel.


Leaving magic SysRq enabled on a production machine can be potentially dangerous. Anyone with physical access to the machine can bring it down instantly.
You should also disable SysRq if other people can log in to your system remotely. A < break > sent from a remote serial console is interpreted as < Alt+SysRq >, and the consequences can be disastrous. See the Remote-Serial-Console-HOWTO for details.


The magic SysRq hack can come in very handy at times, but it must be used with care. It can also give you some insight into the inner workings of the kernel. If you are enterprising, you might even hack the kernel and add new commands!

Vikas G P

I'm in the last year of high school and live in Hassan, Karnataka in India, trying to balance my studies and linuxing.

Copyright © 2002, Vikas G P.
Copying license
Published in Issue 81 of Linux Gazette, August 2002

"Linux Gazette...making Linux just a little more fun!"

The Back Page

LG Reunion


That's me, Didier from Belgium (of The Answer Gang fame) and Mick from Ireland (of News Bytes fame) in Chester, England during my UK trip in July. We spent the day in Chester walking on the medieval wall, looking at Roman columns, and visiting the Roman amphitheater and Chester Cathedral. More pictures from my trip are in the 2002 section.

Wacko Topic of the Month

All this talk about England reminded Ben of a pseudo-Beowulf quote his fortune program doth said once:

Meanehwael, baccat meaddehaele, monstaer lurccen;
Fulle few too many drincce, hie luccen for fyht.
[D]en Hreorfneorht[d]hwr, son of Hrwaerow[p]heororthwl,
AEsccen aewful jeork to steop outsyd.
[P]hud!  Bashe!  Crasch!  Beoom!  [D]e bigge gye
Eallum his bon brak, byt his nose offe;
Wicced Godsylla waeld on his asse.
Monstaer moppe fleor wy[p] eallum men in haelle.
Beowulf in bacceroome fonecall bemaccen waes;
Hearen sond of ruccus saed, "Hwaet [d]e helle?"
Graben sheold strang ond swich-blaed scharp
Sond feorth to fyht [d]e grimlic foe.
"Me," Godsylla saed, "mac [d]e minsemete."
Heoro cwyc geten heold wi[p] faemed half-nelson
Ond flyng him lic frisbe bac to fen.
Beowulf belly up to meaddehaele bar,
Saed, "Ne foe beaten mie faersom cung-fu."
Eorderen cocca-colha yce-coeld, [d]e reol [p]yng.
 -- Not Chaucer, for certain

If that's not wacko enough, see this Nancy cartoon.

Happy Linuxing!

Mike ("Iron") Orr
Editor, Linux Gazette,

Copyright © 2002, the Editors of Linux Gazette.
Copying license
Published in Issue 81 of Linux Gazette, August 2002