Linux Gazette

June 1999, Issue 42 Published by Linux Journal



TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette, http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-99 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Write the Gazette at gazette@ssc.com

Contents:


Help Wanted -- Article Ideas

Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue in the Tips column.


 Date: Thu, 27 May 1999 12:33:42 -0230 (NDT)
From: Neil Zanella, nzanella@cs.mun.ca
Subject: call for article: wireless ethernet

It would be nice if someone wrote an article about wireless Ethernet on Linux (e.g., WaveLAN). I think it would make a good article.

Best Regards,

--
Neil Zanella


 Date: Mon, 03 May 1999 16:33:32 -0500
From: Pete Nelson, pete.nelson@ci.stpaul.mn.us
Subject: Any inetd wizards out there?

I have been digging for the past several months to try to find a way to bind inetd to one IP/interface. I have a machine with several virtual hosts, and had originally intended for only the main IP/interface to respond to telnet, ftp, etc.; the virtuals would respond only via httpd. Unfortunately, this doesn't seem to be the way it's working: not only can I telnet/ftp to all addresses, it seems like every inetd connection shows up on the LAST IP interface for some reason.

I've looked through manpages, the NAG, and websites, and while I know a lot more than when I started looking, I was never able to solve this binding problem.

Anyone have the answer?

--
Pete
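
(The stock inetd can't bind individual services to a single address. One workaround, assuming your services are wrapped with tcpd: the hosts_access(5) "daemon@host" server-endpoint patterns can refuse connections arriving on the virtual addresses. A sketch, with an invented main address of 192.168.1.1; in /etc/hosts.allow:

in.telnetd@192.168.1.1: ALL
in.ftpd@192.168.1.1: ALL

and in /etc/hosts.deny:

in.telnetd: ALL
in.ftpd: ALL

Connections to the virtual addresses are still accepted and then dropped, so this is access control rather than a true bind, but it gives the visible behavior you want. --Editor)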


 Date: Mon, 3 May 1999 13:07:07 -0700
From: Darrin Mossor, darrinm@Model.com
Subject: LILO Lock

I have a Dell PII-450 with an STB 4400 Riva TNT video board and 128MB RAM. I dual-boot Windows (for the kids and some games) and Red Hat 5.2. I use LILO to handle the booting, with Windows being the default. Occasionally, Windows will lock up (big surprise), especially when playing more recent, graphics-intensive games. When this happens, a reset is required and the magic reset button is pressed. Most of the time, the boot then locks at the LILO screen, displaying "LIX". A second reset is required to get things moving again.

I'm looking for two things:

1) Possible explanations for what would cause LILO to hang. (I suspect the video drivers, but I've tried the ones that shipped with the PC, the latest, and even the Detonator drivers from nVidia; no change in the frequency of lockups or the LILO hang.)

2) Where can I find out what (if anything) LILO is trying to tell me by displaying "LIX"? I have a feeling it's trying to tell me something useful, if I knew how to decode it. And I would like to know the source of this information; I have pretty good luck finding answers myself, but this one has eluded me.

Other possible details: SB16 for sound, 13.6G IDE HD.

Thanks,

--
Darrin Mossor
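
(A starting point: the letters are a progress report. LILO prints L, I, L, O one at a time as each stage of the boot succeeds, so the point where it stops, or prints something odd after a partial "LIL", identifies the failing stage. The LILO documentation shipped with most distributions (look under /usr/doc/lilo* on Red Hat) includes a table decoding the partial output and the error codes. --Editor)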


 Date: Sat, 8 May 1999 18:09:51 -0700 (PDT)
From: Ariel "Leon", a_soul@rocketmail.com
Subject: I need some help here, please!

Hi, I wonder if anyone can help me out here with my partitions. I have a P100 with 16MB of RAM. I recently changed my hard drive because it died, replacing it with a 6.4GB Western Digital drive. When I was installing it using EZ-Drive, the setup program detected that my BIOS wasn't going to support large drives, so it installed EZ-BIOS. EZ-Drive also partitioned the drive into four partitions (right now one has Win95 and the others are free). When I tried to install Debian 1.3.1, the setup insisted on going through the partitioning process, but it detected "bad logical partitions".

What can I do to install Linux on two of the existing partitions without losing my data (I'd like to dual-boot)? One more thing: the D:, E: and F: partitions have Recycle Bins, and I can't get rid of them even when formatting. What's going on here?

Thanks

--
Ariel

Date: Sun, 9 May 1999 19:03:42 +0100 (BST)
From: "D. Lovecraft", dl19@leicester.ac.uk
Subject: Choosing GUI for users

I have set up my PC to allow all the people in my household (we're students, by the way) to use various accounts in Linux. No problem there.

The thing is the user interface we use. Everyone uses KDE as their chosen interface, but I prefer AfterStep. I use the kdm login program to allow people to,... well,... log in, but it always defaults to KDE. For the people in my household, this poses no great problem, as that is what they are after. I would like to be able to use AfterStep, though.

But try as I might, I cannot get it to load AfterStep just for me. I have tried editing .xinitrc in my home directory, and many other things besides, but I cannot get it to go.

Please, oh wise one, what should I do???

--
Dela Lovecraft
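
(A hint that may help: ~/.xinitrc is read only by startx/xinit, not by a graphical login manager, which is why editing it changes nothing at the kdm prompt. kdm is derived from xdm, and its stock Xsession script will usually run an executable ~/.xsession if one exists. A minimal sketch, assuming that setup:

#!/bin/sh
# ~/.xsession: start AfterStep instead of KDE, for this user only
exec afterstep

Don't forget chmod +x ~/.xsession. --Editor)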


 Date: Mon, 10 May 1999 22:11:59 +0100
From: "Michael", michael@cimmj.freeserve.co.uk
Subject: Direct Cable Connection between Win95 and Linux

I just read issue 41 and the great article about direct cable connections between Win95 and Linux. I tried implementing this method but came across a couple of problems running Windows 98 (4.10.1998).

I can get terminal emulation (using HyperTerminal) running at 38400 baud but 115200 crashes at the password prompt. (115200 works with xon/xoff using kermit as the terminal program).

I can't figure out how to get Windows to dial out over the serial line as in your article. I tried creating a new modem using the modem wizard in the Control Panel with 'standard serial between 2 PCs'; it goes through the process, reporting success at the end, but no device appears anywhere.

In Control Panel|System Devices|COM Ports, another device appears for COM1, so Windows thinks I have two COM1s?

I click on Add Dial-Up Connection and can't select anything other than the Hayes Accura modem I have on COM3.

Any ideas?

Thanks in advance for any help you may be able to give.

PS.

I am running Red Hat Linux 5.2 and can't find the ftpserver*.rpm. Do you have details on where I can get the sources/binaries (in any package format -- I have the alien script and ar) so I can set up an FTP server on this machine?

--
Michael


 Date: Mon, 10 May 99 16:05:05 PDT
From: "Ross Waters", rwaters@tartannet.ns.ca
Subject: Linux and Windows

I am new to the computer world and I only have a 386 laptop running Win3.1. Is there a small Linux I can install without losing my Win3.1? I have a 200MB hard drive and 8MB of RAM.

--
Ross Waters

(Check out the article, "Windows/Linux Dual Boot" by Vince Veselosky in issue 38. --Editor)


 Date: Mon, 17 May 1999 13:52:34 -0600
From: Chris Hirsch, chris@symsystems.com
Subject: Netscape Bookmark Window Width

I'm trying to figure out how to adjust the bookmark window width for Netscape 4.51. My problem with the current size is that bookmarks with very long descriptions get truncated in the middle, which makes the descriptions worthless. Is there some way to size the window dynamically? I'll even settle for a static size, as long as it's bigger than the default.

Any suggestions?

Thanks, Chris


 Date: Thu, 20 May 1999 20:30:54 -0400
From: "Jesse Legg", jesse.legg@axom.com
Subject: Good commercial Terminal Emulation

I'm in need of a good commercial terminal emulation package for Linux. It needs *very good* VT320 support and such. Any suggestions?

--
Jesse


 Date: Fri, 21 May 1999 18:01:02 -0500
From: Noel Stoutenburg, mjolnir@ticnet.com
Subject: re: gzipping TWHT-1

I am in the process of switching to Linux, but I cannot complete the process just yet. In addition, I am in the process of moving, and my Linux box is not presently functioning.

I have been downloading and saving the LG issues using TWDT 1, and discovered that the last three issues have been .gz files, but I cannot figure out how to get these expanded on my Win/DOS system. Maybe you can point me to a place where I can find out what process to use, and where to get the appropriate software to accomplish the expansion under DOS/Windows. Thanks.

--
Noel


 Date: Fri, 21 May 1999 18:04:06 -0500
From: Noel Stoutenburg, mjolnir@ticnet.com
Subject: PS to re: gzipping TWHT-1

I am in the process of switching to Linux, but I cannot complete the process just yet. In addition, I am in the process of moving, and my Linux box is not presently functioning.

I have been downloading...[snip]...expansion on DOS/WIN.

Thanks.

P.S. Maybe you could add TWDT 3, which would be an uncompressed file... --
Noel

(Check this month's 2 Cent Tips for ways to uncompress Linux files using Windows. The HTML file is not compressed, and for most issues neither is the text file; I only started compressing it recently at users' request. --Editor)


 Date: Sun, 23 May 1999 20:47:57 -0600
From: "Steven Koch", kochsb@home.com
Subject: How To Make A Bootable Linux (OpenLinux 2.2) Floppy?

Question: How do I make a bootable OpenLinux 2.2 floppy? I have Windows 95 on my PC right now. I already installed OpenLinux 2.2 on my hard disk: I put Linux (root and swap) on my second HDD, the D: drive. I did a full install and it works great, but I can't seem to boot into Linux anymore; I boot straight to Windows 95 (with no problems). I don't know if LILO will work: on my PC (Acer Open P133) I have EZ-Drive installed in the BIOS (my BIOS couldn't handle the 6.4GB WD HDD). I've tried PM's BootMagic; it won't work because of the EZ-Drive. That's why I want to know if it's possible to boot into Linux from a floppy. I tried these methods from a Web site:

I have these 3 files in my C: root directory:
-> loadlin.exe
-> vmlinuz
-> linux.bat (below is what's inside the LINUX.BAT file):

@echo off
cls
echo.
echo.
echo.
echo.
c:\windows\command\choice /t:y,5 "Do you wish to boot Linux?"
if errorlevel 2 goto End
c:\loadlin.exe c:\vmlinuz root=/dev/hdb4 ro
:End
I also made this boot floppy (according to the Web site) and it has these 2 files:
-> autoexec.bat (below is what's inside the AUTOEXEC.BAT file):
 
goto %config%
:win95
SET CTCM=C:\WINDOWS
SET SOUND=C:\PROGRA~1\CREATIVE\CTSND
SET MIDI=SYNTH:1 MAP:E
SET BLASTER=A220 I10 D3 H3 P300 T6
-> config.sys (below is what's inside the CONFIG.SYS file):
 
[menu]
menuitem=Linux, Boot to Linux
menuitem=Win95, Boot to Windows 95
menucolor=15,1
menudefault=Linux, 15

[linux]
shell=c:\loadlin.exe c:\vmlinuz root=/dev/hdb4 ro

[win95]
When I boot with the floppy in the A: drive, I receive this error message:
Invalid system disk
Replace the disk, and then press any key
I take out the floppy and it boots to Windows 95. Am I doing something wrong here? I did exactly what the Web site said to do. It must be something in one of these files, or they're all wrong. Or do you know a better alternative? Thanks,

--
Steve
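
(That "Invalid system disk" message comes from the DOS boot sector: your floppy holds config.sys and autoexec.bat, but it was never made bootable with the DOS system files, so it cannot start at all. A sketch of the fix, from a Windows 95 DOS prompt:

format a: /s

then copy the config.sys and autoexec.bat you wrote for the floppy onto A:. The shell=c:\loadlin.exe line can keep pointing at the C: drive, since C: is still visible after DOS boots from the floppy. --Editor)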


 Date: Mon, 24 May 1999 12:58:40 -0400
From: Steve Ickes, stevei@paonline.com
Subject: Help wanted

I am currently trying to install StarOffice so that I may finally do away with my Microsloth products. However, when running ./setup, I get a script error. I have searched and posted, but to no avail. I did find a reference to using 'ldd' instead of 'exec' when running ./setup.bin; however, being relatively new to Linux, this means very little to me.

Any ideas, help or suggestions? I wouldn't think that this is a big issue. Yes, I am running the appropriate versions of glib and lib and running Red Hat v5.2 with the GNOME desktop and FVWM.

--
Steve
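
(One hint: ldd lists the shared libraries a binary needs, which is probably what that advice was about. Running, say,

ldd ./setup.bin

from the setup directory prints one line per library; any line that says "not found" names a library you still need to install. --Editor)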


 Date: Sun, 9 May 1999 20:48:19 -0400
From: "Timothy Gray", timgray@geocities.com
Subject: CAD on Linux and X

I have a CAD station that is currently Windows-crippled. I have a Summagraphics tablet and an HP plotter, both of which work great under Win95/98 (both are old by most everyone's standards, circa 1989-1990). But I cannot find anything on the net about using a tablet or a plotter with X; XFree86's site mentions mice and never says anything about any other input device. Both items have Win/DOS/CAD drivers, along with SCO and VMS drivers.

Is there anything on the net about serious CAD under Linux using my hardware? If I can get this running under xfig, I can save thousands, and I'd gain one more reason to use my Windows CDs as coasters.

--
Timothy


 Date: Tue, 11 May 1999 10:36:39 +0200
From: Matthias Mikuletz, matthias@theo2.physik.uni-stuttgart.de
Subject: Corrupt partition table

I need urgent HELP.

After deleting an 8GB primary FAT32 partition and reinstalling a 4GB primary and a 4GB extended FAT32 partition on a 13.5GB drive, the Linux partition on the last 5GB isn't accessible anymore.

DOS fdisk works properly and doesn't show anything unusual, but Linux fdisk complains about different logical/physical beginnings/endings and overlapping partitions. PartitionMagic 3.0 only tells me about a partition table error #116.

Windows95 works properly on the first two partitions.

Can anyone tell me about a tool to fix the partition table (to scan the disk and guess correct cylinder/head values)?

Maybe reassigning the extended FAT32 partition destroyed the Linux partition.

Thanks a lot in advance.

--
matthias


 Date: Wed, 26 May 1999 23:37:13 EDT
From: Robert8005@aol.com
Subject: Video Problems

I'm new to Linux and learning fast. I have just one problem: when I use startx or KDE, my screen just shows black and gray stripes. I have a Diamond SpeedStar A50 AGP card and a ViewSonic 17EA monitor. I tried the options Caldera suggested and none worked. Any help would be great.

--
Robert


 Date: Wed, 2 Jun 1999 02:43:24 -0700 (PDT)
From: kenneth kenneth, monkeydrum_98@yahoo.com
Subject: Red Hat

Can you tell me where I can find the steps to install Red Hat Linux 5.2?

--
Kenneth


General Mail


 Date: Fri, 04 Jun 1999 01:31:14 +1000
From: peter, marshypj@ozemail.com.au
Subject: netled article issue 41, by larry ayers

Zee correct address for Matthew Bevan site and NetLed Program is :

http://mars.ark.com/~mbevan/products/netled.shtml


 Date: Tue, 4 May 1999 00:54:19 -0700 (PDT)
From: Felix Morley Finch, felix@crowfix.com
Subject: Conversation with Craig Burton

I think Mr. Burton shows a lack of imagination about how Linux can take over a lot of desktops. He claims:

"Windows growth would have to go to zero and Linux would have to grow exponentially for the next eight to ten years before it would even begin to gain on Microsoft. And until Linux is at 20% market share, no serious developer is going to give it any respect."

That might be so if the hundreds of millions of Windows PCs in use now would still be in use eight to ten years from now. But PCs will be replaced several times during that period. Each replacement is another opportunity for Linux.

Most people use Windows for Office file compatibility and games. StarOffice, ApplixWare, and WordPerfect already offer almost complete Word compatibility, and games are beginning to appear. In a year or two, Linux will be reasonable for a majority of uses. A few early adopters will smuggle Linux into offices, its viability will become evident under practical conditions, and managers will realize they can save money, downtime, and headaches by installing Linux.

Internet compatibility requirements, and resentment over expensive upgrades, will prevent MS from force feeding many more incompatible Office file format "upgrades". Cheaper and cheaper hardware will make the cost of MS software more apparent. Just as MS Works was developed as a cheaper alternative to Office, people will "settle" for Linux for their kids.

Linux doesn't have to replace existing Windows machines. It only has to be a proven, viable alternative when people replace old PCs. Faced with forced upgrades driven by MS's short-sighted policies, people will choose inexpensive, compatible, standards-friendly Linux over expensive, incompatible, Redmond-protocols Microsoft.

--
Felix


 Date: Mon, 3 May 1999 18:30:31 EDT
From: Robbo0119@aol.com
Subject: Linux and W98

I use W98 for most of my essential tasks and also use it for "GAMES". I own a lot of games.

HOWEVER, I recently started to use Linux as an alternative operating system. It has a steep learning curve (at least for me, because I don't seem to own the hardware it comes ready for, and I have had to hunt down drivers on the net and learn to install them properly).

The current state of Linux reminds me of OS/2 when it first came out. I liked OS/2 (I had 3.0), BUT I stopped using it because there were very few (almost no) programs for OS/2 at the time. I considered it a superior OS to Windoze; it actually worked. But you had to learn how to make it work.

I will be really glad if Linux makes it in the market. Be assured that Bill Gates, with all of his money, is not going to let an operating system that's essentially FREE take over his market share (he probably thinks of it as his domain). Good luck, Linux!!

--
robbo


 Date: Mon, 03 May 1999 23:58:12 -0500
From: cbbrowne@godel.brownes.org
Subject: LinuxCAD Reviewz

I think that it is a very good thing that you presented the Official Reaction of Software Forge Inc to the previous "LinuxCAD" Review; the quality of the response as well as the advertising material speaks as loudly as any review could. (Including the one claimed to be "fraudulent.")

It is clearly important for Linux Gazette to remain editorially objective; in this case that has been quite successfully done. However badly you may have wanted to use a spell-checker, the community will always remain grateful for your self-control in throttling that impulse. :-).

--
cb


 Date: Tue, 4 May 1999 20:41:47 +0200
From: Craig Schlenter, craig@qualica.com
Subject: NetLED security problem?

I read an article in Linux Gazette about netled and the comment about not prepending /dev/ to any of the command line arguments intrigued me so I thought I'd look at the source code:

From netled.c:

char tty[10] = "/dev/";
[snip]
strcat(tty,argv[1]);   /* unchecked copy: an argv[1] longer than 4 chars overflows tty */
if((ttyfd = open(tty,O_RDWR)) < 0) {
    fprintf(stderr,"Error opening keyboard %s\n ",tty); 
    exit(1);
}
[snip]

I'm not an expert in these matters, but this would appear to be prone to a buffer-overflow/stack-smashing attack. The fact that it's part of main() and not some subroutine might have some bearing on the matter, as I'm not too sure whether exit() will look for some sort of return address on the stack (and I have no libc source handy to check), but either way it looks like something that needs fixing...

I'd recommend a

if (strlen(argv[1]) >= 5) {
	fprintf(stderr, "argument too long\n");
	exit(1);
}
be added before the strcat. This is especially relevant since you recommend running the program SUID root. Actually, a size of 10 for tty is too small anyway, since you want to be able to pass "console" as argv[1]...

I've cc'ed the author of the article, Linux Gazette, and one of the security mailing list maintainers, who is probably far more knowledgeable than I am about stack overflows, to shed some light on the matter. Thank you,

--
Craig
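
(For reference, a sketch of a tighter construction, assuming a C library whose snprintf has C99 return semantics, as glibc 2.1 does: the length check and the copy collapse into one bounded call, and the buffer can be sized generously without re-deriving the magic length 5.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    char tty[32];               /* "/dev/" plus room for names like "console" */

    if (argc != 2) {
        fprintf(stderr, "usage: %s tty-name\n", argv[0]);
        exit(1);
    }
    /* snprintf never writes past the end of the buffer and returns the
       length it wanted, so truncation is detected instead of exploited */
    if (snprintf(tty, sizeof tty, "/dev/%s", argv[1]) >= (int)sizeof tty) {
        fprintf(stderr, "argument too long\n");
        exit(1);
    }
    printf("would open %s\n", tty);
    return 0;
}

--Editor)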


 Date: Tue, 11 May 1999 11:12:05 -0600
From: njg@itmin.com
Subject: Desktop Users

I wish to make a request to the editor of LG and hope others in my category will support me. I was prompted to do this after reading mail in your journal; one such example is the April letter from Michael J. Hammel (mjhammel@graphics-muse.org), Subject: "Re: a newbie's grief: Erik Refner & Clara Lundqvist". (I must admit that in my debut I created a partition with FIPS and installed Red Hat Linux 2.1 on my PC in 1995 with only a few problems, so it is not THAT bad really. But I could not get my modem to work!)

Linux is more than a BIG OS for developers and programmers. It has a great future for ordinary PC DESKTOP users like me. Many people in the world cannot afford Microsoft software: the OS and the Office suite are very expensive, and the single-PC restriction means that if you have more than one PC, the cost increases. Linux is affordable. One copy of the latest version in a library can be shared by many; in poorer countries this will be a great boon. People will learn to manage with the free software that is there to use. Getting on the Internet will be easy, as Netscape, familiar to everyone, is available. A simple X-based email client allowing multiple users will be all that is needed, since Netscape does not allow multiple addresses on the same PC. Also, viruses are not a problem in Linux, as yet!!! :-) I read in the May LG News Bytes that Corel is going to build a desktop version for ordinary PC users...

"Ottawa, Canada - April 21, 1999 - Corel Corporation (NASDAQ: COSFF, TSE: COS) today announced an alliance with two major Open Source developer communities to advance the development of its proposed Linux distribution; a user-friendly Linux installation and graphical user interface (GUI) for the desktop PC."

But this may be costly. In the April news there was some hope... "Project Independence: Linux for the Masses, http://independence.seul.org/distribution/ "

Therefore my request: could you please reserve a little section of LG for simple desktop uses of Linux, as opposed to programmers, LAN users, server users, etc.? News, as well as software reviews of special value to us, would be great! Thanks

--
Nandalal Gunaratne

(I'd be happy to have desktop uses included. Anyone who submits this type of article can be assured that we will post it. --Editor)


 Date: Wed, 5 May 1999 16:53:53 -0400
From: Larry Kollar, lkollar@my-dejanews.com
Subject: Re: KDE is bloated and slow (not)

I keep hearing all this stuff about KDE is bloated, KDE is slow, KDE put a nasty stain on my favorite T-shirt and I can't get it clean, you get the idea....

I run Linux part-time on a Mac G3/266 (the beige box, "only" 32MB of RAM), with KDE as my standard GUI, and I don't see what people are complaining about. Maybe I'd feel different if I had to run it on a Pentium, or on a Mac IIsi running NetBSD or Linux-68k, but KDE responds well to decent hardware. I recently updated from a beta to 1.1, and it does feel a bit snappier.

I'll admit to shutting down X to compile large projects, but only because of my current RAM limits. Once I add more RAM, I'll probably change the runlevel to 5 and have X + KDE running all the time.

Besides, my wife would kill -9 me if I removed KDE -- she learned how to boot into Linux & start X just so she can play kmahjongg and a couple of the other games. This by itself is a reason to have KDE available; you can spend a few minutes showing newbies a comfortable interface and blunt the irrational fear of not-Windows.

Looking for a 3-button ADB mouse,

--
Larry


 Date: Mon, 24 May 1999 14:52:06 +0200
From: Roger
Subject: MTBF for Craig Burton

Craig Burton said "Show me the MTBF figures"

I come from a hardware background, where we calculate MTBF figures before releasing systems. If nothing else, they give a rough guide to how many spares you need ;-)

BUT, basically speaking, this calculation is done by taking an MTBF figure for each element ("this type of component employed in this manner has this MTBF"), which is a textbook figure derived from statistical analysis, and then adding together the failure rates, i.e. the reciprocals of the MTBFs; the system MTBF is the reciprocal of that sum.

This means that if system A has 10 widgets and 6 doofas, whilst system B has 15 widgets and 12 doofas, then system B, having more parts, will have a much lower MTBF.
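
(To make that concrete with invented numbers: if each widget has an MTBF of 100,000 hours and each doofa 50,000 hours, system A fails at a rate of 10/100,000 + 6/50,000 = 0.00022 failures per hour, an MTBF of roughly 4,500 hours, while system B fails at 15/100,000 + 12/50,000 = 0.00039 failures per hour, roughly 2,600 hours.)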

It may seem a harsh way to calculate reliability, but generally speaking it works, and one always regards system reliability as being inversely proportional to system complexity. Most of us are not able to review the NT source, but it is believed to be far more complex than Linux, which would suggest that its MTBF is proportionately lower.

Of course, in software there are many other parameters, but nonetheless complexity is a major one. Another biggie is the language used for development: C programming is far more vulnerable than higher-level languages to obscure bugs such as memory leaks, but for performance reasons low-level languages are considered essential for OS work, and so both platforms share that vulnerability (in fact one can easily find disaster tales of, e.g., memory leaks on both platforms).

Another major factor is using tried and trusted methods (or re-using well-proven code). Much of the reason for NT's additional complexity is that it has to support so many MS-invented protocols designed to render it incompatible with the rest of the world. This is particularly so once one gets out of kernel space into userland; Linux makes heavy (re)use of legacy *nix software such as sendmail, which has a very long history.

In a nutshell, there are sound scientific arguments as to why Linux may be more reliable than NT. Indeed, one of Linus's rallying cries is to keep things simple, and he resists attempts to overcomplicate the kernel. MS (IMHO) appear to have tied themselves in knots with all their attempts to do things in a proprietary manner.

I think Craig's comments, implying that people who say Linux never goes down are talking shit and are just Linux worshippers, are a bit excessive. Of course Linux does go down, but these people are reflecting a common experience: Linux boxes do seem to go months between reboots (so one forgets when one last rebooted), whereas NT reboots tend to be common enough to be frustrating (...but we rebooted just a couple of weeks ago). It is a subtle difference, but Linux, by being a little better, appears to cross the memory threshold.

All I will add is that at work I use both a Linux and an NT server. Neither is particularly loaded, and both are doing file and print sharing (although the Linux box does handle a mega printer which often has hundreds of megabytes in the queue; it was moved from the NT box because it did not work there). The Linux box has only ever gone down during power outages (no UPS), whilst the NT box (which does have a UPS) has gone down several times in the two years I have been in this environment. Note that the Linux server was just loaded and set up on the fly by ourselves, whilst the NT box was set up, and is maintained, by an outside firm with MS-certified personnel.

Am I a religious nut for pointing this out?


-- Bye for now, And watch out for those low flying Penguins.......

Roger


Published in Linux Gazette Issue 42, June 1999

"Linux Gazette...making Linux just a little more fun!"


News Bytes

Contents:


News in General


 July 1999 Linux Journal

The July issue of Linux Journal will be hitting the newsstands June 11. This issue focuses on Science and Engineering. Feature articles include "Archaeology and GIS", "SCEPTRE: Simulation of Nonlinear Electric Circuits", "Stuttgart Neural Network Simulator" and "Real-Time Geophysics Using Linux". Also included are an article by Dan York on "Building a Linux Certification Program", one by Jon "maddog" Hall about his visit to Fermi Labs at Spring COMDEX, and an interview with Dev Mazumdar and Hannu Savolainen of 4Front Technologies. Linux Journal now has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue62/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/ljsubsorder.html.

For Subscribers Only: Linux Journal archives are now available on-line at http://interactive.linuxjournal.com/


 1999 USENIX Annual Technical Conference

June 6-11, 1999 -- Monterey Conference Center, Monterey, California

The keynote will be by John Ousterhout, creator of Tcl/Tk, speaking on a fundamental shift in software development toward applications created by extending existing applications, protocols, frameworks, and devices.

The FREENIX track is devoted to high-level technical discussion of open source software. Peer-refereed papers, expert talks, and evening sessions will be led by leading OSS developers including Linus Torvalds, Kirk McKusick, Theodore Ts'o, Theo de Raadt, and Robert J. Chassell for Free Software Foundation/GNU. (Richard Stallman had planned to lead a BoF but will be in Turkey on FSF business.)

Web site: http://www.usenix.org/events/usenix99


 2000 USENIX Annual Technical Conference: Call For Papers

June 18-23, 2000 -- San Diego, California

The Program Chair, Christopher Small of Lucent Technologies-Bell Labs, and the Program Committee seek to bring together the broad advanced-computing community under a single roof to share the results of the latest and best work, find points of common interest and perspective, and develop new ideas that cross and break boundaries. They invite your submission of original and innovative papers. Invited Talk proposals and suggestions and proposals for tutorials are also very welcome.

Paper submissions are due November 29, 1999.

See http://www.usenix.org/events/sec99/cfp.html.


 Linux support in Indonesia

PT Cakram DataLingga Duaribu has announced its first commercial Linux support in Bogor, West Java, INDONESIA. The support includes a Linux consultation service, home PCs pre-installed with Red Hat Linux, and Linux servers with special configurations.

For more information, contact http://cdl2000.or.id/linux.html or linux-support@cdl2000.or.id.


 Linux 3D Gaming Initiative looking for volunteers

The Linux 3D Gaming Initiative (http://www.linux3d.net) is a pro-bono community resource project initiated by Full On 3D (http://www.fullon3d.com). It is open to, and depends on, contributors from all sorts of hardware and gaming websites.

Volunteers needed:


 Linux Administrators Security Guide 0.1.0

https://www.seifried.org/lasg/
150+ pages, Adobe Acrobat format. An https:-capable browser is required for download (This means a browser that can view secure webpages, such as recent versions of Netscape or Internet Explorer.)

There is an LASG FAQ in HTML format, but https: is still required.
https://www.seifried.org/lasg/lasg-faq.html.


 sourceXchange: Software-Development Model of the Future

More than just a job-posting or recruiting Web site, sourceXchange is the industry's first vehicle for managing the open-source development process in a way that protects the interests of both corporate sponsors and open-source developers.

sourceXchange is a Web site that maintains a database of all published project RFPs posted by corporate sponsors, registers open-source developers and their teams, manages RFP responses from the developer community, and manages payment. It will also incorporate peer review and project milestones to ensure the quality and reliability of each development project.

SourceXchange, an affiliate of O'Reilly & Associates, was founded in conjunction with HP, the founding sponsor. The two companies plan to launch the service in early summer with an array of open-source development projects from HP that expand its commitment to open-source technologies. Pending a successful beta launch in July, sourceXchange will accept projects from other enterprise sponsors.

See www.sourcexchange.com for details.


 Cosource.com: another service to fund Open Source development

Redmond, WA -- Veriteam, Inc., today announced the launch of their web-based service, Cosource.com ( www.cosource.com), which will enable users of Open Source Software to directly influence the development of Open Source Projects.

Cosource.com will launch the beta-testing phase of their service on June 1, 1999. During the beta-testing phase, registered sponsors will nominate seed projects for development by Open Source developers, while programmers will register as potential developers of sponsored projects. After the beta phase, Cosource.com will begin accepting sponsorships for specific projects from consumers of Open Source Software.

Cosource.com allows individuals to offer financial rewards to developers of Open Source Projects in exchange for creating software that meets the individuals' needs. On the web site, a database records the specifications and initial sponsorship amount offered for a project. After the initial sponsorship, other sponsors can easily add their sponsorship amounts to the project, thus increasing the bounty offered for the project.

Once a significant bounty has accrued, developers bid for the right to produce the software according to the specifications detailed by the project's sponsors. The Staff at Cosource.com coordinate the interface between the sponsors and developers, making sure the needs of the sponsors are met and the developers are paid for their efforts. Sponsors make their payments via a secure credit card payment system, and the developer is paid with one check issued by Cosource.com.


 O'Reilly "Open-Sources"

Sebastopol, CA - O'Reilly & Associates announced today that they are making the entire new book, "Open Sources: Voices From the Open Source Revolution", freely available (or "open-sourced") on their web site. Open Sources is a collection of essays that offer insight into how the Open Source movement works, why it succeeds, and where it is going.

Open Sources, published in January 1999, has earned considerable critical acclaim. In it, Open Source pioneers such as Brian Behlendorf (Apache), Scott Bradner (Internet Engineering Task Force), Jim Hamerly (Netscape), Kirk McKusick (Berkeley Unix), Tim O'Reilly (O'Reilly & Associates), Tom Paquin (mozilla.org), Bruce Perens (Open Source Initiative), Eric Raymond (Open Source Initiative), Richard Stallman (Free Software Foundation), Michael Tiemann (Cygnus Solutions), Linus Torvalds (Linux), Paul Vixie (BIND), Larry Wall (Perl), and Bob Young (Red Hat) share their vision of the Open Source movement.


 Pacific HiTech and Computer Associates announce Linux partnership

ISLANDIA, N.Y., and TOKYO, JAPAN, May 18, 1999--Computer Associates International, Inc. (CA) and Pacific HiTech today announced a partnership to broaden the acceptance of Linux and Linux-based applications by corporate users across the Pacific Rim and worldwide.

Under terms of the agreement, CA and Pacific HiTech will create a unique, high-value operating system solution that incorporates both Pacific HiTech's TurboLinux and CA's industry-leading Unicenter TNG management technology. CA will develop versions of Unicenter TNG and Unicenter TNG Framework to support TurboLinux, while Pacific HiTech will promote the use of Unicenter TNG as the premier management solution for its Linux customer base. The companies have also agreed to collaborate closely on engineering multiprocessor clustering and failover support for their respective solutions.

Here's the full press release.


 Pacific HiTech and IBM

Pacific HiTech and IBM announced an industry first partnership whereby Pacific HiTech will ship IBM's DB2 Universal Database with its Linux Operating Suite, TurboLinux.

Pacific HiTech will sell its TurboLinux products integrated with IBM middleware - beginning with DB2 Universal Database - through its channels in Asia and North America.

Also announced today was the largest deployment to date of IBM NetFinity servers running Linux. The deployment, which took place at Kyoto Sangyo University, a leading university based in Kyoto, Japan, involves more than 600 IBM NetFinity 3000 servers running on Pacific HiTech's TurboLinux workstation. The installation of this technology will enable the university's students, faculty and researchers to run both the TurboLinux workstation and Microsoft Windows NT operating systems on a single network.


 Intel and H-P: Linux on Merced

Intel and HP have announced that the Merced program has included Linux as one of the Operating Systems the chip will be certified on at its release date.

The announcement is on Intel's website.


 USALogin web site revamp (pre-configured Linux systems)

USALogin specializes in pre-configured Linux solutions designed to snap into your existing corporate network.

USALogin's solution will

The system is complete and installed into your office with a single low monthly cost.

USALogin's web site is www.usalogin.net.


 CTiTEK replaced Windows NT with Linux on a client's webserver

Chesterfield, MO - May 18, 1999 - CTiTEK Inc.

"This is the fourth Linux installation in two months. Others consisted of firewalls and Email servers.

An estimated $2,000 - $10,000 annual savings can be realized when switching to a Linux server. (Includes labor, hardware, and software savings).

A Microsoft FrontPage error on an Email form was the last straw that caused this conversion to Linux.

Instead of consistent errors and copious amounts of time spent on Microsoft software's undocumented 'issues', it was decided to rebuild the system as a Linux machine.

It all started one year ago: the customer wanted to run several websites on one machine, so Windows NT with Option Pack 4 was used.

Today it became necessary to run an email form on the website (an area that one can fill in, with the info sent by email to someone in the company), and FrontPage was used to keep everything in the MS 'family'. Unfortunately, the FrontPage email form did not work properly with the webserver. After spending countless hours trying to solve the problem, including several calls to Microsoft, we realized that the Windows NT operating system would have to be rebuilt with the latest version of the Management Console (an uninstall and reinstallation of the latest Option Pack did not work).

We selected Linux because it is a robust, free Operating System (benchmark tests with reputable magazines indicate a minimum 75% higher performance).

TRADEMARKS. Microsoft, Windows, Windows NT, and/or other Microsoft products referenced herein are either trademarks or registered trademarks of Microsoft."

CTiTEK's website is www.citek.com.


 Alpha Processor, Inc. joins Linux International

LINUX EXPO, Raleigh, NC, May 19, 1999 - Alpha Processor, Inc. (API), the leading provider of the world's fastest 64-bit microprocessor and related technologies, today announced it has joined the non-profit Linux International organization, formally pledging its continued commitment to support application development for the Linux operating system.

"In becoming a member of Linux International, API joins industry forerunners dedicated to the mass acceptance of Linux," said Jon "Maddog" Hall, executive director of Linux International. "Offering today's leading high-performance platform for Linux, API is an ideal candidate for membership. This symbol of API's commitment to growing this market undoubtedly will inspire innovations throughout the Linux community."

API is committed to developing enabling technologies to speed adoption and growth of applications built on the Alpha Linux platform. Alpha's superior speed, performance and reliability make it a natural environment for Linux. API's marketing and engineering partnerships and industry standard platform price points are expected to expand Alpha's share in this growing market.

The company's website is www.alpha-processor.com.


 Magic Software announces the "Magic for Linux Really Cool Contest"

IRVINE, CA (May 20, 1999) -- Magic Software Enterprises (NASDAQ: MGIC) announced today that it will award a free 10-day cruise for two to Antarctica to the developer who builds the best e-commerce solution for the Linux platform using Magic, the company's highly productive development technology. The contest, titled "The Magic for Linux Really Cool Contest", runs from May 20, 1999 through October 15, 1999, with all entry forms due no later than September 30, 1999. Complete details on the contest can be obtained through the company's web site, www.magic-sw.com.


 Ardent Software delivers key data management software for Red Hat Linux

WESTBORO, Mass., May 20, 1999 - Ardent Software, Inc. (Nasdaq: ARDT), a leading global data management software company, today announced a partnership with Red Hat Software, the market leading Linux distributor and service provider. In partnership with Red Hat, Ardent will port key data management software tools to Red Hat Linux (RHL), allowing Ardent's extensive channel of resellers and distributors to make their business applications available to Red Hat Linux users. Among the Ardent products to be available on the Red Hat Linux platform are its UniVerse and UniData relational databases and related development tools, including the System Builder multi-tier 4GL and RedBack Web OLTP environment.

Ardent's web site is www.ardentsoftware.com.


 IACT's Freedom of Choice Petition

Join us in IACT's Freedom of Choice Petition, to stop the exclusive pre-installation [bundling or tying] of a single company's software on the computers sold, bought and used across the world. To bring real choice and innovation to the PC market, we should be able to buy and sell new computer systems compatible with Linux and a wide range of software programs, in any combination.

Help us send that direct message to the PC companies, by signing and supporting IACT's Freedom of Choice Petition! We're already getting great support from the Internet community and from users, programmers and resellers of Linux, OS/2, Unix, DOS, BeOS, BSD and yes, Windows, too. To add your name to all of theirs, just use either our on-line form or regular e-mail. Details are at http://pages.cthome.net/iact/iaction-freechoice.html.


 Linux Links

Rasterman explains his departure from Red Hat: http://slashdot.org/article.pl?sid=99/05/31/1917240&mode=thread

LuCAS: Spanish-language Linux documentation: http://lucas.hispalinux.es

IBM announces support of four Linux distributions: http://www.theregister.co.uk/990525-000006.html.

SCO's views of Linux and comments on recent press articles


Software Announcements


 Proven dk bookkeeping program

May 3, 1999 -- Proven Software, Inc. today released Proven dk, Small Business Edition. Proven dk is a single-user quick-entry bookkeeping package written specifically for the Linux desktop. The Small Business Edition is priced at $99 (US). An evaluation copy is available on the company's website.

Despite its low price, Proven dk, Small Business Edition is a comprehensive accounting system which includes Sales Invoicing, Accounts Receivable, CheckWriter, Accounts Payable, General Ledger, and Financial Report Generator. This new product provides the general bookkeeping and accounting essentials for most small businesses and organizations.

The company's web site is www.provenacct.com.


 EasyCopy: printing and scanning programs for CAD-related industries

SAN JOSE, Calif., April, 1999 - AutoGraph International (AGI) debuted EasyCopy 6.0 at the COE TechniFair with a scheduled late May release to the marketplace. EasyCopy 6.0 is a major rewrite of AGI's flagship, EasyCopy/X, which has an installed base of more than 150,000 users worldwide. With this new generation EasyCopy has taken a major step from a printing solution to a flexible set of image communication tools.

The company says EasyCopy, EasyConvert, EasyCopy/Page, EasyCopy/Scan and EasyCopy/Graphics run on Linux. Pricing of EasyCopy begins at $395.

The company's URL is http://www.augrin.dk.


 Other Products

Harlequin Lispworks Beta for Red Hat on Intel (Common Lisp implementation): http://www.harlequin.com/devtools/lisp.

/BriefCase 3.0 Released as OpenSource (Software Configuration Management solution): http://www.applied-cs-inc.com/.

Sylvan Prometric to Deliver New Linux Certification Tests:
Information about the Sair Linux training and certification program: www.linuxcertification.org
Locations of Sylvan APTCs: www.sylvanprometric.com

Integrated Computer Solutions, Inc. (ICS) has announced that its flagship product, Builder Xcessory (BX PRO(tm)), is now available for SuSE Linux. The press release is at http://www.ics.com/about/whatshot/press_releases/bxlinux-suse.html. This is a WYSIWYG integrated development environment.

Metrowerks' CodeWarrior software development tool has been ported to Red Hat. http://www.metroworks.com.

Web-4M(tm) 2.5 provides a comprehensive collaboration/groupware environment for Linux. The Web-4M server supports email, news, phone, the Browseable Document Library(tm), the Interactive Slide Show(tm), audio conferencing, chat, a white board, a calendar, scheduler and more. The Web-4M server runs under Linux and other platforms in conjunction with the Apache web server. Clients can be Linux or any platform that supports a Java-compliant web browser. http://www.jdhtech.com.

SuperAnt releases Linux Security CD-ROM: http://www.superant.com.

VariCAD professional CAD system: www.varicad.com.


Published in Linux Gazette Issue 42, June 1999




This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1999 Specialized Systems Consultants, Inc.

Contents:

(!)Greetings From Jim Dennis

(?)Setting up a Loopback Mount --or--
Loopback (localhost) NFS Mounting for FTP
(?)sites for general disk info? --or--
General HD Info and Boot Code
(?)TCP Sockets --or--
SYN, SYN/ACK, ACK, ACK, ACK: TCP Handshaking "Pleased to meet you!"
(?)cvs tree for pam --or--
PAM chroot: Wherein Jim rants about PAM
(?)Resizing partitions --or--
Filesystem Management: What must be "resident" at all times?
(?)Hubs --or--
Ethernet Switches vs. Hubs
(?)procmail and saved variables. --or--
MATCH and Replaceable Parameters in procmail
(?)RMA for Video Card
(?)Unix Internal --or--
Inodes Numbering: An Academic Question
(?)One Bad Sector thats gettin on my nerves! --or--
One Bad Sector: It Doesn't Ruin the Whole Disk
(?)Server shutdown/restart: 2-key keyboard --or--
Server Shutdown Button
(?)hal91 --or--
HAL91 (Floppy Based Linux Distribution)
(?)ping at a differnt port --or--
Ping a Port: NOT
(?)Hey answer guy!!! --or--
Linux as a Job! Hobbies become fun and profit
(?)New Kernel Loses Ether Driver; Dial on Demand and Masquerading
A grabbag of user questions.
(?)pcmcia install on debian
(?)work-around for gdi printer? --or--
WinPrinter Work-around
(?)Question about 2 GB max? --or--
Maximum Filesize vs. Maximum Filesystem Size
(?)Advanced ipfwadm question. icmp forwarding. --or--
ICMP Masquerading
(?)RedHat 5.2 Kernel 2.0.36 --or--
Upgrade Breaks Several Programs, /proc Problems, BogoMIPS Discrepancies
A visit to "Library Hell"
(?)Pls spare a minute: --or--
Spare a Minute to Provide "Some Info"
(?)HELP!!!!!!!!!! --or--
Data "Losted" (sic)
(?)"Network Neighborhood" --or--
Network Neighborhood: Heterogenous File Sharing
(?)AOL

(!) Greetings from Jim Dennis

Lies, Damn Lies and Benchmarks

Those of you who read Slashdot (http://www.slashdot.org), the Linux Weekly News (http://www.lwn.net), or other common Linux webazines and forums have undoubtedly tired of reading about the Mindcraft fiasco. If so, maybe you'll skip this and go on to the usual collection of "Answer Guy" questions.

The Mindcraft story has been interesting. As some of my colleagues have pointed out, their "attack" on Linux serves more to legitimize Linux as a choice for business servers than to undermine it. In addition, it appears that the methodology they used has uncovered some legitimate opportunities for improvement in the Linux process scheduling facilities.

I'm referring to the "thundering herd" issue that results from a large number of processes all doing a select() call on a given socket or file resource -- such as having 150 Apache servers listening on port 80. However, that is not a new issue; Richard Gooch (a significant contributor to the Linux kernel mailing list and code base) discussed similar issues and possible patches almost a year ago:

I/O Event Handling Under Linux
http://wwwatnf.atnf.csiro.au/people/rgooch/linux/docs/io-events.html
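
To make the "herd" concrete, here is a compressed sketch of the pre-forked select-then-accept pattern that triggers it. This is not Apache's actual code; the port number and process count are arbitrary:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int i, conn, listen_fd;
    struct sockaddr_in addr;
    fd_set rfds;

    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *) &addr, sizeof addr);
    listen(listen_fd, 128);
    fcntl(listen_fd, F_SETFL, O_NONBLOCK);  /* losers of the accept race get EAGAIN */

    for (i = 0; i < 150; i++)               /* pre-fork 150 children, Apache-style */
        if (fork() == 0)
            break;                          /* (the parent falls through and serves too) */

    for (;;) {
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        /* one incoming connection makes select() return in EVERY process:
           all of them get scheduled, and all but one wake up for nothing */
        select(listen_fd + 1, &rfds, NULL, NULL, NULL);
        conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;                       /* lost the race */
        write(conn, "hello\r\n", 7);
        close(conn);
    }
}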

It looks like some work will go into the Linux kernel and into Apache to resolve some of those issues. In addition, I know that Andrew Tridgell and Jeremy Allison (a couple of the principal members of the Samba development team) have been continuing their work on Samba.

So the Linux/Apache/Samba combination will show improvement for the general case. Samba 2.0.4 just shipped and already has some of these enhancements. Some of the interesting changes to the Linux kernel might already be present in the 2.3.3 development kernel (and might be easily backported as a set of 2.2.9 patches). So we could see some of the improvements within a couple of weeks.

Some of these improvements may give Linux a better showing in any "Mindcraft III" or similar benchmark. Maybe they won't. The improvements will be for the general case --- and I don't see much chance that open source developers will sneak in special case code that will only improve "benchmark" performance without being of real benefit.

That's one of the problems with closed-source vendors. There's great temptation to put in code that isn't of real value to real customers but will be great for benchmarks and magazine reviewers. This has been detected on several occasions with several vendors, but it would be completely blatant in any open source project.

Frankly, I don't care if we improve our Mindcraft results. I prefer to question the very premises on which the whole discussion is based.

There are three I'd like to mention: that file and web services call for "big servers", that Apache is the best way to serve simple static pages, and that SMB is a protocol worth optimizing for in the first place.

The fallacy of the whole Mindcraft mindset is that we should have "big servers" to provide file and web services. Let's ask about that.

Why?

The reason Microsoft wants to push big servers should be relatively obvious. Microsoft's customers are the hardware vendors and VARs. Most end customers, even the IT departments at large corporations, don't install their own OS. They order a system with the OS and major services pre-installed (or order systems and pay contractors and/or consultants to perform the installation and initial configurations).

So, it is in Microsoft's vested interest to encourage the sale of high end and expensive systems. The cost of NT itself is then a tinier fraction of the overall outlay. One or two grand for the OS seems less outrageous when expressed as a percentage of 10 to 20 thousand dollars.

So, how many customers really need 4-way SMP systems? Are 4-way SMP systems EVER really a better choice for web and file services than a set of four or more similar quality separate systems?

Big 4 or 8 CPU SMP servers are probably the best choice for some applications. It's even possible that such systems are optimal for SOME web and file servers. What's really important, however, is whether such systems are appropriate to YOUR situation.

Back when NT was first starting to emerge as a real threat to Netware, it was interesting that the press harped on the lack of "scalable SMP" support in Netware 3.x and 4.x. I'm sure there are analysts today who would continue to argue that this was the primary reason for Netware's loss of market share during the early to mid '90s.

Personally, I suspect that the bigger factors in Netware's woes were three other causes:

Client support:
MS shipped Win '95 and WfW with support for SMB. Novell never adapted their servers to work with the support that was shipped with the clients. By all accounts SMB is a vastly inferior suite of protocols to Netware's NCP. However, IT managers are often eager to save a penny on every client by not having their sysadmins and help desk people visit every new system to install network client drivers.
TCP/IP:
Novell provided TCP/IP early on -- in the form of expensive add-ons to their main servers, and a relatively expensive suite of client tools for MS-DOS. They didn't adapt to the emergence of the Internet in corporate circles by including TCP/IP as a standard feature in their base packages. Meanwhile, IPX's SAP (service advertising protocol) was sucking up a noticeable portion of the available bandwidth as more companies put MANY more devices on their LANs and WANs. Novell had the technology, but they failed to rethink their pricing model, probably in a doomed effort to protect some of their revenue streams.
Pricing:
Microsoft had a huge advantage over Novell. They could afford to practically give away NT server for a few years (and perhaps turn a blind eye to some amount of piracy, temporarily) so long as that would cost Novell some server licenses.

Of course, I could be wrong. I'm not an industry analyst. However, I do know that the considered opinion of the Netware specialists I knew back around '93 was that Netware didn't need SMP support. It was plenty fast enough without additional processors. NT, on the other hand, has so much overhead that it needs about 4 CPUs to get going.

So, if we're not going to use "big servers" how do we "scale?"

Replication and Distribution.

Look at how the whole Internet scales. We have the DNS system which distributes (and delegates) the management of a huge database over millions of domains. We don't even bat an eye that an average DNS lookup takes less than a second. The SMTP mail system also has proven scalability. It handles untold millions of messages a day (some of which isn't even spam).

Of course some people are already chomping at the bit to write to me and explain what an idiot I am. There are problems with replicating files and HTML across multiple servers. Some applications are very sensitive to concurrency issues and race conditions. There are cases where the accessor of a file must have the absolute latest version and must be able to retain a lock on it. There are cases where we want to lock just portions of files, etc.

However, these are not the most common cases. Going for the "big server" approach is often a sign of laziness. Rather than identify the specific sets of applications that require centralized control and access, they try to toss everything on the "one size stomps all" server.

In the degenerate case of the Mindcraft benchmarks it would be amusing to pit four low cost PCs running Linux against one "big server" running NT. I say "degenerate case" since the benchmarks used there don't seem to have any concurrency or locking issues (at least not for the HTTP portions of the test).

Needless to say, we'd also see some advantages beyond the scalability of our "horde of cheap servers" approach. For example, we could use dynamic DNS and failover scripts to ensure that transparent availability was maintained even through the loss of three of the four servers. There's certainly some robustness to this approach. In addition, we can perform tests and upgrades on one or more systems in these loose clusters without any service downtime.

Because these use commodity components, it's also possible to keep shelf spares in an on-site depot, reducing the downtime for individual nodes and providing the flexibility to rapidly increase the cluster's capacity in the face of exceptional demands.

All that --- and it's usually CHEAPER, too.

Naturally there are some challenges to this approach. As I mentioned, we have to configure these systems with some sort of replication software (rdist, rsync) and test regularly to ensure that the replication process isn't introducing errors and/or corruption. There are also the problems of writable access, and the need for the nodes in a cluster to communicate about file locking and application (i.e., CGI) state.
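
For instance, each node might pull the master's document tree from a cron job along these lines (hostname and paths invented for the example):

rsync -a --delete wwwmaster:/home/httpd/html/ /home/httpd/html/

The -a flag preserves permissions and timestamps, and --delete removes files that have disappeared from the master, so the nodes converge on the master's state rather than accumulating strays.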

The point is not so much to promote the "horde of thin servers" approach as to question the premise. Do we really need a "big server" for OUR task?

I've talked about the fundamental disconnect between mass marketing and customer requirements before. "Mass marketing" sells features in the hope that the masses will buy them. Customers must consider the "benefits" of each "feature" before accepting any arguments about the superiority of one product's implementation of a given "feature" over another's.

As an example, let's consider Linux' much-vaunted "multi-user" feature. To many people this is not a benefit; many people will never have anyone else "logged into" their system. To people like my mom, "multi-user" is just an inconvenience that requires her to "login" and means that she sometimes needs to 'su' to get at something she wants. (Granted, there are ways around those.) In some ways Linux' "multi-user" features (and those of NT, for that matter) are actually a detriment: they represent a cost (albeit a small and easily surmounted one) to some users.

This leads us to the other two issues that I would question.

Apache is not necessarily the best package for providing high speed, low-latency, HTTP of simple, static HTML files.

There are lightweight micro web servers that can do this better. I've also heard of people who use a small cluster of Squid proxy servers interposed between their Apache servers and their routers. Thus the end users transparently access an organization's Squid caches rather than directly accessing its web servers. This is a strange twist on the usual case where the Squid caches are located at the client's network.
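For the curious, the Squid side of such an "httpd accelerator" setup amounts to a few lines in squid.conf (directive names are from the Squid 2 series as I recall them; verify against your version's documentation, and substitute your own server name):

http_port 80
httpd_accel_host www.yourdomain.com
httpd_accel_port 80

... where www.yourdomain.com is the real (hidden) Apache server that Squid fetches from on a cache miss.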

By all accounts SMB is a horrid filesharing protocol. The authors of Samba take a certain amount of wretched glee in describing all of the misfeatures of this protocol. Its sole "advantage" is that it's included, preconfigured, with 98% of the client systems that are shipped by hardware vendors today.

Note: I'm NOT saying that NFS is any better. Its main advantage is that almost all UNIX systems support it.

Personally I have high hopes for CODA. It's about time we deployed better filesystems for the more common requirements of the new millennium.

I'm not the first to say it:

"There are lies, damned lies, and benchmarks"

However, the important thing about any statistic or benchmark is to understand the presenter. Look behind the numbers and even the methodology and ask: "Who says?" "What do they want from this?"

Alternatively you can just reject statistics and benchmarks from others, and make your decisions based on your own criteria and as a result of your own tests.

The scientific method should not be used solely by scientists. It has application for each of us.

-- Jim Dennis


(?) Loopback (localhost) NFS Mounting for FTP

From Mark S. Turczan on Sun, 02 May 1999

(?) James,

Would you know of a way to setup a loopback mount within a /home/ftp hierarchy?

Or could you provide a better method to achieve the following?

I've got a set of disks setup under software raid, and I've mounted them under /mnt/raid. What I'd like to do is include a link from a directory under /home/pub/Archive to the actual files under /mnt/raid/Archive. I've tried doing this with a symbolic link, but it doesn't seem to resolve it when I connect through ftp.

(!) When you connect as "anonymous" or "ftp" through the conventionally configured FTP service, or as any member of a "guestgroup" to a WU-FTP daemon, you are in a chroot jail. This is intended to prevent you (an FTP client) from wandering around the filesystem peeking into things where you don't belong (as an anonymous or guest user).
Naturally symbolic links don't pierce through a chroot wall.
It's possible to configure your system to act as an NFS server and client (concurrently) and to export a directory tree (presumably in read-only mode) to yourself.
This is one of several tricks that are referred to as a "loopback mount" (not to be confused with the mount -o loop=... option, which is a way of mounting a file image as a filesystem). In this case you're doing a perfectly normal NFS export, and a perfectly normal NFS mount. The only oddity is that the export and mount are on the same machine and are going through the loopback network interface.
So you put a line in your /etc/exports file like:
/mnt/raid/ftparea 127.0.0.1(ro,insecure)
... and possibly some lines like:
/mnt/raid/ftparea/no/ (noaccess)
(to define a set of subdirectories under the exported directory tree to which you want to deny access).
... and then you use a command like:
mount -t nfs 127.0.0.1:/mnt/raid/ftparea /home/ftp/home
... or whatever.
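One quick sanity check after all this is to ask the mount daemon what it thinks is exported:
showmount -e 127.0.0.1
... which should list /mnt/raid/ftparea in its export list before you attempt the mount.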
Personally I think it's a horrible kludge. But I've done things sort of like this and it does work.

(?) Thanks for any help you can offer.
--
Mark Turczan

(!) Hope that makes sense.

(?) General HD Info and Boot Code

From Erik Bryer on Sun, 02 May 1999

(?) Hi,

Got your email address from:
http://www.linuxvalley.com/mirror/lg/issue36/tag/79.html

(!) Wow! Someone who actually tells me where they found me! I've always thought that any e-mail to someone you've never met should include some passing reference of this sort.
Of course, there are cases where it might be superfluous. If you were to e-mail Linus Torvalds he'd have a pretty good idea where you got his address; it's in the /usr/src/linux tree on millions of computers.
Anyway, linuxvalley.com looks like an interesting site --- if you read Italian. I've seen quotes of myself translated into Italian, Portuguese and a couple of other languages --- it's amusing. (I just feel sorry for the interpreters --- any of you out there? I owe you each a beer!).

(?) Do you know of any websites with general hard drive info? More specifically, and I'm quite happy just with a web page reference if you like, I wonder if, like DOS, Unix requires executable code in the boot sector, if it even has a boot sector. I've tried AltaVista, but found mostly junk. Thanks.

Erik Bryer Calgary

(!) Well I don't know about general hard drive info. Many of the hard drive manufacturers put technical information about their drives up on the web. Of course you usually have to hunt through quite a lot of marketing fluff that clogs many corporate sites to get to the good stuff.
However, I can answer the question regarding boot code.
The PC BIOS requires that your OS (any OS) be loaded from somewhere. Your mainstream choices are: hard drive, floppy, network and (most recently) CD-ROM. There are some devices which emulate drives (sold under names like "ROMDisk" et al.).
When loading from a hard drive the BIOS loads the first sector (512 bytes) on track zero. This is called the MBR (master boot record). It contains two parts: some boot loader code and a partition table. The partition table is in the last 66 bytes of the MBR: there are 4 primary partition entries of 16 bytes each, and a pair of "signature" bytes which indicate whether or not the drive has ever been initialized. The other 446 bytes of the MBR contain the primary bootloader code.
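If you're curious you can inspect (and back up) your own MBR with dd. A quick example, assuming your first IDE drive is /dev/hda:

dd if=/dev/hda of=/tmp/mbr.img bs=512 count=1
od -A d -t x1 /tmp/mbr.img | tail -6

# the partition table is the 64 bytes starting at offset 446;
# the signature bytes (0x55 0xAA) are the last two of the sector

Keeping a copy of that 512 byte file on a rescue floppy is cheap insurance.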
As you mentioned, MS-DOS provides its own bootloader. That just looks for the active partition and loads a secondary bootloader from the first sector of that partition.
OS/2, NT, and the various PC implementations of UNIX each provide their own bootloaders. These load code from a "boot manager" (usually a one track partition somewhere on the primary drive).
Linux offers a number of alternatives for loading the kernel. The most common is to use the LILO package. This consists of a program, /sbin/lilo, that reads a configuration file (/etc/lilo.conf, by default), builds a set of primary and secondary boot blocks and a set of "maps," and writes the primary boot code and the pointers to the secondary blocks and maps into the MBR. LILO is a very flexible utility. You can store information on up to 16 different boot images, you can pass parameters to the Linux kernel (which can set various boot time options in the kernel, or be passed along to init, and thence to the master environment and to the rc startup scripts). You can password restrict some or all of your LILO boot stanzas, define messages to be displayed at boot time, issue a command that sets an automatic "one time" set of boot parameters (/sbin/lilo -R), etc.
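For reference, a minimal /etc/lilo.conf looks something like this (device names and kernel paths are made up; adjust to your own system):

boot=/dev/hda           # write the primary loader into the MBR
prompt
timeout=50              # wait 5 seconds at the boot: prompt
image=/boot/vmlinuz
  label=linux
  root=/dev/hda1
  read-only
other=/dev/hda2
  label=dos

Remember that /sbin/lilo must be re-run after any change to this file or to the kernel image, since the maps record physical block locations rather than filenames.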
Another option is GRUB, the GNU "grand unified bootloader." This is slated to be the bootloader for the GNU HURD (a free microkernel based operating system which has been under development since before Linus started on the Linux kernel). I've heard that GRUB can be used now with the HURD betas and with Linux.
One thing that's interesting about Linux, in contrast to other operating systems, is that you can load it in alternative ways. So you can load the PC Linux kernel using LOADLIN.EXE (an MS-DOS program) or directly from Win '9x using the updated LinLoad '95 (??? derived from LOADLIN?). So you can have copies of your kernel in any MS-DOS directory and "run them" from MS-DOS. You can put a Linux kernel straight on a floppy (starting at the first block thereon) and it will be directly loaded.
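As a concrete LOADLIN example, from an MS-DOS prompt (the path is made up; point it at wherever you've copied a kernel):

LOADLIN C:\LINUX\VMLINUZ root=/dev/hda2 ro

... which loads the kernel and mounts /dev/hda2 read-only as the root filesystem, much as a LILO stanza would.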
You can also use SYSLINUX to put a Linux kernel on an MS-DOS formatted floppy and load it from there. (If you mount up a Red Hat installation floppy you'll see a copy of the SYSLINUX.CFG file that the SYSLINUX boot loader reads).
It's also possible to load Linux over a network (given a suitable bootp PROM, installed in a NIC, for example). There is nothing to prevent a computer manufacturer from installing a Linux kernel in their own ROMs --- loading it with initrd (initialization RAM disk) support. There are some people doing this for "embedded" systems already (seems to be primarily in specialized systems, not in commodity PCs).
Igel has been making Linux based Xterminal/etherterminal systems using "Disk on a Chip" drivers for years. (http://www.igelusa.com)
As for finding "mostly junk" .... Yeah! I get that, too. However, a big part of "The Answer Guy's" success is that I sift through enough of that junk to (usually) come up with what I'm looking for. (Sometimes it's even what my correspondents were asking about!)
I hope that helps.

(?) SYN, SYN/ACK, ACK, ACK, ACK: TCP Handshaking

"Pleased to meet you!"

From Kent S on Sun, 02 May 1999

(?) I need help in finding information regarding how sockets are established (not how to code them). In other words, I know that there is a standard procedure followed (SYN,SYN/ACK,ACK) in getting a device talking with a server.

(!) This is referred to as a "three way handshake." The "SYN" flags are requests by the TCP stack at one end of a socket to synchronize themselves to the sequence numbering for this new session. The ACK flags acknowledge earlier packets in this session. Obviously only the initial packet has no ACK flag, since there are no previous packets to acknowledge. Only the second packet (the first response from a server to a client) has both the SYN and the ACK bits set.
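You can watch all of this on the wire with tcpdump. For example (the interface and port are assumptions; use your own):

tcpdump -i eth0 port 23

... then open a telnet session: the first three packets displayed will show the S (SYN), S/ack, and ack of the handshake.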

(?) I am more curious in determining how, where, and who actually handles this on the Linux server.

(!) The kernel.

(?) As an example - I have inetd looking at port 226 for me that will start a program that will read from the socket. If this program terminates (kill, alarm, etc...) then the device attempts to re-establish (sends a SYN). Then one of two things happens depending on how the program was stopped. Either the server never responds until the device sends a reset, or the server sends a SYN/ACK and then sends a packet saying that it is finished sending data. My questions are on the level of: does RESET reset a port or a socket, and why would a server send a finished-sending-data flag if the device is requesting a connection? I have been unable to find info about the protocols of communications that should be taking place. Any help would be appreciated!

Kenneth Scott

(!) I don't really understand what you're asking or what situation you are trying to describe. Giving examples of what you see and the specific diagnostic commands you're using to gather your data on the problem (ps, netstat, lsof, etc) would probably help.
However, I can take a guess at what you might be seeing.
There is also a handshake at the termination of a TCP session: either side sends a packet with the FIN (final) flag set and waits for the other side to acknowledge it; the other side eventually sends a FIN of its own (so a full close normally takes four segments, though the ACK and the second FIN are sometimes combined).
After the local process has attempted to close the socket (and the TCP stack has sent the FIN packet to the remote system) the process will be listed as being in the FIN_WAIT state when you do a 'netstat' command. Buggy TCP clients may just close their end of the connection without completing the session termination handshake. This seems to be mostly from certain MS Windows FTP clients.
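To look for these on your own system, something like:

netstat -tan | grep FIN_WAIT

... will do (-t for TCP sockets only, -a for all sockets including non-listening ones, -n for numeric addresses rather than names).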
There seems to be no "timeout" for how long a process will sit in FIN_WAIT. When I managed a busy FTP server farm for McAfee Associates (a shareware company with lots of MS-DOS and Windows products) I used to see a lot of zombies which were children of FTP daemon processes that were in FIN_WAIT. I had a skulker script that would find the parents of the zombies, check their age and argument list and summarily kill them.
I don't know the details about the TCP RST (reset) process. I'm at the extreme edge of my knowledge of TCP in this message --- so I can't go into any greater detail on this.
However, I've heard that the best sources of information about TCP protocols are a couple of books. One would be the O'Reilly volume by Craig Hunt (the crab book), Understanding TCP/IP [ Actually, the "crab book" is TCP/IP Network Administration, now in its 2nd edition. -- Heather ], the other would be the three volume set by Comer and Stevens, Internetworking with TCP/IP: Principles, Protocols, and Architecture.
As you've suggested these are written more with the programmer in mind. However the O'Reilly book seems to be more suitable for sysadmins and users (besides being a paperback, and therefore much less expensive than the three volume hardcover text books from Prentice Hall).
One of these days I'll get around to reading that one. I'd been holding out for one that covered IPv6 in the hope that IPv6 would be deployed more widely by the time I got around to learning all the gory details. However, it looks like we'll still be dealing with IPv4 (the current suite of protocols) for the foreseeable future.

(?) PAM chroot

Wherein Jim rants about PAM

From Terrell larson on Sun, 02 May 1999

(?) I'm interested in a CHROOT option probably in pam-pwdb and I've been unable to find it. If it does not exist I may be willing to implement it IF I can find the current source tree and IF I can find out where to forward it for general use.

Info will be appreciated...

Thanx
Terrell Larson

(!) Terrell,
It's an interesting question. I presume you're talking about implementing/re-implementing PAM support for an old convention among SVR4 UNIX implementations where specific accounts can be marked for special chroot handling by giving them a '*' as the "login shell".
This is described in O'Reilly & Associates' Practical Internet and Unix Security, p232, Garfinkel and Spafford and most other books on UNIX security.
(For our readers that are unfamiliar with the trick: the login program, upon seeing that the login shell for a given account is set to '*', does a chroot() system call to the directory that's listed as that account's "home" directory. Therein 'login' exec()'s the copy of 'login' found under the new root. That would then exec() a normal shell, as listed in the /...(chroot top).../etc/passwd file.)
I was doing some research on a paper (that I still plan on submitting to USENIX, one of these days) when I first read about this convention. My paper was on a completely different use of chroot(), but I was doing a literature search.
Naturally I tried this particular trick on one of my Linux systems. It worked fine. In fact I just tested it, as I write this, on a new Debian 2.1 installation that I've been playing with and it works there.
However on PAM based systems (using pluggable authentication modules) --- notably on Red Hat 4.x, 5.x and presumably the new 6.0 system, as well as any system where the admins have added Linux PAM after the fact --- it doesn't work.
I mentioned this in e-mail to Andrew Morgan, the maintainer and co-ordinator of the PAM development project. There is, of course a listing for a pam_chroot module in the PAM administrator's guide. However, that doesn't do the same sort of thing --- and there's no example of how you'd use it to accomplish the same job. It's also listed as "unwritten." I did run across a file at the following URL that you might want to look at:
http://www.us.kernel.org/pub/linux/libs/pam/pre/forgotten/changeroot.tar.gz
It's from late 1997 and is only about 3K. All it contains is source to a simple command, a man page and a sample configuration file. It seems to be an alternative implementation of the chrootuid program that Wietse Venema wrote years ago (part of his 'logdaemon' package).
This particular program (changeroot) seems to have nothing to do with PAM. I'd also guess (from the parent directory name) that the code is not under active development.
Obviously, you could use something like chrootuid, or this changeroot program or you could write a simple C program (or even a PERL script) that would implement this procedure and use a reference to that in lieu of the '*' that I've been talking about. In other words instead of an entry like:
guest:x:65533:65534:Jailed Guest:/usr/local/jail:*
... where 'login' spots the '*', performs the chroot() to /usr/local/jail, and exec()'s the copy of /bin/login thereunder; we'd see something like:
guest:x:65533:65534:Jailed Guest:/usr/local/jail:/usr/local/sbin/jailsh
... where jailsh is a hypothetical SUID root program that performs these same steps.
This approach will work with any version of UNIX (so it's more portable). Another advantage for Linux under a 2.2 kernel is that this hypothetical jailsh program could be written to use the new "privileges" model (which are listed in the sources under the "capabilities" misnomer --- but let's not get into that peeve).
The disadvantage of this approach is that we have to write a custom program (which I'm calling jailsh). It has to run as 'root' (or with several rootly privileges, setuid(), and chroot() at least). I might toss together something for use on one of my systems (I have in the past) --- but I'd be very reluctant to publish those as solutions that anyone else would trust. I simply don't consider myself a sufficiently experienced and skilled programmer to be writing SUID root code for public consumption.
So, this brings us back to your message. chroot() jails are not used much. You'd expect them to see more widespread use, but they are a bit of a hassle to initially configure (creating a suitable skeleton tree under the target chroot point, getting the requisite shared libraries and device nodes in place for your applications, etc.). In addition there are ongoing concerns that chroot jails are too easy to break out of. In cases where you want to isolate a root/privileged program, it's too easy for it to chroot back out of the jail. This concern may be addressed by clever use of the new "privileges" features in the 2.2 kernels. However, since you're asking, I presume you already have your application well considered.
It sounds like you are willing to contribute some code to this. So you might start with a small standalone program (based on chrootuid or the changeroot program listed above, if their licenses are amenable to your needs).
You can find chrootuid at:
ftp://ftp.porcupine.org/pub/security/index.html
... and there's some sort of GNU package called g2s
http://freshmeat.net/appindex/1998/05/11/894932721.html
... that's listed as "an interesting alternative to inetd/tcpwrapper/chrootuid/relay/tcp-env/antispam/etc."
PAM pwdb is maintained by Cristian Gafton. The canonical forum for discussions relating to PAM development is the pam-list (pam-list@redhat.com). The canonical web site is at:
http://www.kernel.org/pub/libs/pam
... which is generally inaccessible (as kernel.org is the master site for the Linux kernel --- which gets too much traffic for a reasonable Internet connection). So it should be accessed through one of the mirrors. The Linux kernel crowd use a relatively simple and innovative DNS trick to maintain a list of mirrors that we can use without having to strain our memories. Basically you can use URLs of the form:
http://www.us.kernel.org
... to access a DNS round-robin collection of U.S. mirrors. There are mirrors in many other countries and regions, from Afghanistan (http://www.af.kernel.org) to Zimbabwe (http://www.zw.kernel.org). (Yes, they just use the ISO two letter country codes as a subdomain prefix). Most of these sites mirror the whole kernel.org FTP and web trees. If you have trouble connecting to one of the sites, try again. A check with 'dig' lists about a dozen U.S. mirror sites for www.us.kernel.org. Any decent resolver library will cycle through the available addresses (upon successive access attempts) until one works. That's part of what allows the whole DNS round robin scheme to work.
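You can watch the rotation yourself by running:
dig www.us.kernel.org a
... a few times in a row and noting how the order of the A records in the answer section changes from one query to the next.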
These carry sources and links to the many ongoing PAM module projects.
But I digress. Getting back to PAM. Personally I'm somewhat disappointed in the Linux PAM project. I've expressed this to the list and I've discussed it with Cristian Gafton in person. He and Andrew will probably be irritated to see this published in Linux Gazette --- and they are invited to compose and submit a rebuttal, or anything they like, to the editors here. (I've courtesy copied them on this e-mail).
My principal complaint is that the PAM project seems to be permanently stuck near version 0.6x. It was at 0.57 about two years ago. The response on the mailing list (and direction from Cristian) when I raised this concern was: "So what, it's just an arbitrary version number."
Of course I'm not a programmer or a distribution integrator; I'm just a dumb user, admin and support guy and writer --- so my opinion doesn't count for much. However, it does get published, so others can beat up on me when they disagree. It seems to me that a version number of 0.x still connotes "beta" --- not ready for production use --- to most people. Red Hat and Caldera are the only distributions that include integrated PAM support. Many authentication dependent packages, like ssh, don't include PAM support "out-of-the-box" and it is non-trivial (read: "scary and difficult") for an "average" Linux user or junior sysadmin to install the PAM suite into an existing system.
As one example, if you'd been using Debian, S.u.S.E. or Slackware for your application (with the chroot'd users) and you installed PAM, you'd probably be pretty distressed to find it suddenly broken. [ hint: don't log out until you've attempted to access all your desirable services via the localhost interface and gotten them visible again, minimally telnet or ssh. Yes, I've been there. -- Heather ] Granted, this whole '*' shell chroot business is pretty obscure to the "average" user or the "junior" sysadmin. However, it is documented in most books on Unix security (I reviewed about twenty books at a couple of book stores with the words UNIX and security in their titles --- over half of them described this mechanism and several gave examples).
Another complaint that I have is that the existing PAM deployment doesn't include S/Key or OPIE support, and doesn't even include clear examples of how to add-in and configure any form of pluggable OTP. Given that network password sniffing is one of the most common problems that one might want to solve with PAM this seems like a pretty significant omission.
The response to this on the list and in personal discussion amounted to:
"that's crypto --- and the U.S. government black helicopters are hovering over our heads ready to bomb Red Hat's offices if they include anything like it."
(Yes, I'm paraphrasing). Personally I think this is absurd. Yes, the U.S. federal government's restrictions regarding the "export" of cryptography software are an embarrassment to free people everywhere. I'm personally ashamed of our entire political process as a result of the ways in which "my" government has repeatedly thwarted the popular will of the people vis a vis cryptography. However, S/Key and OPIE are not cryptography. They use hashes, fancy checksums, as the basis for their authentication. Specifically OPIE uses MD5 by default. (I guess that the spec for S/Key -- OPIE allows for one to use alternative hash algorithms, MD2, maybe SHA-1, etc. I don't know the details on that). Ironically the code for the standard UNIX password hashing method, using your password and some "salt" as a 56-bit DES key to "encrypt" a string of NULs, is far more easily subverted into true cryptographic use than MD5. Of course both the conventional DES hashing and the MD5 code are already in every major Linux distribution, and always have been!
One compromise would be to include DOCUMENTATION. Give us a URL that points to a script. Have the script walk one through the process of fetching, installing, and configuring pam_opie. Granted it's not THAT difficult. I was able to perform the task by hand in about an hour. However, it would probably take an "average" sysadmin about twice that and it would probably take an "average" Linux user about four times that. Consequently it probably won't happen in any significant number of sites. So it just doesn't get done at all.
(The argument that OPIE and other OTP (one-time-password) schemes are an incomplete solution is also well considered. They don't secure the connection, so sniffing will still reveal other confidential data, etc. ssh IS a much better solution. The new FreeS/WAN ipsec implementation is also a much better approach. However, there are enough people out there that can't or won't install strong cryptographic support that some stop gap is indicated. Providing smooth easy installation and configuration of OTP is one thing that PAM could do to address this problem).
By far my biggest complaint about PAM is that it hasn't delivered on its most important promise. It doesn't put Linux on par with FreeBSD, NetBSD, and OpenBSD for authentication.
FreeBSD has supported S/Key compatible OTP "out-of-the-box" for YEARS. (Note: Walnut Creek, the largest distributor of FreeBSD CDs and books and the major sponsor for FreeBSD development hasn't been hit by the "black helicopters").
Beyond just this discussion of OTP, FreeBSD's libraries have provided seamless shadow and MD5 password hashing for years. Regardless of PAM I still bump into Linux applications that fail to authenticate because they don't properly handle some aspect of shadowing and MD5 checksums. Just last week one of my fellow techs at Linuxcare was fighting for a couple of hours with that on a Yellow Dog (Linux for PowerPC) installation at the office.
That was the whole idea of the PAM project. However, PAM can't deliver on that promise until it attracts widespread support from the application/utility writers that perform authentication. FreeBSD hides most of the details behind their implementation of the standard library functions that most programmers were already using to perform their authentication (getpwent(), etc.). We can't do that with PAM and glibc --- but we need to straighten out this mess eventually.
So, I would welcome any new blood that got involved in the PAM project. I realize that Andrew will probably say: "Quit your whining and turn in some code!" That's fair enough. (However, as I've said before, you don't want to see any C code from me, yet).
PAM is an ambitious project. It goes beyond Linux (in an effort to implement standards that have been proposed to the IETF by Sun and other vendors). I realize that there is some delay because these proposed standards are in draft form and are still in flux (the XSSO, single-sign-on stuff also seems to be languishing). However, I'd still like to see it deliver more in the near term.

(?) Filesystem Management: What must be "resident" at all times?

From peter on Sun, 02 May 1999

(?) I'm familiar with moving a portion of a UNIX file system that doesn't need to be resident at all times to a larger partition. What's the safest way to do this for a portion of the file system (/usr ?) that needs to be resident at all times?

Thanks for your help,
Peter

(!) The "resident" is not a "term of art" for Unix systems administration. Also /usr doesn't have to be mounted at all times. In particular you should be able to bring the system up in single user mode and peform most maintenance operations without /usr being mounted.
That's why we have a /sbin directory. Originally we had /bin, which was intended to contain just those files that were necessary to bring the rest of the system online. However, as UNIX systems developed shared libraries a number of the items which were traditionally located in /bin (such as sh --- the shell) came to depend on /usr/lib which was the traditional location of the .so (shared object) files.
So some vendors started creating a /sbin ('s' for "statically linked") --- which theoretically allows one to replace /bin with a symlink or use it as a mount point for its own filesystem. Of course most Linux distributions don't put statically linked binaries in /sbin --- we've moved many of the shared libraries into /lib.
Personally I think the whole arrangement is a bit ugly. The idea of having duplicate but statically linked versions of many commands in /sbin is feasible. Having /bin contain a set of symlinks to the /sbin commands is fine (since they will work while nothing is mounted over /bin and the mount of any other filesystem over /bin will then make those symlinks "disappear"). I don't like this insistence on dynamically linked everything since that means that you can't even run ldconfig to fix the /etc/ld.so.cache file if it gets corrupted. You have to boot from a floppy to get anything done.
In any event: let's look at a typical Linux root directory
drwxr-xr-x   2 root     root         1024 Apr 16 12:52 bin
drwxr-xr-x   2 root     root         1024 Apr 16 05:20 boot
drwxr-xr-x   1 root     root         3072 Apr 25 11:11 cdrom
drwxr-xr-x   2 root     root        17408 Apr 25 07:00 dev
drwxr-xr-x  41 root     root         3072 Apr 25 11:11 etc
drwxrwsr-x   5 root     staff        1024 Apr 19 01:58 home
drwxrwsr-x   2 root     floppy       1024 Feb  1 04:42 floppy
drwxr-xr-x   2 root     root         1024 Feb  1 04:42 initrd
drwxr-xr-x   3 root     root         2048 Apr 16 12:38 lib
drwxr-xr-x   2 root     root        12288 Apr 16 04:46 lost+found
drwxr-xr-x   4 root     root         1024 Apr 19 03:41 mnt
dr-xr-xr-x   6 root     root            0 Apr 18 08:10 proc
drwx------   4 root     root         1024 Apr 22 15:42 root
drwxr-xr-x   2 root     root         2048 Apr 16 12:53 sbin
drwxrwxrwt   2 root     root         1024 Apr 25 12:41 tmp
drwxr-xr-x  15 root     root         1024 Apr 16 05:17 usr
drwxr-xr-x  17 root     root         1024 Apr 17 11:01 var
This is from a fairly new Debian 2.1 installation. Here's the same list with some commentary:
bin
contains many common commands. Should be able to put this on a mounted fs. Ironically the mount command is in this directory and is dynamically linked! That's just WRONG. (And I don't care what the FHS says about it).
boot
contains kernels and associated System.map files and backups of the boot sector, as created by /sbin/lilo. Oddly enough this can be a mounted filesystem. As I've described many times, Linux doesn't require that its kernel be located on its root filesystem. The System.map file isn't needed during the boot cycle (and isn't "needed" by much of anything --- 'lsof' seems to complain if I don't have one or if it's mismatched to my kernel version but that's about it).
dev
contains device nodes. MUST be on root fs. (Richard Gooch has written a special devfs --- sort of like /proc for device nodes. That would allow this to be a mounted filesystem)
etc
contains passwd, group files, startup scripts and the mtab (which tracks all of the mounted filesystems).
floppy
this is stupid. It's just a mount point. I prefer to put most of my mount points under /mnt --- so I have a /mnt/cdrom, a /mnt/floppy, /mnt/a (DOS floppy), and others.
home
This should be either a mount point or a symlink to some directory on a mounted fs. I sometimes use -> /usr/local/home if I have a small number of filesystems to work with.
initrd
I'd have put this under /boot. Anyway, mine is empty. This is intended to remount any "initial RAM disk" that was used. (I might do a kernel patch to move this) When a kernel has initrd support enabled (compiled in) then a compressed image of the initrd filesystem is appended to the kernel. The kernel then automatically creates the RAM disk, decompresses and copies the image into it, and runs the /linuxrc program that it should find there. (See /usr/src/linux/Documentation/initrd.txt for details). This doesn't have to be here if you don't want/need access to the initrd after boot.
lib
This MUST be on /; it contains your libc.so and other shared libraries on which almost ALL programs on your system depend.
lost+found
This must be at the top of every filesystem. fsck will link any "lost clusters" into nodes under this directory; giving you an opportunity to fix them. Indeed, you should probably have a script that periodically checks this and warns the sysadmin any time any of these directories are non-empty.
mnt
This is conventionally used as a mount point or as a directory containing a list of mount points. It's where you mount "temporary" and "removable" filesystems.
opt
This is a place to store large "optional" packages like WordPerfect, StarOffice, etc. I usually make this a symlink to /usr/local/opt
proc
This is a "virtual filesystem" a representation of the system's process state as a set of file nodes. The BSD systems that implement the proc filesystem typically do so much different than Linux. Under Linux you can read much more info from /proc entries, and more of it is represented a plain text. The idea of /proc is that we can have the kernel provide a filesystem/directory abstraction of its state and we can write programs like 'ps' and 'top' to use normal UNIX file semantics to read that information. Linux is unique in that you can also modify many proc entries to changed the system state. The most common case of this is to enable kernel routing using 'echo 1 > /proc/sys/net/ipv4/ip_forward'
root
this is the root user's home directory. Handy if you have any scripts or data/configuration files that you want to access during boot or single-user mode when /home will not be mounted.
sbin
as I've noted, this should contain statically linked versions of the files that you absolutely need to fix a broken system. Linux, like Solaris and other modern versions of UNIX has gone to the dark side of practically requiring shared libraries for EVERYTHING. While shared libraries are very useful for conserving disk space and memory and offer huge performance benefits --- they are just one extra thing to break (for robustness and security). So a decent compromise is to have a subset of statically linked programs for use when everything is broken. (Having a kernel module or patch that could automatically detect and repair a corrupt /etc/ld.so.cache file would be a pretty good idea, too).
tmp
this can be a mounted filesystem or a symlink to a directory on one.
usr
this normally should be a mounted filesystem
var
this can be mounted or a symlink.
Of course the preceding is all just my opinion. The most authoritative commentary on what Linux filesystems should look like is the FHS --- the Linux Filesystem Hierarchy Standard (co-ordinated by Dan Quinlan), homepage http://www.pathname.com/fhs/.

(?) Ethernet Switches vs. Hubs

From Louan Handke on Sat, 01 May 1999

(?) What is the difference between switched hubs and unswitched hubs?

(!) The traditional ethernet hub (concentrator, repeater, etc) is a relatively simple device which just amplifies the signals on any of its ports out to all of the other ports. A "switch" or "intelligent" hub is more of a multiport bridge. It "learns" which MAC addresses (ethernet hardware assignments) are on each of its ports and only "repeats" (rebroadcasts) data frames to the appropriate port.
In a traditional hub only one system on a given network segment can be "talking" at any given time. The whole network segment is virtually a single wire. Any time two or more systems attempt to send packets at close to the same time there is a "collision." This is called CSMA/CD --- carrier sense (listen for quiet), multiple access (any card can "speak up"), with collision detection.
Whenever a collision occurs the cards involved send a short jamming signal, and then they perform a pseudo-random "backoff" delay before attempting to re-broadcast. (Under the standard truncated binary exponential backoff, after the nth consecutive collision a card picks a random delay between 0 and 2^n - 1 slot times, with n capped at 10.) Since it is incredibly unlikely that two cards will choose the same amount of backoff delay one of them will usually "win" and get to send first. That's fine with only a couple of cards in contention. However, as utilization approaches 20% or more, the number of collisions skyrockets and the overall average throughput drags to a crawl.
The traditional answer was to segment the systems --- putting servers in close proximity to their clients (work groups), put routers between segments, and put lots of interfaces in your workgroup servers (four to eight ethernet interfaces was not unusual for big netware servers).
Etherswitches are used to alleviate some of these problems. On a 24 port etherswitch it's theoretically possible for 12 pairs of systems to be concurrently exchanging data frames. This allows for much larger segments (called VLANs --- virtual local area networks).
On the downside, etherswitches are typically much more expensive than their more passive cousins. They have to contain processors, memory, and firmware. In addition their processors have to be pretty quick (usually quick RISC chips with a mess of ASICs I guess). Also there are degenerate cases. If all of your servers are located on one or two legs of an etherswitch then it won't help much. All of the clients will be waiting for that one (or those couple of) port(s) to be clear --- a classic bottleneck.
Again the solution is to have lots of smaller servers --- segment the network, and replicate the data and services so that the clients tend to use local copies of everything. Hierarchies scale!
(Not to say that etherswitches don't have their place --- it's just to say that their deployment should be based on an understanding of the situation and the benefits vs. the costs of the technology. Most vendors have little interest in your needs --- they want to sell you the shiny expensive toy).

(?) MATCH and Replaceable Parameters in procmail

From Nick Moffitt on Sat, 01 May 1999

(?) So, here's one for the answer guy.

I have a mhonarc user that creates drop points for a mhonarc script to walk by every night and process. Thing is, I don't want to have to edit the mhonarc user's .procmailrc every single time. That is, let's say that I have the following:


:0:
* ^Sender: owner-potato-peelers
spool/potato-peelers

:0:
* ^Sender: owner-onion-skinners
spool/onion-skinners

Is there some way that I can automate this format? e.g.:


:0:
* ^Sender: owner-\([^@]+\)
spool/$1

This likely breaks procmail's own regex syntax, but you get the point. "Anything that has an owner-foo Sender header should go to spool/foo."

(!) Nick,
You have the right idea but, as you've guessed, the wrong syntax. The answer is to use the MATCH variable and the \/ (fencepost) operator as described in this excerpt from the procmailrc(5) man page:
       MATCH       This variable is assigned to by procmail when-
                   ever  it is told to extract text from a match-
                   ing regular expression.  It will  contain  all
                   text  matching the regular expression past the
                   `\/' token.
So, your recipe would look something like:
:0:
* ^Sender: owner-\/.*
spool/$MATCH
(though I haven't tested this specifically).

(?) procmail and saved variables.

From Nick Moffitt on Sun, 2 May 1999

[Jim Dennis said] So, your recipe would look something like:


:0:
* ^Sender: owner-\/.*
spool/$MATCH

(though I haven't tested this specifically).

(!) I have! It works like a charm.

(?) RMA for Video Card

From Siddhartha Bezbaruah on Sat, 01 May 1999

(?) I am mailing this on behalf of Software Decisions Inc., Houston, TX 77036. [ normally, I etch out personalized information like this. But this person seems to want their name up, so what the heck, I'll leave it in. -- Heather ]

I have been calling at 541-967-2450 to get an RMA number for one of my company's VIPER V330 video cards. The customer service connected me to the technical support or RMA department and they hung up two times. I also faxed the required information at 254-750-9051 on April 21, 1999. But there is no response.

Please, let me know how to get an RMA from Diamond Multimedia.
Sincerely

(!) I don't know. I suppose I could try to dig up the phone number for the one guy that I know that works there (in their QA department). I had lunch with him last Saturday.
However, Diamond is a big company. I'd probably have to do exactly the same sorts of things you're doing. Call them up, go through some labyrinthine voice menu system, explain my problem --- at least twice, and feel like a supplicant at the high temple doing penance for my ill-informed purchasing decision.
What's odd is that I was under the impression that Diamond was releasing programming specs for most of their recent video cards so I'm curious why you're having a problem. I'm not familiar with this particular model (manufacturers churn through video chipsets and models so fast that I've just given up on tracking any names or model numbers).
In any event I'm not the customer service department or the "consumer watchdog" so you can't sick ME on them. You'll have to go fight that battle yourself.

(?) Inodes Numbering: An Academic Question

From mcp on Sat, 01 May 1999

(?) Hello,
Could u pls explain me: As the inodes of unix file system are store in disk in the form of linear arrays, the index value doesn't start from zero. But generally in 'C' the array index starts from zero. What is the reason?
Thanx
Prakash

(!) Hmm. This is one of those questions where it's hard to start on an answer. The set of premises upon which you're basing your question is so shaky that the question itself is hard to grasp.
First, it seems to be a question about Unix internals.
"Why isn't there an inode 0?"
Because the programmers of the Unix implementation that you're looking at may not have wanted to start numbering at zero. Of course, I'm not sure that there is no inode number zero. I'm not sure how you can be sure, either.
It may be that the badblocks list is stored in inode zero (on some filesystems). At least in some filesystems the list of bad blocks is managed by "allocating" them to a special system inode. They effectively become part of the "bad blocks" file. Since this is done during the creation of the filesystem (before even a base directory is created) it would logically follow that this would have the lowest numbered inode on a given fs.
I wouldn't say that inodes are "store in disk in the form of linear arrays." Inodes are stored in a filesystem in whatever manner the designer of a filesystem chooses to store them. They may be represented as arrays in most C programs --- though they are probably more often managed as linked lists of structs. They might be doubly linked, hashed/btrees of structs. I'm not a C programmer so I don't really know. Of course we could go look at the code --- but I'm not even enough of a programmer to infer the overall design from a worm's eye perusal of that.
I've heard that some filesystems (like those in LVM --- logical volume management systems) put different ranges of inodes on each PV (physical volume). Thus they don't start numbering the inodes for a given fs at zero or one or anything even close. There is no particular reason why they should. The inode is just an arbitrary unique identifier for all information about a file, sans its "names" (links).
The greater question is: "Why?"
Why do you ask?

(?) One Bad Sector

It Doesn't Ruin the Whole Disk

From John Gilbert on Tue, 04 May 1999

(?) I can't believe that it's not possible to re-use or dispose of a bad sector on a hard drive!!!

Please tell me its possible to do something!

I only have one bad sector - but it's really pissing me off! Isn't there something I can do?

Awaiting your response,
JB.

(!) Hmm. You can "dispose" of a bad sector by adding it to the bad blocks list. The easiest way to do this is to allow the mke2fs and e2fsck tools to "check" the portions of the disk that underlie a given filesystem by using the -c options to each of them.
Thus, when you first create an ext2 filesystem you should always add the -c option so that it will (transparently) call the 'badblocks' command and account for those that are detected. (The installation front ends to most Linux and GNU suite distributions, such as Red Hat, Caldera, etc. have a checkbox on their menu/dialogs to enable this).
When you suspect that additional sectors have gone bad you should run 'e2fsck -c' to add any newly bad sectors to the bad blocks list that is maintained as part of the filesystem's metadata.
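For example (with the filesystem unmounted, and substituting your own partition for my hypothetical /dev/hdb3):

mke2fs -c /dev/hdb3
# ... or, for a filesystem that already holds data:
e2fsck -c /dev/hdb3

Never run e2fsck on a mounted filesystem; drop to single user mode or use a rescue floppy if need be.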
There are similar features for other filesystem types --- although in some cases you'll have to build the badblocks table to a file and run the filesystem formatting utility separately (I won't go into details about feeding a badblocks list to each of the alternative Linux filesystem types as I don't know them off hand and they'd only be of interest to a tiny percentage of LG readers --- much less than 1% by my guess).
If the sector that goes bad is sector number one on track zero --- then you have a paperweight. That one sector is a single point of failure (SPOF) in the whole PC drive management architecture. This is a limitation of the architecture that lies below the OS level as it is imposed by the BIOS. Certainly someone could write a BIOS to overcome the problem. It's also possible that your hard drive has quite a bit of built in redundancy to prevent the problem from ever being visible to the BIOS.
Modern hard drives are sophisticated pieces of electronics.
They have embedded microprocessors running programs that map their own arrangements of data blocks into an abstraction that's compatible with the BIOS representation of a hard disk. A BIOS "thinks" of a hard disk as a flat three dimensional array of heads, tracks (cylinders), and sectors. In reality modern drives are almost always more complex and far less regular.
Most modern drives store more sectors on their outer tracks than they do on the inner ones. This is referred to as ZBR (zone-bit recording).
Most drives have "extra" sectors on each track --- and they'll automatically map the "extras" in for any sector that they detect as bad or "weak."
All hard drives have always implemented some error detection in their electronics. All recent drives (the last decade or so) have also implemented at least rudimentary ECC, error correction coding. When a drive's electronics detect errors they automatically try several re-reads to "get it right." Many drives are programmed to move the successfully read data into one of the "extras" on that track when this occurs. Likewise if they detect "correctable" errors through their ECC mechanisms. Some drives might even migrate data to extra sectors on adjacent tracks or heads.
So, you generally won't see bad sectors on a modern drive until there are enough of them that all of the available extras on a given track, cylinder, or within a given zone, are all in use.
Most drives have a "hidden" extra cylinder on which they store some of the persistent data for these low level mapping and remapping operations. This is the "diagnostics cylinder." I think that they also have at least one sector per track or cylinder devoted to maintaining the bad block remappings for that track. (Some drives might implement this as an additional surface --- so that one drive head is devoted to all diagnostics).
Most modern hard drives also have quite a bit of RAM on them. A half meg is minimal, and two to four meg is common on larger, high performance SCSI drives. I don't keep up on these things so they may have drives with 8 or 16 Mb onboard.
I've often wondered if it wouldn't make more sense for drive manufacturers to support a small (socketed?) bit of NVRAM to store the MBR and the location of their diagnostics data map. Of course it's possible that some of them ARE doing this --- since I wouldn't know.
Of course I'm just speculating here. I've never designed hard drives and my discussions with hardware engineers from Seagate, Quantum and other acquaintances in the field have been far less detailed than my preceding speculations.
The key point here is that these drives are not just simple arrays of heads, sectors and tracks. I think I read a message from Linus recently (on the kernel-list, in reference to discussions about implementing "elevator-seeking" and similar tricks in the low level disk drivers) that basically said: 'anyone who treats a modern hard drive as anything other than a linear list of storage blocks is a fool.'
As for "re-using" a bad sector: you shouldn't have to worry about that. If you drive hasn't already done it automatically and transparently then your best strategy is to mark it as bad and let the OS work AROUND that spot. Occasional surface defects and wear and tear are to be expected in any mechanical equipment --- and hard drives are fundamentally mechanical.

(?) Server Shutdown Button


This follows up on "Secure Shutdown from the Console", http://www.linuxgazette.com/issue39/tag/21.html.


From Scot E. Wilcoxon on Sun, 02 May 1999

(?) About the "Answer Guy" comments in the April 1999 LG about shutting down a server, perhaps with a special login:

I have also created Linux servers without monitors, but I used a two-key keyboard: a cheap serial mouse. See `man gpm` for the "SPECIAL COMMANDS" and "-S" instructions. With this option enabled in /etc/rc.d/init.d/gpm you can triple-click the mouse to initiate either a shutdown or a reboot. This gives operators a safe way to shut down a server without having to have a monitor or keyboard on the server. The BIOS does have to allow booting without those plugged in, but many BIOSes can be configured to continue despite those errors.

(!) Cool! There's a man page I hadn't read recently and thoroughly enough! The best part is you can configure it with a set of three custom commands instead of the defaults (shutdown -r, shutdown -h, and an internal init signalling routine). I don't know what I'll do with that, but it sure sounds useful.

(?) HAL91 (Floppy Based Linux Distribution)

From twager on Wed, 05 May 1999

(?) Hi..

I am trying to get hal91 going... I have the bootdisk running ok but cannot get the data disk to load with the init.disk2 command... It tells me it cannot find an ext2 file system on /dev/fd0... I got the system off a cheapbytes cdrom; I thought this might be faulty so I downloaded the file using lynx from the author's site but the same result occurred... I then mounted a floppy and cp'd the data file across. This time the floppy was seen but it told me it could not find usr.tar.gz. I mv'd the file to usr.tar.gz and it mounted but all that was there was lost+found...

(!) Did you really 'cp' the second image onto a mounted filesystem on the floppy?
I have no experience with HAL91, although I've heard that it is one of several floppy based mini-distributions, like Tom's Root/Boot, MuLinux, MiniLinux, LOAF (Linux On A Floppy), etc.
It looks like the canonical home page for HAL91 is at:
http://home.sol.no/~okolaas/hal91.html
I found that by following one of the many related links at the bottom of Tom Oehser's page (Tom's Root/Boot) at:
http://www.toms.net/rb
Images for most of these would be written to the raw floppy device using dd rather than copied onto some filesystem that you've put thereon. In other words normally you wouldn't use the 'cp' command to create boot floppies for any mini-distribution. Usually you'd use a command like:
dd bs=18k if=image.dat of=/dev/fd0
(while there is no MOUNTED fs residing on that floppy drive!).
It's also possible to use a command like:
dd < image.dat > /dev/fd0
... though it is more efficient and reliable to set dd's block size (18K is the size of one track on a 1.44 Mb diskette: 1440 blocks of 1K each is 1474560 bytes, as is 18K * 80 --- and most HD floppies support 80 tracks). The use of dd's if= and of= parameters vs. the redirection operators is relatively inconsequential.
The HAL91 pages don't explicitly say how you should create the "datadisk" (supplemental diskette --- which can be unpacked to a second RAM disk under /usr to provide some additional programs and utilities). I presume that it is supposed to be written to the raw floppy device in the same way that the boot diskette is prepared.

(?) I have done a strings on the file and looked through that but there is no mention of either of the readouts I got..... I have written to the author but no reply. I have written to all the linux lists but no reply.... I would like to get this running as I am giving a talk on Linux to the local ham radio club and would like to take this prog as well as Mandrake and RedHat, as I am hoping it might get a few interested. If you have any tips or help where else I could look, a cc reply would be greatly appreciated..... I struggled with Stampede from the cdrom but this has me beat :>(

(!) Why have you selected HAL91 for this case? There are several other choices (look at the bottom of Tom Oehser's page, as listed above, for a few of them).
I'm not saying that HAL91 is "the wrong choice" --- what I'm suggesting is that you try a couple of these so that you can form a basis for comparison. So far my personal favorite is Tom's Root/Boot --- though I like Trinux for other work.
As for your ham radio group: consider looking at the Linux Speaker's Bureau web site at:
http://www.linuxresources.com/lsb/index.html
You might find someone in your area (or someone who will be in your area on other business) who can give a slick presentation about Linux and can help people during an installfest.

(?) Regards.
Ted
Beware of geeks bearing gifs

(!) Be even more wary of geeks wearing GIFs!

(?) Ping a Port: NOT

From Derek Leung on Wed, 05 May 1999

(?) Hi,

(?) I am just wondering if there is any way that I can code a check to see whether a destination is alive or not. However, the server is known to be behind a firewall, and only one port is open to the public. So, does anyone know how to code a "ping" program that could ping a certain port? I will greatly appreciate any ideas.

(!) The 'ping' command generates an ICMP echo request packet. ICMP is a protocol over IP that implements "control messages" (flow control, routing, etc.). At that level the very concept of TCP/UDP ports is completely irrelevant. (TCP and UDP are other protocols that ride over IP; they are orthogonal to ICMP).
There are a number of programs that you can use for port scanning (and your application seems to call for testing a single port on a single host --- which is a very short list of ports to scan). I'd recommend that you look at netcat (sometimes installed as 'nc' on some Linux systems) and nmap.
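As a quick example of the netcat approach (the host and port here are stand-ins for your own):

nc -z -w 5 ftp.example.com 21 && echo "port is open"

(-z is netcat's "zero I/O" scan mode, -w 5 gives up after five seconds; nc exits with a zero status if the connection was accepted, which makes it easy to use from shell scripts or from PERL via system().)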
A quick place to find these and many other interesting tools would be the Trinux web site: http://www.trinux.org

(?) Derek
SDU TEAM

PS. I use PERL to code. If there are any available C modules, please let me know too. Thanks.

(!) There are some sophisticated PERL sockets and RAW::IP tools --- you'd want to look at CPAN (http://www.cpan.org) for those. There are numerous modules to allow easy PERL coding for specific network protocols and services --- and there are many sample scripts there.

(?) Linux as a Job!

Hobbies become fun and profit

From Nate Brazell on Fri, 07 May 1999

(?) I am new to Linux and have a definite need to learn it. It is now my job! Here are a couple of questions???

1. I need to establish a dial up server? How?

(!) mgetty. Install mgetty and follow the directions in its info file (using the emacs/xemacs 'info' package or the standalone 'info' command). You can also read the manual in HTML at:
http://www.leo.org/~doering/mgetty/index.html
mgetty is included with many Linux distributions.
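The short version: mgetty gets run from /etc/inittab. A typical entry (assuming your modem is on ttyS1) looks something like:

s1:2345:respawn:/sbin/mgetty -s 38400 ttyS1

... then run 'telinit q' to make init re-read the file. If you want PPP dial-in rather than just terminal logins, look at mgetty's AutoPPP feature in its login.config file (usually found under /etc/mgetty+sendfax/).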

(?) 2. I need to install a new drive and mount an existing file system to the new drive. This one I know how to do, however I haven't messed with UNIX in a while and want to make sure my plan will work.

(!) The hard part is the hardware. Once that's done you just run 'fdisk', then 'mke2fs' (mkfs.ext2), and 'mount'. Finally you simply add the new filesystem and mount point to your /etc/fstab (so that the system will mount the new filesystem automatically after the next reboot).
Here's a couple of sample commands assuming that you're adding an IDE drive to a system's secondary controller. The new drive will be /dev/hdc. I'm assuming that /dev/hda is your existing OS installation and that /dev/hdb is a CD-ROM slaved off of the same controller; that's best since CDs are accessed relatively infrequently and most often just to copy things to your local volume. Thus putting the new drive on the other IDE chain in a typical modern system gives a performance boost. Only one drive per IDE chain can be accessed at any given moment by the kernel. SCSI allows commands to the drives to be handled in parallel (the request is issued, the drive is "disconnected" from the bus and it issues an interrupt when it is ready to provide or fetch more data).
So you use commands like:
fdisk /dev/hdc
 
# menu interface to configure new filesystems
 
for i in 1 3 5 6 7 8; do
mke2fs -c /dev/hdc$i
done
 
# -c enables automatic 'badblock' checking
# This example assumes you created six filesystems
# on the new drive, perhaps leaving partition two
# as a swap partition; number 4 is the extended
# partition which contains 5 through 8
# I use a bash/sh for loop to save typing and to
# give me longer to sip my coffee while it works
# unattended
 
mount /dev/hdc1 /home
mount /dev/hdc3 /usr/local
mount /dev/hdc5 /u1
mount /dev/hdc6 /var/log
mount -o sync /dev/hdc7 /var/spool
mount -o noatime /dev/hdc8 /var/spool/news
 
vi /etc/fstab
 
# add the new filesystem(s) as appropriate to
# the fstab file format. See the appropriate
# man page from manual section 5 (i.e. man 5 fstab)
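#
# For example (a sketch; tune the options and fsck
# pass numbers to taste), fstab entries matching the
# mounts above might look like:
#
#  /dev/hdc1  /home            ext2  defaults  1 2
#  /dev/hdc3  /usr/local       ext2  defaults  1 2
#  /dev/hdc5  /u1              ext2  defaults  1 2
#  /dev/hdc6  /var/log         ext2  defaults  1 2
#  /dev/hdc7  /var/spool       ext2  sync      1 2
#  /dev/hdc8  /var/spool/news  ext2  noatime   1 2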
In this (admittedly complicated) example I've put the new filesystems on a few mount points that often need to "grow" or are otherwise good candidates for having their own filesystems.
I've glossed completely over the details of mounting each of these on a temporary mount point (I use /mnt/tmp) and copying/moving/migrating all the data from the existing directories to their new filesystems. The short form of that is (for each filesystem):
mount $NEWFS /mnt/tmp
cp -pax $OLDDIR /mnt/tmp
umount /mnt/tmp
mv $OLDDIR $OLDDIR.old
mkdir $OLDDIR
chmod $OLD_DIR_PERMS $OLDDIR
mount $NEWFS $OLDDIR
.... and there are many variations. Once you've well and truly confirmed that your copies are good you can then rm -fr each of the $OLDDIR.old directories. One way to compare two directory trees and ensure that the data and the metadata (ownership and permissions) have been faithfully replicated is to use a command like:
(cd $OLDDIR.old && tar cf - . ) |
( cd $NEWDIR && tar df - )
(note the need for line continuation on this example.)
Note: In all of these preceding examples I've only given the basic idea. You should NOT just cut and paste these commands without understanding them and editing them to suit your actual needs and situation.
One other note: I've shown a couple of mount examples with options (sync for our spool fs, and noatime for /var/spool/news). One of the key advantages to using smaller, more focused filesystems is that you can then apply mount options that are appropriate to them. You can greatly increase the performance of a newspool by preventing the kernel's fs drivers from updating the "access time" (atime) stamps on each file each time it is read. You can greatly reduce the risk of data damage to your mail spools and queues by using the sync option (so that a catastrophic power supply failure or a bump of the "off" switch is less likely to mangle the filesystem).
Such options can trade off performance for features or integrity assurance. Tune to taste and serve to your users.

(?) Can you help me?

(!) Yes.

(?) Will you help me?

(!) I hope I already have.

(?) New Kernel Loses Ether Driver; Dial on Demand and Masquerading

A grabbag of user questions.

From Adams, James on Sun, 02 May 1999

(?) Answer Guy,

I know you are extremely busy and such; I hope you can point me in the right direction. I am trying to find the tell-all instructions for recompiling a new kernel for RH5.2 (Mandrake 5.3). I have tried repeatedly to do this but still no luck.

The main problem I run into is that my ethernet no longer works after booting into the "new" setup. Something about SIOCSIFFLAGS (I think), and the network is not working.

(!) That means that you haven't successfully included the driver for your ethernet adapter. You have to know what sort of driver it takes; I realize that this is the problem. There is no easy way to tell this from a running kernel --- none of the entries under /proc seem to say which ether driver is active. You might find your ethernet card mentioned in /proc/pci (a list of PCI devices recognized by your kernel). Otherwise just open the case and look at the actual card hardware.
One trick I've occasionally used during installfests is an ugly hack. I cd to /lib/modules/preferred/net (or thereabouts) and do something like:
for i in ./*; do insmod $i && echo $i; done
... which tries to load EVERY available module in that directory. This could hang the system, but usually it just spits out the name(s) of any modules that successfully detect a card that they can drive.

(?) If you could point me in the right direction I would forever be in your debt (sort of). I also want to be able to have dial on demand, I have a small home network and want to use it with ipfwadm.

(!) There is a program called 'diald' which used to be the main "dial on demand" daemon (driver). However, I've read that the latest versions of PPP have some built in "on demand" features.
I must admit that I haven't been using modem PPP for the last several months. I'm spoiled rotten by my DSL line (which has only been down once since I got it). I'd only been using POTS PPP occasionally in the last couple of years since I was using ISDN (with its own dial-on-demand in my Trancell/WebRamp ISDN router) --- so I was only using diald/pppd when that was being flaky.
However, I've been meaning to play with the new pppd options at some point. So I'll look into it.
I presume that you mean that you want to use your PPP link through IP masquerading (when you say "with ipfwadm"). There are numerous HOWTOs and numerous back issues of my column where I've discussed masquerading. The short form is to use the following commands on your router (the Linux box with the ethernet card and the PPP link on it):
echo 1 > /proc/sys/net/ipv4/ip_forward
(to enable routing)
and:
ipfwadm -F -a acc -m -D 0.0.0.0/0 -S 192.168.0.0/16
ipfwadm -F -a acc -m -D 0.0.0.0/0 -S 10.0.0.0/8
ipfwadm -F -a acc -m -D 0.0.0.0/0 -S 172.16.0.0/12
(you only need one of these, but all of them won't hurt).
This last set of commands adds a set of rules to the Linux packet filtering tables to masquerade any source addresses in the 192.168.*.*, the 10.*.*.* and the 172.16.*.* through 172.31.*.* ranges. Those are all of the addresses reserved in RFC 1918 for "private" use.
As I've discussed before you should also put in some packet filtering and anti-spoofing rules to protect your home network from outside attack. Crackers and script-kiddies are not a myth --- I see probes on my network all the time and I've just recently let one of my systems get cracked into (I was being sloppy with that one --- it's part of why my mail was down for a couple of weeks; though only a small part).
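As a minimal sketch (where $extip holds your external interface's address), refusing inbound packets that claim RFC 1918 source addresses looks something like:
/sbin/ipfwadm -I -a deny -V $extip -S 192.168.0.0/16 -D 0.0.0.0/0
/sbin/ipfwadm -I -a deny -V $extip -S 10.0.0.0/8 -D 0.0.0.0/0
/sbin/ipfwadm -I -a deny -V $extip -S 172.16.0.0/12 -D 0.0.0.0/0
Nothing legitimate should ever arrive from the outside bearing one of those source addresses.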

(?) Thanks in Advance Jim Adams


(?) pcmcia install on debian

After every Bay Area Linux User Group (BALUG) meeting, we head to a local deli named Max's to continue chatting until about midnight. Some of the Debian folk are becoming regulars, so Jim and I had a chance to ask a few questions.


From Joey Hess on Sun, 02 May 1999

Hi Jim, I don't know your proper email address, so I'm using this one. At Max's tonight, your S.O. asked me about installing debian on a system that needs PCMCIA to use the cd drive and how to enable that. Well I dug around and the info she needs is at

(!) That was resourceful. Our addresses are:
Jim Dennis< jimd@starshine.org>, Heather Stern <star@starshine.org>

(?) http://debian.org/releases/stable/i386/ch-init-config.en.html#s7.11

(!) I'm copying Heather on this reply.

(?) If the version of debian she's installing doesn't have a pcmcia entry on the menu, she should install the most recent one; it's documented to have it.

And in response to your own query about installing debian without rebooting, another possibility would be to grab


ftp://ftp.debian.org/debian/dists/stable/main/disks-i386/2.1.9-1999-03-03/base2_1.tgz

This is a basic debian system, tarred. After you unpack that you should be able to run dpkg --root=/wherever -i foo.deb and install additional .debs if necessary. And you can chroot into it and play around. If you "sh root/pkgsel" in there, you'll get to the package group selection menu debian normally displays after the install from boot floppies.

(!) Sounds interesting. I'll try that soon.
Do you know of a project to build a more complete system-integrity auditing tool than debsums? I like that the rpm -Va command tells me about changes to permissions, ownership, and other metadata. debsums seems to be a light wrapper around md5sum.
On slackware systems I can just use 'tar df ...' to get this sort of info. When I met Patrick Volkerding at LinuxWorld I suggested he write a script to do that for a whole CD full of tar files --- sort of an "auditor's workbench." (I also suggested that he make this an option from the boot menu on the CD and that he make a custom boot floppy for system auditing; so that the Slackware sysadmin is encouraged to do proper audits of their system, from a clean boot off of a write-protected floppy).
I'd like to encourage the Debian team to also come up with such a beast (and I'll try to devote some time and Linuxcare resources to actually DOING it). However, it occurs to me that the existing Debian hacks can probably do something like this practically overnight. (I'll be fighting a much longer learning curve before I'm ready to contribute a production quality package to this effort).
I've heard that Debian packages encapsulate .tar files. Is that true? Are they tar or tar.gz? (no problem, 'tar dzf ...' works). I suppose I could use alien to extract tar files from .deb files (one at a time) and then use tar df on each of those.
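A rough sketch of that idea (assuming alien's --to-tgz conversion and tarballs whose paths are relative to /):
for deb in *.deb; do
    alien --to-tgz "$deb"               # convert each package to a plain tarball
done
for tgz in *.tgz; do
    ( cd / && tar dzf "$OLDPWD/$tgz" )  # report content and metadata differences
done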

(?) -- see shy jo


(?) WinPrinter Work-around

From harmbehrens on Sat, 01 May 1999

(?) Hello, is there any work-around to get a gdi printer (Star Wintype 4000) to work with Linux :-? Harm

(!) Save the file to some supported format (something that MS Windows can read), copy the file to a Windows system (to which your GDI printer is attached) and/or reboot your system into MS Windows, and use that to print.
--- Alternatively I suppose you could configure a Windows system to share its printer and use Samba (smbclient) to print to that.
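For example (a sketch --- "winbox" and "hp" are hypothetical host and share names):
smbclient '//winbox/hp' -U guest -c 'print report.ps'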
Those are WORKAROUNDS.
I presume that these are NOT what you wanted to hear. However, there is no way that I know of to support a Winprinter without running drivers that are native to MS Windows (and its GDI --- graphics device interface --- APIs).
I think that the development of GDI printers was a devious and clever trick by Microsoft. Tie the customer AND the printer manufacturers inextricably to Microsoft's SOFTWARE and leave them both vulnerable to MS upgrades for the lives of their products. It's diabolically clever.

(?) Maximum Filesize vs. Maximum Filesystem Size

(From LinuxPPC Mailing List)

From Charlie Romero on Thu, 27 May 1999

(?) I'm a little confused on the 2GB thing. Is it OK to have each partition, i.e. /usr, /etc, /home, at 2GB each, or can the whole file system not exceed 2GB?

If I have a 10GB drive, is this OK:

/usr 2GB /etc 2GB /opt 2GB /home 2GB /swap 2GB

or do I have to keep the total under 2GB?

Thanks, Charlie

(!) Actually 2GB is the maximum FILESIZE under 32-bit versions of Linux. (Alpha, and presumably UltraSPARC ports are not hampered by this).
Linux ext2 filesystems can be much larger than 2GB --- and can be much larger than any available consumer hard drives or common arrays (although the lack of journaling/logging means that fsck may take a prohibitively long time on larger filesystems).

(?) ICMP Masquerading

From Abraham S. Lin on Thu, 27 May 1999

(?) Hi, jim, This is your e-mail address on Linuxgazette, so I tried. Hope this is not your personal mailbox.

After reading all the docs, HOWTOs, and the docs from www.xos.nl (supposedly the original ipfwadm site), there is little mention of ICMP forwarding, and no examples of it.

(!) So far it all goes to the same mailbox eventually.

(?) My setup is: 1. Redhat 5.2 (full install), machine name ken; one interface to the internet, the other to the localnet. 2. localnet machines with 10.1.1.x private addresses (kolya and brian).

I set deny on all in/out/forward rules of ipfwadm, and then enabled them one by one. It's a tough job but it seems like all is well.

Until I figured out that ping and traceroute don't work from the localnet --- not even from the Linux gateway to the internet.

(!) You don't mention which version of the kernel you're using. That's important because most versions of the 2.0 kernel series didn't support ICMP masquerading. It's still listed as an experimental feature.

(?) Thanks for this in advance. After this is fixed I think we'll have to make the IP firewalling HOWTO better than it is now. It didn't cover ICMP forwarding.

Thanks again, abe

P.S. Here's the digest of my /etc/rc.d/rc.local on the icmp part:

(!)
Your problem has to do with MASQUERADING of ICMP. It has nothing to do with forwarding them.
You probably have to compile a later 2.0.36 kernel to add this support. You could also consider trying the 2.2.9 or later kernels and switching to the newer IP Chains model.
IP Masquerading through IPChains is not well explained in their HOWTO. I just had to figure that one out while teaching Linux classes at SGI (one of my Linuxcare roles).
I don't have my example handy but the key is to understand that the -j option to the ipchains command is used both for "jumping" to a chain (that you've created) and for declaring a disposition for a given packet. Thus ACCEPT, DENY, REJECT, REDIR, RETURN and MASQ are sorta treated like chains (you use them as targets to the -j option) but they will not be listed with the -L and are not "flushed" with -F, etc.
When you want to masquerade for a network all you really need is:
ipchains -A forward -s $INTERNALNET -d 0.0.0.0/0 -j MASQ
... add a rule to the (pre-defined) forwarding chain so that any packet with a source address (-s) matching our internal address and a destination (-d) of anywhere (else) is "just" MASQueraded.
I've successfully configured masquerading with just that rule (and the usual routes and ip_forwarding enabled). It doesn't seem to need any special rule to match addresses going from an internal address to another internal address. So we don't need to do something like:
ipchains -A forward -s $INTERNALNET  -d ! $INTERNALNET -j MASQ
... where the ! sign negates our address mask and comes to refer to any destination that is NOT in our internal network.
This second variation of the rule is more precise and probably more correct. However it doesn't seem to be necessary.
I have also been successful in setting up bidirectional masquerading with just two forwarding rules:
ipchains -A forward -s $MYNET -d 0.0.0.0/0 -j MASQ
ipchains -A forward -s $HISNET -d 0.0.0.0/0 -j MASQ
... again this seems to work although:
ipchains -A forward -s $MYNET -d $HISNET -j MASQ
ipchains -A forward -s $HISNET -d $MYNET -j MASQ
... would seem to be more precise and probably better.
The examples in the HOWTOs seem to insist on creating a separate chain for our masquerading rules using something like:
ipchains -N mymasq
... and then using various rules to jump (-j) into that chain (which then just does a MASQ anyway, also using the -j option). This added level of indirection seems to be completely unnecessary for the simple case and is far too confusing from the examples. I suggest that people start with my simpler examples and only add the additional chains of rules as their needs demand it.
Your excerpts:
> extip=_EXTERNAL_INTERFACE_IP
> intip=_INTERNAL_INTERFACE_IP
> localnet=10.1.1.0/24
> any=0.0.0.0/0
> # A ping from kolya to 132.206.1.11 Not right still...........
> /sbin/ipfwadm -I -a accept -V $intip -P icmp -S $localnet 8 -D $any
> /sbin/ipfwadm -O -a accept -V $extip -P icmp -S $extip 8 -D $any
> /sbin/ipfwadm -I -a accept -V $extip -P icmp -S $any 0 -D $extip
> /sbin/ipfwadm -O -a accept -V $intip -P icmp -S $any 0 -D $localnet
>
> # A traceroute from kolya to 132.206.1.11 Not right still.......
> /sbin/ipfwadm -I -a accept -V $intip -P icmp -S $localnet 8 -D $any
> /sbin/ipfwadm -O -a accept -V $intip -P icmp -S $intip 11 -D $localnet
>
> /sbin/ipfwadm -I -a accept -V $intip -P icmp -S $localnet 8 -D $any
> /sbin/ipfwadm -O -a accept -V $extip -P icmp -S $extip 8 -D $any
> /sbin/ipfwadm -I -a accept -V $extip -P icmp -S $any 11 -D $extip
> /sbin/ipfwadm -O -a accept -V $intip -P icmp -S $any 11 -D $localnet
>
> # This line just produces error message. Don't know the syntax for icmp.
> /sbin/ipfwadm -F -a accept -P icmp -S $localnet 3:11 -D $any
I think you probably actually want something more like
/sbin/ipfwadm -F -a accept -m -P icmp -S $localnet 3 11 -D $any
... "port ranges" (the 3:11 syntax) aren't meaningful for ICMP. I presume you are trying to limit the packet filters to accepting/relaying "echo request" and "echo reply" packets in this example. I don't have a handy list of ICMP packet types but you definitely also want to allow some other packet types to get through (for MTU path discovery)!
Actually I'm not sure that you need it when masquerading since the ICMP message that informs a TCP stack that a "Don't Fragment" packet was dropped might only need to reach our router/gateway (the system performing the masquerading). I'm not sure if it needs to get all the way back to our host.
In any event I'd suggest that you adopt the opposite strategy with regards to ICMP packets. There are only a few of them that need to be filtered out (redirects mainly). So far it seems to be safe to let most other ICMP message types through. (Well, about as safe as letting any sort of IP traffic through, masqueraded or otherwise. Naturally you should consider proxying with SOCKS, Dante or DeleGate to tighten security even further).

(?) Upgrade Breaks Several Programs, /proc Problems, BogoMIPS Discrepancies

A visit to "Library Hell"

From Pete Caffall on Thu, 27 May 1999

(?) Jim:

I'm kind of going nuts here trying to figure this one out. I have just recently upgraded my system at home with RedHat 5.2. It previously had RedHat 5.0 on it. Since then I have been unable to get Netscape (4.5.1 or 4.06) to work, as well as Word Perfect 7.0, and, not to be outdone, arena. All break on startup with a segmentation fault (SIGSEGV) and a core dump. Prior to the upgrade, all worked (well - I don't know about arena - I just tried that to see if it might also break). I did not change my hardware configuration; it is the same as prior to the update. ASUS 5ab MB, AMD K6-2 3D 333MHz processor, 64MB of 100MHz SDRAM, 5GB WD IDE hard drive where Linux is installed. XFree86, the AfterStep that came with the distribution, and ATI Xpert 98 video with 8MB.

(!) It sounds like the upgrade replaced your shared libraries with versions that aren't quite compatible with these applications. Netscape Navigator and Communicator have both historically been pretty picky about their shared libraries.
There have been occasions in the past when a GNU bug fix to their libc has broken versions of Navigator and Communicator that relied upon those bugs. You can find out which libraries these files are linked against with the 'ldd' command.
You can also rebuild your /etc/ld.so.cache file by running the 'ldconfig' command (a good idea to try that any time you suspect shared library problems).
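For example (a sketch --- the path to your Navigator binary will vary):
ldd /usr/local/netscape/netscape   # list the shared libraries it's linked against
ldconfig                           # rebuild /etc/ld.so.cache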
You say (later) that you tried using libc.so.5.4.33. However you don't say how you tried to use it. There are a couple of magic environment variables LD_PRELOAD and LD_LIBRARY_PATH which can be used to override the order in which shared libraries are loaded (and thus can control which library gets loaded to supply a given set of functions).
You can read the ld.so(8) man page for some details about how the dynamic linker/loader for shared objects works under Linux and other GNU systems.
Typically you'd copy your old libraries (with which these programs were working) into some directory (/usr/local/lib-special for example) and then replace your links to Navigator, Communicator, WordPerfect, and any other affected programs with a short wrapper script that sets and exports the appropriate environment variables and then launches the original program. There are examples of such scripts on the web (from people who've had to hack them up to run earlier versions of Navigator after earlier upgrades of glibc).
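Such a wrapper might look like this (a sketch with hypothetical paths):
#!/bin/sh
# search our saved copies of the old libraries first
LD_LIBRARY_PATH=/usr/local/lib-special:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
exec /usr/local/netscape/netscape.bin "$@"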
Most of this nonsense should be unnecessary under Linux. First, programs should be written to rely on the exposed/documented characteristics of the libraries against which they are linked. Also it's supposed to be possible to link to more specific library versions in cases where the more general version won't work with your application. That's why we have so many symlinks like these:
-rwxr-xr-x   libc-2.0.7.so
lrwxrwxrwx   libc.so.4 -> libc.so.4.7.6
-rwxr-xr-x   libc.so.4.7.6
lrwxrwxrwx   libc.so.5 -> libc.so.5.4.46
-rwxr-xr-x   libc.so.5.4.46
-rwxr-xr-x   libc.so.5.4.7
lrwxrwxrwx   libc.so.6 -> libc-2.0.7.so
(from /lib on one of my S.u.S.E. systems, actually).
The idea is that we should be able to have a libc.so.4.6.XX with a symlink to libc.so.4.6. libc.so.4.7.6 (from my example) would still be the "default" for libc.so.4, but programs that were linked to libc.so.4.6 would use our libc.so.4.6.XX. Thus they would be more specifically bound to the 4.6 versions of the libraries than to the 4.7 or 4.5.
This is far more flexible than the implementation of DLLs we see in MS Windows and NT. It can automatically tolerate multiple concurrent versions of each library. The LD_PRELOAD and LD_LIBRARY_PATH environment variables give us even more flexibility since we can override the linkage at run time for a specific process (or family or processes).
However, it can be a pain to manage for mere mortals such as myself. Oddly enough it also seems to be more of a problem for the commercial packages than for other Linux software. I have to cope with far less "library hell" resulting from typical binary packages than I do with the big commercial ones.
I suppose one thing that WordPerfect, Netscape, Applix, and Star Division could all do is to include all of the libraries that they require (with which they are linked and tested) on their CDs. They could then have an installation and/or configuration option to install those in a special directory (/opt/${PACKAGE}/lib) and to automatically invoke their programs in a "compatibility mode" where they set their own LD_PRELOAD variables properly and launch their binaries.
Such a scheme could allow these companies to be more robust in the face of distribution updates (such as your transition from Red Hat 5.0 to 5.2 and the more disastrous change from 5.x to 6.0 that has broken StarOffice and other packages for so many Red Hat users in recent weeks).
(At the same time these packages would not need to take up the additional disk space and memory footprint when running on a system whose default libraries are suited to the situation).
In any event you might consider upgrading to RH 6.0 and WP 8.0. I personally suggest letting the distribution maintainers do as much of the work of getting you out of "library hell" as possible.
(?) One thing I have noticed - it had problems trying to mount the /proc file system - the boot message indicated that it couldn't find /proc in the fstab or mtab. I didn't notice it the last time I booted, and looking in /proc, it shows as a file system, although df -a doesn't show it. I tried using libc.so.5.4.33 but this did not resolve the problem.

(!) I presume that your reference to libc.so.5.4.33 relates back to your problems with certain applications since the /proc issues are very unlikely to relate to your shared libraries.
Classically the 'mount' command should be statically linked. I notice that S.u.S.E. 5.2 and Debian 2.1 and Red Hat 5.2 all have it dynamically linked against libc. This is BAD (since a corrupt ld.so.cache or a damaged libc.so will then prevent you from even mounting up an alternative filesystem). There is an alarming trend to configure whole systems to practically require dynamic linking for everything. This makes the whole system less robust with greater and more critical interdependencies. (In this regard we are following in Sun's footsteps; it's practically impossible to create a statically linked program under Solaris).
However, I doubt that your problem with /proc has anything to do with shared libraries.
Your mtab file should initially be empty when you reboot. Your /etc/fstab file should have an entry for /proc that looks something like:
none     /proc        proc       defaults   	0 0
... If it doesn't, add one.
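With that entry in place you don't even need to reboot; just run:
mount /proc
... and df -a should list it thereafter.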

(?) Any suggestions (nice ones) would be appreciated.

One curiosity question: The system at home with the AMD K6-2 3D reports 663 BogoMIPS. The system I have at work is a Pentium II (not Celeron) 400MHz, and it reports 397 BogoMIPS. What gives? Thanks in advance for any help you can give. Pete Caffall

(!) I wouldn't worry about it. BogoMIPS are a measure of how fast your processor executes a fast idle loop. That is to say, how fast can your processor do NOTHING. In most cases a stock Pentium CPU will have a BogoMIPS value that's reasonably close to its clock speed. However Pentium MMX and various clone CPUs with MMX like extensions (like your AMD K6) will have higher BogoMIPS values.
My Pentium 150 shows:
cpu MHz		: 167.050963
bogomips	: 66.56
(excerpts from /proc/cpuinfo) while my Omnibook 800 (Pentium 166) shows a BogoMIPS of 328.50. My old 386DX/33 shows a BogoMIPS of about 6.6.
There's a reason why Linus named these "BOGO"-MIPS.
(I guess they're actually used in the kernel for certain types of short idle loops).

(?) Spare a Minute to Provide "Some Info"

From fujairah on Thu, 20 May 1999

(?) To william smith

I have read your answers on different Linux sites. We are interested in LINUX and wish to see how it works. If it is fine we will be interested to go for it.

If you could kindly give us some info about LINUX it will be really useful to us.

- P.GOWRI SHANKAR

(!) Um. That's pretty open-ended. It took me over a minute to format this reply and trim up the portions of your message that I'm quoting for context.
Linux is an independent and free implementation of the POSIX and UNIX programming APIs and conventions. Most UNIX software can be ported to Linux with ease.
Because all of the core parts of a Linux/GNU system are open source, it should be possible to port any UNIX software to it.
What else is there to say on the subject? You found back issues of the "Answer Guy" so you must have found numerous other links to Linux web sites. It is one of the most popular subjects on the Internet.
You are obviously already "interested" in it (enough to look around on the web to find me and enough to write to me). So your best strategy is to get a copy of Linux to play with (it's free after all, though your best option is to pay for a copy of one of the popular distributions, to save you the trouble, time and expense of downloading a whole suite for yourself).
As for matching Linux features to your needs: Only you can do that. Certainly you can, and probably should, hire a consultant (or contract with a consulting firm) to perform a requirements analysis process.
From what you've said here, all I can say is:
Well, I like it.
As for how it works:
Well, you install it, usually by booting from your CD or a specially written floppy, then you follow a number of installation dialogs, answering mysterious questions with obscure parameters. A bit later, after much disk activity, often accompanied by "informative" progress indicator dialogs, you reboot. Then you log in to either a text or graphic session and you issue commands by typing them at a command prompt, selecting them from text menus or using a mouse.
In other words, Linux works just like any other microcomputer operating system. I can't be more specific (because your minute is up, but also because there are many different distributions of Linux and most of them have many different options --- so you can choose almost any aspect of your user interface, for yourself).

(?) Data "Losted" (sic)

From Andres Martinez Pintor on Thu, 20 May 1999

(?) I lost all information in my drive using diskedit. The parameter block BPB was losted. I don't have anything in diskedit... it's all in 00. All I want is the disk drive operating... the information and programs not needed. Is there a program for this? To get to c:\

Samsung VA 34324A 4.3G Please help me.

(!) First, this doesn't sound like a Linux question. There isn't any Linux program named "diskedit" that I've ever heard of, and we don't designate our filesystems using names like "C:"
Next, what were you doing in DiskEdit if you don't know how to use it, and you didn't have a current backup?
Last, if you don't care about the data (information and programs not needed) then why not simply repartition the drive and put new filesystems on it?
(If you do care about the data you could always try the Norton Utilities "UNFORMAT.EXE" program. One trick we used to do with that, when I was on the Norton tech support team, is to FORMAT a drive, then immediately UNFORMAT it. This often recovered most of the data, relatively quickly and painlessly. However, I don't work for Norton, or Symantec, anymore, so if you want their help, on the legacy DOS-derived platforms that I've abandoned, call them).

(?) Network Neighborhood: Heterogeneous File Sharing

From azlan on Thu, 20 May 1999

(?) How do I configure the network settings so that I can access other PCs in a LAN regardless of what OSs the other PCs have (Linux/Windows/Macs) -- network file sharing.

Thank you,

AZLAN Ipoh, Malaysia

(!) Using Samba on your Linux and other UNIX systems will allow them to act as file and print servers to NT, Win '9x, WfW, and OS/2 LANMan clients.
Netatalk will allow the same Linux and UNIX systems to provide file and print services to MacOS clients (although MacOS X will probably be even better in this role --- if you want to pay for it).
While we're on the subject it's possible to run Novell Netware under Linux (available through Caldera). There's also the free mars_nwe (Netware emulator).
So, the obvious answer to your question is to install the appropriate software on your Linux and other UNIX systems. This will allow them to communicate with your Windows and MacOS systems using the protocols that are native to those systems.
Naturally you could try installing NFS on the "other" operating systems. However, NFS is a pretty lame protocol (particularly in versions 1 and 2). Linux support for NFS is still not sterling, though the new kernel driver is getting better and we are seeing some preliminary v3 and NFS over TCP support. More importantly we find that the various NFS implementations for NT, Win '9x, MacOS, etc. are very bad. These take lots of resources from these non-UNIX operating systems, cause conflicts and make these systems even less robust and stable (which is very bad considering how often we have to reboot our NT, '9x and Macs already).
I should point out that Samba and Netatalk aren't a bed of roses. Actually, to question that old idiom a bit perhaps I should say that they ARE a bed of roses, complete with thorns!
Presumably you'd like seamless filesharing with robust file and record locking, security, and high reliability.
The problems that come with this are often subtle. If you took a given directory tree (say the home directories for your users) and shared/exported it out over NFS, AppleTalk, and SMB protocols you'd probably find numerous problems with file corruptions and horrible concurrency issues. The low level locking semantics and, in many cases, the file formatting characteristics, even the file naming syntax, are all just different enough to cause irreconcilable differences.
Frankly your best bet for heterogeneous file sharing to this day is probably Netware. Naturally this means getting Netware clients for your MacOS, Win '9x, and even OS/2 and MS-DOS systems. Native Linux drivers for accessing Netware servers are available. There are the ncpfs (free) drivers (aren't those in the stock 2.2 kernels these days?) and the clients from Caldera (non-free, based on code licensed from Novell).
Sometimes I wonder why FTP is still such a widespread and popular protocol. Other times I look at the issues like these, and I know.

(?) AOL

From Ydoc10 on Mon, 17 May 1999

Is there any way to access AOL E-mail without knowing the password and being undetected?? As in showing mail read, etc.

Also...same question for AOL IM's....

Really need to know Thanks

(!) Not that I know of. In any event it sounds like such activities would be illegal and petty. In addition this question has nothing to do with Linux.
So, where do you idiots get my address and why do you ask me these silly, irrelevant and juvenile questions?
Actually you could "access" AOL e-mail by sniffing any network connection over which it was travelling. Of course you'd have to get access to said network segment while the victim was accessing his or her e-mail.
You could probably also get it from the victim's computer if you surreptitiously gained access to that (it probably appears in temporary files and possibly in removed/deleted file fragments).
Naturally you could send the victim a hacked up "upgrade" to their AOL software and hope that he or she is stupid enough to install it.
Then again you could walk into one of the AOL offices and try to gain direct access to their servers. That should be entertaining.
If you actually try any of these moronic schemes I hope you get promptly arrested and suffer a humiliating time in the press.

"Linux Gazette...making Linux just a little more fun!"


More 2¢ Tips!


Send Linux Tips and Tricks to gazette@ssc.com


New Tips:

Answers to Mail Bag Questions:


IP address in Linux Gazette

Date: Wed, 5 May 1999 08:39:07 -0400
From: "F. D. Jones", mrj@magicnet.net

FYI, the current implementation of pppd automatically runs the script /etc/ppp/ip-up when the PPP connection is established. One of the variables created in the script is $IPLOCAL, which is your assigned IP address. No perl or awk required, and it's reliable because it's reported by the daemon handling your PPP! There are a few other conditional scripts invoked by pppd, also. Check them out.
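A minimal /etc/ppp/ip-up sketch (remember to make it executable; the file it writes to is just an example):

#!/bin/sh
# pppd sets $IPLOCAL in the environment when IPCP comes up;
# stash it where other scripts can find it
echo "$IPLOCAL" > /var/run/ppp-local-ip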

From the man page for pppd:

SCRIPTS Pppd invokes scripts at various stages in its processing which can be used to perform site-specific ancillary processing. These scripts are usually shell scripts, but could be executable code files instead. Pppd does not wait for the scripts to finish. The scripts are executed as root (with the real and effective user-id set to 0), so that they can do things such as update routing tables or run privileged daemons. Be careful that the contents of these scripts do not compromise your system's security. Pppd runs the scripts with standard input, output and error redirected to /dev/null, and with an environment that is empty except for some environment variables that give information about the link. The environment variables that pppd sets are:

DEVICE The name of the serial tty device being used.

IFNAME The name of the network interface being used.

IPLOCAL The IP address for the local end of the link. This is only set when IPCP has come up.

IPREMOTE The IP address for the remote end of the link. This is only set when IPCP has come up.

PEERNAME The authenticated name of the peer. This is only set if the peer authenticates itself.

SPEED The baud rate of the tty device.

UID The real user-id of the user who invoked pppd.

Pppd invokes the following scripts, if they exist. It is not an error if they don't exist.


root Password

Date: Wed, 05 May 1999 23:40:19 -0400
From: David Bestor, dab@indenial.com

Forgot root password?
Sendmail hangs on boot?
That new startup script locks up your system?

Instead of breaking into your system just boot up into single user mode.

At the LILO prompt just type: linux -s

Example:

 
LILO:linux -s 
This will boot up into single user mode and it doesn't even ask for a password...

Much easier than trying to break in..

Thanks

--
David


Shutting Up Yer Modem

Date: Thu, 06 May 1999 06:54:30 PDT
From: primes@hotmail.com

This shuts up or quietens a noisy modem during dialup. Recall the following AT commands:

Ln controls speaker volume (L0 low ... L3 high)

Mn operates the speaker (M0 always off, M1 on until carrier detect, M2 always on)

You now have to modify the expect-send pairs in your chat script. Note that the above commands should be prefixed by AT. The following chat script turns off the modem speaker.

/usr/sbin/chat                                     \
   ABORT           '\nBUSY\r'                      \
   ABORT           '\nNO ANSWER\r'                 \
   ABORT           '\nRINGING\r\n\r\nRINGING\r'    \
   ''              ATZ                             \
   'OK'            ATM0                            \
   'OK'            ATDT1234567                     \
   CONNECT         ''                              \
   ogin:--ogin:    mylogin                         \
   assword?        mypassword
After specifying the ABORT strings, the sequence will first expect nothing, and then send the string ATZ to reset the modem. The expected response to this is the string OK, after which the string ATM0 is sent to turn off the modem speaker. When it receives OK, the phone number 1234567 is dialed, after which the login procedure begins.

To vary the speaker's volume, just suitably modify the expect-send pairs before the ATDT command, e.g. replacing ATM0 by ATL0 uses low volume for the speaker.

--
primes


PPP IP address

Date: Tue, 11 May 1999 10:53:27 -0400
From: Robert Jones, rjones@chaotika.net

In response to the recent couple of tips about finding the IP address of a PPP interface, I offer the following script, which will tell you the IP address of any properly configured network interface on your system.

#!/bin/sh

if [ -z "$1" ]; then
    echo "You must specify a network interface."
elif [ -z "`grep $1: /proc/net/dev`" ]; then
    echo "Interface '$1' does not exist on this system."
else
    # strip the address out of ifconfig's "inet addr:..." line
    IPADDR=`ifconfig $1 | grep inet | cut -d: -f2 | cut -d' ' -f1`
    echo "The IP address for $1 is $IPADDR."
fi
--
Robert


IP address from ifconfig

Date: Wed, 12 May 1999 00:43:31 +0100 (GMT+0100)
From: Magnus & Tina, magnus@gol.com

I couldn't resist. Why use perl or awk?

Here is another one of those "how to get an IP address from pppd" tips. In this specific case it's for the gateway. It needs to be changed depending on the printout format of ifconfig, though.

ifconfig | fgrep P-t-P | cut -c42-56 > /tmp/gw_ip
--
Magnus


Modems

Date: Sun, 16 May 1999 10:40:34 -0600
From: "Joey Stanford", rescue@telebot.net

Just wanted to let you know that, contrary to the lists circulating around, ASKEY V1433VQH-X modems are Linux compatible! You have to disable PnP and set it up as COM3 (their terms) and IRQ4. Took me almost a week to figure that out. =) Thought it would be helpful to others!

--
Joey


Tips in the following section are answers to questions printed in the Mail Bag column of previous issues.


ANSWER: Re: [SLL] Basic Question: Gnome Panel?

Date: Tue, 25 May 1999 21:53:30 -0700
From: Michael Leary, leary@nwlink.com

My thanks to Bradley Willson. For anyone else, here's what was wrong:

my .xinitrc was:

exec /usr/bin/enlightenment
as per the Enlightenment docs (I think?). Anyway, it should have been:
exec gnome-session
as per the gnome FAQ on gnome.org. This works great now, and all is well.

--
Michael


ANSWER: tar.gz on Windows, ZIP on Linux--1

Date: Mon, 03 May 1999 00:52:17 +0000
From: Heather Stern, star@starshine.org

I'll try to keep this short enough to stay at the Two Cent Tips level :)

'zip' and 'unzip' are the Linux tools to deal with .ZIP files. According to the man page, zip has been around since 1990, so it predates Linux itself by several years; it exists on other UNIX and UNIX-like systems too. (And it's been ported to MSwin - see http://www.itribe.net/virtunix/ for an unzipper that handles long names and fixes line ends.)
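On the Linux side the usage is simple, for example:
unzip archive.zip          # unpack a .ZIP archive
zip -r archive.zip dir/    # create one, recursing into a directory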

At any rate, I can see your point -- MSwin users are used to finding apps and documentation wrapped in .ZIP files, but true newbies don't know what a .TGZ is, and assume it's useless to them.

It's not, though! WinZIP (http://www.winzip.com/) handles tar-gzipped files with no hassle whatsoever. I have two laptops, one w95 and one Linux - and with these two utilities I really don't have to think about file formats when I toss archives back and forth.

--
Heather


ANSWER: Linux Gazette Format--2

Date: Mon, 3 May 1999 19:54:46 -0700
From: "Nichole Murphy", Nichole-Murphy@worldnet.att.net

You can download the text or HTML files and untar them with WinZip. WinZip will tell you that the tar.gz file contains a compressed file and ask if you would like to uncompress the second file. Choose "yes", then read the files with any program you want.

--
Nichole


ANSWER: Reading Linux Files from Win95--3

Date: Mon, 03 May 1999 22:10:53 -0400
From: Peter Inskeep, pinskeep@iglou.com

I know of at least one program that will read Linux files from your Windows 95 operating system. It is called "explore2fs.exe". It is about 486K long, and is self-contained, if I recall correctly. I know it is a long note, but I have copied the author's Readme.txt file below. The author, John Newbigin, is from Australia. The program opens an explorer-like window showing your Linux file systems, i.e., /dev/hda5, /dev/hdb6, or whatever. You can read most files, if I recall correctly. I also use a program called Windows Commander, by Deiter Prissing, I believe, in Germany. It may be that program that enables me to read individual files, but it is John's program that makes it possible to read and scan the Linux file systems.

--
Pete


ANSWER: a.out binaries not working--1

Date: Mon, 03 May 1999 21:59:04 -0400
From: Peter Inskeep, pinskeep@iglou.com

Darren,
Try running the a.out binary with the command line: ./a.out
I recently installed RedHat 5.2 and found that its $PATH setting does not include "./", the current directory that you are in. Remarkably, RedHat does not set up paths so that your current directory is searched when executing a file.

Pete


ANSWER: Re: Linux partitions from Windows--1

Date: Mon, 3 May 1999 23:44:59 +0200 (CEST)
From: rsmith@xs4all.nl

You can read (but not write) ext2 filesystems under Windows with the FSDEXT2 driver. See http://www.yipton.demon.co.uk/

--
Roland


ANSWER: Re: a.out binaries not working--2

Date: Mon, 3 May 1999 23:32:30 +0200 (CEST)
From: rsmith@xs4all.nl

You should probably install a different C development package. Most distributions have two available: one for a.out and one for ELF binaries. Install the package that makes ELF binaries.

If you want to run the a.out binaries, you have to have support for that in the kernel. Either compiled in or as a module (binfmt_aout).

Roland


ANSWER: Re: FTP access methods

Date: Mon, 3 May 1999 23:27:08 +0200 (CEST)
From: rsmith@xs4all.nl

Regarding your question about mounting FTP sites...

I do not think you can mount an ftp site directly.

But you might want to check if the FTP site also supports NFS (Network File System) access. That should do what you want.

If you just want to synchronize a remote with a local directory, rsync might be a better choice.
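For example (a sketch --- the host and paths are placeholders):
mount -t nfs ftp.example.org:/pub /mnt/pub             # if the site exports NFS
rsync -avz ftp.example.org::pub/stuff/ /local/stuff/   # if it runs an rsync server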

Hope this helps,

--
Roland


ANSWER: Windows NT and Linux hate each other?

Date: Tue, 11 May 1999 06:49:10 -0500
From: "Daniel J. Bodony", bodony@purdue.edu

Dear Pepijn,

Just recently, my roommate and I installed RH5.2 and NT/4.0 on the same machine and had a problem very similar to the one you describe. The network card used was a 3C509B that had been in a 486 running RH5.0 and was known to work. In the new machine, NT/4.0 could see and use the network card fine but RH5.2 could not. More specifically, RH5.2 could find the network card but could not initialize it and Tx/Rx packets. The answer turned out to be that NT/4.0 would reset and change the card's IO port and interrupt values to a different set of numbers each time the computer was rebooted, so that the card was on a different IO/IRQ each time RH5.2 started. Perhaps Win95 is doing something similar.

The fix was to force NT/4.0 to choose one and only one IO/IRQ combination.  Then all worked fine.

--
Dan


ANSWER: Re: Linux partitions from Windows--2

Date: Tue, 4 May 1999 19:42:35 +0200 (CEST)
From: Der Guru, frank@di-net.de

Mark,
I use fsdext2. It gives you read access to your Linux partitions. You can find the URL at freshmeat. Regards

--
Frank Maloschytzki


ANSWER: Problems running your a.out executable--3

Date: Thu, 6 May 1999 16:38:27 +0100
From: "D.McMurray", d.mcmurray@dccl.net

This sounds to me like a very common mistake for newcomers to this type of operating system, especially those with a DOS/Windows background. I am assuming you are merely typing:

a.out
at the command prompt, which in DOS would normally execute the program. You may find that the mishap is due to the path environment variable, which usually has the current directory '.' added by default in DOS, but not so in Linux (well, RedHat in particular; I cannot speak for the other distributions).

When you type 'a.out' the directories in your path variable are searched; if the file a.out is not found you will receive an error message along the lines of 'command not found' or similar. Try specifying the location of the executable: assuming you are in the directory containing the file, try typing:

./a.out
This specifies that the file can be found in the current directory. If this works you can save having to type the './' each time by adding the directory '.' to your user profile's path environment.

Hope this helps, regards,

--
David McMurray


ANSWER: Linux Partitions on Windows--3

Date: Sat, 8 May 1999 17:39:32 +0200 (CEST)
From: Torben Dam Jensen, tdj@hco.kol.ou.dk
I got a question for ya... is there a driver or application I can use to get at least read access to my Linux partitions from Windows? Thanks,

Yep !

http://www.yipton.demon.co.uk/

ftp://sunsite.unc.edu/pub/Linux/system/filesystems/ext2

http://www.diskwarez.com/other/ext2tool.zip http://www.diskwarez.com/other/ext2tool.txt (for a description)

Above links should get you going...

--
Torben


ANSWER: Need help on Internet connection with Linux

Date: Sat, 08 May 1999 12:17:37 -0400
From: Laurin Killian, lek@uconect.net
First I used kppp (script based)
You sure you want script based? I don't think you do. I went to the homepage for what appears to be your internet provider (http://www.asiaaccess.net.th/), and I see no mention of the logins requiring a script. Try "PAP" under kppp - it stands for Password Authentication Protocol. Since you're using kppp this should be easy to set up.

Might as well go all the way since I'm trying to help: When you create a new account under kppp, all you should need to fill in is:

(Dial Tab):
	Connection Name and Phone Number
	Authentication:		 PAP!
(DNS Tab):
	Primary and secondary DNS #'s from ISP.
It is advisable to back up or delete ppp.options (as you originally did), since kppp generates its own options (but may get confused if you added conflicting options in that file).

It looks like AsiaAccess does have an option for a script, but using PAP is easier and supposedly faster.

Let me know if this works.

--
Laurin


ANSWER: Re: Linux Gazette Format--4

Date: Sun, 9 May 1999 13:57:00 -0600 (MDT)
From: Michal Jaegermann, michal@ellpspace.math.ualberta.ca

This is a comment on a complaint in the Mail Bag, LG #41, from jcclemen@SHERWIN.RMC.com, that LG is not available in a format accessible to Windows users, and your response to that.

Even under 'doze there are tools available which handle .tar.gz files. WinZip, for example, even has a pointy-clicky interface and accepts many different formats beyond zip; tar.gz is one of these.

--
Michal


ANSWER: Network boot disk for i386 without hd

Date: Wed, 12 May 1999 17:41:31 +0200
From: BROWN Nick, Nick.BROWN@coe.int

Several distributions exist for floppy-only machines. Check out mulinux or tomsrtbt (http://www.toms.net/rb/).

--
Nick


ANSWER: Subject: Linux partitions from Windows--4

Date: Wed, 12 May 1999 17:43:12 +0200
From: BROWN Nick, Nick.BROWN@coe.int

Check out e2fs at http://uranus.it.swin.edu.au/~jn/linux/

--
Nick


ANSWER: Linux partitions from Windows--6

Date: Fri, 14 May 1999 14:27:36 +0200
From: Hadess, hadess@infonie.fr

Try out Fsdext2, which you can grab at http://www.yipton.demon.co.uk/ It's only a read-only driver, but would you really want to risk breaking your Linux drive, hmm? Ciao --
Hadess


ANSWER: FTP access methods...

Date: Fri, 14 May 1999 14:34:36 +0200
From: Hadess, hadess@infonie.fr

If you want to have Unix-like access on a remote machine, you'd better use NFS. It's not possible to mount FTP shares (if it is, mail me =).

Your Windows apps were only hiding the whole FTP thingy from you. They did the same thing KFM does: show folders _as if_ they were local, no more.

NFS is in stock distributions now, and 2.2.x NFS performance is rather good, though worse than FTP of course. Compile a new kernel for your rescue disk with NFS included. Ciao

--
Hadess


ANSWER: a.out--4

Date: Fri, 14 May 1999 14:50:33 -0400
From: Help Desk, help-desk@utc.edu

Is your current directory in your PATH? If not, try ./a.out to execute. You can get a more meaningful name with gcc -o myprogram myprogram.c; this gives the executable the name myprogram. To set the path, edit your .bash_profile and add '.' (without the quotes) to the path statement. Similarly for other shells.
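A sketch of that addition in ~/.bash_profile:
PATH="$PATH:."
export PATH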

--
Charles Baker


ANSWER: Re: Linux Gazette Format--6

Date: Sun, 16 May 1999 04:30:55 +0200
From: "Anthony E. Greene", agreene@pobox.com

In Linux Gazette #41, you wrote:

Many of us have and use Linux, but are still bound by the need to standardize on the Windows environment in the corporate world. As a Linux newbie, I still do not have a high speed modem bought and installed on my Linux box at home, so I have no way to get the tar, etc. formats, and, of course, many of us have higher speed lines at work. Is there a way to download the file in a format that can be unzipped, etc. on a Windows workstation and then printed out? Also, a format easily read by Windows machines would make publicizing the Linux system possible to others now using Windows. For example, our local PC users club (Coastal Areas PC Users Group, http://www.caug-pc.org) has a web site. I guess what I'm saying is, if we only publish Linux documents in formats that Linux users and/or Linux gurus can use, how do we grow the user base?
There are many native Win95/98 freeware and shareware compression utilities listed at Winfiles:

http://www.winfiles.com/

--
Anthony


ANSWER: The PPP problem, followup.

Date: Sun, 16 May 1999 10:19:15 +0700
From: Ruangvith Tantibhaedhyangkul, ruangvith@linuxfan.com

To all generous "Linuxians",

I've solved my problem with my Internet connection under Linux, as I called for help in Linux Gazette. After that, tens of suggestions came from almost all parts of the world. I've tried almost all of them as well. One of these led me to the right solution: the document "How to Hook up PPP in Linux" by W.G. Unruh, recommended to me by Jerry Boyd. Thanks for all your kindness.

The best of this is not the solution itself, but the way it comes, such generosity which has been bringing about this wonderful OS. It's a true Linux way!

--
Ruangvith


ANSWER: Re: ANSWER: Word to PostScript

Date: Sun, 16 May 1999 10:27:23 -0700
From: Mike Berkley, Mike.Berkley@ec.gc.ca

From: "Asle Aursand", asle@sentinel.no
It is possible to print to a file. The resulting *.prn file is really a PostScript file, at least if you are using a PostScript printer driver. Anyway, this *.prn file you can import into Ghostview.

I read your tip, Asle, and there is one problem with it. Some Microsoft printer drivers do not yield legal Postscript documents when printing to a file.

I have experienced the following two misfeatures:

1. HP PostScript drivers add a single line of HP printer codes to the top of the file, to force the printer to use PostScript mode. Since the first line of the file no longer begins with %!PS, Ghostscript and non-HP printers will give an error and refuse to display the file. You have to hand-edit the file to get real PostScript.

2. Some NT print drivers mix up a few of the %%Page comments, so that Ghostview cannot digest the file. Ghostscript can indeed display the PostScript, but Ghostview cannot be used.

--
Mike Berkley


ANSWER: Uninstalling Software

Date: Tue, 18 May 1999 17:54:01 +0000
From: Tom Walsh, tom@mytoys.com

You can uninstall a binary distribution of software that was unpacked from a tarball by using: "tar ztf some-name.tgz | xargs -ikillit echo ' rm -f killit' | /bin/sh". Try the command without using the final pipe into the shell to see what the resulting list of commands will be to the shell. BTW, I use this technique a lot to reduce the tedium of repetitious operations on files/directories.
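Spelled out with the conventional {} replacement string, that's (a sketch):
tar ztf some-name.tgz | xargs -i{} echo 'rm -f {}'             # preview only
tar ztf some-name.tgz | xargs -i{} echo 'rm -f {}' | /bin/sh   # actually delete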

--
Tom


ANSWER: HD-less

Date: Wed, 19 May 1999 22:38:45 -0500
From: "Charles W. Powell MD", cwpowell@kuhub.cc.ukans.edu

Slackware (and other distributions, I suppose) has a "rescue disk" that becomes the "root disk" of the boot-root floppy disk pair. If one uses the "boot" floppy called 'net.i' and the "root" disk called 'rescue.gz', one finds a highly functional system at hand. Using this pair I recently initialized the network card (an Intel EtherExpress Pro) when the standard kernel failed at the task. Using the 'ifconfig' and 'route' commands I put the machine on my network in minutes.

Just about any site that handles various distributions will have this disk set available. Create the floppies using:

	dd if=/path/to/boot/or/root/image of=/dev/fd0
under Linux, or with 'rawrite.exe' under DOS.

Network initialization is accomplished with the following commands:

	ifconfig eth0 192.168.x.y
	route add -net 192.168.x.0
or with appropriate addresses for your network. I hope this helps.

--
Charles


ANSWER: 3COM cards

Date: Mon, 24 May 1999 14:25:33 +0100
From: Wood Alan, WOOD_A@admiral.co.uk
I've installed Linux Red Hat 5.2 on my friend's computer, and now for some extremely odd reason the Red Hat machine and the NT Server 4.0 that's on his other machine can't see each other over the network. At all. They don't even respond to each other's ARP requests. The link is alive (judging from the lights on his switching hub), and the machines can see each other fine when he runs Windows 95 on the machine where I've installed Red Hat. TCP/IP is installed and configured correctly on both machines. The Red Hat machine has a 10 Mbps 3COM card, the NT machine a 100 Mbps 3COM, and the lights on the hub say that it's using those speeds on the interfaces that the machines are on, regardless of whether the one machine is running Linux or Windows 95. What on Earth could be going on here?

I had a similar problem with the kernel as supplied with RH5.2, even when recompiled. I tracked it down to the kernel not being able to properly determine the IRQ that was being used by my 3c509 in PnP mode. I just forced the card to use a fixed IRQ, passed the module the correct IRQ, and everything came back to life. I can now boot the machine into either Linux or W95, and networking works fine. I keep meaning to try it with the 2.2.7 kernel to see if it can determine the correct IRQ, but have not gotten around to it since it works now.

--
Alan Wood


ANSWER: Re: question for the board

Date: Tue, 11 May 1999 11:05:13 +0100
From: Stephen Crane, scrane@flexicom.com
I am confused about what I will need to install Red Hat 5.2 on my new Dell system. My last attempt left me with a command line only --- XFree86 3.3.3.1+ was needed for my video card (TNT chipset). I was looking for the files, and I am hoping there are RPMs out there for me to do it the easy way.

XFree86-3.3.3.1 rpms are in the usual place, i.e., updates.redhat.com or, my favourite mirror, Sunsite:

http://sunsite.doc.ic.ac.uk/packages/Linux/redhat-updates/5.2/i386/

The best index for, well almost everything, is rpmfind.net, e.g.:

http://rpmfind.net/linux/RPM/redhat/5.2/

Cheers,

--
Steve


ANSWER: Re: Network boot disk for i386 without hd

Date: Thu, 27 May 1999 15:38:42 +0200
From: Radovan Garabik, garabik@melkor.dnp.fmph.uniba.sk
I have a Linux machine in my office network and several i386s that have no hard disk, but do have a 1.44 floppy. They also have NE2000 network cards, without boot PROMs. Is there a way to make a boot disk that allows my i386s to boot and log in to my Linux machine??? I searched the web but found only solutions that refer to using network cards with EPROMs/PROMs. Thanks.
How much memory do the i386's have?
You can check out one of the one-floppy mini distributions, or set up NFS-root Linux on them - it requires loadlin.exe and a kernel with nfsroot support compiled in, and (depending on how much memory you have) swapping over NFS. It is not that difficult to do, but it's not for a beginner. Alternatively you can boot into DOS and use good old NCSA telnet to connect to your Linux machine.
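A sketch of the loadlin invocation involved (the server address and export path are placeholders; see the kernel's nfsroot documentation for the full ip= syntax):
loadlin zimage root=/dev/nfs nfsroot=192.168.1.1:/export/i386 ip=bootp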

--
Radovan


ANSWER: Re: FTP access methods...

Date: Thu, 27 May 1999 15:48:22 +0200
From: Radovan Garabik, garabik@melkor.dnp.fmph.uniba.sk
Ok tough guys, I have written down more one-liners and cool tricks from the Linux Gazette pages than from anywhere else. And I finally have a good question: In both Window$ and O$/2 I had apps that would treat ftp sites as folders (directories). It worked real well with keeping data in sync off-site. Is there a tool that will allow an FTP site to be mounted under Linux? It seems fairly useful to me, but freshmeat and other resources turned up nada.
Linux VFS is not very friendly to the idea of external filesystems... However, check out http://atrey.karlin.mff.cuni.cz/~pavel/podfuk/podfuk.html - I have not tried it yet, but I am going to shortly.
I am working on a cool 1 disk Linux distro that has pilot backup features and other remote file access ideas that could really benefit from this.

Probably NFS is the way to go....

--
Radovan


ANSWER: re: soundpro

Date: Fri, 28 May 1999 01:10:14 +0800 (SGT)
From: "Jayasuthan ......", suthan@eplx01.fairchildsemi.com

Please check the list of Linux-supported sound cards; second, the SoundPro might be compatible with one of the supported cards. Try compiling the sound card driver as modules, and learn to use isapnp and conf.modules, which might help.
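For the isapnp side, the usual dance looks roughly like this (the io/irq/dma values are examples only; yours will differ, and the right module depends on the card):

	pnpdump > /etc/isapnp.conf
	## edit /etc/isapnp.conf and uncomment one set of resources
	isapnp /etc/isapnp.conf

and then, in /etc/conf.modules, something like:

	alias sound sb
	options sb io=0x220 irq=5 dma=1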

--
Jayasuthan


ANSWER: Re: Question about 2 GB max?

Date: Fri, 28 May 1999 15:39:40 -0400 (EDT)
From: Deirdre Saoirse, deirdre@deirdre.net

On Thu, 27 May 1999, Jim Dennis wrote:
Actually 2Gb is the maximum FILESIZE under 32-bit versions of Linux. (Alpha, and presumably UltraSPARC ports are not hampered by this). Linux ext2 filesystems can be much larger than 2Gb ---
That's fine as far as the general theory goes; however, see:
http://www.dartmouth.edu/cgi-bin/cgiwrap/jonh/lppc/faq.pl?_highlightWords=partition%20size&file=494

Traditionally, there has been a 2GB partition size limit (not just a FILE size limit) on PowerPC Linux partitions. I don't know if that will continue to be true with newer versions but it is true of LinuxPPC up to revision 4 and DR3 of MkLinux. I haven't checked if there's a YellowDogLinux specific answer however.

pre-5 of LinuxPPC reportedly handles larger partitions but I haven't found specifics for later versions (does anyone want to update the FAQ-o-Matic?).

--
Deirdre


ANSWER: Re: Question about 2 GB max?

Date: Fri, 28 May 1999 19:13:24 -0400 (EDT)
From: Tom Rini, tmrini@ntplx.net

On Fri, 28 May 1999, Deirdre Saoirse wrote:
That's fine as far as the general theory goes; however, see: http://www.dartmouth.edu/cgi-bin/cgiwrap/jonh/lppc/faq.pl?_highlightWords=partition%20size&file=494

This is, however, WRONG. wrong wrong wrong wrong. Well, almost. :) If you use a newer e2fsprogs (i.e., compile 1.14 or whatever is current) you're fine. I'm not even sure if R4/DR3 really does have that problem (it's been a while, and I forget just when that problem was fixed). R5/YDL/anything with a current e2fsprogs is fine.

--
Tom


Published in Linux Gazette Issue 42, June 1999


"Linux Gazette...making Linux just a little more fun!"


Mark's autofs tutorial revisited

by Mark Nielsen




at The Computer Underground

If this document changes, it will be available here: Mark's autofs tutorial revisited. Also, an earlier version of this tutorial is at January 1998 Issue #24.


  1. Some notes.
  2. Installing Autofs
  3. Explaining what we did.
  4. Installing for new users.
  5. Installing a zip drive and other resources.

Some notes.

What is autofs? Autofs lets you use your floppy and cdrom drives a little more easily. In the MS Windoze world, when you need to access your floppy drive, you just go to drive "a:" and it is there. To replicate this feature in the Linux or UNIX world, you use an automounter that attaches a device (like a floppy or cdrom drive) to a directory on the computer.

If you don't have an automounter, you must manually attach a device to a directory using the commands "mount" and "umount". An example of attaching your floppy drive to the directory "/mnt/floppy" would be

mount /dev/fd0 /mnt/floppy

If you need to explicitly define how the floppy drive was formatted, you can use these commands

mount -t msdos /dev/fd0 /mnt/floppy  ## For msdos formatted disks 
mount -t ext2 /dev/fd0 /mnt/floppy  ## For "linux" formatted disks
Also, you must make sure the directory "/mnt/floppy" exists. A command to make the directory would be,
mkdir -p /mnt/floppy
And this command unmounts or frees up the floppy drive from being used.
    
umount /dev/fd0 
Also, something one should be aware of: KDE and GNOME, which are desktop environments for X, usually have their own way of using floppy drives. The problem is, if you connect to your computer through telnet or ssh, those features are not available to you. That isn't nice. Using Autofs, any program or user entering a directory that is assigned to a device (like a floppy drive) causes the device to be attached to that directory. This happens at the system level rather than at the GUI level.

Also, Autofs can be used to grab an nfs site (and other things) and attach it to a directory. It can do more than just automounting your floppy and cdrom drives.

If you need some more info, try these urls or commands on your computer,

  1. man automount
  2. man autofs
  3. man /usr/man/man5/auto.master.5
  4. more /usr/doc/autofs-3.1.3/README
  5. Automount howto
NOTE: You also should consider any security hazards about using autofs.

Installing Autofs.

How do you install Autofs? Well, although I hate to demonstrate things for only one particular version of Linux (especially since I am growing quite fond of Debian), this demo will be made for RedHat 6.0.

I assume "/dev/cdrom" is your cdrom drive and "dev/fd0" is your floppy drive. I am also assuming you will backup your "/etc/auto.master" file. Use this script to create the following files and restart autofs. Login as "root", goto your home directory, copy whatever is between the next two lines to a file called "CreateAutofs.script" and then execute the script with the command

 
source CreateAutofs.script
----------------------------------------------------------------------
mkdir -p /root/Drives
cd /root/Drives

     ### Let us make sure the two directories exist, ignore errors
mkdir -p /mnt/Drives/floppy
mkdir /mnt/Drives/cdrom
     ### Back up the existing auto files, in case you need them later
mv -f /etc/auto.master /etc/auto.master_old
mv -f /etc/auto.floppy /etc/auto.floppy_old
mv -f /etc/auto.cdrom /etc/auto.cdrom_old
     ### Create the files for autofs
echo "/mnt/Drives/cdrom /etc/auto.cdrom --timeout 10" > /etc/auto.master
echo "/mnt/Drives/floppy /etc/auto.floppy --timeout 3" >> /etc/auto.master
echo "floppy   -fstype=auto         :/dev/fd0" > /etc/auto.floppy
echo "cdrom    -fstype=iso9660,ro   :/dev/cdrom" > /etc/auto.cdrom
     ### Create the links to the floppy drive and cdrom drive
ln -s /mnt/Drives/floppy/floppy a:
ln -s /mnt/Drives/floppy/floppy floppy
ln -s /mnt/Drives/cdrom/cdrom d:
ln -s /mnt/Drives/cdrom/cdrom cdrom 
     ### Let's restart autofs
/etc/rc.d/init.d/autofs stop
/etc/rc.d/init.d/autofs start
     ### If it didn't work, you might have to reboot 
cd /root/Drives
----------------------------------------------------------------------

Explaining what we did.

Now put a floppy disk formatted for MSDOS and a cdrom in and execute the commands

ls /root/Drives/a:
ls /root/Drives/d:
to see if there is anything on them. Hopefully you don't get any error messages.

Personally, my /etc/auto.floppy file looks like

floppy          -fstype=auto,defaults,user,suid :/dev/fd0
and my /etc/auto.cdrom file looks like this
cdrom           -fstype=iso9660,user,suid       :/dev/cdrom
The reason why I gave conservative values in the script is that my values might be security hazards. But since I am the only person using my computer, I wanted to make sure my personal account had full access to the floppy and cdrom drives. Previously "-fstype=auto" wasn't working quite right with msdos disks, but when I increased the timeout to 3 seconds, it seemed to work fine. I made the timeout value for the cdrom 10 seconds because it wasn't working really well at 1 second, and I figured it was because the drive didn't have enough time to "warm up" before it was being shut down. You might want to test what the timeout value for your cdrom drive should be.

Your "/etc/rc.d/init.d/autofs" script first looks at "/etc/auto.master". That file usually has three things on each line. It has the directory which all mounts will be located at. Then next to that value is the filename which contains the configuration(s) for what devices you want mounted. We will call these filenames the "supplemental" files. Next to that value is the timeout which you want to occur after so many seconds of inactivity. The timeout will free or umount all devices specified in the supplemental files after so many seconds of inactivity.

Now, the supplemental files can have more than one entry, but for my purposes I don't do that; read below for the explanation. The supplemental files can be named anything you want. They also have three values for each entry. The first value is the "pseudo" directory (I will explain this later). The second value contains the mount options. The third value is the device (like "/dev/fd0", the floppy drive) which the "pseudo" directory is connected to.

The "pseudo" directory is contained in the directory which is defined in "/etc/auto.master". When people try to access this "pseudo" directory, they will be rerouted to the device you specified. For example, the above script will generate a link called "a:" which if you list with the command "ls a:" will give you a list of files in the floppy drive. Or, a similar command would be "ls /mnt/Drives/floppy/floppy". But if you do the command "ls /mnt/Drives/floppy", you don't see anything even though the directory "/mnt/Drives/floppy/floppy" should exist. That is because "/mnt/Drives/floppy/floppy" doesn't exist as a file or directory, but somehow the system knows that if you specifically ask for "/mnt/Drives/floppy/floppy", it will reroute you to the floppy drive.

Now as to the reason why I didn't combine the floppy drive and cdrom drive into the same supplementary file: each definition in the "/etc/auto.master" file will have its own "automount" program running for it. If you have several devices running on the same automount program and one of them fails, it could force the others not to work. That is why I want every device running on its own automount program, which means there is one device per supplementary file per entry in the "/etc/auto.master" file.

Also, another thing to note: I use links to the "pseudo" directories. Non-computer-geeks will get confused if they try to use the "pseudo" directories manually. Basically, the "pseudo" directories are directories that don't exist until you try to use them. I like to use links to the "pseudo" directories so that the user sees and uses the link, and thus is happy because the links are just always "there", unlike the "pseudo" directories, which come and go as you need them.


Installing for new users.

How do you install this for new users? First, you must understand that the mount options you put into the autofs configuration files largely determine how much a user can use the floppy or cdrom drives or other types of devices. There are also security hazards to using autofs that one should be aware of. Do the following,


mkdir -p /etc/skel/Drives
ln -s /mnt/Drives/floppy/floppy  /etc/skel/Drives/floppy  ## link to floppy
ln -s /mnt/Drives/floppy/floppy  /etc/skel/Drives/a: 
ln -s /mnt/Drives/cdrom/cdrom    /etc/skel/Drives/cdrom    ## link to cdrom
ln -s /mnt/Drives/cdrom/cdrom    /etc/skel/Drives/d: 

How do you install it for a user called "frank"?

Well assuming that Frank's home directory is /home/frank,


mkdir -p /home/frank/Drives   ## make a path for frank
chown frank /home/frank/Drives   ## Let frank own the directory

ln -s /mnt/Drives/floppy/floppy /home/frank/Drives/a:   ## link to floppy
ln -s /mnt/Drives/floppy/floppy /home/frank/Drives/floppy
ln -s /mnt/Drives/cdrom/cdrom /home/frank/Drives/d:   ## link to cdrom
ln -s /mnt/Drives/cdrom/cdrom /home/frank/Drives/cdrom

chown frank /home/frank/Drives/*  ### Let frank own the contents of directory

A riskier way to install it for a user, after setting up /etc/skel as above, would be
     ### DO NOT DO THIS UNLESS YOU LIKE RISK
mkdir -p /home/frank/Drives

if [ -d /etc/skel/Drives ]; then
    tar -C /etc/skel -cf - Drives | tar -C /home/frank -xvf -
    chown -R frank /home/frank/Drives
else
    echo "Dude, like try to make a /etc/skel/Drives directory first."
fi

Installing a zip drive or other resources.

Okay, now for some more funky stuff. I am going to use one more configuration file to handle both the zip drive and an nfs site. First, I am assuming the zip drive is the slave on the primary IDE controller of your computer. Actually, I tried to connect to the ftp.kernel.org site below through nfs, and it didn't work; when I pointed the entry at one of my local computers instead, it worked fine.


echo "/mnt/Drives/zip /etc/auto.zip --timeout 10 --timeout 5" >> /etc/auto.master
echo "kernel   -ro,soft,intr   ftp.kernel.org:/pub/linux" > /etc/auto.zip
echo "zip1   -fstype=auto,rw :/dev/hdb1 " >> /etc/auto.zip 
echo "zip2   -fstype=auto,rw :/dev/hdb2 " >> /etc/auto.zip  
echo "zip3   -fstype=auto,rw :/dev/hdb3 " >> /etc/auto.zip
echo "zip4   -fstype=auto,rw :/dev/hdb4 " >> /etc/auto.zip

ln -s /mnt/Drives/zip/kernel  /etc/skel/Drives/kernel 
ln -s /mnt/Drives/zip/zip4    /etc/skel/Drives/zip    ## link to zip drive
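After restarting autofs, touching any of the new pseudo directories should trigger the corresponding mount. For example (zip disks usually come DOS-formatted with the filesystem on the fourth partition, hence zip4):

	ls /mnt/Drives/zip/zip4
	ls /mnt/Drives/zip/kernel   ## the nfs entry, if the server cooperates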


Mark works for The Computer Underground as JALG. In his spare time, he tries to do volunteer stuff. Mark takes an active role in COLUG located in Columbus, Ohio.


Copyright © 1999, Mark Nielsen
Published in Issue 42 of Linux Gazette, June 1999

"Linux Gazette...making Linux just a little more fun!"


Caldera 2.2 Quick Review

By Sean Lamb


Yesterday I installed the Caldera 2.2 CD that I got at Comdex over my existing RH 5.2 install. The short review is "cool!" The longer review follows:

This distro is really aimed at the newbie. From the boot floppy, immediately after running LILO, the disk loads a graphical, although text-based, interface while it loads the modules and does some basic hardware probing. The interface is smart enough to load basic keyboard and mouse drivers for those, like me, who are migrating from MSWindows (it does go through a mouse test page so you can refine your rodential setup if you need to). It also autoprobes for the setup files and loads the appropriate modules to access them. The first thing it does that requires user-supplied answers is partitioning. Setup launches a custom version of Partition Magic to create/resize partitions for Linux. I didn't make any changes to my partition table, so I just acknowledged my way through it after determining that it could see all my partitions.

The next step is the only step of the setup that I didn't like so much; it forced me to reformat the Linux partition. For a newbie without a lot of data on the drive, this isn't that big of a problem. But experienced users are likely to have data that they still want/need to keep on those partitions. There is an Expert mode for setup that you can choose before partitioning that likely addresses this issue, but I didn't try that. If I really needed to, I could have canceled setup and backed up my important data at this time and then restarted setup. For my installation, I allowed it to reformat the drive and setup continued.

After the partition was formatted and ready to continue, setup asked me what packages I wanted to install. For my install, there wasn't an expert option, but there were a few choices here: minimum packages, all recommended packages, or all packages. I chose all recommended packages for this step. It installs all the core packages and almost everything that a basic user will need to get the machine going. However, there are a couple of interesting choices that the developers have made for us (this option installs Apache but not any of the Office suites). What impressed me here is that as soon as you told it which package option you wanted, it started copying those packages to the disk immediately, while it still prompted for user information. This setup really shows off the multitasking capabilities of the OS nicely.

I've probably got the next few options out of order, but while it's copying packages to the hard drive, setup asks for your basic ethernet configuration, assigns a root password and creates user accounts, sets up XFree86 to your hardware and resolution choices (prompts with an extensive list of monitors but allows you to customize the monitor choice; similar options with the video card, but autoprobe worked for me) and tests the X configuration that you selected. There are probably some other options in there that I forgot, but you get the idea.

Once setup has gathered all the information it needs, it does something that we will probably never see from an MS install: it lets you play Tetris while it finishes copying files. When the copying is done, it enables the Finish button at the bottom of the screen, next to the progress indicator. It doesn't stop your game, but waits patiently until you are done (it also doesn't force you to stop playing when the game ends, if you really want to play another game right now).

After setup is done, it boots the kernel and packages that are now installed on the hard drive and comes up in KDE, where it prompts for the username and password. At this point it looks much like a WinNT install, except that the widgets are different shapes/sizes and it presents a list of the users' accounts on the machine (login names only). The login window does give you the option of how you want to login (either directly into KDE or into "failsafe" mode, which is just the command shell) as well as a shutdown/reboot button. I've already found that the reboot button is very handy here, because now I don't have to give out the root password to the rest of the family in case they don't catch LILO in time and really want to boot MSWin.

On each user's first login to the system, assuming they choose to login to KDE directly (which is the default option), they are presented with the KDE Desktop Setup Wizard. This wizard asks the user to set up a theme and asks what handy icons to put on the desktop. One neat thing the wizard does is give you the option of placing icons on the desktop for floppy and CD drives that automount the media. One problem that I found was that, as soon as I set up the color scheme I wanted for my own account, I got logged out. It took a while to find the Wizard again, but he is set up in the Utilities menu if you want to run him again (this didn't happen when I was root).

The only problem that I had with setup is that it didn't set up LILO for me. When I rebooted, it hung with LI. From all the reading that I had done, I knew that this was a fairly common issue, so finding possible solutions wasn't that difficult (it helped that I had a second, untouched machine here with a modem connection). What I found to help here was to boot from the setup boot disk and type "boot root=/dev/hda4" at the LILO boot prompt (change the root param as appropriate for your system). This booted the kernel that I had installed on the hard drive so that I could make the necessary config changes. I found that I was able to set up LILO properly by using Lisa and following the prompts (while logged in as root, of course). The cool thing here is that the setup boot disk can be used as a sort of rescue disk in case there are problems.

For me, the next step will be to set up my PNP devices to get my sound, ethernet and modem connections working (yeah, I'm booted in Win95 right now). I'll probably end up disabling PNP on those cards and setting their configs manually to what Win95 set them up to be, but that's for next weekend.


Copyright © 1999, Sean Lamb
Published in Issue 42 of Linux Gazette, June 1999

"Linux Gazette...making Linux just a little more fun!"


From Word Processors to Super Computers
Donald Becker Speaks about Beowulf at NYLUG

By Stephen Adler





Editor's note: In the original article on Adler's website, many of the inline images display a larger copy of themselves when clicked on. These larger images are not included in the Linux Gazette version, to keep the total size of the Gazette small.



I got an e-mail over the weekend announcing that Donald Becker would be addressing the NYLUG on May 19th. That's the New York Linux users group for those of you out west. From out here on Long Island, NYC is a long way away. But I figured I would rough out the commute into NYC to catch what Donald had to say about his Beowulf project. Actually, if you can keep a secret, I'll admit to having fun writing up my encounters with Internet luminaries like Donald and publishing them on the Internet. This would give me a chance to do so once again, so the long commute into NYC didn't seem so bad.

A rainy day in New York City, and I'm hustling around looking for a parking lot.
Wednesday came flying along. I spent most of the afternoon beating up on an Alpha Personal Workstation 433au, trying to get Linux installed on it. Hey, Red Hat 6.0 was out, and since they have a generic kernel which seems to run on all Alpha variants, I figured this should be a snap. Wrong! For some reason, MILO refuses to boot up on the machine. I've been trying off and on to get Alpha/Linux installed on this machine since January. It belongs to a professor at Stony Brook who is a real Linux enthusiast; he started down the path of the Linux install and ran into this MILO problem. I gave it a try, a graduate student from Columbia gave it a try, and we have all failed. The Relativistic Heavy Ion Collider is coming on line soon, so we don't have much time to spend on this box. It has become somewhat like King Arthur's sword. Whoever can pull that sword out of the rock, or install Linux on that machine, will lead a blessed life... Roy (the professor who owns the Alpha) has now put up a reward for whoever can get Linux installed on the damn thing. The reward right now stands at 2 tickets to see the NY Yanks. (Or Knicks if you are of that persuasion...)

Gucci bags and Rolex watches for sale abound. Where are the damn umbrella sellers!
Time flies when you are having trouble getting Linux installed on something, as it did that Wednesday afternoon. I ended up missing the 4:05pm train into Penn Station and decided to drive in. To my dismay, it would have taken just as long to wait for the next train as it took to drive in. Rain poured out of the sky as I topped 20MPH speeds on the Long Island Expressway heading west into Manhattan. I wanted to get to the meeting in time to be able to meet Donald and the rest of the NYLUG members. That was just not going to happen. At this rate, I would be lucky to get to hear him speak at all.

It's 6:20pm and I'm heading up 3rd Ave in search of a parking lot. The meeting starts at 6:30pm. Damn, I'm always running just on time. With little effort, I was able to find a very reasonable parking lot which charged $10 'till closing. (It's usually about $25 for a midtown parking lot.) I dropped the car off and dashed out in search of the IBM building where the NYLUG was meeting. Rain is coming down, I'm getting soaked, and I'm looking all over the place for those street vendors who always have what you don't need at the time. Fake Rolex watches were up for sale, as were Gucci bags, but no umbrellas. I could feel the rain starting to seep onto my scalp as I ran across Madison, heading north towards 57th St.

IBM, a while back, started to get a clue about the benefits of Open Source/Free software and has now donated one of their meeting rooms for the NYLUG, who meet about once a month. (Rasterman is talking at the next one.) The IBM building stands very tall on the corner of 57th and Madison. It boasts some modern work of some sort at its entrance. One needs to sign in, in order to be let into the building. The meeting was being held on the 9th floor.

I arrive at the meeting room where NYLUG is gathered. A projector is set up with Donald's laptop plugged into it. There are about 30 or 40 people present. Jim Gleason, the organizer of the meeting, who works for VA Research, is there talking with Donald, looking rather busy. He sees me and introduces me to Donald. I had just driven in through about 2.5 hours of LIE traffic, dashed across several streets and avenues in the rain, and my bladder had been screaming at me since exit 40 on the LIE that it needed to be relieved. I couldn't concentrate much on what I was saying at the time. I shook hands with Donald and muttered something like, "We use lots of computers at BNL". I don't remember how he responded; I think he didn't say anything. I then managed to get myself away, find a seat, stow my laptop and look for a good place to take a photo of the room.

A shot of the NYLUG meeting room, courtesy of IBM. By the time Donald's talk was well underway, there was basically standing room only.

Jim Gleason took the mike and called on people to sit down. He wanted to get the meeting going on time (it was getting close to 7pm by now). I settled down into my seat, booted my laptop, and proceeded to ignore my aching bladder. I had more important business to take care of at the time.

A solemn moment for Jim Gleason, the VA Research guy who is one of the NYLUG contacts and organizers. Actually, the shot was taken as he happened to look down at his notes - the only time he did so during his introduction. Murphy's law is at work here. Jim is a very energetic guy who is excited about his work.
At this point, I started to take notes as Donald started talking. Since my notes are always rather jumbled, it will be easier for me to cover in broad strokes the topics he talked about instead of trying to give a word-by-word reproduction of what he said.

His introductory slide showed two things: his affiliation with the NASA Goddard Space Center and a company called Scyld Computing Corporation. My guess is that he has been given the liberty at NASA to work with this Scyld startup to help bring Beowulf into the private sector. Good for him. At this point, something rather annoying started to happen. The projector which was hooked up to Donald's laptop started to lose sync with it. Donald, who has a bit of shyness to himself, was having a hard time giving his talk while at the same time pressing various auto-sync buttons on the projector to try and get his transparencies back up on the screen. This went on throughout his talk. It really didn't matter, since he didn't bother to walk through his slides; rather, he just talked off the top of his head about what he has been doing for the past 7 years.

Donald's talk went on until 8:20pm. During that time I got the following out of his talk.

A bad picture of Donald at the beginning of his talk. It looks like my camera is getting out of sync with the fabric of space-time. (One of these days I'm going to upgrade my analog camera to a digital one. But with the increase in property taxes out on Long Island and the small salary a "junior" scientist makes at BNL, it will be some time before I do so.)
He introduced the concept of a Beowulf system. Basically, it is a cluster of many off-the-shelf PC's, running Linux, tied together through a high speed, low latency networking infrastructure. The network topology of this system tends to be a flat one, which makes it easier on the application side. Fast Ethernet, tied through a fast Ethernet switch, is the current network hardware of choice for a Beowulf cluster. ATM is too expensive at this point, and I believe he mentioned that the latency tends to be greater than with fast Ethernet. (But don't hold me to that statement.) He did mention that the ATM "overhead" was way too large. After the talk was over, one of the questions from the audience revealed that Beowulf is basically a library of software which one uses to help implement a distributed application. This includes facilities such as providing a global PID, methods of remote execution of processes (much like rsh), etc. There was some mention of mpi/pvm (and mpiII), which are parallel processing abstractions sitting above the Beowulf distributed processing layer. One of the tasks on my list is to properly learn about this software, but unfortunately, Donald's talk was not a HOWTO on using Beowulf to parallelize your application. It was more like, "I've worked on Beowulf, and here are some interesting things about it...". So, the specifics of Beowulf still elude me.

Donald talked a bit about the open source nature of the project. In short, being an open source project was crucial in making it as reliable as it is. This also holds for the Linux kernel itself. While working on building Beowulf clusters, Donald ran across some problems with the Linux kernel which he had to fix. Things like only being able to mount 64 file systems got in his way. Having hundreds of PC's talking to each other on the network stressed the networking data structures in the kernel, which he also had to deal with. Because he had the source code to the kernel, he was able to make the Beowulf project work. He also took in contributions from outsiders: if the contributed software was relevant and worked well, he would include it.

The side of the IBM building, as I face Madison Ave. Thank you, IBM, for letting the NYLUG use your meeting rooms so that we can hear Donald speak. Although it would be nice if you guys got a Linux-friendly projector. It's OK if the projector is not Y2K certified. We'll take it anyway.
Donald spoke a bit about the history of his project. His first cluster was made up of 100MHz DX4Somethings (DX486?). (Due to the projector not being able to sync properly to Donald's PC, I could only read part of the slides. You have to give credit to the IBM folk, though. The projector was certified as being Y2K compliant. It had a rather official looking sticker on its side saying so...) In 1996, a 2.2 GF/sec cluster was built, followed by a 10 GF/sec system in 1997. This was a threshold crossing system: NASA considered 10 GF/sec to be the minimum computing power for a system to be called a "super computer". In 1998, a 40+ GF/sec system was put together (at Los Alamos National Laboratory, I believe). What made all this possible was the fact that price per performance was improving rather rapidly for PC based machines. The threshold was crossed between 1996/1997, making the Beowulf type system competitive with the big Cray type systems. The Beowulf project crossed another watershed when a Beowulf system won the Gordon Bell prize for $/performance. (I believe this was around 1997.) The NASA Goddard Space Center at the time had a "Super Computer" in its basement, called the T3D I believe. It was a 10 GF/sec machine. Donald was able, through open source software, a good network and cheap PC's, to in essence beat it.

Donald spent some time showing pictures of current Beowulf clusters in operation. Some were rack mounted systems, some were bunches of PC's on shelves. The PC's-on-shelves Beowulf system is called LOBOS, which stands for Lots Of Boxes On Shelves. One of the systems built in 19 inch racks was called "the hive" due to the noise the large cabinet fans made.

The art work standing at the entrance to the IBM building. Unfortunately, I can't tell the difference between this and a bunch of steel beams welded together.
Some applications which are currently using Beowulf systems are climate modeling, ray tracing and galaxy evolution modeling. He was particularly intrigued with the galaxy evolution modeling application. In order to model a galaxy, you need to have every star in the galaxy interact with every other star in the galaxy; gravity's force is felt at infinite distances. One would think that this kind of fine grained application would not work well on a network distributed system. But the guys at Los Alamos came up with a tree structured algorithm which mapped very well onto a network topology, thus making a Beowulf architecture work for this type of computing problem. NASA uses the Beowulfs for image processing of satellite and Hubble images. The Hubble images had to be refocused because of the "oversight" of one of the mirror polishers. One application of satellite image processing is to splice together all the satellite photos taken from various angles and positions of one area on earth, to form one large coherent image.

Some of the specifics about Beowulf clusters he mentioned were the following. Usually one of the nodes is set aside and dedicated to managing the rest of the nodes in the cluster; it's the job distributor. Some very simple techniques are used to keep track of which systems have not crashed: either a multicast coming from each machine is received by the controlling node, or the controller pings the rest of the nodes in the cluster. If one of the nodes goes down, the controller quits submitting jobs to it. There are some mechanisms within the Beowulf software for process migration from one node to another. He also talked about how he uses RPM extensively to maintain the software on the nodes. He referred to RPM as a "key technology" enabling the easy maintainability, software wise, of large clusters of PC's. A question came up asking about how he maintains his Beowulf code. He didn't answer the question very well; he didn't really want to, since he did not want to endorse any kind of source code management software like rcs or cvs. But he did stress that RPM was key in order to be able to distribute software to many PC's.
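He didn't give a recipe, but the idea is easy enough to sketch: with rsh access to the nodes and a package visible to every node (say, over nfs), pushing an update across a cluster is a one-liner. The node names and package path here are made up:

	for node in node1 node2 node3; do rsh $node rpm -Uvh /mirror/somepackage.rpm; done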

Who's that imposter! (I've gotta' upgrade that damn camera...)
He also talked about the stability of the machines he works with. Most of his systems had been up for over 100 days; I believe some of the Beowulf clusters had been up for over 200 days. What is important is not that a single machine has been up that long, but that large numbers of machines have been up and running for that amount of time. Because of the long running nature of a Beowulf cluster, one tends not to use the latest and greatest software release of anything. He was using a 2.0.3x version of Linux on his machines. He also pointed out a critical feature of having the source code available for the kernel and all the software which makes up a Beowulf system: if a bug is found, one can fix it by modifying a few lines of code. That one module or program gets recompiled and you're off and running again, with a minimum amount of administrative work. If one works with closed source systems, it is often the case that when a similar small bug is found and fixed, a whole cascade of software upgrades results. This is due to the fact that the bug fix will come in the form of a new software release. This release then upgrades your shared libraries. The shared library upgrades then force you to upgrade all your applications, and on and on. After which you are then forced into revalidating your whole cluster for production use, something which can take a long time. Donald mentioned that he validates his systems by running Linux kernel compilations for two days to "burn in" his systems.

Donald also spent some time talking about how one runs a Beowulf cluster and keeps it reliable. This is done by monitoring the hardware for possible future failures. The most common one is due to failing fans. There seems to be a host of applications which monitor system performance, from the temperature of the boxes, to network packet error checking. Keeping an eye on these problems helps keep a Beowulf cluster healthy.

Donald answering questions after his talk. A nice shot of his left back side.
One last thing worth mentioning: with all this talk of running Linux systems for 100's of days on end, a Windows "story" came up. It turns out that there is a bug in the timer software for Windows that will cause your PC to crash after about 49.7 days (which happens to be 2^32 milliseconds). This bug was only recently found, even though it has been around for a long time. Since a Windows system rarely stays up that long, it is only recently that anyone ran into it.

One person in the audience asked why Donald used Linux as the kernel for building up his Beowulf project instead of one of the BSD kernels. Donald had an interesting answer to that question. First off, the BSD kernels were not as stable as Linux back when he started working on his project. He then proceeded to complain that working with the BSD developers was very difficult. They tend to hide the development process, thus making it harder to contribute the needed upgrades. (Remember that Donald had to work with the internal data structures of the kernel in order to make his project scale.) He then said that these BSD developers had very large egos; "their egos would fill this room", he said, implying the difficulty of working with them. He then went on to say that he was quite able to work with Linus. Linus was a laid back guy.

Another shot of Donald's left backside, although I'm starting to work around towards his front. If I'm lucky I may get him looking into the camera.

There were many other interesting questions which were discussed during Donald's talk. You can read my jumbled notes if you care to try and decipher them for more information.

Well, that's as far forward as I could get, although I did get a nice shot of him and his book, which I proudly display at the top of this write up.
The session came to an end at about 8:20pm. During his session he plugged his new book about Beowulf clusters, titled How to Build a Beowulf. The book was written in collaboration with several of the Beowulf developers and is a compilation of a lot of the tutorials and documentation on the software. It's published by MIT Press and fits in with the other "definitive" references to mpi, mpiII and pvm, also published by MIT Press. He said that he makes about 30 cents per book sold, and was counting up the number of people in the audience to see if he could buy dinner with the proceeds if everyone bought one. One guy in the audience offered him 60 cents for the book he had in his hand, doubling his take-home profit. Donald declined the offer.

People got up and started to leave the room after the talk was over. I stuck around to take some pictures of Donald as he talked to some NYLUGers. I eventually got a chance to re-introduce myself to him. I gave him my card and invited him out to BNL if he were ever in the area again. (I'm really bad at this sort of thing.) I then asked him if he had trouble getting funding for his first Beowulf system. He told me that he got the Beowulf idea back when he was working for the NSA. He presented the idea to his superiors; he needed $50K to put a cluster of nodes together. For the NSA, $50K was just too little to bother with, and his request was declined. So he took his idea over to NASA. NASA thought it was worth funding, so he got a job there specifically to work on his Beowulf idea. The rest, as they say, is history.

My last shot of Donald as we start receiving our dinner orders. I was fortunate enough to take this shot just as the waiter held Donald's plate right over his mouth. It is truly amazing how often Murphy's law kicks in. The guy looking right into the camera is named Judd. He works for Netscape and announced at the NYLUG meeting an install fest he was organizing.

I left the room and spent some time with Jim Gleason in the hallway just outside. VA Research is about to deliver a 36 node system to BNL this coming week and we talked about that a bit. Suddenly, my bladder screamed at me and I broke off in mid sentence, "Where's the men's room!". To my fortune, it was about 10 feet behind me. I don't know how I was able to ignore my bodily functions from exit 40 of the LIE until now...

A picture of the other table where the 2nd half of the NYLUGers hung out while waiting for their food to show up.
A small fraction of the group then headed over to Kapland's deli for a real NYC deli meal. I ordered an extra lean pastrami sandwich. In typical NY deli fashion, I was delivered just that: a mountain of extra lean pastrami sandwiched between two thin slices of rye bread; nothing else. The pickles and cole slaw were delivered on dishes as we sat down. I had to apply the Russian dressing myself.

I sat across from one guy who seemed to do business with Wall Street. One tidbit which I found rather interesting was that he had a friend who put systems together for Wall Street trading firms. One would assume that these systems are bullet proof, 100% reliable. It turns out that they crash all the time. There is enough redundancy in these systems that the crashes can be afforded. After hearing Donald talk about large numbers of systems being up for 100's of days at a time, and then hearing that Wall Street trading systems crash continuously, was a real shock. Maybe Wall Street will begin to understand the power of Open Source. Until then, my retirement fund will not be as safe as it could be.

Another shot of Jim Gleason along with Matthew Hunt and Ari. Ari is the guy in the back who also works for VA Research. He's coming out to BNL to set up the 36 node machine I'm aching to submit my jobs to. The guy in the middle is Matthew Hunt, President of the Linux Users of NY group (LUNY).
At about 9:30pm, Jim Gleason started getting worried about getting Donald to JFK to catch his 11:30pm flight to NC. Donald was headed down to attend the LinuxExpo. It was getting late for me as well. I said good bye to the crowd of NYLUGers and headed out in search of the lot where I parked my car. The drive back to where I live on Long Island proceeded in standard form. After giving the MTA guy the $3.50 toll for the Midtown Tunnel, I started counting the exits along the LIE as I drove by them. 1, 2, ... 10, 11, ... 20, ..., 30...

Driving along on the LIE always leads my mind to wander in thought. This time, my mind wandered around open source land. I still cannot get a grip on the power of the Internet. What really made Donald's project possible was the fact that he had access to Linux. You could never build a Beowulf cluster out of Windows 3.1 machines. Think about it: that is what was running on those powerful 100MHz DX486 machines back when he started this project. I can imagine going to one of the NSA administrators and trying to convince him that you could take all those PC's the secretaries were using to write up memos in MS Word, gang them together, and turn them into a super computer, and do so for only $50K. Back in 1992, that was a radical idea! And look at what we have now: super computers popping up and the beginning of a new industry. Also, has anyone ever heard of an NT Beowulf cluster? I'm sure Microsoft would boast of one if there was one. (And take credit for the idea as well.) That would be a good way to test the stability of NT: run 100 NT machines in a cluster and see how long you could keep them all up and running. It would be nice to see Mindcraft perform such a feat. Having 100 Linux machines running for over a hundred days translates to 10,000 cpu-days of continuous running. Benchmark that, Mindcraft...

Exit number 67, exit number 68. Ahhh, exit 68, home at last.



Please e-mail me your comments, if you have any. I'm always interested in what you may have to say related to this write up or anything else on your mind.

Click here if you want to read other articles I've published on the Internet, or click here to view my home page.


Copyright © 1999, Stephen Adler
Published in Issue 42 of Linux Gazette, June 1999

"Linux Gazette...making Linux just a little more fun!"



muse:
  1. v; to become absorbed in thought 
  2. n; [ fr. Any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration
© 1999 by mjh


Button Bar
Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. 

[Graphics Mews][WebWonderings][Musings][Resources]

This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.


I've been working on some extensions and updates to my XNotesPlus program this month, which I've had to work on in between the Muse, TheGimp.com, and a number of articles and cover art work for the Linux Journal.  Surprising how busy I'm keeping considering that, officially, I'm unemployed.

Part of the work I was doing this month for both the Muse and TheGimp.com wasn't for this month's issues.  I was preparing for some future articles, ones which require more than the usual week and a half I spend on them for normal issues.  That led to an even more compressed time frame for this month's Muse.  Running on a bit of brain strain, I opted for a little brain stimulus, or rather eye stimulus.  That, and a little followup to last month's issue, which brought in quite a bit of email.  That, too, was a bit surprising.  But very welcome.

In this month's column you'll find:

  • A little eye candy, please - XScreensaver
  • A followup to last month's Vector Drawing Tools for Linux
And, of course, the requisite set of product announcements, interesting bits of news and Web sites, and reader email.  Keep those letters coming!
The Artists' Guide to the Gimp
Available online from FatBrain, SoftPro Books and Borders Books.

In Denver, try the Tattered Cover Book Store.

Also, check out the associated web site, TheGimp.com, sponsored by SSC, Inc. and edited by The Graphics Muse - Michael J. Hammel.



Other Announcements:
Superficie 0.5
Graphic Counter Language 2.20.3.D
MathMap
Panorama Tools v1.7.2
GIMP 1.1.5
Gimp ImageMap Plug-In Release 0.9
DiaCanvas 0.10
tgif 4.1.9
Swift Generator 0.7.1
Giram 0.0.17
Install-Webserver 0.1
povfront 0.9-2
Wacom Driver for XFree86 alpha 3
xfsft 1.1.5
XawTV 2.44
RenderDotC 3.1
Terraform 0.3.1
AleVT 1.4.5
< More Mews >
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.

RealPlayer G2 alpha
  J-Dog - May 19th 1999, 10:16 EST 

RealPlayer for Unix allows you to play streaming audio and video over the Internet in real-time. 
http://www.real.com/
http://www.real.com/products/player/linux.html
Editor's Note:  Whoohoo!  Works great!



The Linux Image Montage Project pre-1120
  Jordan Husney - May 17th 1999, 07:15 EST 

The Linux Image Montage Project ("LIMP"), is an attempt to distill the Linux community's spirit down into one cool looking poster using user-contributed images and the GIMP. 

Changes: Now 70% complete, only 480 images to go until project completion.
http://linux.remotepoint.com/



IBM ANNOUNCES OPEN-SOURCE AVAILABILITY OF 3D VISUALIZATION SOFTWARE

IBM Visualization Data Explorer Source Code Made Available to Developer Community.
http://www.research.ibm.com/dci/software.html
http://www.research.ibm.com/dci/dx_release.html



ImageMagick 4.2.6
  Necronom IV - May 19th 1999, 16:03 EST

ImageMagick (TM) is a package for display and interactive manipulation of images for the X Window System. It is written in C and interfaces to the X library, and therefore does not require any proprietary toolkit in order to compile. Although the software is copyrighted, it is available for free and can be redistributed without fee. ImageMagick is known to compile and run on virtually any Unix system and Linux. It also runs under Windows NT, Windows 95, Macintosh, and VMS.

Changes: Many new features and bugfixes, see the Changelog for more information.
http://www.wizards.dupont.com/cristy/ImageMagick.html



IPAD 0.9.00
  sergio - May 19th 1999, 10:04 EST

IPAD is an intelligent vector drawing package built using the multiplatform IPAD-Pro core, and so provides a very powerful, consistent interface across all supported platforms without the need to have X11 or MS Windows available. It allows easy editing across files using multiple overlapping windows. The graphics objects drawn and edited by IPAD have built-in intelligence: they react to the mouse and each other so as to maximise user productivity and reduce tedious, repetitive setup sequences.

Changes: Faster character drawing under X11, new grid support, new guide line support, new window manager dialog and much more info in file selector.
http://www.demon.co.uk/titan/



Image::Grab 0.9.3
  Mark Hershberger - May 19th 1999, 09:59 EST

The Image::Grab Perl module allows you to easily grab an image with an oft-changing URL from the internet. This makes it possible to write simple scripts to download weather maps or comic strips on a daily or hourly basis without user intervention. It is also useful for bypassing advertising banners.

Changes: Updated the realm code so that HTTP authentication actually works, changed the interface, but if you tried to use the realm call before, you should have gotten an error.
http://everybody.org/mah/hacks/Image-Grab-0.9.3.html



Panorama 0.11.2
  Angel Jimenez - May 18th 1999, 13:31 EST

Panorama is a framework for 3D graphics production. It will include modelling, rendering, animating, post-processing, etc. There's currently no support for animation, but this feature will be added soon.

Changes: Added a new text image filter, a new configuration file, and documentation. Lots of bug fixes.
http://www.gnu.org/software/panorama/panorama.html



Sketch 0.6.0
  Bernhard Herzog - May 18th 1999, 12:58 EST

Sketch is a drawing program similar to CorelDraw or Adobe Illustrator. It is written almost completely in python with some modules written in C, thus combining the flexibility and power of Python with the speed of C. Advanced features include gradient fills, clip masks, text along a path, blend groups, convert text to curves, and more.

Changes: First stable release. Includes some national language support, improved Illustrator filters and bug fixes.
http://www.online.de/home/sketch/



FFTW 2.1.2
  Steven G. Johnson - May 18th 1999, 12:57 EST

FFTW is a fast C FFT library. It includes complex, real, and parallel transforms, and can handle arbitrary array sizes efficiently. FFTW is typically faster than other publicly available FFT implementations, and is even competitive with vendor-tuned libraries (benchmarks are available at the homepage). To achieve this performance, FFTW uses novel code-generation and runtime self-optimization techniques (along with many other tricks).

Changes: Fixed a problem with our parallel MPI transform test programs under MPICH.
http://theory.lcs.mit.edu/~fftw/



LAGII 0.1.4
  XoXus - May 18th 1999, 12:02 EST

LAGII lets you run AGI games natively under Linux. AGI games include the Sierra classics such as Kings Quest, Space Quest, etc. Most games don't work fully, but they work quite well most of the time.

Changes: Functional X11 driver is nearly done.
http://www.zip.com.au/~gsymonds/LAGII/



Glide 2.60
  Lee Reynolds - May 18th 1999, 11:44 EST

This is the first alpha release of Glide 2 for the Banshee and Voodoo 3 cards.  According to the author, the Quake 3 test will run under it; your mileage may vary. As of now it will only do 3D full screen; windowed support will be included later. The author will be at Linux Expo till the 23rd and has said he will ignore all email during this period. Those in need of support are advised to use the 3dfx newsgroups.

Changes: This is the first version of Glide for Linux with support for the Voodoo Banshee and Voodoo 3 chipsets. This is still Glide 2.x; Glide 3 has not been ported to Linux as of yet.
http://glide.xxedgexx.com/3DfxRPMS_vb_glibc.html



LibGGI 2.0 Beta 2.1
  Andreas Beck - May 17th 1999, 07:19 EST

LibGGI is an attempt to unify all those graphical output systems that exist on Unix. It is a very fast, simple (ever tried to make a small graphics app directly in Xlib?) and lightweight interface layer that allows you to run the very same binary on many different graphics subsystems like X, SVGAlib, Glide, etc.  LibGGI will detect (or you can select, of course) the environment you are running in, and redirect its output as required.

Changes: Better autoconf checking for some targets and other bugfixes, fbdev target uses acceleration on kernel-native matroxfb, better mode-switching for non-kgicon fbdev drivers, svgalib target enhanced, X target fixed for remote displays with different endianness as well as small enhancements for LibGII (especially for the Linux-Console input).
http://www.ggi-project.org/


Did You Know?

...there is a story in EETimesOnline about ILM's camera work, including a mention of Linux.
http://www.eet.com/story/career/timespeople/OEG19990517S0023

...and another on how working on special effects expose engineers to the hot technologies of the day.
http://www.eet.com/story/career/timespeople/OEG19990513S0020

...there is a project online that aims to produce a short (10 minute) 3D animated movie using POV-Ray.  The Internet Movie Project can be found at http://www.imp.org/

...RealPlayer is an SMIL (Synchronized Multimedia Integration Language) enabled player?  SMIL is based on XML and was designed by the good people at W3C.  Since SMIL is another text-based markup language, you can, as always, use any text editor to author SMIL files.  If you're interested in other SMIL players and some links to some GUI based authoring tools, take a look at http://www.justsmil.com.  So far, no authoring tools for Linux are listed, although I've sent email to the people working on GRiNS to ask if they plan on a Linux port (they are working on both IRIX and Solaris ports, so I'd think it wouldn't be a difficult port to Linux).

Q and A

Q:  Marjorie Richardson asks:  Could you tell me if there is a Linux graphics program that will convert RGB to CMYK easily? (easily as in push a button)

A: ImageMagick has the command line tool convert that will convert between many different file formats.  One output format is raw CMYK.  Keep in mind that raw CMYK may not be quite right for print output - it doesn't take into consideration the display characteristics of either the monitor on which you viewed the original or the device on which you'll be printing.  But it will probably be fairly close for many images.
See: http://www.wizards.dupont.com/cristy/www/convert.html
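To make it concrete, something along these lines should do it (I'm assuming the cmyk: output prefix here; check the convert documentation for your version):

	convert photo.tiff cmyk:photo.cmyk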



Reader Mail

Hadess <hadess@writeme.com> writes:
First I'd like you to know that the only thing worth reading in the Linux Gazette (at least for me) is the Graphics Muse ^_^
'Muse:  I'm flattered, but I often read most of the rest of the Gazette myself.  Larry Ayers and the other regular and guest writers tend to know a lot more about systems administration than I do, and I often find bits of very useful info there.  I'm just the graphics guy.
Well, I use the GIMP everyday and I frequently use a file-manager to launch it associated with an image. I'd like GIMP to start only once. I'd like to launch it once and then, afterwards, launch another image and make it load by the first GIMP instance I launched.  Any ideas ?
'Muse:  If you want to work on the new image interactively, then I can only think of one way to do it - Perl::Fu.  But getting the Perl extension to work with the Gimp can take a bit of work.  I'm going to do an article on it for the June issue of TheGimp.com (this month's will be on Gimp 1.2 status).  You can also take a look at the new O'Reilly text "Programming Web Graphics with Perl and GNU Software".  There is a chapter on the Gimp which goes into a fair amount of detail on the Perl scripting interface for the Gimp.

The Perl extension has a server portion, so you could write a Perl script to connect to the server and pass it the name of a plug-in and file that could be used to pop open a new image window for the currently running Gimp.  It will even, I believe, launch the Gimp if it's not already running.

I'm no Perl::Fu expert, however.  This is just my impression of how things work.

Antti Huotari <ahuotari@cc.hut.fi> writes:

You said in your latest Graphics Muse column that Macromedia Flash 3 runs only on non-Linux systems. Well, in fact it does run almost flawlessly on Linux with the latest WINE release  (OK, it still has a few bugs, but you can create Flash movies, etc. with it).

The following setup is used:

'Muse:  Ok, I stand corrected.  I just don't run WINE - I want these tools to run natively on Unix.  I've been waiting 10 years (since working on the Dell Unix product) for applications to run natively on desktop Unix.  So I guess that's why I didn't consider WINE.

I never have figured out how to do this, though.  I mean, where do I get Windows95/98?  You go into computer stores and all they stock is "Upgrades".  No one sells a complete installation apparently.  Not that I've looked too hard.

Just to let you know,
'Muse:  I appreciate the feedback.  I'm sure there are quite a few readers who would love to know that this works.
PS. Thanks for the nice book about Gimp.
'Muse:  You're quite welcome.  I'm glad you find it useful!

Antti Huotari <ahuotari@cc.hut.fi> followed up with:

Just to clarify things, Wine+Flash3 isn't ready for production yet. For example, the text tool doesn't work and there are lots of little problems. But like I said, you can play, create movies with it, etc. Sound effects seem to work fine too. And it gets better with every new release of Wine.
'Muse:  I'll post this too.  It's good info.  I'm sure it will help keep expectations in line for anyone who tries it.
From www.winehq.com:
Wine is an implementation of the Windows 3.x and Win32 APIs on top of X and Unix. (So it isn't an emulator and you don't need Microsoft Windows.) Wine consists of a program loader, which loads and executes a Windows binary, and a library that implements Windows API calls using their UNIX or X11 equivalents.
'Muse:  Hmmm.  I didn't realize that.  I guess I could try it eventually.  I just hate using any MS-based tools.  But I should open my mind to different options.  It is, after all, part of my own preaching to people:  Choice.  That's the key.
But, I do agree with you that it would be much better to have applications that are especially made for *NIX.
'Muse:  It's happening.  Slowly, but I think it's an inevitable process at this point.  I'm looking forward to the ports of graphics applications that I'm sure we'll be hearing about over the next 6-12 months.


blah

I wandered around the Net a bit this month, but couldn't find anything interesting to write about relating to Web development.  Oh, sure, there are lots of topics.  I just couldn't find one that piqued my curiosity - and that I could fit into my writing schedule.  I thought about looking at Zope, but it had some system requirements I couldn't meet.  I spent a bit of time trying to install some other packages, some for the Vector Tools followup article and some for future articles, and didn't really want to deal with that again for yet another package.  But Zope does look interesting.  I'll try to take a look at it sometime in the near future.

Other than that, I only came up with one Web-related item:  after a couple of years of wondering how to make this work, I finally figured out how to force text to flow around a table.  I like to use tables to place an image with a caption below it - image in the top cell, caption in the cell below - but I've always been stuck with shoving text into a cell in the table.  I couldn't just have text flow around the table like I do with non-tabled images.

Well, it seems that my use of Netscape Composer had led me to ignore checking the completeness of the HTML it created.  I was perusing some other sites on the net - I think it was the BBC news site - and saw one with a configuration just like I wanted to make:  image in a table, caption below it, text flowing around it to the right.  I looked at the HTML and didn't see anything special.  So I copied everything from that site's table to one of my own, then started removing items one at a time to find the key element.  The key, it seems, is the ALIGN= argument.

It is possible to make text flow around a table, but not directly with Composer.  Composer, although it provides a toggle for the Align argument, doesn't actually place this argument in the TABLE tag if you use the default option of "Left".  Arrgghh!  You have to edit the TABLE tag and add the ALIGN=LEFT by hand.  So, configure your text editor in Netscape.  I use vi (if you use Emacs, there is a good chance you're not using Composer anyway).  You'll want to create the table first, just as always.  Make the table just a bit wider than the image and leave the image left-aligned in its cell (which leaves space between the image and the text that will flow around it).  Then add your text.  Later, go back and edit the HTML by hand (Edit->HTML Source) and add the ALIGN=LEFT argument to your table.  Voila!  Text flowing around the right side of your tables.  This modification will stay as long as you don't edit that table again.  If you do make changes to the table, you'll have to go back and add the ALIGN=LEFT argument by hand again.
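In other words, the skeleton looks something like this (the image and caption are placeholders):

<TABLE ALIGN=LEFT>
<TR><TD><IMG SRC="figure.gif"></TD></TR>
<TR><TD>The caption goes here</TD></TR>
</TABLE>

Any text following the closing TABLE tag then flows around the right side of the table.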

I guess it helps to check these automated contraptions every now and then - just to keep them honest.

Oh, and as to why this was titled "blah":  that's the text placeholder in the template file I use for the main page of the Graphics Muse.  It seemed appropriate.  The alternative was "Yada Yada Yada", but I wasn't sure how many Seinfeld fans there were reading the Muse.


Also:  A Followup to last month's Vector Drawing Tools on Linux

A Little Eye Candy, Please - XScreensaver

In most issues of the Graphics Muse, I talk about useful tools, things with which you can perform some real world task.  That's fine, most of the time.  But one needs to have a little fun every now and then.  Sometimes, you just need to do something that, well, has no real purpose.  Except to make you smile.

In the world of graphics tools for Linux there are probably several programs (we can't really call them tools, per se) that could be considered just fun.  No, I don't mean games.  That's entertainment.  Despite what you read in the press or hear from politicians, games are not completely mindless jaunts for the juvenile crowd.  No, we need something that has nothing to offer but stunning visual preoccupation.

Thank goodness for screensavers.

Watching the ever-streaming line of announcements over at freshmeat, I noticed an announcement for something called cmatrix.  It was said to be a screensaver in the style of The Matrix, that wonderfully confusing cinema fantasy starring Keanu Reeves.  Unfortunately, it's not an X-based program - it runs under curses, a text-based terminal interface.  Then I saw that Jamie Zawinski, late of Netscape, had added a Matrix hack to his xscreensaver program.  So I scrounged around a bit, found xscreensaver, downloaded it and put it to work.

Cool.
 
 
The Matrix (xmatrix) Screensaver
Xscreensaver is a program that manages other programs that draw on the root window.  The root window is a special window - it's the window inside which all other windows get drawn.  It doesn't look like a window like you might expect because it doesn't have a window frame around it.  But any program, properly written, can draw on the root window.  Jamie's design makes xscreensaver a daemon - a program that runs all the time in the background - that waits for a period of inactivity from the user.  When that period expires, xscreensaver runs another program.  This other program is the one that actually draws on the root window.  These other programs are called, in xscreensaver vernacular, hacks.

I don't usually run screensavers; a screen blanker or energy-saving monitor suffices most of the time.  But the sheer number of hacks available for xscreensaver was somewhat astounding - I counted over 80 of them in the hacks directory.  Jamie has screenshots for most of these on his xscreensaver Web site (http://www.jwz.org/xscreensaver/).  Be warned:  it can take quite a while to download the page with the screenshots if you're on a slow link.  You don't really have to download the latest version from this web site, since most Linux distributions already contain a version of xscreensaver.  But you might want to check to see if any new hacks have been added or if a new version is available.  On my Red Hat 5.2 I had a considerably older version than the latest (3.12) posted on Jamie's site.

Since xscreensaver runs as a daemon and a client (the hack), you need to look at these separately.  The daemon, xscreensaver, has no user interface.  Its configuration is controlled by a standalone client program called xscreensaver-demo.


xscreensaver-demo

If you don't have a $HOME/.xscreensaver file, then xscreensaver uses its default configuration.  In this example case, that means the demo window has all 80+ hacks available.  This is fine for the demo, but when you put together a menu option for your favorite Window Manager to launch the screensavers, you may want to limit this list using your own configuration file.  We'll talk about the configuration file a little later in this article.

The demo dialog allows you to double-click on an entry in the list of hacks to start it.  To stop the hack, just move the mouse or click a mouse button once.  This is handy for looking through the list of hacks to see what they all do.  You can also type a hack command (the command uses ordinary command-line, aka shell, syntax) in the text field just below the list.  The buttons perform obvious functions - run the next hack in the list, run the previous hack in the list, quit the demo program and open the preferences dialog.  Quitting will exit the demo window but does not kill off the xscreensaver daemon.  You do that manually with the xscreensaver-command program's -exit option.
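From a shell, that's simply:

% xscreensaver-command -exit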
 
 
Preferences Dialog Window
The demo window also has a preferences dialog.  Here you can configure the daemon's behaviour with respect to when it notices inactivity and whether or not it requires a password to log back in.  You'll notice there isn't any field for specifying a password.  Xscreensaver will use your password from the /etc/passwd (or /etc/shadow, if you have that) file.  Saver Timeout is the inactivity period to wait on, while Cycle Timeout is the period of time the current hack gets run.  Xscreensaver will cycle through the list of hacks in your .xscreensaver file's programs entry if more than one entry exists and if the cycle timeout is set to something other than 0 (0 means never cycle).

The Fade Duration is used only with writable colormaps.  This causes the screen to fade to black when the screensaver starts or between hacks when cycling is enabled.  Fade ticks controls how fast this fade should occur.  Higher numbers make for smoother fades, but take longer to complete.  Fading to black may not work with your hardware and X server configuration, so changes to this field may have no effect.

The Lock Timeout is the grace period after the screensaver has kicked in where no password is required, even if Require Password has been set.  If this value were 5 minutes, for example, and the screensaver had only been running for 3 minutes before you typed something or moved the mouse, then you wouldn't have to type a password.  Useful for those short periods you run to the bathroom and hate having to retype your password just to get going again.

Password Timeout is the time for which the password dialog will remain on the screen waiting for a valid response before it gives up and returns to the screensaver.   In the example here, the dialog would be present for 30 seconds - ample time for a decent typist with the right password.

While the demo window and its preferences dialog allow you to configure how the daemon will run, they aren't really how you want to run xscreensaver in the background, say from a Window Manager menu.  As mentioned earlier, another program is used to kill off the xscreensaver daemon - xscreensaver-command.  This is a command-line program designed to issue commands to the daemon without using a windowing interface (although it can launch the Demo window too).  A hack in the programs list of your configuration file can be called directly using the xscreensaver-command program.  This is how I set up a menu under FVWM2 to specifically run the Matrix screensaver.

The .xscreensaver configuration file

All of the options you can set interactively with the Preferences dialog can also be set using the xscreensaver-command program.  Alternatively, you can specify often used options in your .xscreensaver configuration file.  We've already mentioned the programs entry, which is a list of programs to run in shell-syntax format.  You list one program per line and use shell continuation marks, but cannot use semicolons.  So, an entry like this

programs:        \
        xmatrix;  qix -root;  xv -root -rmode 5 image.gif -quit
is invalid.  Instead, you would enter it like this:
programs:          \
        xmatrix    \n\
        qix -root  \n\
        xv -root -rmode 5 image.gif -quit    \n\
Each of these programs is found via your PATH environment variable, so if they aren't available from an ordinary command line, you may want to fully qualify the path name.

When you use xscreensaver-command to launch one of these programs specifically, you reference the program by the order in which it appears in the list.  So xmatrix is program number 1, qix is number 2 and so forth.  You can then invoke this screensaver directly using a command line like this:

% xscreensaver-command -select 1
The -select option tells xscreensaver to blank the screen immediately and run the hack specified.  There are variations on how you can do this using the xscreensaver-command program.  Check the man page that comes with the source for complete details.

All of the options in the .xscreensaver configuration file use a name:token format.  That is, you specify the name of the option, followed by a colon and the setting for that option.  Since the configuration file uses the same option names as the X resources, you can also place these settings in your .Xdefaults file if you like.  I prefer using program-specific files like .xscreensaver because if I screw up that one file, I don't take the chance of breaking some other program like I might if I used my .Xdefaults file instead.
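As a sketch, a bare-bones .xscreensaver might look like the following.  The timeout values here are arbitrary, and you should double-check the option names against the man page for your version:

timeout:        0:10:00
cycle:          0:05:00
lock:           True
lockTimeout:    0:05:00
passwdTimeout:  0:00:30
programs:       \
        xmatrix    \n\
        qix -root  \n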

[ More Eye Candy ]



Footnotes:
  1. If you haven't seen this movie yet, go now!  The story line far outpaces the special effects, which themselves are some of the best I've ever seen.  This movie is the 2001 of the current generation.
  2. In the documentation, hacks are also often called demos.  Don't let the terminology confuse you.

The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems. If you have some application-specific information for me, I'll add it to my other pages, or you can contact the maintainer of some other web site. I'll consider adding other general references here, but application- or site-specific information needs to go into one of the following general references and not be listed here.
 
Online Magazines and News sources 
C|Net Tech News
Linux Weekly News
Slashdot.org
TheGimp.com

General Web Sites 
Linux Graphics
Linux Sound/Midi Page
Linux Artist.org

Some of the Mailing Lists and Newsgroups I keep an eye on and where I get much of the information in this column 
The Gimp User and Gimp Developer Mailing Lists
The IRTC-L discussion list
comp.graphics.rendering.raytracing
comp.graphics.rendering.renderman
comp.graphics.api.opengl
comp.os.linux.announce

Future Directions

Next month:  I'd like to look at TV and motion capture cards, but I don't have any so the best I can do is write about the drivers for them.  Barring that, I'm not sure what I'll look at.  Maybe a little about dealing with drawing using Gtk and Motif widgets, something I've been working on lately.  We'll see.

Let me know what you'd like to hear about!




Copyright © 1999, Michael J. Hammel
Published in Issue 42 of Linux Gazette, June 1999

Superficie 0.5
Graphic Counter Language 2.20.3.D
MathMap
Panorama Tools v1.7.2
GIMP 1.1.5
Gimp ImageMap Plug-In Release 0.9
DiaCanvas 0.10
tgif 4.1.9
Swift Generator 0.7.1
Giram 0.0.17 
Install-Webserver 0.1 
povfront 0.9-2 
Wacom Driver for XFree86 alpha 3 
xfsft 1.1.5 
XawTV 2.44 
RenderDotC 3.1 
Terraform 0.3.1 
AleVT 1.4.5 
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.

Superficie 0.5
  Juan Pablo - May 17th 1999, 07:19 EST 

Superficie (surface) is a little program for viewing and doing basic manipulation of 3D surfaces. It reads a file with the data, and displays the object in a window, so you can interact with it. 

Changes: Improved autoconf scripts, bug fixes, mathematica package, etc.
http://www.linuxsupportline.com/~superficie/
 



Graphic Counter Language 2.20.3.D
  G. Adam Stanislav - May 17th 1999, 07:37 EST

GCL is a new CGI programming language that allows webmasters to create fully customized web counters in as few as 15-20 lines of code. The webmaster provides images for the digits in gif, xbm, or gd format, plus optional comma, head, tail, and background images. The webmaster can choose how the various elements that will comprise the final graphic counter are aligned. As of version 2.10, the webmaster may compile images directly to the binary executable.

Changes: 64-bit counter size allowing the count to go up to trillions and a faster lexical analyzer.
http://www.whizkidtech.net/gcl/



MathMap

MathMap is a GIMP plug-in which allows distortion of images specified by mathematical formulae. For each pixel in the generated image, an expression is evaluated which should return a pixel value. The expression can either refer to a pixel in the source image or can generate pixels completely independent of the source. MathMap not only allows the generation of still images but also of animations.

The MathMap homepage can be found at

http://www.unix.cslab.tuwien.ac.at/~schani/mathmap/
It includes a user's manual as well as screenshots and examples.
Mark Probst
Student, Programmer


Panorama Tools v1.7.2

Panorama Tools is a free program which can be used to generate, edit and transform many kinds of panoramic images. Its five main functionalities are:

H. Dersch
-------------------------------------
Spherical Panoramas, Macro Panoramas,
Free Panorama Software:
<http://www.fh-furtwangen.de/~dersch>

GIMP 1.1.5

ftp://ftp.gimp.org/pub/gimp/unstable/v1.1.5/
GTK+ 1.2.x is required. Grab it at: ftp://ftp.gimp.org/pub/gtk/v1.2/

What's new? Lots.

-Yosh


Gimp ImageMap Plug-In Release 0.9

Release 0.9 of my plug-in for the creation of clickable imagemaps is now available on my homepage: http://home-2.consunet.nl/~cb007736 .  This release contains new functionality (List of standard prefixes in settings dialog, better handling of invalid input, 'view source' dialog for easy copy/paste to other programs, etc.) and some cosmetic bugfixes.

This is the last release before 1.0. Version 1.0 will basically be the same as 0.9 but with all (?) bugs removed. Any volunteers for seriously beta testing this release?

Maurits Rijk
lpeek.mrijk@consunet.nl



DiaCanvas 0.10

It's a generalization of the very nice canvas used by the drawing tool DIA. It aims to offer the same features, only in a more generic way.  It's much more GTK-oriented than the original DIA.

You can find it at:
http://web.inter.nl.net/hcc/klem

I know it's not perfect at all, but it will give a nice impression of what it's supposed to be.
Arjan Molenaar
arjan@inter.nl.net



tgif 4.1.9
  Bill Cheng - May 20th 1999, 10:50 EST

tgif is a vector-based draw tool, with the additional benefit of being sort of a web-browser. That is, you can fetch drawings from a web server with it, and you can make objects in your picture into hotlinks to other parts of the drawing, or to other drawings accessible via http.

Changes: Fixed a few bugs and added a new X default, Tgif.PSFontAliases to allow fake font names in Tgif.AdditionalFonts. Using this X default, different encodings of the same PS font can be used. Also added 3 new internal commands: set_allow_interrupt(), size_named_obj_absolute(), and get_named_obj_bbox().
http://bourbon.cs.umd.edu:8001/tgif/



Swift Generator 0.7.1
  Olivier Debon - May 23rd 1999, 17:45 EST

Swift-Generator is a utility a la Macromedia Generator. It aims at dynamically replacing text, fonts, sounds and movie clips in either Template Generator files or standard Flash files. This allows Webmasters to create dynamic content such as stock tickers, news tickers, weather forecasts and the like.

Changes: Serious bug fixed.
http://www.swift-tools.com/



Giram 0.0.17
  David Odin - May 26th 1999, 22:38 EST

Giram Is Really A Modeller. It is a multi-purpose 3D modeller written with the GTK+ user interface toolkit v1.2. It can load and save POV-Ray and AutoCAD DXF source files. Some basic modelling tools are already there, and it is growing very fast.

Changes: Better and easier to use Plugins interface, better support for DXF, Giram can now export in S3D format, rotation can now be restrained to fixed angles, as well as various bugfixes.
http://www.minet.net/giram/



Install-Webserver 0.1
  Donncha O Caoimh - May 26th 1999, 22:34 EST

Install-Webserver will install Apache, PHP and MySQL for you. All you have to do is run one script.

Changes: First release.
http://members.xoom.com/xeer/

Editor's Note:  It's not graphics, but it is web-based, and for people like me it could be very useful for testing things on my local box before uploading to my remote web server.



povfront 0.9-2
  Philippe P.E. DAVID - May 26th 1999, 22:32 EST

PovFront is a front-end for the POV-Ray ray-tracing engine. It manages all the available options, as well as the script-only ones. It can manage multiple renderings and keep track of them. It will provide network rendering in the future.

Changes: This is intended to be the last version before 1.0. The next major version will introduce network rendering.
http://perso.club-internet.fr/clovis1/



Wacom Driver for XFree86 alpha 3
  Fred - May 26th 1999, 09:34 EST

This is an XFree86 XInput driver for Wacom tablets. It handles the Wacom IV and V protocols.

Changes: Corrected lens cursor support for Intuos models.
http://www.lepied.com/xfree86/



xfsft 1.1.5
  Ross_Campbell - May 25th 1999, 17:05 EST

The xfsft patches to X11R6 enable X11 servers (including XFree86) and the font server xfs to use TrueType fonts, and improve the way X11 handles international scalable fonts.

Changes: This version fixes a bug in the previous version and includes a version of mkfontdir that causes it to automatically build `encodings.dir' files.
http://www.dcs.ed.ac.uk/home/jec/programs/xfsft/



XawTV 2.44
  funnyguy - May 25th 1999, 12:59 EST

XawTV is a simple Xaw-based TV program which uses the bttv driver or video4linux. It contains various command-line utilities for grabbing images and avi movies, for tuning in TV stations, etc. A grabber driver for vic and a radio application (needs KDE) for the boards with radio support are included as well.

Changes: fbtv: added -q switch, started lirc support, changed float to double for X resources, fixed the radio programs, webcam bugfix.
http://www.in-berlin.de/User/kraxel/xawtv.html


RenderDotC 3.1
  Emil Mikulic - May 25th 1999, 12:01 EST

RenderDotC (RDC) is a photorealistic rendering toolkit which adheres to the RenderMan(R) standard. Using the Reyes architecture, RDC supports advanced capabilities such as motion blur, depth of field, trim curves, texture/environment/displacement mapping, and programmable shading in the RenderMan Shading Language. The shader compiler included in the toolkit compiles shaders all the way to machine language for the highest possible performance.

Changes: Now available for Linux
http://www.dotcsw.com



Terraform 0.3.1
  RNG - May 25th 1999, 11:43 EST

Terraform allows you to create fractal terrain (also called a height field) and transform it using a number of algorithms. It is meant to be a tool for those who want to generate digital terrain models for use in raytracing or other simulations. Terraform features different views and colormaps and has a preview mode which features interactive real-time rotation of the terrain object. Terraform is written using Gtk-- (the C++ wrapper for Gtk+).

Changes: Better dialogs, faster 2D redraw, lots of bug fixes, and some internal code changes
http://www.peoplesoft.com/peoplepages/g/robert_gasch/terraform/



AleVT 1.4.5
  froese - May 25th 1999, 11:03 EST

AleVT is an X11 teletext/videotext decoder and browser for the bttv driver. It features multiple windows, a page cache, regexp searching, built-in manual, and more. Also included is a program to get the time from teletext.
http://user.exit.de/froese/
© 1999 Michael J. Hammel
A Follow Up to Vector Drawing Tools on Linux
A Little Eye Candy, Please (continued)
more musings...



A Follow Up to Vector Drawing Tools on Linux

I received a lot of email in response to my article on Vector Drawing tools for Linux.  That's good - it's really the only way I know anyone actually reads this stuff.  Fortunately, it was all positive feedback: some praise and a lot of helpful hints relating to the tools I discussed.  Here are some of the emails I received, and my responses to them.



One package I missed:  ImPress

I'll admit that ImPress has some bugs.  Please give it a try sometime.
http://www.ntlug.org/~ccox/impress/index.html

Regards,
Chris
ccox@acm.org

'Muse:  Whoa.  I didn't even know that ImPress was a vector tool.  So much for my research capabilities.  I downloaded it and took a quick look.  I have to say, this may be the simplest tool of the bunch.  For someone who wants to create simple diagrams and then print them out, this may just be the tool.  It has a simple interface that includes all the basic shapes you might want, plus support for embedded text and Postscript output.  It's not nearly as full-featured as TGIF or XFig, but it is simple to use and requires no special configuration.  It doesn't even require compiling - it's a Tcl/Tk script!  Very impressive.  Here are a few screen shots.
 

Impress main window
Toolbox

The downloadable package doesn't include much in the way of documentation, unfortunately.  The Web site only contains HTML-ized versions of the documentation that comes with the package.  But this small amount of documentation should still be enough to get you moving pretty quickly in this package.  I did note that at least one part of the documentation was wrong:  double-clicking on the color palette entry in the main dialog doesn't bring up the color editing dialog.  You need to double-click on the Fill button instead.  Minor detail.  The program still works pretty well.

My only question:  how did Chris get those pictures of Tux and the dinosaur into his example?  The dinosaur looks like it might be clip art (vector graphics), but the Tux image looks like an imported raster graphic.  There doesn't appear to be a raster import feature in the version I have.  Maybe it's something under development.



KIllustrator
  1. Killustrator requires egcs 1.x or gcc 2.8, since gcc 2.7.2 is so very broken for C++ code (i.e., no ANSI compliance).  Not a problem on Red Hat 5.x, since the default C++ compiler on Red Hat is egcs; gcc is used only for C code, notably the old stable 2.0.x kernel series.
  2. With KDE installed already on my system, it was a standard source code install, with ./configure ; make ; make install working flawlessly (for 0.6.3; there are later versions).
  3. It imports Xfig drawings, apparently, and exports GIF, eps, ppm, and xpm (it may support more, but I don't have, for instance, the tiff or png development libraries installed, so configure may simply have selected those that were installed).  It also saves to its own (XML?) based format.
  4. For running KDE applications you need the Qt libraries and the KDE libraries, nothing more. To compile them Qt-devel, and kdesupport packages are needed.
  5. For a full KDE install, install packages in the order Qt, kdesupport, kdelibs, then all others in any order {kdebase, kdegames, kdegraphics, kdenetwork, korganizer, klyx, kdeutils, kdetoys, kdemultimedia}.
  6. Killustrator was designed from the start as a KDE application, so it's unlikely to be uncoupled (unlike Gimp, which precedes GNOME).  It only needs the KDE libraries and Qt - it will happily run on any X11 system with them installed.  It's also the vector graphics package for the KDE office suite, and I think 0.6.x is the last release which will have a compile-time option of running without koffice support.  (The website might need to be checked for that one.)
  7. As it's a koffice application, it can be embedded in other applications using the KOM/OP corba orb, which is usable independent of Qt and KDE, so it may be possible to embed Killustrator in another application understanding KOM/OP, i.e., as is done throughout koffice.  (SuSE 6.1 has an alpha release included.)
George Russell
george.russell@clara.net

'Muse:  This was very helpful information, especially the bit about what libraries are needed to run and what libraries are needed to compile KDE applications.  Unfortunately, since it appears that KIllustrator is being tied to another suite of tools (KDE Office Suite), I doubt I'll try it myself.  I don't need all those other pieces.  Maybe they'll be part of the next full distribution I purchase, in which case I'll take a look at them then.  I just don't feel like downloading huge amounts of stuff I won't really use anyway.

Your comments about KIllustrator in the Linux Gazette are misleading.  KIllustrator isn't "tied to KDE" as you indicate; it runs perfectly fine on any desktop.  KIllustrator, however, uses KDE as an application development framework, which is something completely different.

This has nothing to do with "being KDE-aware"; it's all about writing applications.  KDE is not just a desktop, it's a set of libraries and tools that makes it possible to write applications.  By asking programmers not to use modern tools to develop their applications, you force them to re-invent the wheel over and over again.  In the best case this will provide us with applications similar to xfig and tgif (which both cannot compete with modern standards).  In the worst case this leads to no applications at all.

Come on, installing software on Linux today means a few clicks in kpackage; that can't be that hard ;-)

Matthias Ettrich <ettrich@kde.org>

'Muse:  End users don't distinguish between "application development framework" and dependencies.  It's just semantics.  KIllustrator is tied to KDE because you need the appropriate KDE libraries to run the program and/or compile it.  The same is true of the Gimp - it's tied to Gtk.  The difference is that Gtk has been available in most Linux distributions, and for a number of other Unix platforms, for some time now (at least the past year).  KDE is just now becoming part of most Linux distributions, which will make grabbing the occasional extra KDE package from the net a less complex issue in the future.

This isn't to say KDE is a problem to deal with; it's just not currently convenient for the end user.  Gnome applications have the same problem.  But for users on non-Linux platforms, for which neither Gnome nor KDE is available, these applications are of no use.  It's your choice as a developer, of course.  I prefer to write for any Unix platform, or at least as many as I can reasonably support.



XFig

I read in your LG article that you could not get xfig to export or print. Is your problem perhaps that you have not got fig2dev installed?  This program is part of the transfig package (see the xfig docs).

Yours,

Jeroen Nijhof
J.H.B.Nijhof@aston.ac.uk

'Muse:  Looks like that could be the reason.  I, indeed, do not have fig2dev installed.


Sketch

I've just read the new issue of the Linux Gazette and your Graphics Muse column and I was delighted to see that you investigated the vector drawing programs available for Linux.

As the developer of Sketch, I was somewhat disappointed, as you can imagine, when I read that you weren't able to install it. The points you raise are perfectly valid, although most of the problems are caused by misleading statements in PIL's README, I think. I guess that everybody who is not very experienced with building Python C extensions will have similar problems, and I don't know how many people have given up installing Sketch because of this.

In your column, you write:

Sketch requires Python v1.5.1 or later, the Python Imaging Library, v1.0b1 and Tcl/TK, version 8.0 or later.  To build the Python Imaging Library (aka PIL) you can't use the RPM version of Python - you have to build the python distribution from source and install it.  This is because you have to build PIL under the "Extensions" directory of the Python 1.5 directories.
This is not true, actually. The PIL README says that you should unpack the archive in Python's Extensions directory, but you can in fact unpack it anywhere you like (in your home directory for instance) and build it there.
Although I have Python 1.5 installed on my stock RH 5.2 box, there is no Extensions directory.  Plus, if I just made the directory where 1.5 is installed (/usr/lib/python1.5), I'd have to build the PIL as the root user.  Not a good thing.  So I downloaded the Python 1.5 source, built it, then tried the PIL build.  It didn't work - something about a missing config directory.
You don't need the Python sources to build the PIL as long as you have a complete installation of the Python interpreter and the C-header-files, libraries and configuration files. RedHat has split Python into several packages. The header files and configuration files are in the python-devel rpm, as far as I can tell (I don't use RedHat, but I had a look at their ftp server), so if you install that rpm you should be able to build PIL with these commands:
% tar xvzf Imaging-1.0b1.tar.gz
% cd Imaging-1.0b1/libImaging/
% ./configure
% make
% cd ..
% make -f Makefile.pre.in boot
% make
and install it under /usr/lib/python1.5/site-packages as described in the PIL README. After that, installing Sketch itself should be simple, I hope :)

All in all, I have to thank you for the article. As a developer, it's difficult to guess where users may have problems, and the information you provide is exactly what I need to make Sketch easier to install.

I really hope that you give Sketch another try and perhaps write about it and the other programs again in a future graphics muse column.

Bernhard Herzog <sketch@online.de>

'Muse:  Attention developers - this is exactly the way you should respond to end user and press criticisms!  I applaud Bernhard for taking my issues to heart and offering such useful feedback.  I hope, for my own projects, that I reply to criticisms in the same professional and meaningful manner.

Oh, and Bernhard's feedback was perfect.  I managed to get things running pretty quickly with his help.  Note that he is correct about the Red Hat RPMs - if you are using the Red Hat 5.2 distribution, you may not have automatically installed the Python development package, which you need to build PIL.  You'll know this is the case if you try to run the Makefile.pre.in step (above) and get a message like

No rule to make target `/usr/lib/python1.5/config/Makefile'
That would be because the "config" directory for Python only gets installed (using RPMs) with the python-devel-1.5.1-5 RPM for i386 package.  Don't forget to also install the three header files from PIL into the Python include directories.  The INSTALL file for Sketch describes this simply enough.  After getting the Python development package and Python Imaging Library installed, the build for Sketch was very simple.  Just follow the steps in the README.  Sketch itself is easy to build.  It's all the bits and pieces it requires from Python that were a bit of a pain to get going.

One other thing:  make sure you build with the 0.6.0 version.  I tried with an earlier 0.5.5 version and had some build incompatibilities with my Python 1.5.1 installation.  You can get around these easily enough, but it's even easier if you just grab the 0.6.0 (or later) source code.

Sketch's interface is fairly simple to learn.  Unlike TGIF or XFig, Sketch is more of an artist's tool, something like Adobe Illustrator (it even reads and writes Illustrator files!).  I wouldn't really put Sketch in the same category as the other two - they seem meant for different uses.  Being more of an artist (or at least a wanna-be artist), I really liked Sketch.  Once I managed to get it running.


And just a little praise...

This is a letter from a real Graphics Muse fan!  You are doing a great job with your column in the Linux Gazette - and monthly, at that.  Congratulations!

'Muse:  Thanks!

Sometimes, I think of contributing more to the Linux community myself, but my daytime job eats up most of my time/energy.  How can you make such a neat article every month?  Where do you get your energy from? Kryptonite?  :-)

'Muse:  No, but a lack of anything that remotely resembles a social life helps.  As for your contribution to the cause - you're making it now, by providing feedback to me.  Don't underestimate the importance such feedback plays.

Why I'm really writing this letter...

In [last] month's Gazette you compare tgif and xfig.  In brief:  Well done.  Great job!  I especially liked the sentence noting that your preference for tgif is _not_ mirrored in the numerical "test" result.  If every software comparison/test were done so carefully, we would have far fewer flame wars in the newsgroups.

'Muse: Maybe, but human instinct is toward clarification from the point of view of the reader.  Which means argument is almost guaranteed at some point (at least between relatively intellectually motivated individuals).  But I digress.

As I am a long-time (old-time?) user of both programs, I just want to add some fine points to your careful judgment.  Why the heck would you want to use both?  Well, once you are in the boat, you must row.

Almost all of the documents I produce are typeset with LaTeX.  From time to time I have to include simple drawings.  Now, because TeX produces such wonderful-looking documents, the graphics have to match.  This means all the text (e.g. labels, legends) in a graph must be typeset with TeX.  Using fonts from a different family does not look good.  The problem is that TeX's graphing capabilities (i.e. the picture environment) are very limited.  What the user wants is the full power of Postscript.  That said, xfig and its companion programs transfig and fig2dev are a blessing.  They allow for exactly what I have been describing.

The typical data-flow looks like this:

  editor                TeX                      dvips
|-------> doc.tex ---------------+----> doc.dvi ---+----> doc.ps
                                 ^                 ^
  xfig              fig2dev      |                 |
|-------> graph.fig ----+---> graph.tex            |
                        |                          |
                        +---> graph.ps ----------->+

The dependencies between the files are automatically updated with a Makefile.  OK, now you know why I am stuck with xfig: it is the only program that can separate the text-output (read: TeX) from the graphic-output (read: Postscript).
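For the curious, the two fig2dev arrows in the diagram correspond to command lines roughly like these (a sketch only - check the transfig documentation for the exact language names and options):

% fig2dev -L pstex graph.fig graph.ps
% fig2dev -L pstex_t -p graph.ps graph.fig graph.tex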

Enter: Postscript files.

Imagine a colleague walking in and saying: "We should include one of these fancy XYZ [insert program name yourself] outputs, you know that thing can produce Postscript-files."  Oh, oh -- this is bad news.

  1. Sad but true: Postscript is not always Postscript.  Some software has a very particular idea of what makes up the Postscript standard.
  2. Sure, the program's output looks fancy, but it cannot be published without some editing.  How do you edit a Postscript file?  You're lucky if you have Wolfgang Glunz's pstoedit [currently version 3.03].  pstoedit translates a ps-file, with the help of ghostscript, into a tgif-compatible file (see the example below).  For a long time pstoedit's tgif driver was the only one leading from un-editable Postscript back to an editable format.  Later, it was the best driver to do that.  Today the xfig driver does just as well.  But I started editing my ps-files a long time ago, and that is why I am using tgif.  Once upon a time it was the only tool to do what I needed.
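A typical pstoedit invocation looks something like this (a sketch - the file names are made up; see the pstoedit documentation for the full driver list):

% pstoedit -f tgif figure.ps figure.obj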
You see, the stories behind the usage of this tool or that tool can be quite convoluted.  The numbers of an "arbitrary" test may not tell you what you need.  Therefore, your xfig versus tgif comparison is a shining example of how to write about performance, usability and all that.

Christoph L. Spiel
cspiel@ccmr.cornell.edu

'Muse:  All very good points!  So often we measure tools objectively, using what we think are absolute comparisons of speed or performance.  But we often fail to measure the seemingly intangible value of comfort that lies within a tool for the individual user.  Perhaps we should look at software less as abstract pieces of pseudo-machinery and more as extensions of our daily lives.  We give life to our automobiles by referring to them as "she".  And if that automobile provides no comfort, then it has limited value to the owner.  Comfort, it seems, should be an intrinsic part of our measurements of a software tool's usefulness to the individual.



A Little Eye Candy, Please (continued)

In my fvwm-menu file I've added the following entries to run xscreensaver:

AddToMenu XScreensaver "Screen Saver"    Title
+      "Matrix"        Function    ScreenSaverMatrix
+      "XSaver On"     Function    ScreenSaverOn
+      "XSaver Off"    Function    ScreenSaverOff

AddToFunc ScreenSaverMatrix
+ "I"   Exec      exec xscreensaver -no-splash &
+ "I"   Exec      exec xscreensaver-command -select 1&

AddToFunc ScreenSaverOn
+ "I"   Exec      exec xscreensaver -no-splash &

AddToFunc ScreenSaverOff
+ "I"   Exec      exec xscreensaver-command -exit&
+ "I"   Exec      exec xset s on

The first entry, Matrix, will run only the xmatrix screensaver, immediately, and leave it running.  The second entry just starts the screensaver daemon, which uses the first entry in my programs list in $HOME/.xscreensaver and is allowed to cycle through the list; this one kicks in only when the configured inactivity period has expired.  The last entry shuts xscreensaver off and turns my X server's screen blanker back on.

The three programs that come with xscreensaver - the xscreensaver daemon, xscreensaver-demo and xscreensaver-command - also include extensive man pages in HTML format.  It seems a bit odd that there are so many options for something as simple as a screensaver, but they are all useful.  Be sure to read through the documentation before trying to set up running the screensaver from your window manager as I have done in the examples above.

Some of the other interesting hacks I have configured are:


Decay Screen


Spotlight



Radar

It's just a fun thing to play with, not much else really.  If you dig into the code for some of these hacks (and xscreensaver itself), however, you might just learn quite a bit about how low level graphics work under X.

Enjoy!
 
© 1999 Michael J. Hammel

"Linux Gazette...making Linux just a little more fun!"


Linus at Fermi Lab

By Stephen Adler



FNAL main building. It's 5:28pm and I'm rushing to get to Ramsey Auditorium, which is through and on the other side of the main building.

Linus at Fermi Lab


Author's note: Slashdot posted this page on their site, but the article really starts at http://ssadler.phy.bnl.gov/adler/Torvalds/comdex99.html . It's an introductory page which puts my FNAL and Comdex write-ups into perspective. If you are only interested in what Linus had to say at FNAL, then just read on.

Editor's note: In the original article on Adler's website, many of the inline images display a larger copy of themselves when clicked on. These larger images are not included in the Linux Gazette version, to keep the total size of the Gazette small.



A clear day for flying. Long Island Islip Airport lies ahead.

April 19th, the day of Linus's talk at FNAL, dawned gorgeous on Long Island. I'm going to fly Southwest, the 1:20pm flight, through Baltimore, and transfer to the Chicago Midway flight. I'm to arrive in Chicago at 4:30pm. Linus's talk is scheduled for 5:30pm. Trying to get my reservation set up to fly out to Chicago was a mess. Originally, Linus was scheduled to talk at 7:30pm, and I planned my flight schedule around that. (4:30pm arrival, 7:30pm talk, no problem.) But that changed when I got a message about Linus's talk being rescheduled. By then I had no choice except to brave the tight timetable. So, I had a relaxing morning, enjoying some quality time with my wife. Flight time came and off to the airport I went. With such nice weather, all flights were on time. (The Free Software Gods were looking after me...)

4:25pm arrives, the plane wheels up to the Midway terminal gate, and bam, I'm off running. Those 1970's or 80's commercials of OJ Simpson running through airports were the theme of my thoughts at the time. (Where is the running lane!!!) I hit the National Car Rental booth. Two rather relaxed attendants are shooting the breeze. I quietly but firmly tell one of them that I have a rental reservation. ("Get me my car now!!!") The attendant gets a little nervous, shuts up and starts processing my car rental. The rental cars are located in an adjacent parking lot just outside the main terminal building. No need to wait for a bus to take me to the car rental lot. (Again, the Free Software Gods are looking over my shoulder...) Within 10 minutes of landing, I'm in my car looking for a way out of the airport. For those of you familiar with the Chicago area, I got on I-55 south (southwest really...) to I-355, I-355 north to I-88 west, then turned off on 59 heading north. From there you hit Batavia Rd west and bang, you're at the FNAL main building. 4:45pm, on I-55. 5:00pm, 355 north. 5:08pm, I-88 west. 5:18pm, 59 north. 5:20pm, Batavia Rd. 5:25pm, FNAL main building. A 2-minute walk to Ramsey Hall, site of Linus's talk. (Mind you, I did not break any traffic laws. The Free Software Gods will attest to that. You can take the issue up with them.)


My first shot of Mad Dog. Dan Yocum is on the left and G P Yeh on the right.
Ramsey Hall is an elegant auditorium. It has a red motif to it - red carpeting and seating are the cause of that. Many a physicist has given a talk in this auditorium, including Stephen Hawking. Now it's Linus's turn. The guy who organized this event is named Dan Yocum. He wrote to me in an e-mail that it was easy to get Linus to come out to FNAL: he e-mailed Mad Dog an invitation. ("It's easy, I just asked!") I later learned that Linus and Mad Dog had a rather thorough tour of the Lab. This included some accelerator facilities, one of the large collider detector facilities (CDF) and the computing center. Now it was Linus's turn to entertain some questions from the audience. In some e-mail exchanges with Dan, I told him that I might not make it to Linus's talk and that his Comdex keynote address would probably be very similar. Dan replied telling me that Linus hates speeches. His plan was to make this a question and answer session. He wanted to hear from the physicists, not to hear himself talk.

Linus sitting amongst curious Ph.D. graduate students answering questions before his talk. This shot was taken seconds after I saw Linus for the first time.
You have to walk through FNAL's main building to get to Ramsey. I got there and went down to the front seats so that I could get a good position from which to take some snaps of Linus giving his talk. I put my notebook down and looked around to see if Linus was around. I was sitting on the right side of the auditorium. I looked over to the left and noticed a cluster of people. The first one who stood out was this guy with a rather long beard and frizzy white hair. It took a minute, but I soon realized he was "Mad Dog". Out comes the camera and I walk (rush?) over to the left side of the auditorium, go right up to Mad Dog and take his picture. He looked at me like, who the hell are you! I waved at him or something to try and let him know that I'm a friendly guy, not some weirdo maniac running around taking pictures of strangers. I then turn to my left and, sitting amongst a bunch of young guys, I see Linus. I remember the phrase going through my head: "There he is, Linus". He didn't notice me; he was too busy talking to the guys who were sitting next to him. Again, I take my camera, try to get as close as I can, zoom in and snap, I take another picture. I'm in this rather "fanatic" state right now. I'm not really thinking clearly, and all I can seem to do is take pictures of guys who in reality are total strangers. I stand around and try to take some more pictures. I then go over to Mad Dog, introduce myself, give him my card and ask a bunch of dumb questions he really does not want to answer. Finally, my mental state settles down a bit, and I manage to get myself back over to my side of the auditorium, where I sit down, take out my notebook, clear my head, and try to take some sensible notes of the talk.

With that, Dan Yocum gets up and starts the standard Fermi Lab/Ramsey Hall tradition of introducing speakers. He introduces John Hall, and in turn John introduces Linus. (I've seen it worse at BNL, where there were 4 introductory speakers...) John gets up and starts in with this story about how he met Linus 5 years ago at a DECUS meeting in New Orleans. He gave some specifics about getting Linus's trip financed (I thought I had it tough) and then some details of Linus at the New Orleans convention. The one bit of John's introduction which stuck in my mind was his piano analogy. If one sits down to play a piano, the pianist can get a feeling for the quality of the piano as he plays it. A rough piano has a rough feeling; a great piano has a great feeling. It's in the touch. At the New Orleans conference, a Linux installation was underway, I believe headed by Mad Dog himself. He heard a voice over his shoulder saying "Can I help you?". It was Linus, offering assistance in getting his Linux kernel up and running. Mad Dog tells the audience that within about 10 minutes, with Linus's help, Linux was up and running. (I can't remember if Mad Dog mentioned the hardware specs of this machine.) In any case, Mad Dog, keyboarding on this machine, was getting that first feel of Linux. 10 minutes later he made a mental note: Linux was going to be inevitable. It has that feel of a great piano. He continued on to talk a bit about his relationship with Linus, which is clearly a deep one. With that, he ends and Linus comes up on stage to start his talk. Or rather, his question and answer session.

Linus starts by saying he does not like podiums and thus will not stand behind one for this Q&A session. He has this wireless mike which Dan has hooked him up with. I also notice that the FNAL media guys are recording this session for posterity, so if you don't like my write-up, you can contact them to get a full playback of Linus's talk. In any case, Linus starts off with a very brief history of Linux. It was 1992(?); he had a PC, but there was no Unix available for it. Since, and I quote, "he was the best programmer since Jesus," he would fix that. He would write his own Unix-like OS. So off he went and wrote it. The concept that need fosters development was key in getting the Linux kernel going and has been key throughout all of its development. And then he did something which was, as he says, the most important decision of his life. He posted the code on the Internet, via some news group, and asked for feedback. That he got. He expected people to download his code, run it and tell him whether it worked or not. "Linus, this really sucks!" He got some of those responses; but more importantly, he got code back in the form of patch fixes and enhancements. And from then on, it was history. With that he ends his introductory talk and starts in on the questions.

Dan Yocum starts it off by asking about the 2.3 kernel and/or plans for large file systems (i.e. file system journaling). A good question, since in High Energy and Nuclear Physics there is a big need now for this type of file system. Petabytes of data will soon be recorded, and file systems which can handle this type of data load will be necessary. (Maybe not a petabyte file system, but terabyte file systems will be a must.) Linus's answer was that up to this point, large file systems have not been an issue. He reminded us that back in the days when he was starting the kernel, there was a 64 Meg partition limit which he had to solve. He then said something about how new users bring new problems and how this is the "development model" for the kernel.

At this point my notes get rather fuzzy so I'm just going to paraphrase from what I can decipher from them.

Someone asked about security issues with Linux. Linus said that people are keeping after the bug fixes. From my personal experience with Linux and the Red Hat distribution, this is the case.

Someone asked about addressing more than 2 Gigs on a 32-bit system. His answer was to use a 64-bit machine. Linux is fully 64-bit compliant.

There was a complicated SMP question, to which the answer was that 2.0, and to some extent 2.2, are really single-spinlock SMP implementations. Linus will work on making the locking more fine-grained.

He then talked about how one should not design for the theoretical perfect implementation, since this will screw up another implementation. The kernel lives in a world of diverse needs, and one needs to try and fit them all in. Therefore no one need gets all the attention, but all needs are tended to some extent. This kind of clear-headedness on Linus's part is an indication to me of why the kernel has gotten as far as it has.

There was a question about capabilities. I believe this is like splitting up the super user function into separate users through access control lists. Theoretically it's a good idea, but in practice it's too complex. Most of the time, one sets up the system in the wrong way, making it less secure. He claimed it's a feature which needs to be added to Linux just so that one can check it off on the "Linux can do this" matrix, but then have a README on how to disable it.

Someone asked the copyright question. Linus talked about the license he released his original kernel code under. Basically, its intent was that anyone could use it, distribute it and modify it, but the modifications had to be freely distributable as well. People were starting to sell the Linux kernel at computer shows by charging a couple of bucks for the floppies. They asked Linus if this was OK. Clearly, Linus said, it was OK, since he wanted the code to be distributed and could not expect people to lose money on the distribution cost. So he modified his license. I'm not sure whether he modified his license further, but the fact is that he eventually switched over to the GPL license. He said that it was an awful piece of legalese, but it fulfilled all his requirements. Also, the one bit of software which Linux really depended on was the GNU C compiler. That played a role in the adoption of the GPL for the Linux code. Again, the main emphasis was that the source code had to be available to the "community", as well as the modifications, which were brought back into the Linux source repository.

A question on the Merced was asked. Linus said he would not sign any Non-Disclosure Agreements. The reason for this is that he does not want to be put in the situation where he cannot release his source code due to conflicts with an NDA. A very wise choice on his part. He lets others sign the agreements, which has been done; notably, there are some people at CERN who are working on the Merced port. Linus defended Intel's move of asking for NDAs to be signed. It's done so that Intel can keep control over the flow of technical information into the public domain. Once the CPU has been fully released by Intel into the "market", they certainly want everyone to know how to use it. But before that, it's clear that they need to keep their specs under wraps to keep the competition at bay. The big problem with the Merced is in the compiler technology sector. All the kernel needs is a version of gcc that will generate Merced executables. It's up to the gcc guys to get it to generate Merced instructions. Linus is confident that once gcc is ready, which should be by the time the Merced is released, the Linux port will follow within a couple of days or weeks.

Someone asked which is better: one really fast CPU, or many not-so-fast ones. Linus's answer was that the best SMP system for the Linux kernel is a dual-CPU one. If one were to build a Beowulf-type cluster, one should do so using a set of dual-CPU systems.

There was a question about SVGAlib and its viability for the future. Linus's response was that 2 or 3 days after he started working with X11, he decided never to go back to console mode. All he needs, graphics-wise, is 15 xterms open with the kernel compiling in one of them; he kept reminding the audience that all he really likes to do is compile the kernel. The fvwm2 window manager coupled with 15 open xterms is all the graphics functionality he needs. The question was really directed towards games, and he said that there is a good OS for running games called Windows. He claimed that MS admitted they could not write an OS very well, and basically kept out of the way of the game developers by letting them take over the system when the game application is active.

A question was asked about how he decides whose code is to be included in the kernel. He said that drivers are no-brainers: since the code sits outside the core of the kernel, he tends to include them without much thought. When it comes to adding something that lives in kernel space proper, his main requirement is that there be at least one person who will take charge of maintaining it. My take on this is that items like the TCP stack or the kernel version of NFS are coordinated and maintained by someone besides Linus.


A question was asked about the recent benchmark comparison between NT and Linux. The benchmark was done by Mindcraft, and the results showed Linux to be 2 or 3 times slower at file and web serving than NT. There was an interesting story behind this. Linus was on a panel session down in Atlanta, with a Microsoft representative on the same panel, and the Mindcraft report -- from a company with a lot of credibility in the IT world when it comes to benchmarks -- was handed to him just as he was sitting down at the panelists' table. This left Linus in the rather awkward position of having to defend Linux against NT, with the Microsoft rep beside him and no time to digest the numbers. It turned out later that Mindcraft specializes in Microsoft OSes and has done a series of benchmarks comparing NT with Solaris and the like. All of those benchmarks came out in favor of NT, and the large Unix companies (Sun etc.) had to mount PR campaigns to refute the results. Linux has no corporate machine backing it up with resources to fight back. What surprised Linus was that it was the journalists who came out defending Linux and questioning the validity of the Mindcraft benchmark. As of this writing, it seems the benchmarks are going to be performed again, this time with an equally well-tuned Linux system.

Someone asked him if he ever has talked with Bill Gates. His reply was that, no he has not, but if he did, he would "be talking money." (His palms rubbed together as he was finishing his answer.)

More questions on benchmarks. The conclusion of his answer was that the best benchmark is your own application. That's not easy, since it requires the vendors to give you access to their hardware, and you have to do some porting -- but the bottom line is that your own application is truly the best benchmark.

Someone asked about frame buffers, or rather how one could get a DVD application ported to Linux. Linus said that most of the work is in setting up the hardware; once that is done, the hardware takes care of getting the DVD imagery onto the screen. The trick is to get this to interface to X11. He didn't seem to have any immediate plans to take on this project. He also mentioned that DVD encryption is a trade secret, which I assume means an open source application would be difficult to implement.

Someone who works at Lucent asked a question about drivers for modems made by Lucent. The question led to a discussion about how one can get companies to release the specs of their hardware. Linus made the point that sometimes it's not a matter of keeping the engineering design behind some gizmo secret in order to keep a market advantage; rather, the company wants to keep secret the bad engineering that went into making the gizmo. He hypothesized a device which, to get it running, requires writing to certain registers in some specific order, then toggling some interrupt lines, then holding the reset bit in the CSR high for 30 clock cycles, and so on. This kind of kludgey design is the real reason specifications are not released -- it's all hidden in the binary version of the driver.

Someone asked about UDI, Unified Driver Interface. Linus replied that it's in the Nice Theory stage but he is keeping an open mind about the idea.

Some question was asked which led to interesting statements by Linus regarding software development through Internet collaboration. Talk is very cheap, and he never takes anyone at face value. The best way to collaborate with Linus is to show him code that works; that is what he wants to see. Otherwise, my guess is that unless your ideas are of obvious importance, they will be ignored.

A question came up about GUIs. He has no interest in GUI design or interfaces, and no involvement in the GUI theological discussions going on right now. (My guess is that this refers to the GNOME vs. KDE type of friction.) He is happy using fvwm2 and his 15 xterms to apply patches to the kernel and rebuild it again and again.

I asked a question about how he maintains the Linux source repository. I wanted to know if he used CVS. His reply was that he has his own method. I should think of it as lovingly hand-crafted maintenance of the kernel source. He does not use CVS because he does not need it. He is the only one who applies patches or updates the source code, and he does not care to use the history logging mechanism CVS provides. He does use CVS at work, so he knows what it's capable of doing, but chooses not to use it.

By this time we had started to run out of time, and a few more questions were asked. From these came the following general statements by Linus. Windows is a good OS for running games. The bottleneck in the kernel's development cycle is the users. A project should never grow beyond the scope of what can be kept in one person's head. My take on this is that the kernel is broken up into many "projects," each with a leader in charge of it, and whatever that one person is in charge of, he must keep its whole concept and source code layout/structure/functionality in his head. Keeping "things" modular is the Unix way.

Developers grow linearly, while users grow exponentially. The number of Linux users has grown by 7 orders of magnitude, and his goal of global domination is only 2 orders of magnitude away. "What's 2 orders of magnitude after growing 7..." (Global domination is within reach.) Avoid black and white when trying to solve a problem: there is never a silver bullet which can be applied to a project or problem to "fix it".

Linus, Dan Yocum, and G P Yeh. Dan works in the FNAL IT department, providing Linux support to anyone at FNAL who needs it; G P works on CDF, building "event builders" -- a bunch of Linux boxen tied together with an ATM switch.

The next great challenge for Linux is to conquer the desktop. When it comes to servers there is no loyalty. Servers are black boxes that sit in windowless rooms and are used to serve files and printers; as soon as a newer, better server comes out, the old one is replaced, no questions asked. This is one of the reasons Linux has been able to penetrate the server market -- it's the easiest one to crack. The desktop is totally different. There are very strong loyalties attached to desktops: if a new, better desktop comes out, people tend to get their shotguns out to defend their old, not-so-good technology, often resorting to falsehoods to do so. Linus wants people to get used to using Linux on their desktops. He also wants to see the day when he can walk into CompUSA or an equivalent store and find a choice of operating systems to run on a new PC. He does not want to see one default OS -- and he does not want people to default to Linux, either.

Linus concluded with the statement that there has always been one physical invariant in building his kernel: 12 minutes. It always took 12 minutes to compile the kernel. When he started out with his 386 it was 12 minutes; when he moved up to a 66MHz 486, the code had grown such that it still took 12 minutes. The growth of the code and the speed-up of Intel's technology kept pace with each other, so a kernel compile always took 12 minutes. This has changed recently: with his quad-CPU development system, it now takes him 73 seconds to build the kernel. He admitted that hardware development has recently outpaced his software (kernel) development.

With that, a physicist from FNAL named G P Yeh, one of FNAL's strongest Linux advocates, closed the session by thanking Linus for all his work. FNAL is now using Linux in a big way to process all the data coming out of the large collider detectors that will start taking data within a year or so. The data rate from these detectors is expected to increase 200-fold from the last run, due to an upgrade to the Tevatron called the Main Injector, which is designed to greatly increase the proton flux. Linux will play a big part in analyzing all this data. (I can attest that Linux is playing a big role at BNL as well. It will be used on about 500 processors to analyze the data coming out of the 4 detectors being built for the Relativistic Heavy Ion Collider. RHIC is scheduled to turn on this summer, and by this coming winter the Intel Linux farm will start its first production data processing.)

The audience crowds Linus after his talk.

With that final congratulatory announcement, the talk came to an end. People got up and scattered about. I headed over to the left side of the auditorium, where Linus and Mad Dog were located. Linus was surrounded by people asking questions. I was out of earshot, so I could not listen to the back and forth between the guys and Linus. I did get a chance to get over to Mad Dog and reintroduce myself. My intent was to invite him out to BNL if and when he and/or Linus got out to NY. I'm not sure if Mad Dog is interested in seeing yet another collider facility, but he did encourage me to try and contact the Bizzar Show people and set up a talk or panel, something with a topic along the lines of Linux in Physics. I told him I could do that, and I'll try to follow up with the organizers of the Bizzar. I then hung out with the Linus crowd for a bit, taking a couple of pictures. There was one guy who had on a tee shirt with an Intel logo announcing the i80666 CPU, with the phrase "Runs hotter than hell" written underneath. Linus told the guy he liked his tee shirt. The guy then took off his shirt so that Linus could see the back, which sported a picture of Bill Gates with horns, looking like the devil. After another snap or two, I had my chance to introduce myself to Linus. I gave him my business card (not that I do any business, it's more like an identity card), and thanked him for his work on Linux because it's made our lives so much easier. He replied that he did not do it for me, he was just doing it for himself, and the users are just a big pain. "Yeah," I replied, "users are losers..." I was a bit flushed as I was talking to him; I really don't know what I was saying. The fact of the matter is, Linux has made my life a lot more complicated. Ever since I installed Linux on my first PC 3 years ago (built from parts bought at a computer show), I've been so tied up in this Linux Open Source thing, and it's become such a central theme in my work, that I can hardly say it's made my life any easier. It's made it more fun, and it has saved BNL and FNAL a lot of money -- millions of dollars at FNAL alone. But as I said, I really wasn't thinking straight, since I was talking to Linus for the first time in my life.


So I've had my chance of meeting Linus and Mad Dog. I must say that Mad Dog comes across as a very serious, level headed guy. It's hard to imagine someone with such a fantastic beard being so calm and decisive. My guess is that he has to be in his line of work at DEC (now Compaq?). I would also venture to guess that once you get to know him, and he gets to know you, if you manage to get a beer in his hand, then you're in for a ruckus of a good time. Linus impressed me as being very down to earth. He is not aloof and was willing to take time to talk to those interested in talking to him. He was very generous with his signatures at the end of his talk. He also impressed me as someone who has a practical approach to solving problems. In many of his answers, he alludes to the fact that one should follow the middle road. Don't make a project too grandiose. "A project has to sit inside one person's head", "There is no silver bullet", "Never design to the 100% theoretical limit", and on and on. I'm sure that this is one of the reasons why Linux is as successful as it has been. He also mentioned during his talk that he is willing to listen to new ideas. He said that it always starts off being a really dumb idea. But the idea is not dismissed. (Maybe there are a lot of ideas that are really dumb which he has dismissed.) But the point is that the idea would be knocked around the kernel development news group or e-mail list group and evolve into a not-so-dumb idea and finally into something important that could be included in the kernel.

I left Linus and Mad Dog behind in Ramsey. My plan was to stay at FNAL for the night and drive in early to catch the opening keynote at Comdex. Bill Gates was giving this keynote. From Linus to Bill, this was going to be a real contrast.


Copyright © 1999, Stephen Adler
Published in Issue 42 of Linux Gazette, June 1999

"Linux Gazette...making Linux just a little more fun!"


Linux Expo 1999

By Marjorie Richardson


Photo Album


Red Hat proved once again that they can put on a good show for the Linux community. Bigger and better than ever, Linux Expo again doubled in size and attracted top speakers such as Dr. Peter Braam and Dr. Theodore Ts'o. Big business was there too, represented by such companies as IBM, Hewlett Packard and SGI (formerly Silicon Graphics), as well as the usual Linux vendors, such as SuSE, Caldera, VA Linux Systems, Enhanced Software Technologies, Cygnus and many others.

I talked to Dave McAllister of SGI about their involvement in Linux and Open Source and found SGI to be much more committed to this community than I would have suspected. They released their most robust and scalable file system, XFS, to the community in an effort to aid Linux in reaching what he called ``Enterprise level''. Whatever their reasons for doing so, this is certainly something that was applauded by everyone I talked to at the show.

One of the most exciting announcements before the show was O'Reilly's and HP's sourceXchange.com web site. I attended a discussion about this site, which is designed to get needed open source software written: sponsors pay developers to write the code they need, and the result is released to the public. This is an idea whose time has come, as another group has also started a web site for the same purpose--CoSource.com, from a couple of independents, Bernie Thompson and Norman Jacobowitz, who write for LJ. It's obvious that Bernie, Norman and O'Reilly are committed to the community and wish to drive open source development, but I was a bit suspicious of HP. When I asked about HP's motives for involvement in this project, Wayne Caccamo told me HP felt this project was inevitable, wanted to take a leadership role in it, and wanted to ``ingratiate'' themselves to the Open Source community--talk about honesty! After that remark, I was ready to believe anything. I'm looking forward to seeing how both these sites work out. (For more on this subject, see Doc Searls' article on the Linux Journal web site at http://www.linuxresources.com/articles/conversations/001.html and Bernie Thompson's article in this issue, ``Market Making in the Bazaar''.)

There were the usual fun things to do, such as a chili pepper sauce contest and a paintball contest pitting vi against Emacs once more--and once again vi won, proving it is the best editor available--or that its advocates are the best shots. More than one group bought blocks of tickets to a local showing of Star Wars-The Phantom Menace. The ALS (Atlanta Linux Showcase) group invited me to go along with them. Fun movie, but not as compelling as the first one--then again, who expected it to be?

I especially enjoyed my booth time talking to current and future readers and authors. In particular, it was a pleasure to finally meet Alan Cox and Telsa Gwynne.

Alpha Processor, Inc., a Samsung company, announced they were joining Linux International, and Guy Ludden presented a check to Jon ``maddog'' Hall. I got the picture and then took several others of Jon, including one with a people-size Tux, who was roaming the show floor.

Compared to LinuxWorld, Linux Expo came across as more polished, more ``we've done this before successfully''. LinuxWorld had a lot of glitz--electricity and energy filling the air--that just wasn't there at Linux Expo. I think this had mostly to do with the fact that it wasn't the first time for these guys--the experience showed. The speakers all liked Linux Expo better, as the Expo paid their travel expenses while LinuxWorld left them to get there on their own. LinuxWorld had more people and more vendors, but it also had the advantage of being in Silicon Valley.

Evan Leibowitz described the Expo as ``the show where Linux lost its innocence'', due to two unpleasant situations that arose. One was Pacific HiTech's being kicked out for passing out t-shirts without buying booth space. The other was LinuxCare's use of the Red Hat trademark, without permission, on a poster parodying a Palm Pilot ad. No matter which side you take on that incident, the calling of lawyers certainly signals the ``end of innocence''.

The show was definitely a success. I talked to Bob Young on the last day, and he certainly seemed pleased with how it had turned out. See my interview with Bob in this issue. For more vendor announcements, see ``Linux Kernels''.


Copyright © 1999, Marjorie Richardson
Published in Issue 42 of Linux Gazette, June 1999

"Linux Gazette...making Linux just a little more fun!"


Book Review: Programming Web Graphics with Perl & GNU Software

By Jack Coats



While I am not a big time web developer or graphics enthusiast, I found Programming Web Graphics to be very interesting reading. The book begins with a down-to-earth explanation of graphics and file formats. From there, it goes into how web servers serve the files and reviews the free libraries available to develop graphics. The details of the libraries may not be everyone's cup of tea, but understanding what they can do helps with understanding how browsers and other utilities can benefit you.

The graphics programming tools are not for the rookie Perl hacker, but they are explained in enough detail that anyone with some experience in Perl can learn to use the available free tools.

The exciting part of writing programs to do graphics on the Web is the dynamic techniques. PWG covers image maps and animated GIFs, and includes techniques for rolling your own tools, such as web counters, web cams and thumbnailing groups of images.

It is refreshing to see a book that does not ignore the non-graphical web user, and reviews the good and bad of writing browser-specific web pages.

Overall, this is a great book for understanding some of the more advanced techniques, and as a tool book for generating ideas and methods of your own. If you are looking for a hands-on ``how-to'' tutorial for the uninitiated non-Perl coder, keep looking.


Copyright © 1999, Jack Coats
Published in Issue 42 of Linux Gazette, June 1999

"Linux Gazette...making Linux just a little more fun!"


Setting up mail for a home network using exim

By Jan W. Stumpel, Oegstgeest, The Netherlands


1 Introduction

Setting up a home network with Linux and Win95, using Samba, IP Masquerading, and diald has been described many times, including in the Linux Gazette, but so far I have not found a recipe for setting up mail on a small network with only one dial-up e-mail account. In this article I want to explain how I did it. With this system:

   - users on the Linux and Win95 machines can exchange mail with each other locally;
   - mail to the outside world is sent through the single dial-up account at the ISP, with a valid From: address;
   - mail arriving in the single mailbox at the ISP is collected and delivered to the right local user.

This is realized on my system (running Debian Linux 2.1) using the following programs:

   - exim, the mail transport agent, which delivers local mail and sends outgoing mail;
   - fetchmail, which collects the mail from the ISP's POP3 server;
   - qpopper, a POP3 server which lets the Win95 machine fetch mail from the Linux box.

I have this set up for two machines (1 Linux + 1 Win95) but it will probably also work for a somewhat larger network, and may be sufficient for a small office. Note: this article is Debian-oriented. If you use another distribution, change where appropriate!

2 The network and the names

For this article I assume the following names (change these to correspond with your own situation):

   - heaven: the Linux machine, 192.168.1.1, which handles the dialup connection;
   - earth: the Win95 machine, 192.168.1.2;
   - home: the local domain;
   - joe and emi: the users, Joe and Emily Bloggs;
   - joe.bloggs@isp.com: the dial-up e-mail account at the ISP;
   - pop3.isp.com: the ISP's POP3 server.

I also assume that the local networking works, and that there is on-demand dialup access using diald. There is no name server on heaven. /etc/resolv.conf contains the addresses of two name servers supplied by the ISP. These same addresses are entered into the TCP/IP configuration on earth.

/etc/hostname on heaven is

heaven

/etc/hosts on heaven is

127.0.0.1 localhost
192.168.1.1 heaven.home heaven
192.168.1.2 earth.home earth

On earth there is a file c:\windows\hosts with the same contents as /etc/hosts.

3 Mail addresses

Mail messages can have more than just the address in the 'To:' and 'From:' lines, for instance:

To: Emily Bloggs <joe.bloggs@isp.com>

'Emily Bloggs' in the above example is the 'real-name part'. It is set in the e-mail program which composes the message. This 'real-name part' can be used for delivering Emily's mail to her. Note: if the 'real-name part' has dots in it, it must be quoted using " characters ("Joe C. Bloggs"). See also man mailaddr.

4 Configuring exim

On a Debian system this is done by running eximconfig. It asks a number of questions about your mail setup; answer them to match the names described above.

In MS Internet Mail (or whatever mail client you use on Win95) heaven must be entered both as the SMTP server and as the POP3 server. Under 'pop3 account' and 'pop3 password', enter the username emi and her Linux password. Enter the name, Emily Bloggs, and the e-mail address, emi@home, in the appropriate places. Note that the e-mail address must be in the local domain!

On the Linux side, nothing special has to be set. /etc/pine/conf and the users' ~/.pinerc can be used 'out of the box'. The mail client (pine) constructs local addresses using the hostname together with user information from /etc/passwd.

With the above setup, local users can happily send mail to each other and reply to it. For instance, in pine at heaven, user joe sends mail to user emi. Automatically, pine changes this to:

To: Emily Bloggs <emi@heaven.home>

The message is delivered immediately (as you can see if you run eximon, the exim monitoring utility). emi (should she log in to heaven) would see the message as coming from

From: Joe Bloggs <joe@home>

So home really functions like a local domain within which messages can be exchanged. The problem is sending messages to the outside world. A From: address like <joe@home> is no good because nobody on the outside could reply to an address in the non-existent domain home.

5 Fixing the From: address

We must change the local From: address into a valid e-mail address (the e-mail account at the ISP), but only in the case of outgoing messages. With exim, we can do this by means of a 'transport filter'. The outgoing mail passes through this filter, and the From: address is changed. Local mail will not be affected.

The following filter will do the trick, provided we are sure that the address that we want to change is always between < and > signs. This is not guaranteed, but very common: pine, mutt, and mail, as well as MS Internet Mail all generate such addresses.

#!/usr/bin/perl
# Rewrite the address in the first From: header, then pass the
# rest of the message through untouched.
while (<STDIN>) {
    if (/^From: /) {
        s/<.*>/<joe.bloggs\@isp.com>/;  # the @ must be escaped in Perl
        print;
        last;
    }
    print;
}
while (<STDIN>) { print; }
Don't forget to change the e-mail address to yours! Call this program outfilt, do chmod +x outfilt and put it in /usr/bin. Now we must add a line to /etc/exim.conf, so the last lines of the TRANSPORTS CONFIGURATION section read:

remote_smtp:
   driver = smtp
   headers_remove = "sender"
   transport_filter = "/usr/bin/outfilt"
end
Actually, we added two lines. The headers_remove line is also new. This prevents exim from adding a Sender: header to the message (as it would do with this setup, if you use pine). The Sender: line can cause trouble with some (badly configured) mail destinations.

With these changes to /etc/exim.conf, whenever anyone sends an e-mail message to the outside world it is now delivered properly by exim. Exim (through diald) opens the outside line at once. In a home situation this is probably what you want. In a small office, with a lot of e-mail traffic, you may want to defer messages and send them as a bunch at certain times, to save phone costs. This is possible, but I don't need it myself and have not looked into it. You could look at the 'Linux Mail-Queue mini-HOWTO'.
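For what it's worth, here is a minimal sketch of that approach, using exim's queue_only option and a twice-daily queue run from cron. I have not tested this in the setup described here, so check the exim documentation before relying on it:

# in the MAIN CONFIGURATION section of /etc/exim.conf:
# hold all messages on the queue instead of delivering at once
queue_only

# root's crontab: flush the queue at 08:00 and 18:00
0 8,18 * * *   /usr/sbin/exim -q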

6 Fetchmail configuration

At the command fetchmail, diald opens the line and the mail from the ISP is collected (and passed to exim for local delivery). Only users who have a .fetchmailrc file in their home directory, owned by themselves, can run fetchmail. This file can be created using the configuration tool fetchmailconf. You get something like:

# Configuration created Sun Mar 28 03:15:20 1999 by fetchmailconf
set postmaster "postmaster"
poll pop3.isp.com with proto POP3
       user "jbloggs" there with password "zaphod" is joe here options fetchall warnings 3600
The .fetchmailrc files belonging to the various users could all be copies of each other, but with the ownership set to the user concerned. It is not so nice that every user has the password in plain view. Maybe there is a better way, but in a home situation it does not matter.
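A partial remedy, sketched here with this article's usernames: make sure each copy is readable only by its owner (fetchmail in any case refuses to run with an rc file that other users can read).

# as root: give emi her own copy of joe's .fetchmailrc
cp ~joe/.fetchmailrc ~emi/.fetchmailrc
chown emi ~emi/.fetchmailrc
chmod 600 ~emi/.fetchmailrc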

The main point is that whoever runs fetchmail, the mail must always be delivered to the same user mailbox (joe's mailbox in this case).

7 Removing exim's delivery limit

Exim by default does not deliver more than 10 messages at a time. I am sure there are circumstances where this makes perfect sense, but having a dialup account is not one of them. To get rid of this restriction, you must put into the MAIN CONFIGURATION section of /etc/exim.conf, before the end statement, a line

smtp_accept_queue_per_connection = 0

8 Delivering personal mail

Through fetchmail and exim, all mail from the outside is by default delivered to Joe's mailbox (/var/spool/mail/joe) at heaven. In his home directory, Joe puts a file called .forward, containing the following text:

# Exim filter
if $header_to: contains Emily then deliver emi endif

If mail contains 'Emily' in (the 'real name part' of) the To: address (and this will almost always be the case when her friends send her mail) it will go into her mail account on heaven, not into Joe's. She can move the mail to her own machine using POP3 (see below).
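The filter extends naturally to more users. As a sketch, with a hypothetical third user fred added (exim's filter language provides elif for this):

# Exim filter
if $header_to: contains Emily then
    deliver emi
elif $header_to: contains Fred then
    deliver fred
endif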

9 Transferring mail with qpopper

To let heaven act as POP3 server for earth, qpopper can be installed. I installed the Debian package qpopper_2.3-4.deb. Installation is automatic; no configuration is necessary. If Emily presses 'get/send messages' in MS Internet Mail, the contents of her mailbox on heaven get transferred to earth (and all mail, local or outside, which she has written gets delivered).
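For reference, the package works by having inetd start the POP3 daemon on demand; the line it adds to /etc/inetd.conf looks roughly like this (exact paths may differ between versions):

pop-3   stream  tcp     nowait  root    /usr/sbin/tcpd  in.qpopper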

10 Manually checking the mail

Thanks to a 'shortcut' on earth's Win95 'desktop', which does a telnet to heaven, Emily can log into heaven and start fetchmail by hand. That is, if she does not want to wait for the scheduled cron times when fetchmail runs. After the mail has been transferred from the ISP, she can press 'get/send messages' to move any mail from her heaven mailbox into the earth one.
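The scheduled runs themselves are ordinary crontab entries; a sketch (the times here are arbitrary):

# joe's crontab: poll the ISP mailbox every half hour
0,30 * * * *    /usr/bin/fetchmail >/dev/null 2>&1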


Copyright © 1999, Jan W. Stumpel
Published in Issue 42 of Linux Gazette, June 1999

"Linux Gazette...making Linux just a little more fun!"


An xdm Session

By Chris Carlson


So, you've got X Windows working on your system, you've set your system to automatically start xdm by setting the default run state to 5 and now you want to customize your personal windows session by having certain applications start automatically after you log in.

At work, I like to log out of my system every evening before I go home so that others may log in when I'm not there. It doesn't happen often, but I don't want someone coming into my office and using a window logged in as me. [You never know when someone gets curious and starts wandering through my saved mail messages.] The problem is, I have certain applications that I want brought up automatically, like my list of things to do and my calendar program.

In this article, I'm going to explain an X Windows session, how it is started and what you can do to customize it. It will show you how to automatically start the window manager of your choice, have applications start automatically and customize colors and fonts to your liking. Since X Windows is pretty much identical on all platforms, much of what I am going to explain can be used on other platforms that use X Windows other than just XFree86 on Linux. As a matter of fact, I will make some comparisons between the version of XFree86 that comes with Red Hat 5.x and what comes with Silicon Graphics IRIX®. You may note that the files I discuss on both systems have the same name but are usually just in different directories.

I realize that other articles have been written about X Windows configuration, for example Jay Ts' fine article in the December issue entitled ``X Window System Administration.'' X Windows is an extremely versatile windowing environment and, because of this, can be very complex. For this reason, I believe it will require many articles that might overlap but each will provide information from a different perspective. This article is intended to be from a user's perspective, rather than from an administrator's.

To start off with and to keep my article from becoming a book in itself, this article is written with the following assumptions:

  1. That you are working with the default configuration of xdm as it is installed by Red Hat (see Footnote). This means that you haven't changed any of the files found in /etc/X11/xdm. (Since I don't have an installation of any of the other Linux vendor releases, I'm presuming their default configuration is identical or similar enough that it won't cause any problems.) With this in mind, I will refer to filenames that are used and referenced by xdm (and their contents) as specified in the installed configuration file. It should be noted, however, that almost all of these filenames can be changed by modifying /etc/X11/xdm/xdm-config or by specifying a different configuration file on the command line when starting xdm. (On the SGI, the configuration file is /var/X11/xdm/xdm-config and I have seen some installations use /usr/lib/X11/xdm/xdm-config.)
  2. That you have a basic understanding of the server/client concept used by X Windows. i.e. The X server handles the display and keyboard and runs as an application. User's applications are clients that request services from the X server to display things and provide input.
  3. That you have some familiarity with X resources and how they are used in the X environment.

User Session Initialization and Termination

When the X server is started automatically via xdm, the user is presented with a login screen. When a user successfully logs in via this screen, xdm starts the ``user session''. The session is a shell script; when it terminates, the user's session ends, and xdm resets the X server and returns to the login screen.

Prior to starting a session, xdm runs a small startup script with root privileges to perform any user initialization that may be required. Currently, this file, /etc/X11/xdm/GiveConsole, changes the ownership of /dev/console to that of the user so messages sent there can be displayed on a window in the user's environment.

In like manner, when the session ends, xdm runs another small exit script with root privileges to clean up anything that might have been set up by the startup script. Currently, this script, /etc/X11/xdm/TakeConsole, changes the ownership of /dev/console back to root.

Note that these two files are /var/X11/xdm/GiveConsole and /var/X11/xdm/TakeConsole on the SGI.

The step of interest to this article is the actual starting of the user session itself. Here, xdm starts a subprocess running the script /etc/X11/xdm/Xsession (/var/X11/xdm/Xsession on SGI) and waits for it to exit. When it does, xdm processes the exit script and returns to the login screen. This session script is run with the user's privileges.

A resource has been set for xdm which causes the parameter ``failsafe'' to be passed to the user session if the user uses the F1 key rather than the Enter key to complete his/her login. This can be very useful if the user makes a mistake in his or her customized session script which makes it impossible to log in. How this feature is taken advantage of is discussed below. It should be noted that I found this resource defined for both Linux and SGI and is used in an identical manner on both.

The Xsession File

The /etc/X11/xdm/Xsession file provided by Red Hat is quite simple, especially when compared to the /var/X11/xdm/Xsession file provided with the SGI. This file is a standard Bourne shell script which performs all the user startup and initialization that the system administrator wants done for all users.

As described above, if the user logs in by pressing F1 rather than the Enter key, the parameter ``failsafe'' is passed to the session file. The first thing the /etc/X11/xdm/Xsession file does is check whether this parameter exists and, if it does, exec an xterm. This bypasses all other initialization and provides the user with a terminal window to work with. Note that this is a good way of logging in if the user has done something to his/her personal session file that otherwise prevents logging in.

For those that don't understand the function of exec, this is a builtin command provided by all the standard shell programs. It causes the current running shell to be replaced by the exec'd program. Thus, the current running shell never returns from an exec (unless the program referenced fails to start for some reason) and the parent process is not aware of any change in the child process. The exec'd program retains the process ID of the shell and, when it terminates, it is as if the shell terminated and the user session ends.

Presuming ``failsafe'' is not a parameter passed to Xsession, the script continues by redirecting stderr to an error file. If it can write to it, this file will be .xsession-errors in the user's home directory. If the session can't write to the user's home directory or this file is write protected for some reason, the script will attempt to use /tmp/xses-$USER, where $USER is the user's login name.

This error file is useful for determining problems during the user's session. Any errors generated by applications that are started (including the window manager or applications started by the window manager) will be sent to this file. If the user has problems starting a user session after logging in, he/she can perform a ``failsafe'' login (as described above) and look at this file. The error messages may be of some help in determining the problem.

Finally, the standard Xsession file transfers control to one of a set of shell scripts, depending on their existence and if they are executable. It does this with the exec command which means that, whichever program is run, it replaces the Xsession process and becomes the new user session. The shell scripts are:

1. $HOME/.xsession
2. $HOME/.Xclients
3. /etc/X11/xinit/Xclients
There are some interesting differences between this and the script used on an SGI computer. SGI does not require the scripts to be executable, but will run /bin/sh against them if they aren't. Also, SGI only looks for $HOME/.xsession. If this file doesn't exist, the system Xsession file sets up the default user environment provided by SGI. Red Hat chose to break the default user session into two steps, since the standard installation provides /etc/X11/xinit/Xclients.

If none of the three files above exist or are executable, then the user's .Xresources file is loaded (if it exists) and the program xsm, the X session manager provided with Red Hat Linux, is exec'd.

User Customized Xsession File

As you may have guessed from the above explanation of the system's Xsession file, the user can create his/her own shell script which will be processed as the user session. This is a very powerful capability and provides each user the ability to do whatever processing they want each time they log in via the X login. In this script, the user can start various applications, set root window resources, set one-time environment variables, change default keyboard definitions and select a window manager.

The easiest way to set up your own personal Xsession file is to copy the system /etc/X11/xinit/Xclients file into your home directory as .xsession or .Xclients (what, in the future, I will refer to as the user's Xsession file) and then edit it as desired. I'm not going to step through the contents of the /etc/X11/xinit/Xclients file, you can do this on your own. I'm going to just explain some of the things one might want to do.

One important thing is to load desired resources into the root window. This is usually done with the following commands:

resources=$HOME/.Xresources
if [ -f "$resources" ]; then
    /usr/bin/X11/xrdb -load "$resources"
fi
Another thing that the user may wish to do is set the root window background to something different. This is done with the /usr/bin/X11/xsetroot command.
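For example, a solid-color background (the color here is an arbitrary choice) can be set with:

/usr/bin/X11/xsetroot -solid steelblue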


Note that this command can also be used to set the default cursor and cursor color for the root window, a two-tone plaid pattern for the background or an X bitmap to be used as a pattern.

Also, the command /usr/bin/X11/xset can be used to set the desired bell volume, key click, DPMS (energy saving) features and mouse parameters. This command can also set autorepeat and screensaver parameters.
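A few representative invocations (the values are arbitrary; see the xset man page for the full list):

/usr/bin/X11/xset b 40      # bell volume, percent of maximum
/usr/bin/X11/xset c 25      # key click volume
/usr/bin/X11/xset m 4 2     # mouse acceleration and threshold
/usr/bin/X11/xset s 600     # blank the screen after 10 minutes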

If you want to define special keys, you can run /usr/bin/X11/xmodmap from this script. For example, I like to be able to access the full ISO 8859-1 character set and insert internationalized characters in my documents. Also, Linux likes to define <Shft>F1 to be F11 and <Shft>F2 to be F12. Since my keyboard has an F11 and F12, I prefer these keys to be set to F13 and F14 respectively. To handle this, I have defined $HOME/.xmodmaprc to contain the following:

keycode 113 = Multi_key
keysym F1 = F1 F13
keysym F2 = F2 F14
keysym F3 = F3 F15
...
keysym F10 = F10 F22
keycode 95 = F11 F23
keycode 96 = F12 F24
Then, in my $HOME/.xsession file I have the following:

if [ -r $HOME/.xmodmaprc ]; then
    /usr/bin/X11/xmodmap $HOME/.xmodmaprc
fi
Finally, the most important step is running a window manager. Red Hat likes to run fvwm because it can be set up to look a lot like Windows 95®. Since I use SGI computers a lot, I prefer Motif (which costs money and doesn't normally come with Linux). There are also twm and xsm (strictly speaking a session manager) available. You might want to read the man pages for each to determine which window manager you prefer.

If it is desired, the user can exec the window manager as the last thing in the Xsession file. This will mean that the user has to end the window manager to end their session and return to the login screen. I prefer to run the window manager as a background process and exec an xterm as the last thing. This way, when I exit the xterm session, the user session will end and the login screen will be brought up. Note that the window manager and any window applications will be terminated because the X display will be closed. Any non-window applications started as a background process will not be terminated automatically and could continue after the user's session ends.

I start the Motif window manager as follows:

/usr/bin/X11/mwm &
I start the final xterm with:

exec nxterm -geometry 80x50+10+10 -ls
This creates a version of the xterm that supports color. It will be 80 characters wide and display 50 lines. The window will be positioned in the upper left corner of the screen (at pixel position 10x10). The last option forces nxterm to run the shell as a login shell.

From within the user's Xsession file, you can run a number of xterms, xclock or whatever, all of which will start automatically when you login. Be sure to specify a geometry (with the -geometry option) to get each application positioned on the screen where you want it.

Also, remember to run the applications in the background (by terminating the line with ``&''); otherwise, the user Xsession file will wait until that application terminates before continuing.
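Putting the pieces together, a complete user Xsession file along the lines of this article might look like the sketch below. The applications, color and geometries are arbitrary examples, not a recommended setup.

#!/bin/sh
# $HOME/.xsession -- a sketch assembled from the steps described above

# load personal X resources
resources=$HOME/.Xresources
if [ -f "$resources" ]; then
    /usr/bin/X11/xrdb -load "$resources"
fi

# keyboard customizations
if [ -r $HOME/.xmodmaprc ]; then
    /usr/bin/X11/xmodmap $HOME/.xmodmaprc
fi

# root window background
/usr/bin/X11/xsetroot -solid steelblue

# applications, backgrounded, with explicit geometries
xclock -geometry 100x100-10+10 &

# the window manager runs in the background...
/usr/bin/X11/mwm &

# ...and the session lives as long as this final login xterm
exec nxterm -geometry 80x50+10+10 -ls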

Important Tricks

Here I want to discuss some more interesting and important tricks that can be done from the user's Xsession file.

All window managers can execute programs from a pulldown menu. Sometimes these programs need special environment variables defined prior to their execution (for example, Netscape may need SOCKS_NS to be defined). Since the user's environment variables are not usually set until a shell is started, the window manager and any programs started from the window manager will not have the user's environment defined. Trying to set them in $HOME/.cshrc, $HOME/.profile or $HOME/.login won't do any good.

One trick is to define these environment variables in the user's Xsession file. It is necessary to set these environment variables before you start the window manager.
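For example (the server name here is only a placeholder):

# in $HOME/.xsession, before the window manager starts, so that
# programs launched from its menus inherit the setting
SOCKS_NS=socks.example.com ; export SOCKS_NS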

Another trick that I like to do is define XUSERFILESEARCHPATH in my user Xsession file. Most applications look for and use an application resource file, usually found in /usr/lib/X11/app-defaults. For example, Netscape uses the file /usr/lib/X11/app-defaults/Netscape for its application resource settings. If you want to change any of these settings for your personal environment, you can copy this file into your home directory and modify it. The next time you run Netscape, it will find the one in your home directory first and use it.

I have found my home directory cluttered with application resource files and wanted to put them into my own private app-defaults directory. I did this by creating the directory and copying all the resource files into it. Then, I set XUSERFILESEARCHPATH to the following in my user Xsession file:

/home/carlson/app-defaults/%N:/usr/lib/X11/%L/app-defaults/%N:/usr/lib/X11/app-defaults/%N
This makes the application search in /home/carlson/app-defaults for application resource files before going to the default locations under /usr/lib/X11.
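In shell form, with $HOME substituted for the literal directory, the same setting looks like this:

# search my private app-defaults directory before the system ones
XUSERFILESEARCHPATH=$HOME/app-defaults/%N:/usr/lib/X11/%L/app-defaults/%N:/usr/lib/X11/app-defaults/%N
export XUSERFILESEARCHPATH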

One last trick is for those of you that have multiple computers all running X servers. Here at home, I have an SGI O2 and my Linux machine. When I log in remotely to my O2, I want to be able to run X applications and have them use the display on my Linux box. In order to do this, I need to run xhost each time I log in to my Linux box to allow remote logins to access the X server.

As part of my user Xsession file, I have the following line:

/usr/bin/X11/xhost +moonlight

This sets the X server on my Linux box to allow access from moonlight, the name of my O2.

Conclusion

I hope you have found this information useful and interesting. I've tried to show you how to create your own user Xsession file to start applications, set a special environment and run your own window manager. I'm sure you can come up with many more ideas.

One useful tool that I wrote, based on a similar application provided with SGI, is userenv. This application creates a login shell as a child and has it print its environment. This environment is collected and then printed to stdout in a form that can be executed to create the same environment by a shell.

In my user Xsession file, I have the following line:

eval `userenv`
This computes my user environment and echoes it in a form the shell can execute to recreate the same environment. The eval command causes the output to be processed by the shell.
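userenv itself is a compiled program, but the idea can be sketched in a few lines of shell. This version uses naive quoting -- values containing single quotes would break it; userenv is more careful:

#!/bin/sh
# rough stand-in for userenv: run a login shell, dump its environment,
# and rewrite each line as an assignment followed by an export
bash -lc env | sed "s/^\([A-Za-z_][A-Za-z0-9_]*\)=\(.*\)$/\1='\2'; export \1/"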

You are welcome to a copy of the source for this program from my web site, http://members.home.net/cwcarlson/files/utilities.tar.gz.

Footnote

I am running Red Hat 5.1, but it appears that this setup hasn't changed significantly for a few years. Also, I find the configuration almost identical to that of other Unix platforms such as Silicon Graphics IRIX®. The only differences appear to be in which directories the files are maintained.


Copyright © 1999, Chris Carlson
Published in Issue 42 of Linux Gazette, June 1999

Linux Gazette... making Linux just a little more fun!

Published by Linux Journal


The Back Page


About This Month's Authors


Stephen Adler

When not building detectors in search of the quark-gluon plasma, Steve Adler spends his time either 4-wheeling around the lab grounds or writing articles about the people behind the open source movement.

Larry Ayers

Larry lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms. He is also struggling with configuring a Usenet news server for his local ISP.

Chris Carlson

Chris has been developing software for various systems and hardware since 1973. He worked for 8 years as a Developer's Support Engineer for Silicon Graphics, Inc. based in Southern California. He is now working for DataDirect Networks assisting in the development and test of SGI and Linux device drivers. He lives in Orange County, California.

Jack Coats

Jack is a consulting UNIX administrator for Collective Technologies. Personal activities include his family, church, leading a local UNIX users group in Houston (HOUNIX), and hacking computers.

Jim Dennis

Jim is the proprietor of Starshine Technical Services and is now working for LinuxCare. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/Peter Norton Group and McAfee Associates -- as well as positions (field service rep) with smaller VARs. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the 2nd Edition of a book on Unix systems administration. Jim is an avid science fiction fan -- and was married at the World Science Fiction Convention in Anaheim.

Michael J. Hammel

A Computer Science graduate of Texas Tech University, Michael J. Hammel, mjhammel@graphics-muse.org, is a software developer specializing in X/Motif, living in Dallas, Texas (but calls Boulder, CO home for some reason). His background includes everything from data communications to GUI development to interactive cable systems, all based on Unix. He has worked for companies such as Nortel, Dell Computer, and Xi Graphics. Michael writes the monthly Graphics Muse column in the Linux Gazette, maintains the Graphics Muse Web site and the Linux Graphics mini-Howto, helps administer the Internet Ray Tracing Competition (http://irtc.org) and recently completed work on his new book "The Artist's Guide to the Gimp", published by SSC, Inc. His outside interests include running, basketball, Thai food, gardening, and dogs.

Mark Nielsen

Mark founded The Computer Underground, Inc. in June of 1998. Since then, he has been working on Linux solutions for his customers ranging from custom computer hardware sales to programming and networking. Mark specializes in Perl, SQL, and HTML programming along with Beowulf clusters. Mark believes in the concept of contributing back to the Linux community which helped to start his company. Mark and his employees are always looking for exciting projects to do.


Not Linux


Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites.

With this issue, Linux Gazette has a new editor. My name is Mike Orr, and I have been SSC's Webmaster since April. Margie is still here to advise me on the Gazette, and without her and Darcy's help, this first issue would not have come out. Special thanks also goes to Jim Dennis and Heather Stern, who also helped me out immensely this month.

I have been a Linux enthusiast since November 1991 and got my own computer to install Linux on in 1993. I started with SLS and Slackware, but have been running Debian since 1995. At times I can be seen lurking on the debian-devel mailing list, but currently I hang out mostly in the comp.lang.python newsgroup.

I have a personal web page at http://mso.oz.net/.

Have fun!


Michael Orr
Editor, Linux Gazette, gazette@ssc.com




Linux Gazette Issue 42, June 1999, http://www.linuxgazette.com
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1999 Specialized Systems Consultants, Inc.