Linux Gazette

July 1999, Issue 43 Published by Linux Journal


Visit Our Sponsors:

Linux Journal
Communigate Pro
cyclades
LinuxMall
Red Hat
SuSE
InfoMagic

Table of Contents:

An interesting article from the upcoming Linux Journal:
Linux as an OPI Server for the Graphic Arts Industry,
by Jeff Wall.

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette, http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-99 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Write the Gazette at gazette@ssc.com

Contents:


Help Wanted -- Article Ideas

Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue in the Tips column.


 Date: Thu, 27 May 1999 12:33:42 -0230 (NDT)
From: Neil Zanella
Subject: call for article: wireless ethernet

It would be nice if someone wrote an article on wireless Ethernet on Linux (e.g. WaveLAN). I think it would make a good article.


 Date: Sat, 05 Jun 1999 16:06:20 +0000
From: Jeffrey Bell (jfbell@earthlink.net)
Subject: Article idea

I don't know if this has already been done, but how about an article about setting up a network printer between two GNU/Linux boxes? -- Jeffrey A. Bell


 Date: Fri, 4 Jun 1999 10:27:30 -0700 (PDT)
From: Kenneth Scharf (scharkalvin@yahoo.com)
Subject: How to format floppies with an LS120

I have installed an LS120 IDE drive in my Linux machine and it works fine; I compiled the kernel with ide-floppy support for it. There is only one thing missing: a utility that will format floppies in the LS120 drive. Once I have that, I can rip out the "real" floppy disk drive and grab its interrupt for a second LAN card. Any ideas here?
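
The ide-floppy driver presents the LS120 as an ordinary IDE block device, so for most purposes no special low-level format utility is needed: putting a filesystem on the medium is usually enough. A hedged sketch, assuming the drive shows up as /dev/hdb (check your boot messages for the real name):

```
# Assumed device name -- check dmesg for where the LS120 actually appears.
# Put a DOS (FAT) filesystem on a SuperDisk:
mkdosfs /dev/hdb          # from the dosfstools package

# ...or an ext2 filesystem instead:
mke2fs /dev/hdb

# Then mount it as usual:
mount /dev/hdb /mnt/ls120
```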


 Date: Wed, 9 Jun 1999 11:55:19 -0600
From: Terry Singleton (terry@dynavar.com)
Subject: gazette

Is the Gazette not searchable? I am trying to find out whether Linux supports multilink PPP, either with the built-in pppd or a custom one.

Terry Singleton, Network Engineer
Dynavar Networking

(In response to the many letters we have received on searching: there is a link to a new LG search engine on The Front Page. --Editor.)


 Date: Sun, 20 Jun 1999 15:57:36 GMT
From: d@fnmail.com (daniel)
Subject: suggestions and comments

Hi. I read your "Getting Started with Linux", and as it says, it gives a brief introduction to Linux. I've just started with Linux (Slackware), and coming from Windows there are some problems that are hard to figure out for yourself.

First of all, there's this thing with devices. It took me two hours to get my CD-ROM to work. It's simple, but if you don't know how to mount your CD, or you don't know what a mounted hard disk or CD is, then it's quite tricky. And very often you need your CD.

Then it took me about three more hours to connect to the Internet. This was also quite tricky compared to just using Dial-Up Networking in Windows. Since much information about Linux can be found on the Internet, it's not good if you can't connect to it.

I don't need this information anymore, but I think these two things are stuff you should put in your article, especially how to connect to an Internet provider so you can search for information on the web.

daniel, d@fnmail.com


 Date: Fri, 4 Jun 1999 14:53:27 -0500
From: Tom Wyrick (twyrick@paulo.com)
Subject: RedHat Linux 6.0 on a Tecra 8000

I recently attempted to install RedHat Linux 6.0 on a Toshiba Tecra 8000 notebook computer, and ran into a couple of problems. The first time I installed it, everything appeared to be working properly, except the keyboard keys were too "touchy". Many times, it would act like the keys were sticking and print a character twice when it was pressed once. (I've seen a couple other references to this issue on Usenet, but no solutions were posted.)

After I used Linux for several days on the notebook, I encountered a situation where it didn't unlock the hard drive for read/write usage after it finished performing a disk check with fsck, and subsequent reboots failed due to the file system being stuck in "read only" mode.

At this point, I decided to reformat and do another install from scratch. This time around, the only changes I made were: #1, not putting the system in runlevel 5, so it no longer started X immediately upon boot-up; and #2, enabling the apmd service for advanced power management. When this install completed, I had problems right away where Linux would boot, and then I wouldn't be able to type on the keyboard at all. (Every so often I was able to get control of the keyboard back, but only after multiple reboots by hitting the power button on the notebook.)

Has anyone else out there had any luck running Linux on a Tecra 8000? Thanks, Tom.



 Date: Sat, 5 Jun 1999 21:44:36 +0200
From: "box2.tin.it" (toblett@tin.it)
Subject: External ISDN adapters

Is it possible to get external ISDN adapters to work under Linux, even though they are not supported by the manufacturer, by guessing their details?

--
Peter


 Date: Sat, 5 Jun 1999 23:43:34 -0400
From: "Jay Bramble" (shipkiller@earthlink.net)
Subject: IPChaining and Firewall rules

I have a small home network with 5 systems. I use Linux as my proxy/firewall/dial-up-on-demand Internet server and file server. Before I upgraded to RH6, I could go to any site on the web. Now with RH6 I cannot get to some sites, e.g. www.hotmail.com, www.outpost.com and www.iomega.com, to name a few. I can get to them from my Linux box, but not from the network: it sends the request and I see some data return, but then everything stops. Here is my rc.firewall file:

#!/bin/sh
#
# rc.firewall - Initial SIMPLE IP Masquerade test for 2.2.3 kernels
# using IPCHAINS.
#
# In rc.d make a script called rc.firewall and make it mode 700
# (read/write/execute by owner, i.e. root):  chmod 700 rc.firewall

# Enable dynamic IP address
echo "1" > /proc/sys/net/ipv4/ip_dynaddr

/sbin/ipchains -M -S 7200 10 60

#
#Home Area Network
#192.168.28.0/24
#
ipchains -P forward DENY
ipchains -A forward -s 192.168.1.0/24 -j MASQ
ipchains -A forward -s 192.168.1.1/24 -j MASQ
ipchains -A forward -s 192.168.1.2/24 -j MASQ
ipchains -A forward -s 192.168.1.4/24 -j MASQ
ipchains -A forward -s 192.168.1.5/24 -j MASQ

It looks like those sites don't like how my proxy/firewall is set up. This only started when I upgraded to RH6 and the 2.2 series of kernels.

Any ideas?
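
Two things stand out in the script above, offered here as hedged guesses rather than a diagnosis. First, with a /24 mask, the per-host rules (192.168.1.1/24, 192.168.1.2/24, ...) all match the same network as the first rule, so a single rule covers the whole LAN. Second, "some data returns, then everything stops" behind 2.2 masquerading is a classic symptom of a path-MTU discovery blackhole. A simplified sketch, with addresses assumed to match the letter's network:

```
#!/bin/sh
# Hedged rewrite of the rc.firewall above -- values are assumptions.

# Allow dynamic (dial-up) IP addresses.
echo "1" > /proc/sys/net/ipv4/ip_dynaddr

# Masquerading timeouts: 2h TCP, 10s after TCP FIN, 60s UDP.
/sbin/ipchains -M -S 7200 10 60

/sbin/ipchains -P forward DENY
# One rule covers the whole home network; per-host /24 rules are redundant.
/sbin/ipchains -A forward -s 192.168.1.0/24 -j MASQ

# If only some sites hang, try lowering the MTU on the LAN clients
# (e.g. to 1400) to test for a path-MTU discovery blackhole.
```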


 Date: Mon, 07 Jun 1999 18:43:21 -0400
From: "Edward G. Prentice" (egp@egp.net)
Subject: NFS boot RH6.0 Alpha?

I have a few old Alpha (UDB Multia) systems. All but one have no disks. I'm hoping to figure out how to NFS-boot one of them to become a diskless firewall box. I noticed while configuring the kernel that there is an option for NFS mount of the root, so I suspect all the software pieces are there; it's just a small matter of configuring the server to listen for NFS requests.

The primary question I have is: does anyone know if it is possible (through the SRM or ARC consoles) to boot directly from the net, or do I have to boot a floppy first that then boots from the net? I think there's also a way to put milo into flash memory so I don't need the floppy for milo, but I don't know how to tell milo to see the ethernet device. If I can't do it directly, how do I do it indirectly? It sure would be nice to have one (or more) diskless Alphas.

On a somewhat related topic: is there any problem with adding a PCI NIC to the Multia to get a second ethernet device for my firewall effort? Thanks in advance. /egp


 Date: Thu, 10 Jun 1999 03:34:53 -0400
From: zak (zak@acadia.net)
Subject: KODAK Picture Disk & gimp

Hi, again. I've started saving my photos on a KODAK Picture Disk when I have them developed. When I was using Windows this was no problem, but now that I'm using Linux there is: KODAK does not support Linux with the software that comes with their disks. When I save my images to my hard disk using mcopy, the images are upside down, but not mirrored edge-wise. I've tried using Gimp to turn them right-side-up, and have managed to do just about everything else with those images in Gimp *but* that. (I fully admit I have no knowledge of image manipulation, and really only want to know enough to accomplish this one thing.) Can someone please tell me how to do this with Gimp? I'm using RH 5.1. Thanks in advance for any help you can give me.
Zak
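
For a whole disk of photos, a batch flip from the shell may be easier than doing each one in the Gimp. A sketch assuming the netpbm tools (or alternatively ImageMagick) are installed:

```
# Assumes the netpbm utilities are installed.
# Flip one TIFF top-to-bottom (no left/right mirroring):
tifftopnm photo.tif | pnmflip -topbottom | pnmtotiff > photo-fixed.tif

# Or, with ImageMagick installed, flip every TIFF in place:
mogrify -flip *.tif
```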


 Date: Thu, 10 Jun 1999 17:47:07 +0200
From: khreis (septcs@cybercable.tm.fr)
Subject: Linux as Xterminal with SGI

I'd like to ask about a strange thing I get when I run an SGI application displayed on a Linux X server (a PC).

The Linux X server is running RedHat 5.2, configured as follows:

    Driver      "accel"
    Device      "Trio32/Trio64"
    Monitor     "Standard VGA, 640x480 @ 60 Hz"
    Subsection "Display"
        Depth       32
        Modes       "640x480"
        ViewPort    0 0
        Virtual     800 600

If I open a TIFF image (RGB, 24-bit) on the SGI, it shows OK with all colors. It is also OK if I open the same image on the Linux server using xv or Gimp.

But if I display the image on the Linux monitor while it is running on the SGI, the colors do not match at all. The image resolution is perfect, but as for the colors, the white gets yellowish, as if the green channel were missing or dimmer; or you could say as if the blue and red channels were swapped.

From a csh login on the SGI, I used:

setenv DISPLAY linux1:0
fm /var/tmp

On Linux the fm window appears, and I double-click on a TIFF image, Spok.tif; a window opens with the image of my cat (originally white) with a yellow cast.

Now I am asking myself what is wrong and how to repair it. Can anybody help with this?

Thanks in advance


 Date: Fri, 11 Jun 1999 10:45:35 +0200
From: ANTONIO SORIA (mpenas@sego.es)
Subject: need help!!!

I have a problem, and I hope Linux Gazette can help me with it. I'm about to buy a Toshiba Satellite S4030CDS, which comes with the Trident Cyber 9525 video card. I've seen in the XFree86 3.3.3.1 docs that it supports the Trident Cyber 9520. Can I use this driver for the 9525? Please, if you know the answer or know somebody who can help me, let me know.

Thanks very much for reading my message, and for the great gazette!!


 Date: Fri, 11 Jun 1999 11:41:42 -0400
From: Kedric Bartsch (root@129.190.137.43)
Subject: vertical scroll bars and fvwm95

I have been using fvwm95 on RH5.2. All the xterms have scrollbars on the left side of their windows. I recently installed SuSE 6.1 and found that the xterm windows in fvwm95 have no vertical scrollbars at all. This makes it tough to look back through a screen's previous display. I tried the "Scroll" module in the fvwm95 configuration menu, but it scrolls the window itself rather than the display history. I know the previous lines are there, because past lines appear when I resize the window vertically.

My question is: how do I add vertical scrollbars to the xterm displays?

Thanks,
Kedric C. Bartsch
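
Scrollbars are a property of xterm itself rather than of the window manager, so one hedged fix is to ask for them explicitly, either per invocation or via X resources:

```
# One-off: start an xterm with a scrollbar and 500 lines of history:
xterm -sb -sl 500 &

# Permanently, add to ~/.Xresources:
#   XTerm*scrollBar: true
#   XTerm*saveLines: 500
# and reload the resource database:
xrdb -merge ~/.Xresources
```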


 Date: Sat, 12 Jun 1999 17:24:13 +0100
From: "Jolt-Freak" (stephen@ph01480.freeserve.co.uk)
Subject: X won't start

I recently installed Linux on an old 486DX2/66 that I bought for that specific purpose. I can boot up and log in as root, but when I issue the command startx to get X to start, this is what I get:

(This message came with MIME escapes embedded. I'm not sure which numbers were typed and which are MIME codes. --Ed.)

execve failed for /etc/X11/X (errno 2)

and then:

_X11TransSocketUNIXConnect: Can't Connect: errno = 2

then:

Giving up

Finally, I wondered if anyone could help a Linux newbie.

Jolt-Freak
(Only playing with the prompt)


 Date: Mon, 14 Jun 1999 13:04:39 +0200
From: Izbaner (lizbaner@alfa.c-map.pl)
Subject: set_multmode {Error 0x04}

I have a problem with a hard disk (Seagate 4.3GB). At startup, the kernel prints the message:

hda: set_multmode 0x51 {DriveStatus SeekComplete Error}
error 0x04 {DriveStatusError}

It appears with every kernel version and distribution I have (SuSE 5.3, 6.0, 6.1; RedHat 5.1, 6.0; kernels from 2.0.34 to 2.2.9). After that I can work in text mode, but under the X Window System some applications hang the system (no keys work, no actions, no way to exit or shut down the system...).

Help me, please!!!

__________________
Lucas z Izbanerowic


 Date: Mon, 14 Jun 1999 14:12:04 +0200
From: rakeshm@za.ibm.com
Subject: FAT32 and Linux

Hi everyone...

I just got a new PC and it came with Win98 (and FAT32) pre-installed. I also recently read an article saying that Linux does not get along with FAT32 => LILO can't be loaded on FAT32. Is this correct?

I plan on installing Red Hat Linux 6.0 on a separate slave drive and having a dual boot. I need to keep my Win98, as everyone in the family uses it and likes games. Has anyone had any problems with Win98 and Linux? Is there anything that I have to watch out for?

Thanks
Regards
Rakesh Mistry
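
The usual reading of that advice is narrower than it sounds: LILO cannot be installed *inside* a FAT32 filesystem, but it can sit in the master boot record (or in the Linux partition's boot sector) and chain-load Win98 from FAT32. A hedged /etc/lilo.conf sketch, with device names assumed (Win98 on /dev/hda1, Linux root on the slave at /dev/hdb1):

```
boot=/dev/hda          # LILO goes in the MBR, not inside the FAT32 filesystem
prompt
timeout=50

image=/boot/vmlinuz    # the Linux entry
    label=linux
    root=/dev/hdb1
    read-only

other=/dev/hda1        # chain-load Windows 98 from its FAT32 partition
    label=win98
```

After editing, run /sbin/lilo to install the new boot map.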


 Date: Tue, 15 Jun 1999 07:51:20 +0800
From: Haji Mokhtar Stork (znur@pl.jaring.my)
Subject: Installation of REDHAT with Win98

By now you are the fifth person I have tried to contact about the above-mentioned subject. The closest I got was a detailed description of how to partition and install DOS/Win95/OS-2, but when I tried contacting the person through his e-mail, apparently it no longer exists. So I am back to square one.

I have purchased RedHat 6.0 at a local Linux fair. Unfortunately I could not get them to partition and install it on my computer for free, as I was leaving for China the next day. Now I am trying to do so with great hardship. The material I have downloaded from Linux.com and metalab.unc.edu etc. does not serve my needs, so I need your detailed expertise on how to go about it.

I have formatted my 4.5GB hard disk.
It has a DOS 6.2 pre-partition of 26%.
The extended partition is 74%.
Logical drives E: and F: are each 37%.
Drive E: has Win98; F: is for Linux.
I have a second slave hard disk as D:.

Problems arise when I boot RedHat 6.0; it's an auto-generated process which guides me through. First question: SCSI [select: No/Yes/Back]. When selecting Yes, a selection is provided, but on choosing and going to auto-verification, all proposals are rejected, because my system has an Adaptec AVA-1502 SCSI host adaptor. I can't go any further!

First, have I gone about the partitioning the right way?

I can't read the README file on my RedHat CD-ROM because I cannot get it started.

Please kindly assist me. Thank you.

Haji Mokhtar Stork
Malaysia.


 Date: Tue, 15 Jun 1999 03:20:09 PDT
From: Marek fastcom (mfastcom@hotmail.com)
Subject: LINUX Ghostscripts *.DWG into *.EPS

Are there any Linux scripts (Ghostscript) available for converting *.DWG into *.EPS, or alternatively JPG, TIF, etc.?

Regards.
Marek


 Date: Wed, 16 Jun 1999 07:14:20 -0700 (PDT)
From: "Allen D. Tate" (computermantate@yahoo.com)
Subject: Dell Optiplex GX1 and the PS/2 Mouse

I have a Dell OptiPlex GX1, Pentium II with 64 MB RAM, and I'm trying to get the X Window System up and running, but when I run startx I get no response from the mouse. Has anyone run into a similar problem? If so, how did you fix it? I tried changing the mouse settings in the XF86Config file, but it didn't seem to help. Any comments or suggestions will be greatly appreciated.

Thanks,
Allen Tate
Evansville, Indiana


Hello to all in the Linux community. I would like to ask anyone who might have some idea how to mount a CD-ROM in Linux. When I go into the /mnt directory and type "mount cdrom", the response I get is something like "Linux does not recognise hdc as a block device". I am running Linux 6.0, and I have no idea what manufacturer or driver my CD-ROM drive is. Can anyone help?

thanks
Dave
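
mount generally needs a device, a filesystem type and a mount point; typing "mount cdrom" from inside /mnt supplies none of these. A hedged sketch, assuming the drive really is the secondary-master IDE device /dev/hdc:

```
# Create a mount point and mount the CD (device name assumed):
mkdir -p /mnt/cdrom
mount -t iso9660 -o ro /dev/hdc /mnt/cdrom

# If "not a block device" persists, check what the kernel found at boot,
# and that the device node exists as a block ("b") device:
dmesg | grep -i cdrom
ls -l /dev/hdc
```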


 Date: Sat, 19 Jun 1999 23:41:38 +0200
From: Thomas H (thomas@snt.nu)
Subject: Cable modem problems + graphical ftp client

Hi!

I am a new GNU/Linux user (coming from the OS/2 world). I find the Linux programs powerful, although it is often quite frustrating to learn to use them all.

It's a shame to say, but I write this from a Windoze machine. The reason is that my cable modem provider (Telia in Sweden) drops my connection after 175 seconds when I'm using dhclient under SuSE Linux 6.0. Has anyone got a solution?

Another question I would be very grateful to have answered regards FTP. Which good FTP programs for X do you recommend? I need one that supports PASV (passive) mode. I would also like a program that can sync files between my /home/thomas and my FTP server. I have heard about "rsync" but don't know anything about it.

Thanks for any help!
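
On the rsync question: it copies only what has changed, and can run over rsh or ssh, which makes it a good fit for mirroring a home directory to a server. A sketch with a hypothetical hostname:

```
# Mirror /home/thomas to a (hypothetical) server, copying only changed files:
rsync -av -e ssh /home/thomas/ thomas@ftp.example.com:backup/

# The trailing slash on the source means "the contents of" the directory.
# Add -n first for a dry run that only lists what would be transferred:
rsync -avn -e ssh /home/thomas/ thomas@ftp.example.com:backup/
```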


 Date: Sun, 20 Jun 1999 00:17:25 -0700
From: Ricky Deitemeyer (ricky@mediabase.premrad.com)
Subject: FAT Compatibility

At work I have a Linux (RedHat 6.0) workstation and at home I have a Windows NT machine. What are some good utilities I could use to write to a disk with a FAT filesystem under Linux? (I'm assuming this would be easier than trying to get NT to read ext2...)

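
Two common approaches, sketched with assumed device names: mount the FAT disk and use ordinary file commands, or use the mtools package, which reads and writes FAT media without mounting at all.

```
# Option 1: mount the disk's FAT filesystem (floppy assumed at /dev/fd0):
mount -t vfat /dev/fd0 /mnt/floppy
cp report.txt /mnt/floppy/
umount /mnt/floppy        # always umount before ejecting

# Option 2: mtools -- no mounting needed:
mcopy report.txt a:
mdir a:
```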

 Date: Mon, 21 Jun 1999 12:52:38 +0100
From: Network Management (Netman@fastnet-systems.com)
Subject: Netflex3 cards on RedHat 5.2

I am a new user to Linux and am running a Compaq Deskpro (for my sins). I have seen several mails about using the integrated Netflex3 cards, but have not seen any replies which mean anything to me. Can someone please send me instructions on how to find and install a driver for this card?

Thanks in advance to anyone who can help.
Andy


 Date: Mon, 21 Jun 1999 17:45:17 +0200
From: Horacio Antunez (hantunez@ippt.gov.pl)
Subject: Installation problems

While trying to install RedHat 6.0, after it checked the CD-ROM and floppy disk, I got the message:

scsi : 0 hosts
scsi : detected total
Partition check
VFS: Cannot open device 08:21
Kernel panic: VFS: Unable to mount root fs on 08:21

Configuration:
Dell Precision 610, P.III Xeon 500 MHz, 1GB RAM, NT 4.0
HD: 2x 9 GB SCSI
M.O. drive (also with SCSI controller)

The thing is that I had no problems on an identical machine, except that it had 256MB RAM, only one 9GB SCSI HD, and no M.O. drive.

Is there any upper limit for RAM?

Does Linux support:

TIA for any help
Horacio Antunez


 Date: Mon, 21 Jun 1999 13:53:51 -0500
From: Gregory Buck (GBuck@sbsway.com)
Subject: Tseng Labs

Does Tseng Labs have an e-mail address I can get in touch with them through? I have been to their web site and e-mailed them at: 'financial@tseng.com' and 'prodsupp@tseng.com' (both addresses generate an "unknown e-mail address" type of error).

Thank you for your help.
Gregory Buck


 Date: Mon, 21 Jun 1999 17:00:48 -0600
From: Bryan Anderson (byran@sykes.com)
Subject: Compiling problems

I am currently running RedHat 4.2 for SPARC on a SPARC IPC. I have been programming C and C++ for about four years now, but this problem has me stumped. I am trying to port a few apps over from i386 Linux to SPARC Linux. I downloaded the source files and untarred them just fine, but from then on the horrors begin. I have read the docs for each app and followed the instructions meticulously, and even tried some of my own homebrewed fixes to try to get the sources to build, but I still cannot get anywhere. The problems seem to lie not in the source itself, but in the libs installed on my machine. I get tons of warnings and one error that seems to stand out. The error is transcribed as something like this:

/usr/include/time.h:58 -- Parse error for fd_func

The only other problem I have found, on perusal of the *.h files, is numerous references to header files in directories that don't even exist! Is this standard, or just the SPARC port of Linux? For example, looking inside my /usr/include/sys/time.h I find references to linux/time.h, when there is no linux directory anywhere! Is this a standard? If it is, it seems that someone got a little sloppy in their porting. I have managed to fix this problem by removing references to files that don't exist, or redirecting them to files that do, but the error above has definitely stumped me. Does anyone have experience with this error and how it can be fixed? I don't have much experience in building my own header files, but when I did, I never saw that error. I would be much obliged to anyone who could provide some guidance on this issue.


 Date: Tue, 22 Jun 1999 01:17:20 +0200
From: "Bgsoft" (maximiliam@agarde.it)
Subject: Info about Red Hat!

Dear Editor of the Linux Gazette,

My name is Nino Brando. I saw at a newsstand a promotion (if you can call it that) for Red Hat Linux, containing 4 CDs.

To get to the point: is it really possible that it contains the Linux operating system? After all, the price is no more than 25,000 lire. And is Red Hat itself an operating system?

I apologize for my ignorance, but I have only recently decided to enter the Linux world; I would be glad of a reply, and also of some advice.

Thank you,
best regards,
Nino Brando

(Can somebody who speaks Italian please help this person? He sent me an English version but I couldn't understand it either. :) I think he saw a Red Hat disk set and is wondering if it's the real Linux. --Ed.)


Date: Wed, 23 Jun 1999 23:20:31 PDT
From: junainah sarian (ainina76@hotmail.com)
Subject: Installing Linux in Windows 98

I am having difficulties installing Linux. Can you help me solve this matter?


 Date: Thu, 24 Jun 1999 14:50:21 +0300
From: vintze (vintze@libertatea.ro)
Subject: help me please!

I'm from Romania. I installed Linux on my PowerPC, but I can't print. I have an 8600/200 Power Mac and an HP 4MV printer. Please HELP me. My e-mail address is vintze@libertatea.ro.


 Date: Fri, 25 Jun 1999 09:24:45 +1000
From: Zubin Henner (zubinh@one.net.au)
Subject: Help! Compatibility problems between linux and windows filesystems; StarOffice 5.1; Graphics settings

Hi Linuxsters, I am a current Win98 user trying to switch to Linux, but I have run into a few little problems!

Firstly, after successfully partitioning my HDD and installing Red Hat Linux 5.2, I wanted to reinstall Win98 as it was getting rather slow. So I mounted my (FAT32) C: drive in Linux (mount -t vfat /dev/hda1 /mnt/c:) and backed up my essential files onto the Linux partition.

After reinstalling Windows I copied the files back, only to discover that some seemed to have been corrupted. Some self-extracting programs like WinZip 7.0 and WinBoost 1.24 will not execute, giving errors like "Permission denied" or "This is not a Win32 application". This is not a drama, as I can simply download these again; however, many of my important compressed "zip" files containing Word documents are corrupt, and some files cannot be extracted. What is the sitch? Can these files be "fixed"? Should I completely avoid mixing the two filesystems in future? Any help would be greatly appreciated!
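
One way to tell whether the Linux round trip actually altered the files (rather than, say, Windows misreading restored attributes) is to checksum everything before backing up and verify after restoring. A minimal sketch using only standard tools; the directory and file names are illustrative:

```shell
# Sample tree standing in for the files to back up (illustrative):
mkdir -p important
echo "my thesis" > important/doc.txt

# Record a checksum for every file under ./important:
find important -type f -exec md5sum {} \; > manifest.md5

# Pack the tree; tar preserves file contents byte-for-byte:
tar czf backup.tar.gz important manifest.md5

# ...later, after restoring on the other side:
tar xzf backup.tar.gz
md5sum -c manifest.md5    # reports OK, or names any corrupted file
```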

Another problem I had was installing StarOffice 5.1 Personal Edition for Linux - it freezes on page 2 of setup where it asks for the registration key number. I downloaded the program from download.com and therefore don't have a number. Even if I did it would be no use as the program freezes anyway. Also I have since realised that it is probably because I only have 15MB RAM (1MB for built-in video card) and not the minimum requirement of 32MB (duh!). Which poses the question, "Why does MS Office happily run on my system with the flaw-ridden inefficient Windows OS, while StarOffice won't?" Please excuse me, I am only new to Linux! I plan to upgrade my RAM soon so hopefully this won't be a problem - just curious, you know?

I am also having problems getting the appropriate graphics settings happening on Linux. I have a Socket 7 M571 motherboard with a built-in 64-bit VGA chip. It is a 1 to 4MB chip and I have set it to 1MB (in BIOS) due to my pathetic RAM situation. In Windows I can easily go to 800x600 and 1024x768 with 16- and 24-bit colour. During Red Hat Linux 5.2 installation I selected "VGA16" video card and "Custom: Non-interlaced SVGA; 800x600@60Hz, 640x480@72Hz" monitor type. But it won't go above 640x480 resolution without going into virtual mode. The colour settings are also not right. What do I do?

And finally (!) every time I exit "X", I get an error message "FreeFont count is 2; should be 1; Fixing..." What does this mean and how do I fix it? (do I need to?)

I love the idea of leaving Windows behind but I can't while I have these problems! Could I ask that any responses be reasonably basic as I am totally new to Linux (or should I say "Linux is totally new to me!"). All help will be greatly appreciated! Thank you...

Zubin.


 Date: Sun, 27 Jun 1999 02:49:34 PDT
From: javafun@excite.com
Subject: linux in algeria

Thank you for your good work. I installed Red Hat Linux 5.1 without much trouble, except for the X Window System: my video card is a SiS 5597, and I wonder if it's supported under Linux.

friendly mimoune


 Date: Wed, 30 Jun 1999 12:48:10 +0100
From: "ian baker" (ian@pncl.co.uk)
Subject: Question!

Hi,

I am growing to like the idea and philosophy behind Linux. I am a home user, reasonably non-technical; is this a good move? And what is the difference between Red Hat and SuSE? I hope I'm sending this to the right place, and hope that you'll give me an answer.

Many thanks
Ian Baker (UK)


General Mail


 Date: Tue, 1 Jun 1999 18:15:14 -0400
From: "Pierre Abbat" (phma@oltronics.net)
Subject: Garbled HTML in Linux Gazette

I found a reliable way to crash KFM: look at the front page, follow the link to the current issue, then follow the front page link. A few people have reported KFM crashes. I ran Amaya to check the page for errors and found the following:

*** Errors in http://www.linuxgazette.com/ temp file: /home/phma/.amaya/1/www.linuxgazette.com
   line 53, char 51: Unknown attribute "NOSAVE"
   line 57, char 22: Unknown attribute "color-"#BB0000""
   line 107, char 73: Unknown tag </table<
   line 114, char 7: Tag <table> is not allowed here
[rest of error message not shown.]

Please fix them (I suspect the </table<, but it might be something else). phma

(These tags were related to the old search engine. I took them out and it works now with my KFM. Please let me know if you have any further problems. --Editor)


 Date: Wed, 16 Jun 1999 12:15:34 -0600
From: Coran Fisher (salyavin@verinet.com)
Subject: Setting up mail for a home network using exim

I was just reading the article "Setting up mail for a home network using exim" in issue 42, and I noticed one possible problem. With the suggested .forward file, any mail that contains "Emily" in the To: header will go to emi's mailbox. So if someone sent a message with several addresses in the To: field, one to jbloggs and one containing "Emily" (whether the Emily on the local network or some other Emily), it would be misfiled. If people sent mail with only one To: address and Cc'd or Bcc'd the rest, the problem wouldn't exist, but alas, not all e-mail is sent that way (at least not to me). You could probably cut down on the possibility of this a bit by having it look for Emily's first and last name.

Regards,
Coran
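
A hedged alternative to matching on a bare first name is an Exim filter file that tests for the full address; the address and mailbox path below are hypothetical:

```
# Exim filter
# (an Exim filter file must begin with the "# Exim filter" line above)
# Match the full local address rather than a bare first name:
if $header_to: contains "emily@ourhouse.example" or
   $header_cc: contains "emily@ourhouse.example"
then
    save /var/spool/mail/emi
endif
```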

(A revised version of the article appears in this issue. --Ed.)


 Date: Sat, 19 Jun 1999 14:30:47 +1000 (EST)
From: corprint login (corprint@mail.netspace.net.au)
Subject: LG Issue 42 - email article

I was most impressed with the following article in Linux Gazette 42, June 1999: "Setting up mail for a home network using exim"

I followed Jan's suggestions on a RedHat 5.1 Linux system and it eventually worked. You may care to note the following comments and pass them on to the author (as no email address was offered).

  1. The "exim" package as supplied with RedHat Linux does not include "eximconfig". I pinched a copy from a Debian distribution using ar and tar. The configure file is too hard to set up without it.
  2. The perl script "outfilt", as published, fails on my system: the '@' character in the script should be '\@'.
  3. As I do not use a smarthost for e-mail, I found that taking out the reference to it in the ROUTERS CONFIGURATION section of the exim configure file worked fine. (Note that Debian uses exim.conf, whereas the RedHat version of exim uses the file 'configure'.)

I have seen many requests for interfacing MS Internet Mail (???) to Linux mail facilities on the Linux User Group discussion lists and this article is most timely.

Thank you for a great magazine.
Frank Drew


Published in Linux Gazette Issue 43, July 1999



News Bytes

Contents:


News in General


 August 1999 Linux Journal

The August issue of Linux Journal will be hitting the newsstands in mid-July. This issue focuses on graphics, with an article about flight simulators, one about game ports at Loki, and one about Motif/Lesstif application development. Linux Journal now has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue64/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/ljsubsorder.html.

For Subscribers Only: Linux Journal archives are now available on-line at http://interactive.linuxjournal.com/


 EDUCAUSE news

News from EDUCAUSE: Edupage, 26 May 1999

A federal judge has indicated that he may rule in favor of Sun Microsystems in the company's copyright battle with Microsoft, allowing Sun to keep control of its Java programming language. The ongoing legal battle between Sun and Microsoft arose from concerns that Microsoft violated its licensing agreement with Sun for use of Java's source code by altering Java to run more effectively on the Windows operating system. U.S. District Judge Ronald Whyte wrote that he will most likely rule in favor of Sun, preventing Microsoft and other companies from changing Java to run certain software products better than others. Some analysts speculate that the court loss may not deter Microsoft, but will instead provide the company with incentive to stop using Java or even to develop an alternative. (Los Angeles Times 05/26/99)

IBM plans to adjust its AIX operating system to support Linux applications. This will allow IBM customers to store all of their Web applications on one server, the company says. IBM's Robert LeBlanc says, "As more customers move to the Web, they'll need to integrate applications." Enabling AIX to run Linux will help customers simplify and manage growing networks, says LeBlanc. IBM's modified version of AIX will be released by the end of this year, the company says. Analysts say IBM and Sun, which modified Solaris to support Linux, are ensuring that they will be able to take advantage of any Linux Web applications that may become popular in the future. In addition to the AIX changes, IBM plans to ship its DB2 software with Pacific HiTech's TurboLinux version of Linux. Pacific HiTech will package IBM's WebSphere software with TurboLinux by the end of 1999, says IBM's Dick Sullivan. (Bloomberg 05/25/99)


 Ecrix's VXA-1 Tape Drive to Ship with Penguin Computing's Linux Servers

Boulder, CO - June 8, 1999 - Ecrix Corporation today announced a key partnership with Penguin Computing Inc., the Linux reliability leader and the nation's largest and fastest-growing company focused exclusively on turnkey Linux solutions. According to the agreement, Penguin will offer Ecrix's highly reliable VXA-1 tape drive on all of its Linux servers, providing an exciting new data backup and restore option for its customers. Based on Ecrix's groundbreaking VXA technology, the VXA-1 tape drive delivers major advances over conventional tape technology, offering users unprecedented data restore capabilities. The VXA-1 tape drive is the price/performance leader in its market, with 66GB of capacity, 6MB/second data transfer speed, and an MSRP of $1,295. The partnership with Penguin Computing enables Ecrix to begin penetrating the fast-growing Linux market, and provides Penguin's customers with leading-edge tape drive products.


 SuSE Launches Business Partner Program

Nuremberg, Germany -- June 4, 1999 -- Today, SuSE GmbH, the parent company of SuSE Inc., began offering a Business Partner Program targeted specifically at Linux system integrators and consultants. This program is in addition to the recently-announced VAR and ISV Partner Programs launched at Spring Comdex '99 by SuSE Inc.

The Business Partner Program includes priority support, training, a moderated private on-line forum, and access to a knowledge base, among other features. Qualified Partners are those who seek to offer Linux services and want to benefit from association with the SuSE brand.

Those interested in applying for the SuSE Business Partner Program should contact SuSE by sending e-mail to business-partner@suse.de or calling +49 911 740 53 56 (Europe). Those interested in the VAR and ISV programs should send email to info@suse.com or call 1-510-835-7873 (U.S.).


 LinuxMall.com Partners with Workstation 2000

LinuxMall.com is pleased to announce our partnership with Workstation 2000. Workstation 2000 is a California-based provider of Linux workstations, notebooks and servers. Workstation 2000 combines high-quality hardware with the Linux OS to provide solutions for small businesses, corporations, educational institutions and personal use.

The Workstation 2000 Developer Station is the ideal workhorse for productivity under Linux. The Developer Station base system is equipped with a 400 MHz Pentium II processor, Intel SE440-BX motherboard, 64 MB of RAM and a 4 GB EIDE hard disk. All that horsepower is tucked into a quality mid-tower case with an 8 MB AGP video card, 10/100 Ethernet card and a 40x CD-ROM.

Learn more about the Workstation 2000 Developer Station: http://www.LinuxMall.com/products/01112.html


 Magic Software to Send Linux Developers to Meet the Penguins

Magic Software Enterprises announced that it will award a free 10-day cruise for two to Antarctica to the developer who builds the best e-commerce solution for the Linux platform using Magic, the company's highly productive development technology. The contest, titled The Magic for Linux Really Cool Contest, runs from May 20, 1999 through October 15, 1999, with all entry forms due no later than September 30, 1999. Complete details on the contest can be obtained through the company's website, http://www.magic-sw.com.

Magic is also on the board of directors of Linux International.

Magic also announced new technology bringing interactive processing to web applications. They will be demonstrating it at Linux World in August.


 UseNetServer.Com allowing Linux users free access to their NNTP servers

UseNetServer.Com has made the decision to convert our NNTP systems to Linux from MS Windows NT. In doing this we are opening up our servers to the Linux Community for propagation of important information. UseNetServer.Com is allowing all users interested in Linux newsgroup issues free access to our servers. The login/password is linux/free when connecting to Linux-news.usenetserver.com (207.153.76.21) or news2.usenetserver.com (207.153.76.19).
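
For readers who want to script access rather than configure a newsreader, here is a minimal sketch of the login exchange. The host name and the linux/free credentials come from the announcement above; the command sequence is the standard NNTP AUTHINFO extension (RFC 2980), which the server is assumed to accept.

```python
# Sketch of an NNTP AUTHINFO login, assuming the server supports the
# common AUTHINFO USER/PASS extension (RFC 2980). Host and credentials
# are taken from the UseNetServer.Com announcement.

NNTP_HOST = "Linux-news.usenetserver.com"  # from the announcement
NNTP_PORT = 119                            # standard NNTP port

def authinfo_commands(user, password):
    """The two commands a client sends after reading the server greeting."""
    return ["AUTHINFO USER %s\r\n" % user,
            "AUTHINFO PASS %s\r\n" % password]

def auth_accepted(reply):
    """Per RFC 2980: 281 = accepted, 381 = password requested, 48x = refused."""
    return reply.split()[0] == "281"
```

A real client would open a socket to NNTP_HOST on port 119, read the greeting line, send the two commands in order, and check auth_accepted() on the final reply before issuing GROUP or ARTICLE commands.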

We have found several major bugs in the kernel which Alan Cox and Stephen Tweedie have quickly resolved for us. Allowing commercial grade news to the Linux community will speed the dissemination of Linux patches and problems. If you have any new news groups you would like added to the hierarchy, drop me an email at usenet@exectech.net and we will include them. Joe Devita and his crew of propeller heads at the Linux General Store (http://www.linuxgeneralstore.com) in Atlanta helped install and solve all of our problems. Please note we are still working with Rob Fleischmann of BCandid (Highwind Software) to resolve some of their software issues with Linux.

UseNetServer is peering with all the major NNTP providers, including SuperNew, UUNet, SprintNet, and many other smaller providers. This provides a near real-time feed, so you don't have to wait on your slow local server for the data. I have added 70GB dedicated to just the Linux groups, which will spool up a ton of information for you. If you are overseas, you can still connect to our servers, as we are very well connected via NetRail, a tier-1 Internet provider. We have terrific speed to the UK and Asia, with multiple DS-3s connected to MAE-East and MAE-West. Check us out at http://www.usenetserver.com; we're a small company trying to help out a big community. We look forward to your comments on this free access.


 Linux certification exams

June 3, 1999, Raleigh, NC-- The Linux Professional Institute (LPI), an industry-wide group developing a professional certification program for Linux, is pleased to announce the creation of its corporate sponsorship program and a number of early sponsors. LPI also welcomes the addition of several new members to its Advisory Council, including IBM, ExecuTrain and CompUSA.

Two sponsorship plans, for corporations and individuals, have been introduced to allow anyone to assist the LPI in its goal of creating a high-quality, vendor-neutral program. LPI aims to deliver its first certification exams in July 1999.

"While we have heavily depended on the volunteer community in the spirit of other Linux projects, putting together a respected certification program requires a substantial investment," said Chuck Mead, LPI Director of Corporate Relations. "The financial support of the Linux community is crucial to our program's timeliness and credibility."

The LPI corporate sponsorship program allows for donations from $1,000 to more than $50,000. Individual sponsorships allow for donations from $100 to $1,000.

Current sponsors include Caldera, LinuxCare, SuSE, Digital Creations, Jon 'maddog' Hall, Richard Ames, and others.

A full description of sponsor benefits and other features of the program can be found at http://www.lpi.org/sponsorship.html on the LPI website.


 Sair-Wiley Linux and GNU certification program

New York, NY -- May 3, 1999 -- Global publisher John Wiley & Sons, Inc. today announced its partnership with Sair, Inc., to publish a series of test preparation guides for the Sair Linux and GNU Certification program.

Dr. P. Tobin Maginnis, noted Linux researcher and President and Founder of the Oxford, Mississippi-based Sair, Inc., has put together an advisory board of Linux industry leaders to develop an authoritative, non-proprietary certification program. The comprehensive, four-level training and testing program is aimed at information technology professionals in the private and public sectors. Students will acquire high-level skills and in-depth knowledge of Linux, the fastest growing open source operating system in the world.

http://www.linuxcertification.org.


 German thin client developer opens office in Hong Kong

IGEL GmbH from Germany - expert developer of Linux-based thin client technology (embedded systems) - expands in the Asia Pacific market with the establishment of "IGEL Asia Limited" in Hong Kong, opening in May 1999.

IGEL GmbH is expanding in the Asia Pacific region to accommodate growth and a need to be closer to the major OEM production centers in Asia. CEO Franz Hintermayr said that IGEL currently works with a large number of international clients, some of whom have strong ties as well as production in Asia. One of the aims of having an office in Hong Kong is to work more closely and effectively with these existing and potential partners in that region. Apart from the OEM business, IGEL seeks to further develop ties with telecoms, ISPs and distribution channels for its range of products and services. IGEL products will also be localized, for which a local development team will be set up, to specifically target markets such as China, Taiwan, Korea and Japan, which use double-byte character sets. The operation in Hong Kong will be managed by Mr. Jean Louis van der Velde, who has been active in the IT business in Asia for the past 12 years.

IGEL, established in 1989, is one of today's most innovative vendors of computer technology. These technologies include JNT for embedded systems, Etherminal Thin Client products, internet decoders, and IGEL clock software and hardware products for professional time synchronization. For more information please refer to the IGEL web site at www.igel.de or the mirror site www.igelasia.com


 Renting software

Ottawa, Canada -- June 9, 1999 -- Corel Corporation is pleased to announce that it is joining forces with Channelware, a business unit of Nortel Networks, to rent its award-winning software applications to customers. This initiative is the first program to distribute Channelware's NetActive software through retail stores.

Corel Print House Magic 4 NetActive Version will be available for customers to rent at select Blockbuster locations in Austin, Texas, and Anchorage, Alaska, starting in June. "Channelware invented secure Software Activation and we are proud to be teaming up with them in this breakthrough initiative," said Dr. Michael Cowpland, president and chief executive officer of Corel Corporation. "The dynamics of the computer world are changing rapidly, and we are keen to use this rental technology to allow our customers to access our products more easily - even without leaving their houses. This is an innovation in the software industry."

Channelware's NetActive technology will be embedded in the Corel software, making it possible for customers to rent the software for 72 hours. Once the customer brings the CD-ROM home and launches the application, the software will connect to the Channelware Activation Server. A one-time InstanceKey is then delivered over the Web in seconds. The key enables the customer to start using the software. The NetActive system keeps track of how long the customer uses the software, and offers the customer options for extended use.

Unlike standard video rentals, the customer will keep the software rental CD after the initial rental is over. After the rental period, the user has the option to: rent the software again; buy the right to use the rented software on a perpetual basis; or buy retail versions of Corel Print House Magic online and have the shrinkwrap versions of the product delivered to the door. The 72-hour rental has a suggested retail price (SRP) of US $5.99; re-renting costs US $3.99. Customers can buy Corel Print House Magic 4 NetActive Version on a perpetual basis for US $29.95*.
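
The rental logic described above can be sketched in a few lines. This is a hypothetical illustration only: the function names are invented for clarity, not Channelware's actual NetActive API; the 72-hour period and the prices come from the announcement.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the 72-hour rental window the NetActive system
# enforces. Names are illustrative; period and prices from the release.

RENTAL_PERIOD = timedelta(hours=72)

def rental_expired(activated_at, now):
    """True once more than 72 hours have passed since activation."""
    return now - activated_at > RENTAL_PERIOD

def options(activated_at, now):
    """Choices offered once the initial rental period is over."""
    if not rental_expired(activated_at, now):
        return ["continue using the software"]
    return ["re-rent for US $3.99",
            "buy perpetual use for US $29.95",
            "buy the retail version online"]
```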


 Free Linux training materials

GBdirect, Europe's leading provider of Linux training, today announced the release of free Linux training materials. Lecture notes for the first four modules of their "Linux Systems" training course are now available on the web (www.linuxtraining.co.uk). They cover:

  1. An Overview of a Linux System
  2. The Linux Filesystem
  3. The Linux Command Line Interface
  4. Basic Linux Tools

Each module consists of 20-25 pages of bullet-pointed lecture notes followed by graduated exercises. Experienced commercial instructors should be able to deliver the lecture notes in about 1 hour, leaving between 1 and 2 hours for practical work based on the exercises. In addition to good teaching skills, the users of these materials are expected to have sound knowledge of the Linux and UNIX operating systems.

In the interests of good citizenship, the modules are distributed in low-bandwidth, open-source formats.

GBdirect's primary motivation for releasing these notes as open source software is to ensure their widest possible dissemination. A secondary motivation is the company's desire to `give something back' to the community which provides them with virtually all of their office software.

The company hopes that others will contribute to the Linux Training Materials Project, by authoring their own lecture notes or by modifying those which GBdirect have contributed. To encourage such participation, GBdirect are releasing their materials under an open source licence derived from the Linux Documentation Project. This allows end-users to copy and distribute the lecture notes as they please, but protects the copyrights of their original authors.


 Pacific HiTech renamed to TurboLinux

SAN FRANCISCO, June 8 -- Pacific HiTech, the leader in high-performance Linux, today announced it has officially changed its name to TurboLinux, Inc. The change in the corporate identity marks the next milestone in the company's ramp-up of its North American operations in the wake of recent major alliance announcements with IBM and Computer Associates.

"We are determined to be a key catalyst in fueling the adoption of Linux worldwide and have demonstrated our ability to do this successfully in the Pacific Rim," said Cliff Miller, CEO of TurboLinux. "Building on that success by extending our presence into the North American market and other global markets represents the logical next steps for us. Our name change reflects our larger, global role beyond the Pacific Rim."

TurboLinux is quickly emerging as a dominant, global player in the Linux industry with offices in the U.S., Japan, China and Australia. Its product is currently the fastest growing operating system platform in Japan, with more than two million units of TurboLinux distributed in the past 18 months via retail and wholesale channels, hardware OEM programs, and book and magazine bundling. When TurboLinux 3.0 was introduced in Asia in December, it outsold Microsoft Corp.'s Windows NT at Japanese retail point-of-sale outlets, according to the high technology analyst firm Computer News. Further, the product was voted "Editor's Choice Award for 1998" by Byte Magazine in Japan.

The company's web site is www.turbolinux.com or, in Japanese, www.pht.co.jp.

TurboLinux also announced it will be the first Linux provider to sign an original equipment manufacturing (OEM) software agreement with Sendmail for Sendmail Pro. TurboLinux will integrate and bundle Sendmail Pro with an enterprise Linux mail server product to be introduced later this year. TurboLinux will provide Sendmail support to Linux customers in Japan.


 TurboLinux 3.6 distribution released

SAN FRANCISCO, June 29 /PRNewswire/ -- TurboLinux, the leader in high-performance Linux, today announced it is shipping its newest English language offering, TurboLinux Workstation 3.6.

Based on the 2.2.9 Linux kernel, TurboLinux Workstation 3.6 retails for $49.95 and is currently available from the company's web site at www.turbolinux.com. It will be available in North America through retail outlets and resellers later this summer.

"TurboLinux is best known as the Linux leader in the Pacific Rim through our Japanese and Chinese language products," said Cliff Miller, president and CEO of TurboLinux. "TurboLinux Workstation 3.6 is the first of a series of forthcoming Linux offerings that are designed to meet the needs of high performance Linux users in North America and illustrate our ongoing commitment to this market. On TurboLinux Workstation 3.6 we've also improved the installer that Forbes Online and other reviewers described as the best in the market."

TurboLinux Workstation 3.6 includes Netscape's latest version 4.6 browser and an easy-to-install RPM version of Corel's popular WordPerfect 8 for Linux. Other popular office productivity software for Linux and a comprehensive suite of developer tools are also included. For increased flexibility, TurboLinux Workstation 3.6 users can choose between the default TurboDesk desktop environment or the latest GNOME or KDE window managers.

TurboLinux Workstation 3.6 ships with an all-new, 300-page user's guide. In addition to the installation and source CDs, users also receive a Companion CD packed with popular Linux applications and utilities, including Tripwire, Staroffice 5.0 and X-win32. TurboLinux provides 60 days of free installation support.


 Corel and Rebel.com Sponsor Ottawa Linux Symposium

Ottawa, Canada- June 15, 1999- Corel Corporation and Rebel.com are hosting the 1st annual Ottawa Linux Symposium in Ottawa from July 22 to July 24, 1999.

The Ottawa Linux Symposium, run by Achilles Internet Ltd., will provide the opportunity for Linux developers and system administrators to expand their knowledge of the Linux operating system. The event is for anyone who is interested in the technology behind Linux and will feature a number of prominent speakers from the Linux community. The keynote speaker for the event is Alan Cox, one of the primary Linux developers. Mr. Cox is the maintainer for the AC series of leading-edge Linux patches.

Achilles has invited 350 Linux developers from all around the world. The list of speakers is impressive, including: Pat Beirne of Corel Corporation; Alex deVries of The Puffin Group Inc.; Zach A. Brown of Red Hat Software Ltd.; Stephane Eranian of Hewlett-Packard; Miguel de Icaza of GNOME Support; Richard Guy Briggs of Free S/WAN; and Mike Shaver of Mozilla.org.


 Benchmark specialist invites Red Hat and Microsoft to a rematch

Chicago, IL -(June 17 1999) - Neal Nelson, benchmark guru and founder of the world's largest independent client/server testing facility, has extended an invitation to Microsoft and Red Hat to participate in an open, public performance comparison between hot operating system rivals Windows NT and Linux.

Nelson issued the invitation as a result of a recently published study sponsored by Microsoft.* One of the conclusions of the study is that "Microsoft Windows NT Server 4.0 is 2.5 times faster than Linux as a File Server and 3.7 times faster as a Web Server."

Many have questioned the test results because different tuning levels were used for NT than those used with Linux. For example, NT was tested with NT tuning, benchmarking and technical support from Microsoft, as well as Internet Information Server 4.0 tuning information from the Standard Performance Evaluation Corp.

Linux however, received almost no additional tuning, support or involvement from Linux-based technical sources. The testing lab cited difficulty in obtaining tuning information from Linux knowledge bases, and a query with Red Hat ended up going through the wrong channels.

This has outraged the growing base of Linux supporters, who are clamoring for an unbiased test, one not sponsored by either Microsoft or a Linux vendor.

*Study conducted by Mindcraft, Inc., a software testing company based in Los Gatos, CA


 32BitsOnline.com Merges with Bleeding Edge Magazine

VANCOUVER, BC - June 17, 1999 - Medullas Publishing Company, parent company of 32BitsOnline Magazine (http://www.32bitsonline.com/) and Linux Applications (http://www.linuxapps.com/) today announced that it has acquired Bleeding Edge Magazine (http://www.gcs.bc.ca/bem/). Under the terms of the agreement, Bleeding Edge will join 32BitsOnline Magazine as its news information source for software development.

Like 32BitsOnline, Bleeding Edge will continue to focus centrally on developing open source applications for Linux. In addition to application development, Bleeding Edge will also focus on delivering articles on gaming development.


 New site offers free personal file storage for linux users

Mill Valley, CA, June 1999 - Linux users can now get 25 Megs of free disk space for their files, accessible from any computer on the Internet. FreeLinuxSpace.com, a new website service from FreeDiskSpace, offers subscribers a free virtual folders system, where they can upload, store and download all types of files into their personal secured area. "For Linux users this service alleviates the need for setting up floppy, zip, or hardware drives. It also gives business people and students the ability to store files securely and share files with colleagues worldwide," said Ari Freeman, CMO of FreeDiskSpace.

The FreeLinuxSpace folder service includes password protection, file descriptions, multiple file downloads, and free trial versions of software programs, and requires no FTP software. Folders can be upgraded to include high-security protection through "https" and shared folder access for an unlimited number of users.

To get 25 free Megs of online file storage and to learn more about the sites, go to FreeDiskSpace.com or FreeLinuxSpace.com.


 Linux Press new series: Linux Resource Series

PENNGROVE, CA (June 29, 1999) - Linux Press today introduced its newest line of books, the *Linux Resource Series*. Designed to provide comprehensive documentation for the latest Linux distributions and concepts, the Linux Resource Series will enable users of all levels to access Linux information.

First in the series is The Installation and Getting Started Guides for Red Hat Linux 6.0. Based on Red Hat's latest Linux 6.0 distribution, the two user manuals have been combined into one handy volume. Also included are two Red Hat Linux 6.0 CD-ROMs that contain the Linux operating system, the source code and a selection of over 600 packages such as C/C++ compilers, programming languages, Internet Server, utilities, editors and much more. Bonus files include a commercial-grade backup program and disk partitioning tools.

"The Installation and Getting Started Guides for Red Hat Linux 6.0" provide information on the following significant subjects: Installation, Package Selection with RPM, System Administration, System Configuration, Latest Stable 2.2.x Kernel, Networking, GNOME and KDE Window Managers, and Enhanced Font Support.


 Linux Links

Linux Knowledge Base: http://linuxkb.cheek.com/
(many forms of Linux documentation including the HOWTOs, Gazette articles, and third-party documentation)

The Linux Guide, a comprehensive compendium of Linux terms and definitions: http://www.linuxlinks.com/guide/

Red Hat's imminent IPO (stock offering): http://www.redhat.com/corp/press_ipo.html. Check the Red Hat home page for updates.

A Linux web camera: http://www.linuxcam.com/, http://www.musiqueplus.com/

Applix's new Linux division (newsalert.com article): http://www.newsalert.com/bin/story?StoryId=Cn2CHqbKbyteXotm

Free web-based e-mail, run on a Linux server: http://www.linuxmail.org/

Canadian Linux site, with links to lots of Linux information: http://www.linuxcanada.net

Server administration system for schools (kindergarten through high school): http://k12admin.cmsd.bc.ca/

Home Depot testing Linux for mushrooming PC volume (computerworld article): http://www.computerworld.com/home/print.nsf/CWFlash/990621B00E

Extensive IDG interview with Linus (SunWorld article): http://www.sunworld.com/swol-06-1999/swol-06-torvalds.html?0621a

TheLinuxStore Comes to www.onsale.com (Yahoo article): http://biz.yahoo.com/bw/990621/ca_onsale__1.html


Software Announcements


 C.O.L.A news


 Cygnus introduces Code Fusion IDE for Linux

PC EXPO, N.Y., June 21, 1999 - Cygnus Solutions, the leader in open-source software, today unveiled Cygnus Code Fusion(tm) for Linux, the industry's highest-performance, most complete Integrated Development Environment (IDE) for Linux developers. Code Fusion IDE makes it possible for developers familiar with programming on Windows platforms to quickly become productive in developing for Linux.

The Code Fusion IDE is optimized for the Intel Architecture to provide developers the tools required for building the fastest applications possible. This complete Linux IDE tightly integrates the C, C++, and Java programming languages with a robust graphical user interface (GUI) to enhance developer productivity and reduce software product time-to-market. Cygnus Code Fusion supports all major Linux distributions to offer Linux developers the most flexibility in development on and for Linux.

With Code Fusion, Cygnus combines the latest Cygnus-certified, open-source GNU tools release with an intuitive graphical IDE framework. The performance and functionality of the Code Fusion IDE -- featuring a C, C++, and Java tools project manager, editor, graphical browsers and the Cygnus Insight(tm) debugger interface -- is being demonstrated for the first time at PC Expo in the Linux Pavilion at the Cygnus Booth, 1525-27.

Cygnus Code Fusion for Linux will be shipped in July 1999 and is priced at $299. Code Fusion IDE features a simple installation of all necessary tools to develop software on Linux, including printed and on-line documentation and 30-day installation support upon registration. Code Fusion will be available for purchase online at www.cygnus.com/linux and through the Cygnus Partner Program.

Cygnus also announced plans to release the source code to Cygnus Insight, a graphical user interface (GUI) for the industry-standard GNU debugger, GDB. Known in programming circles as GDBtk, the Cygnus Insight GUI provides the technology for effective and efficient debug sessions by improving a software developer's ability to visualize, manage, and examine the status of a program as it is debugged. The source code for Cygnus Insight debugger will be available from Cygnus in July on http://sourceware.cygnus.com/gdb.


 Cygnus launches subscription service for open source software

SUNNYVALE, Calif., June 8, 1999 - Cygnus Solutions, the leader in open-source software, today announced the immediate availability of Sourceware(tm) CD, a subscription program for the open-source software projects hosted by Cygnus at http://sourceware.cygnus.com. The Sourceware CD provides convenient access to the latest open-source technologies, such as eCos (Embedded Cygnus Operating System), the EGCS compiler, GDB debugger, and Cygwin, which are currently available on the sourceware.cygnus.com Web site. Sourceware.cygnus.com is an open-source Web resource for software developers around the world that provides infrastructural software technologies intended to establish a common, open standard platform for software development. The Sourceware CD provides a complete snapshot of sources and selected binaries for:

· Cygwin - a UNIX API for Win32 systems
· eCos - the Embedded Cygnus Operating System
· EGCS compiler project - industry-leading embedded compiler technology
· GDB - industry-leading embedded and native debugger
· Open-source tools for the Java language - a developer toolkit and Mauve, a test suite for Java class libraries
· binutils, libstdc++, GNATS, automake, autoconf and other sponsored open-source projects

The Cygnus Sourceware CD is immediately available and is priced at $19.99 for a single snapshot, or $69.99 for an annual CD subscription (domestic customers only) that includes four quarterly shipments of the latest source code from all Sourceware projects. Sourceware CDs can be ordered immediately at www.cygnus.com/sourcewarecd or http://sourceware.cygnus.com/.


 iServer: a Web/Application Server Written Entirely in Java

Kearny, NJ - June 27, 1999 - Servertec today announced the availability of a new release of iServer, a small, fast, scalable and easy-to-administer platform-independent Web/Application Server written entirely in Java(tm).

iServer is the perfect Web Server for serving static Web pages and a powerful Application Server for generating dynamic, data-driven Web pages using Java Servlets, iScript, Common Gateway Interface (CGI) and Server Side Includes (SSI).

iServer is now more scalable than ever: it can use any JDBC-accessible database to store users, groups, access rights and access control lists, as well as to log client requests, server events and errors. The release also features support for JSDK 2.1, an invoker servlet, an expanded API, bug fixes, and updates to the administration tool and documentation.

iServer preview release is available for free at http://www.servertec.com (connect-time charges may apply).


 Compaq to Offer Linux Software Tools from Cygnus

VARVISION, San Diego, CA, May 26, 1999 - Cygnus Solutions, the leader in open-source software, today announced that Compaq Computer Corporation plans to make available the Cygnus Professional Linux Developers Kit online to members of the Compaq Solutions Alliance (CSA). Cygnus GNUPro Toolkit for Linux, Cygnus Source-Navigator for Linux, and future Linux software development products from Cygnus will be available to more than 3,500 independent software vendors, consultants and systems integrators who are members of the CSA program. Cygnus is also offering CSA members the industry's first Linux support package for GNUPro tools. Given the growing demand for Linux products, any software developer, software consultant, or system integrator can evaluate Cygnus' Linux products at the CSA Test Drive New Technologies web site (www.compaq.com/csa/). Members can then link to Cygnus to purchase the software at special pricing.


 Canto Media Asset Management (MAM) solution

May 26, 1999 (Berlin) - Canto Software, creator of Cumulus, the award-winning Media Asset Management (MAM) solution, announced support for the Linux operating system to be made available by the end of this year. The company will expand what is already the broadest platform support for a media asset management solution.

http://www.canto.com


 Linux STREAMS

Linux STREAMS (LiS) version 2.2 is now available.

Documentation: http://email.gcom.com/LiS/
Download: ftp://ftp.gcom.com/pub/linux/src/LiS-2.2/LiS-2.2.tgz

Support for 2.2.x kernels. Better loadable module support. Support for kerneld. Some bug fixes. Better documentation.


 Xref-Speller v.93.4

Xref-Speller v.93.4 for Linux is now available at addresses:

Primary site: http://www.xref.sk/
Mirror site: http://guma.ii.fmph.uniba.sk/xref/

Xref-Speller is a source browsing and advanced editing package intended for C and Java software developers.


 e-smith server and gateway

The e-smith server and gateway is a special distribution of Linux that installs on a PC in about 10 minutes, automatically converting it into an Internet thin communications server (SMTP, POP3, web, security, routing and other services). Installation is 100% automatic. A graphical user interface makes it very simple to configure the server and administer the network. We designed the product to be usable by enterprises without Linux expertise. It's an open source product, available for free download, or on CD-ROM with a manual for $40. We sell support contracts on it (ninety days for $195 / one year for $390).

http://www.e-smith.net


 eSoft inks licensing pact with HP for "redphish"

BROOMFIELD, Colo., June 21, 1999 - eSoft Inc. (NASDAQ Small Caps: ESFT), the company that develops Internet access solutions for small businesses, today announced it has entered into a software licensing agreement with Hewlett-Packard Company (HP), one of the world's leading computer corporations. This is the first agreement for redphish(tm), the Linux(tm) licensing program recently unveiled by eSoft and is expected to total up to $500,000 in development and licensing fees.

eSoft Inc. was founded in 1984 with headquarters in Broomfield, Colo. eSoft provides a family of Internet appliances and services that enable small to medium-sized businesses to harness the full power of the Internet. The TEAM Internet family of products is designed for businesses with up to 200 workstations and provides low-cost, LAN-to-Internet connectivity, and includes a range of features including e-mail, Web browsing, firewall security, a Web server, remote access and virtual private network (VPN) functionality. Contact eSoft at 295 Interlocken Blvd., #500, Broomfield, Colo., 80021, USA; 303-444-1600 phone; 303-444-1640 fax; www.esoft.com. TEAM Internet is a registered trademark of eSoft Inc.


 Other Products

ACIS First 3D Modeling Engine To Offer Linux Port: http://www.spatial.com

NetBeans announces integrated EJB, CORBA and XML support in Java 2 Technology Development Suite: www.netbeans.com

Kaffe will be the first Java Virtual Machine to run Microsoft Java extensions on non-MS operating systems (newsalert.com article): http://www.newsalert.com/bin/story?StoryId=Cn2r:qbWbtLLnmdG3

SGMLtools 1.0.10 available for download: http://www.inf.ufrgs.br/~casantos/SGMLtools/
[Please download in the late evening to conserve bandwidth.]

South African vendor of Linux distributions: http://www.os2.co.za/software


Published in Linux Gazette Issue 43, July 1999


Contents:

(!)Greetings From Jim Dennis

(?)Hey answer guy!!!
(?)One more thing. --or--
Null Modems: Connecting MS-DOS to Linux as a Serial Terminal
(?)RedHat 5.2 Kernel 2.0.36 --or--
Upgrade Breaks Several Programs, /proc Problems, BogoMIPS Discrepancies
A visit to "Library Hell"
(?)Floppy/mount Problems: Disk Spins, Lights are on, No one's Home? --or--
Floppy Failure: mdir Works; mount Fails
Found the Culprit!
(?)need your help --or--
Incompetence in Parenting
(?)bad clusters --or--
Try Linux ... and Grammar
(?)Duplicating / --or--
Out of Space....or Inodes? All Sparsity Lost?
(?)RAID 1 solutions --or--
Arco Duplidisk: Disk Mirroring
(?)Modem Help --or--
Searching for Days for a Linux Modem: The Daze Continues

(!) Greetings from Jim Dennis

So, my LG activity for this month is pretty sparse. Does that mean that I haven't been involved in any Linux activity? Does it mean that I'm not getting enough LG TAG e-mail?

HARDLY!

However, my work at Linuxcare has been taking a pretty big bite out of my time. In addition, the long drive up to the city (from my house in Campbell to Linuxcare's offices in San Francisco is about 50 miles) keeps me away from the keyboard for far too long. (Yes, I'm looking for cheap digs up in the city to keep me up there during the week.)

Mostly I've been working with our training department, presenting classes on Linux Systems Administration to our customers and our new employees, and helping develop and refine the courseware around which the classes are built.

I've also been watching the Linux news on the 'net with my usual zeal.

The leading story this month seems to be "Mindcraft III --- The Return of the Benchmarkers." The results of the benchmarking tests aren't surprising. NT with IIS still fared better on this particular platform under these test conditions than the Linux+Apache+Samba combination. The Linux 2.2.9 kernel and the Apache 1.3.6 release seem to have closed almost half of the gap.

As I suggested last month, the most interesting lessons from this story have little to do with the programming and the numeric results. There were technical issues in the 2.2.5 kernel that were addressed by 2.2.9. I guess Apache was updated to use the sendfile() system call. These are relatively minor tweaks.

Microsoft and Mindcraft collaborated for a significant amount of time to find a set of conditions under which the Linux/Apache/ Samba combination would perform at a disadvantage to NT.

When MS and Mindcraft originally published their results, the suite of tests and the processes employed were thoroughly and quickly discredited. I've never seen such in-depth analysis about the value (or lack thereof) of benchmarking in the computing industry press.

Nonetheless, the developers of the involved open source packages shrugged, analyzed the results, did some profiling of their own, looked over their respective bits of code, devoted hours to coding tweaks, a few days worth to tests, and spent some time exchanging and debating different approaches to improving the code.

The important lessons from this are:

  1. Just because a criticism is discredited, biased, and possibly dishonest doesn't mean that we can't find some clues that lead to real improvements. These developers could have stuck their heads in the sand and dismissed the whole topic as unimportant. They could have felt that the PR and advocacy responses would suffice.
     
    That "ostrich" approach is more commonly found in corporate and government circles than among freeware programmers. This is largely due to management. A development manager at a large corporation will tend to put as much energy into internal PR and "spin control" as to any real improvement in the product. Programmers often find themselves at odds with their own management.
     
  2. When we choose to attend to criticisms, it's vital not to adopt their demonstration model as our objective. We must stay true to our own requirements.
     
    It would be easy to focus on "beating the Mindcraft benchmark" --- to insert special case code that exists solely to produce superior results under the specific conditions present in that suite of tests.
     
    This is referred to as "fraud."
     
    It would be technically easy for the kernel developers to write the code for this. However, it would be difficult to actually perpetrate this or any other fraud in any open source project (since the code is there for all to see --- and there are a number of people who actually read that code).
     
    So, the Linux, Apache, and Samba developers showed admirable focus on real improvements and seemed to have eschewed any temptation to commit fraud.
     
    (We can't know whether the competition has rigged their platform, since it is closed source and hasn't been thoroughly audited by reputable independents).

This leads us to a broader lesson. We can't properly evaluate any statistics (benchmark results are statistics, after all) without considering the source. What were the objectives (the requirements) of the people involved? Are the objectives of the people who took the measurements compatible with those of their audience? In large part any statistic "means" what the presenter intends it to "mean" (i.e. the number can only be applied to the situation that was measured).

Benchmarks are employed primarily by two groups of people: software and hardware company marketeers, and computer periodical writers, editors and publishers. Occasionally sysadmins and IT people use statistics that are similar to benchmarks --- simulation results --- for their performance tuning and capacity planning work. Unfortunately these simulations are often confused with benchmarks.

Jim's first rule of requirements analysis is:

Identify the involved parties.

In this case we see two different producers of benchmarks and a common audience (the potential customers and the readership are mostly the same). We also see that the real customers of most periodicals are the advertisers --- who work for the same corporations as the marketeers. This leads to a preference for benchmarks that is bred of familiarity.

Most real people on the street don't "use" benchmarks. They may be affected by them (as the opinions they form and get from others are partially swayed by the overall reputations of the organizations that produce the benchmarks and those of the publications they read).

One of the best responses to the Mindcraft III results that I've read is by Christopher Lansdown. Basically it turns the question around.

Instead of interpreting the top of the graphs as "how fast does this go?" (a performance question) he looks at the bottom and the "baseline" system configurations (intended for comparison) and asks: "What is the most cost effective hardware and software combination which will provide the optimal capacity?"

This is an objective which matches that of most IT directors, sysadmins, webmasters and other people in the real world.

Let's consider the hypothetical question: Which is faster, an ostrich or a penguin? Which is faster UNDERWATER?

What Christopher points out is that a single processor PC with a couple hundred Mb of RAM and a single fast ethernet card is adequate for serving simple, static HTML pages to the web for any organization that has less than about 5 or 6 T1 (high speed) Internet lines. That is regardless of the demand/load (millions of hits per day) since the webserver will be idly waiting for the communications channels to clear whenever the demand exceeds the channel capacity.

The Mindcraft benchmarks clearly demonstrate this fact. You don't need NT with IIS and a 4 CPU SMP system with a Gigabyte of RAM and four 100Mbps ethernet cards to provide web services to the Internet. These results also suggest rather strongly that you don't need that platform for serving static HTML to your high speed Intranet.

Of course, the immediate retort is to question the applicability of these results to dynamic content. The Mindcraft benchmark design doesn't measure any form of dynamic content (but the c't magazine did - their article also has performance tuning hints for high-end hardware). Given the obvious objectives of the designers of this benchmark suite we can speculate that NT wouldn't fare as well in that scenario. Other empirical and anecdotal evidence supports that hypothesis; most users who have experience with Linux and NT webservers claim that the Linux systems "seem" more responsive and more robust; Microsoft uses about a half dozen separate NT webservers at their site (which still "feels" slow to many users).

This brings us back to our key lesson. Selection of hardware and software platforms should be based on requirements analysis. Benchmarks serve the requirements of the people who produce and disseminate them. Those requirements are unlikely to match those of the people who will be ultimately selecting software and hardware for real world deployment.

It is interesting to ask: "How does NT gain an advantage in this situation?" and "What could Linux do to perform better under those circumstances?"

From what I've read there are a few tricks that might help. Apparently one of the issues in this scenario is the fact that the system tested has four high speed ethernet cards.

Normally Linux (like other operating systems) is "interrupt-driven" --- activity on an interface generates an "interrupt" (a hardware event) which triggers some software activity (to schedule a handler). This is normally an efficient model. Most devices (network interfaces, hard disk controllers, serial ports, keyboards, etc.) only need to be "serviced" occasionally (at rates that are glacial by comparison to modern processors).

Apparently NT has some sort of option to disable interrupts on (at least some) interfaces.

The other common model for handling I/O is called "polling." In this case the CPU checks for new data as frequently as its processing load allows. Polling is incredibly inefficient under most circumstances.

However, under the conditions present in the Mindcraft survey it can be more efficient and offer less latency than interrupt driven techniques.

It would be sheer idiocy for Linux to adopt a straight polling strategy for its networking interfaces. However, it might be possible to have a hybrid. If the interrupt frequency on a given device exceeds one threshold the kernel might then switch to polling on that device. When the polling shows that the activity on that device has dropped back below another threshold it might be able to switch back to interrupt-driven mode.

I don't know if this is feasible. I don't even know if it's being considered by any Linux kernel developers. It might involve some significant retooling of each of the ethernet drivers. But, it is an interesting question. Other interesting questions: Will this be of benefit to any significant number of real world applications? Do those benefits outweigh the costs of implementation (larger more complex kernels, more opportunities for bugs, etc)?

Another obvious criticism of the whole Mindcraft scenario is the use of Apache. The Apache team's priorities relate to correctness (conformance to published standards), portability (the Apache web server and related tools run on almost all forms of UNIX, not just Linux; they even run on NT and its ilk), and features (support for the many modules and forms of dynamic content, etc). Note that performance isn't in the top three on this list.

Apache isn't the only web server available for Linux. It also isn't the "vendor preferred" web server (whatever that would mean!) So the primary justification for using it in these benchmarks is that it is the dominant web server in the Linux market. In fact Apache is the dominant web server on the Internet as a whole. Over half of all publicly accessible web servers run Apache or some derivative. (We might be tempted to draw a conclusion from this. It might be that some features are more important to more web masters than sheer performance speeds and latencies. Of course that might be an erroneous conclusion --- the dominance of Apache could be due to other factors. The dominance of MS Windows is primarily an artifact of the PC purchasing process --- MS Windows comes pre-installed, as did MS-DOS before it).

So, what if we switch out Apache for some other web server?

Zeus (http://www.zeustech.net/products/zeus3/), a commercial offering for Linux and other forms of UNIX, is probably the fastest in existence.

thttpd (http://www.acme.com/software/thttpd/) is probably the fastest in the "free" world. It's about as fast as the experimental kHTTPd (an implementation of a web server that runs directly in the kernel -- like the kNFSd that's available for Linux 2.2.x).

Under many conditions thttpd (and probably kHTTPd) are a few times faster than Apache. So they might beat NT + IIS by about 100 to 200 per cent. Of course, performance analysis is not that simple. If the kernel really is tied up in interrupt processing for a major portion of its time in the Mindcraft scenario --- then the fast lightweight web server might offer only marginal improvement FOR THAT TEST.

For us back in the real world the implication is clear, however. If all you want to do is serve static pages with as little load and delay as possible --- consider using a lightweight httpd.

Also back in the real world we get back to other questions. How much does the hardware for a Mindcraft configuration cost? How much would it cost for a normal corporation to purchase/license the NT+IIS configuration that would be required for that configuration? (If I recall correctly, Microsoft still charges user licensing fees based on the desired capacity of concurrent IIS processes/threads/connections. I don't know the details, but I get the impression that you'd have to add a few grand to the $900 copy of NT server to legally support a "Mindcraft" configuration).

It's likely that a different test --- one whose objectives were stated to more closely simulate a "real world" market --- might give much different results.

Consider this:

Objective: Build/configure a web service out of standard commercially/freely available hardware and software components such that the installation/deployment would cost a typical customer less than $3000 in outlay and no more than $1000 per year in recurring expenses (not counting bandwidth and ISP charges).
Participants will be free to bring any software and hardware that conforms to these requirements and to perform any tuning or optimizations they wish before and between scheduled executions of the test suite.
Results: The competing configurations will be tested with a mixture of various sorts of common requests. The required responses will include static and dynamic pages which will be checked for correctness against a published baseline. Configurations generating more than X errors will be disqualified. Response times will be measured and graphed over a range of simulated loads. Any service failures will be noted on the graph where they occur. The graphs for each configuration will be computed based on the averages over Y runs through the test suite.
The graphs will be published as the final results.

The whole test could be redone for $5000 and $10000 price points to give an overview of the scalability of each configuration.

Note that this proposed benchmark idea (it's not a complete specification) doesn't generate a simple number. The graphs of the entire performance are the result. This allows the potential customer to gauge the configurations against their anticipated requirements.

How would a team of Linux/Apache and Samba enthusiasts approach this sort of contest? I'll save that question for next month.

Meanwhile, if you're enough of a glutton for my writing (an odd form of PUNishment I'll admit) and my paltry selection of answers, rants and ramblings for this month isn't enough then take a look at a couple of my "Open Letters" (http://www.starshine.org/jimd/openletters). By next month I hope that my book (Linux Systems Administration) will be off to the printers and my work at Linuxcare will have reached a level where I can do MORE ANSWER GUY QUESTIONS!

[ But not quite as many as January, ok? -- Heather ]


(?) Hey answer guy!!!

From Nate Brazell on Mon, 31 May 1999

Wow!

I really didn't expect a response. And certainly not one as detailed as this!!!

Thanks Dennis.

I do have questions regarding this part:
>> mount $NEWFS /mnt/tmp (Mounting my new FS)
>> cp -pax $OLDDIR /mnt/tmp (Copying all data to /mnt/tmp)
>> umount /mnt/tmp (unmounting /mnt/tmp? Where does my data go?)

(!) Your data stays both in $OLDDIR and on the filesystem that you had mounted on /mnt/tmp (and which you'll be mounting over a new, empty mount point which has the same name as the directory that contains the original copy of your data).
See the next couple of commands:
>> mv $OLDDIR $OLDDIR.old (Moving directories)
>> mkdir $OLDDIR (recreating directory)
>> chmod $OLD_DIR_PERMS $OLDDIR (Setting perms)
>> mount $NEWFS $OLDDIR (Mounting new FS)
Using these commands you now have two copies of your data. One copy is named .../$OLDDIR.old and the other is a new filesystem mounted on .../$OLDDIR
After you've verified, to your satisfaction, that everything is alright after your change, you can remove the old copy with 'rm -fr $OLDDIR.old'
In general there are two ways to transparently migrate data from one filesystem to another under UNIX.
The method I've described moves the data onto a new filesystem that's mounted directly under the old location. Another method is to create a new filesystem on an arbitrary mount point (conventionally /u1, /u2, etc.) and replace the original directory with a symlink pointing to a directory under that new fs.
In either case it's possible that some differences will not be entirely transparent. In particular some files might have had hard links that crossed the boundary of the directory tree. Those links would now be broken (resulting in two separate files where formerly you had one file with two or more links). This is rarely a problem. However, you could test for this case with a bit of scripting and editing.
Mainly you generate a report using 'find'. Use something like:
find $FSROOT -xdev -not -type d -links +1 \
	-printf "%i %p\n" | sort -n
... where $FSROOT is the root of whichever filesystem houses the directory tree that you're trying to migrate.
This prints a list of files sorted by their inodes. Any set of hard links to a given file have their device number and inode pair in common. You can then manually search the resulting list (usually fairly short). For any given file you don't have to worry at all if all of its links, or none of its links, are under the subdirectory tree that you are moving. Probably there will be none that have this problem. For those that do, simply replace one set of the hard links with symlinks. In other words, all of the hard links that are inside the target directory tree should be converted to symlinks, or vice versa.
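As a sketch of that manual fix (all the paths below are illustrative, not from the original question), suppose one hard link falls inside the tree being migrated and one falls outside it:

```shell
# Illustrative layout: 'keep' stays on the old filesystem, 'move' is
# the subtree being migrated.
mkdir -p /tmp/demo/keep /tmp/demo/move
echo data > /tmp/demo/keep/file
ln /tmp/demo/keep/file /tmp/demo/move/file   # hard link: one inode, two names

# The report described above flags it (link count > 1, same inode):
find /tmp/demo -not -type d -links +1 -printf "%i %p\n" | sort -n

# Replace the name inside the tree being moved with a symlink, so the
# migration can no longer split one file into two:
rm /tmp/demo/move/file
ln -s /tmp/demo/keep/file /tmp/demo/move/file
```

After the replacement the file has a single hard link again, and copying the migrated tree carries over a pointer rather than a duplicate of the data.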
It's very unlikely that this will cause any problem. If you ever see a case where a UNIX or Linux program suffers from "transplant shock" I'd like to hear about it.

(?) Where is the old data that needs to go back into the newly created $OLDDIR?

(!) You copied it with the 'cp -pax'

(?) Null Modems: Connecting MS-DOS to Linux as a Serial Terminal

From phax on Mon, 31 May 1999

Would the terminal program start the null modem connection or would you have to have it be connected before hand through DOS (I don't know a whole lot about DOS)? I know Linux will be looking for a terminal on ttyS0 but will a terminal emulator show up as a terminal connected on that port? Sorry to be such a nag,

Richard Mills

(!) Linux will look for terminal connections on a line (/dev/ttyS0, /dev/ttyS1, whatever) if it has a "getty" process running on that port.
You set up a getty process by modifying your /etc/inittab and adding a line like:
d1:23:respawn:/sbin/agetty -L 38400,19200,9600,2400,1200 ttyS1 vt100
... where you can use agetty, uugetty, mgetty, or getty_ps (but not mingetty). The syntax and additional configuration of each of these other getty packages differs slightly. Search through old issues of the Answer Guy for more detailed explanations and examples.
[ Issues 16, 17, 18, 21, 22, and 23 mention getty, and more recently, issues 34 and 37 describe using X over serial lines. -- Heather ]
As far as the DOS side of this, you generally just have to start up your terminal emulation package and configure it for "direct" or "null-modem" use.

(?) Upgrade Breaks Several Programs, /proc Problems, BogoMIPS Discrepancies

A visit to "Library Hell"

This refers to Upgrade breaks several programs... in Issue 42.


From Peter Caffall on Mon, 31 May 1999

Jim:

Thanks you for your detailed reply. Since I wrote, I resolved (although not yet solved) the problem. I had a free partition on my disk, which I made bootable, and installed (from scratch) the new RedHat 5.2. This came up with no real problems. Then I began moving some of the stuff from the old partition to the new. Everything works. When things settle down, and I've got everything from the old slice that I need, I will just wipe it out, and free it up.

The reference to libc.so.5.4.33 was due to a reference on another page to problems with Netscape.

Thanks again
Pete Caffall

(!) Glad you got it working. If you have the disk space (on a second drive or extra partition) you could do a fresh installation of Red Hat 6.0 and then selectively migrate your configuration and data files from your old filesystems. It's sort of a slow laborious way to do upgrades, but it's one that works for me.

(?) Floppy Failure: mdir Works; mount Fails

From Tim Baverstock on Fri, 25 Jun 1999

Hi.

I came across this page where someone'd asked you a question, apparently identical to something a (non-techie) friend of mine is now experiencing, except that his Linux is a vanilla RedHat 5.1 install (although with Star Office, and RedHat 5.2 Ghostscript and ppp).

He has a PCI PnP soundcard in his machine, which he's not managed to get working with W95 or with Linux, but the rest of the machine worked fine for both OSs, including the floppy.

All of a sudden, about a month ago, the floppy stopped mounting on Linux (works fine on W95).

(!) Does writing to the floppy work under MS Windows?

(?) I can `less -f /dev/fd0', to see the data on the floppy, and mdir/mcopy work fine.

(!) Does 'mcopy' work in both directions (copying to the floppy as well as from it)?

(?) The machine mounts his W95 C: drive as /mnt/dosC, and that works perfectly as well.

(!) So we know that this kernel is compiled with FAT fs support (linked in directly or the loadable module support is working).

(?) When I try `mount -t msdos /dev/fd0 /tmp/floppy', the mount command goes into `D' wait in the `ps axf' output, as does the update demon. The floppy lights, spins, then stops, but no failure messages appear, and I can't kill the mount. Subsequent attempts to mount also block, and if I recall correctly, mcopy says it can't write to the device.

Nothing appears in /var/log/messages.

During shutdown, the umount -a line in /etc/rc.d/init.d/halt hangs too.

If you're interested in whether fiddling with the soundcard fixes the problem, I'll be happy to let you know, but since mcopy and mdir work, this seems unlikely.

Nothing's been added or removed within the machine's case, so I think the only thing that could have changed, which persists over powerdowns, is the CMOS, and hence (presumably) some aspect of PnP that W95 was fiddling around with.

I've only ever had isapnp work under RedHat 6.0, when Redhat did it all for me! :) For my earlier kernels, I used the cmgr patch.

Cheers,
Tim Baverstock.

(!) What happens if you try mounting it in read-only mode?
It sure sounds like a hardware failure. I'd buy an extra floppy drive (about $20 US in most computer parts stores). I've asked questions to see if the problem is limited to the write functionality (since a careful reading of your messages seems to correlate to read-only vs. read/write access). When you mount a filesystem in rw mode under Linux --- I think the atime on the root of that filesystem will be updated (involving a write to the media). If it works when you try the 'mount -o ro' variation on the command --- that suggests that it is related to the write functions.

(?) Found The Culprit!

From Tim Baverstock on Sun, 27 Jun 1999

Hi Jim.

Ach! Rats!

I forgot to email you the solution I discovered!

The drive wrote perfectly well under Windows, and worked without difficulty in both directions with mcopy. I should have made this clearer in my first email; my apologies for this.

The functionality of the drive, and the evident integrity of the msdos filing system module eliminated those subsystems from the problem, which was why I was so perplexed, and why I wrote to you. :)

The next day, I used strace on `mount' to try and find out where it hung. It hung on the actual mount() system call itself.

I noticed that the automounter was in `D' discwait on the process list during its own mount attempt, so I disabled it in the boot sequence while trying to find out what was going on (I wanted to strace the very first attempt to mount the floppydrive) but that cured the problem!

Further investigation (with strace) revealed that I'd earlier changed /etc/resolv.conf to include a domain search path while trying to set my friend up with an ISP account, and the DNS hang was causing automount to hang while trying to finagle those strange pseudo-NFS mounts of the local host it does (by the host's internet name, not as `localhost') for the floppy drive!

I fixed resolv.conf, and the problem went away, although I've left AMD disabled, because autofs does the same job, and was installed alongside it on RedHat; and because one day I'll get my friend's ISP working on Linux as well as Windows. I don't want this to repeat. :)

Many thanks for your response, and my apologies once more for not writing sooner,

Tim Baverstock.


(?) Incompetence in Parenting

From Bernard Hahn on Fri, 25 Jun 1999

Hello my name is Bernie.

I have a 16 year old son that is heading for big trouble on the net while I am at work. I can not be in both places at the same time to keep an eye on him. Do you know if there are any programs that will run in Windows 98 that can copy the keyboard buffer to a file that would let me read it in a text format? I would like the program to run at boot up and be able to copy the buffer all day long. I believe reading his keyboard buffer may be of some help to me.

Please help, thank you for any help you may have to offer,
Bernie in Los Angeles

(!) First, I'm NOT the "I want to spy on my children's use of MS Windows" Guy! I'm the Linux Gazette Answer Guy.
Of course, if you used Linux it would be pretty easy to secure the system so that the Internet and the modem were inaccessible during specific times of day or until specific passwords had been typed. It would even be possible to configure filtering and access control (to monitor and limit web access). You'd probably need to invest in some cabinetry (physically securing a PC generally involves carpentry).
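As a sketch of the time-of-day approach (the device name /dev/modem and the use of a ppp0 interface are assumptions for illustration, not facts from the question), a root crontab could revoke and restore access on a schedule:

```shell
# Hypothetical root crontab entries ('crontab -e' as root).
# At 10pm, drop any live connection and make the modem device
# inaccessible to ordinary users; at 6am, restore it.
#
#   0 22 * * *   /usr/sbin/ifdown ppp0; chmod 600 /dev/modem
#   0 6  * * *   chmod 666 /dev/modem
```

That only works, of course, if the users in question don't have the root password; permissions on device nodes are exactly the kind of access control MS Windows '98 can't offer.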
Your question has nothing to do with Linux.
More importantly your problem is much larger than any software can solve. No software in the world could possibly make your son more trustworthy. You cannot keep your kid out of "big trouble on the net" by spying on his keystrokes.
If the muddled thinking that leads you to the fundamentally flawed (and morally corrupt) notion that you should covertly spy on your own teenage child using such software is typical of your approach to parenting, then it's probably too late for Bernie Jr.
I don't know what kind of "big trouble" you're trying to protect the kid from. If it's porn, keep in mind that porn sites are generally accessed through a GUI browser --- which are conveniently configured for one-handed operation (point and shoot, so to speak). If you're afraid that the kid is "cracking" (sometimes erroneously referred to as "hacking") and/or phreaking, then any attempts you make to lock him out of your Windows '98 PC will just be too pathetic. If you do successfully find out that he's been visiting 'badboys.net' what do you plan to do? Confront him with your printouts? Ground him until he's 35?
So what do you think the kid will do when he knows he's been caught out? Will it be a contest: your computer skills and time against his? Can he detect and bypass your future methods better than you can implement them? Will he go use some buddy's computer? Will he skip the virtual trouble of the 'net and go out to get into trouble of a more dangerous variety? What will it do for the kid's opinion of you that you don't have the balls to talk to him directly and that you have to resort to sneaking around on him?
The whole thing is disgusting.
If you can't trust the kid with the computer by himself --- lock the computer in some room when you're not home, or get rid of it.

(?) Try Linux ... and Grammar

From firefly on Mon, 14 Jun 1999

hi dont know if you can help me so ill run my problem by ya!

i just bought a8.4gig samsung drv i put it in as a slave and used partition magic to partition it 4k clusters......2gig/5gigand 1.4gig rebooted and installed win 95b

when itryed to use files form the hdd thsy had errors so i thought though ill format the drv and start again..... i removed all partitions and rebooted with a boot disk95ver and it started to format when it got too 27% it started saying. TRYING TO RECOVER FILE ALLOCATION UNITS now scandisk says ive got bad clusters...could you tell me whats happening here?

thanks g.lishman

(!) Sounds like a bad drive, bad cable, or bad controller (not to mention a bad keyboard actuator).
It could be some incompatibility between the slave and master (some IDE drives cannot co-exist in some combinations on an IDE channel). Try running it on the other IDE channel that you'll find in most recent PCs. (Configure the new drive as standalone or as the master to your IDE/ATAPI CD-ROM if you have one on that channel). Make sure to try a fresh drive cable.
You might also try using some punctuation and capitalization in your messages. This is not IRC. When you ask volunteers (such as me) to provide the technical support that your vendor was supposed to have sold you, the least you can do is spend a little extra time on your message. It's best if you can give the impression that you've done a bit of research and made some attempt to find the answer on your own.
Naturally you could also try installing Linux on this drive. Linux has a neat utility called 'badblocks' which can be used by itself and which is called by our filesystem creation and filesystem check utilities (mke2fs and e2fsck among others). After all, I'm the LINUX GAZETTE "Answer Guy" not the "my Samsung IDE hard drive doesn't work with Microsoft Windows '95 rev B answer guy."
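For what it's worth, 'badblocks' doesn't even need a filesystem on the target --- it can scan any block device, or (handy for experimenting) an ordinary file. The image file below is just a stand-in; on real hardware you'd name the device itself (the /dev/hdb name is an example, check what your drive actually is):

```shell
# Make a 1MB image file to stand in for the suspect drive.
dd if=/dev/zero of=/tmp/testdisk.img bs=1024 count=1024 2>/dev/null

# Read-only scan: bad block numbers (if any) go to stdout,
# -v prints a progress summary on stderr.
badblocks -v /tmp/testdisk.img

# On a real drive you would run, e.g.:
#   badblocks -v /dev/hdb
# or let the filesystem creator do the check for you:
#   mke2fs -c /dev/hdb1
```

A clean scan prints no block numbers at all; any numbers that do appear are the bad blocks, and mke2fs -c records them so the filesystem never allocates them.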

(?) Out of Space....or Inodes? All Sparsity Lost?

From Derek Wyatt on Fri, 11 Jun 1999

Hi James,

I know this question has been asked before (i'v read the 'stuff' in the previous columns) but this one has an interesting wrinkle which i can't answer. I hope you can :)

I was copying a new slackware 4.0 installation from one disk to another. Incidently, i used two methods, using tar and find | afio, etc... It was the right way. I've done it many many many times before.

(!) You might not have preserved allocation "holes" (the "sparsity") of the files as you transferred them.
When a program opens a file in write mode and does a seek() or lseek() to some point that is more than a block past the end of the file, the Linux native filesystems (ext2, minix, etc) will leave the unnecessary blocks unallocated. This is possible in inode based filesystems (not in FAT/MS-DOS formatted filesystems).
These filesystems treat reads into such unallocated regions of a file as blocks of NULs (ASCII zero characters).
So, if you use normal read and write commands in sequence (like 'cp' and 'cat') to copy files, then you'll expand any such "holes" in the allocation map (the inode's list of clusters) into blocks of NULs and the file will take more space than it used to.
One possibility is that you used to have such "sparse" files and that your method of copying them failed to preserve those "holes." You could use the GNU 'cp --sparse=always' option to restore the "holes" in selected files (or create new ones wherever there are blocks of NULs in the data).
Most files are not sparse --- in fact there are only a couple of old dbm style libraries that used to create them in normal system use (the sendmail newaliases command used to be a prime example).
I don't think this accounts for your whole problem (i.e. it's not wholly a "holey" problem).
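The hole behavior described above is easy to see for yourself. A minimal sketch (the file names and sizes are arbitrary):

```shell
# writing one byte 10 MB past the start of an empty file leaves one big hole
dd if=/dev/zero of=sparse.img bs=1 count=1 seek=10485760 2>/dev/null
ls -l sparse.img     # apparent size: about 10 MB
du -k sparse.img     # blocks actually allocated: almost none

# a naive copy expands the hole into real blocks of NULs...
cp --sparse=never sparse.img plain.copy
# ...while GNU cp can re-create holes wherever it sees runs of NULs
cp --sparse=always sparse.img sparse.copy
du -k plain.copy sparse.copy
```

Comparing the 'du' figures for the two copies shows the difference between a "desparsified" file and one with its holes restored.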

(?) Now, the problem is this: after the copy was complete, i used the slackware bootdisk and rootdisk to reboot things nice and clean to test the disk, and every copy i tried to do (including running lilo) resulted in a "file too large" error message. A 'df' reported that the disk had lots of space on it, as did 'du' (as did basic common sense :) ). The disk became completely unusable until i destroyed it and reinstalled slackware from scratch.

(!) Perhaps you should look at the output of the 'df -i' command.
Your Linux filesystems actually have a couple of resources that are depleted at different rates from one another. If you have lots of small files, then you are using up inodes faster than data blocks. The normal 'df' command output reports on your free data space; 'df -i' reports on the inode utilization.
So, it's possible that you ran out of inodes even though you have plenty of disk space.
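For example (the exact numbers will of course vary from system to system):

```shell
# free data blocks on the root filesystem...
df -k /
# ...versus free inodes; "file too large" or "no space" errors with plenty
# of free blocks but IUse% at 100% mean the inode table is exhausted
df -i /
```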

(?) Now, considering that the disk was just a 'raw' disk with data on it (ie. it wasn't the root partition at this point) I have no idea why it would behave like this. I tried eliminating /proc/* just for the heck of it, but to no avail.

It is very easy to accidentally copy/archive your /proc (which in itself does no harm). The problem is that you can just as easily restore that copy to your new root fs and then mount a real /proc over the restored snapshot of your proc fs state from back when you did the backup.
I recommend that you use the find -xdev or -mount options to prevent your find from crossing filesystem boundaries.
Let's say you have /, /usr/, /var, /usr/local, and /home as local filesystems. To use 'cpio' to back them up you could use a command like
find / /usr/ /var /usr/local /home -mount -depth | cpio ....
... to feed only the file names that are on that list of filesystems to cpio.
When using 'cpio' you can preserve sparsity while COPYING IN your data using the --sparse option.
Of course 'tar' works differently from 'cpio' in just about every way that you could think of. You have to use something a bit more like:
tar cSlf /dev/st0 / /usr/ /var /usr/local /home
... where the -S preserves sparsity (during archive creation; and apparently NOT during restoration unless the archive was correctly created). Personally I think that this is a bug in GNU tar.
[ I suppose forcing someone to use -S (or --sparse) when restoring offers the ability to desparsify the file, on a new filesystem which has room for it. Why it should be the default to not come out as it went in, though, I've no idea. -- Heather ]
The tar -l option instructs 'tar' not to cross fs boundaries.
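Here is a small round-trip sketch of GNU tar's sparse handling, using a scratch directory rather than a tape. (Whether -S is honored at extraction time varies with the tar version, as noted above; in my experience GNU tar re-creates the holes of correctly archived sparse members on its own.)

```shell
set -e
mkdir -p src restore

# a 1 MB file that is almost entirely hole
dd if=/dev/zero of=src/sparse.img bs=1 count=1 seek=1048576 2>/dev/null

tar cSf backup.tar src          # -S records the hole map at creation time
tar xf backup.tar -C restore    # extract; the contents should match exactly

cmp src/sparse.img restore/src/sparse.img
```

The 'cmp' at the end checks the data; comparing 'du' figures for the two files would show whether the holes survived the trip.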
The key general point here is that you might have mounted /proc or any other filesystem over a non-empty mount point. I personally think that the distribution maintainers should modify the default rc* scripts so that they won't mount filesystems over non-empty directories. I'd modify them to uniquely rename and remake any non-empty directory before mounting something over it (and post a nastygram to syslog, of course).
[ I disagree; I often touch a file named THIS_IS_THE_UNDERLYING_MOUNTPOINT for mount points, and I've actually had occasional administrative use for a few files to sit in the underlying drive in case that fs doesn't mount. Usually, notes about what should have been there, although I suppose that could be the content of the commentary filename above. -- Heather ]

(?) I hope i've given you enough information here. I've been using linux for years and have never come across something like this.

I really don't know if I've given you the answer that will help in your situation. I've just tried to explain the couple of most common "out of space" situations that I've seen and heard about --- with the hope that your situation isn't more bizarre.
If your space problems persist through a reboot then you don't have the old "open anonymous file" problem (which I've described on other occasions). It's also a very good idea to run fsck (do a filesystem check) when you can't account for your missing space.

Thanks a lot! And keep up the good work :)

Sincerely,
Derek Quinn Wyatt

(!) I hope it helps.

(?) Need to learn details. Any suggestions?

From Derek Wyatt on Fri, 11 Jun 1999

Jim, thanks a lot for your quick reply.

I don't think this applies in my situation but there are a few things here that are news to me. It's good to know. If you were here, i'm sure you could figure it out :) But you're not. I simply need to learn more to solve something like this myself.

My knowledge of linux, how to use it and administrate it is in the upper intermediate level, i think. In order to get it higher, i need to learn about the details of the filesystem, the kernel, processes, etc etc... How would you recommend going about this sort of thing? Are there some online documents, or some books you would recommend? How about some source code to pore over?

Thanks again
:)
Derek

(!) Most Linux distributions, certainly all the large ones, contain the option to install the source code.
The Linux-Kernel mailing list (http://www.tux.org/lkml/) has archives mirrored in a few places, and several of the documents in the Linux Documentation Project (http://metalab.unc.edu/LDP/) are more rather than less technical.

(?) Arco Duplidisk: Disk Mirroring

From Randy Kerr on Tue, 08 Jun 1999

(?) Hi Jim,

Not quite sure how to post a question to the 'Answer Guy' and this seems to be the only option.

(!) One of these days we should clarify that. The address: <mailto: answerguy@linuxgazette.com> should work, as does <mailto: linux-questions-only@ssc.com> and, of course my home/personal address at starshine.org (deprecated).

(?) I was wondering if you have had any experience or know of anyone using Arco's DupliDisk for mirroring IDE drives. Wanted to know something about reliability, ease of installation, etc. Specifically, if a hard disk must be cloned prior to linking to its mate, or if the card mirrors the entire drive upon installation.

Thanks a lot.
Randy Kerr

(!) I don't have any experience with these controllers.
However it should work. According to their FAQ (http://www.arcoide.com/faq.htm):
Q : Does the DupliDisk support Windows 95, 98 and NT?
A : Yes, the DupliDisk is a total hardware solution and will work with any operating system--Windows 3.x, 95, 98 and NT, UNIX, LINUX, BSDI, FreeBSD, OS/2, Novell, Solaris386--without the use of ...
I have no idea regarding ease of use, reliability or any of that. A web search on the phrase "Arco Duplidisk" generates almost 100 hits at Yahoo! including reviews in Computerist Magazine (http://www.p3p.com), VAR Business (http://www.varbusiness.com), Telephony World (http://www.telephonyworld.com), PC Today (http://www.pctoday.com), and Medicine News (http://www.medicine-news.com/articles/computer)
Here are pointers to those reviews directly though I'm no judge of their accuracy or value:
Arco Announces New IDE Backup Device DupliDisk Makes Disk Mirroring an Affordable Option
http://www.p3p.com/news/10/arco.shtml
Arco Will Back You Up - VARBUSINESS - December 1996
http://www.varbusiness.com/print-archive/19961201/1220varsh048.asp
Arco Announces New IDE Backup Device DupliDisk Makes Disk Mirroring an Affordable Option
http://www.telephonyworld.com/roundup/duplidisk.htm
ARCO Computer,DupliDisk,medicine-news.com, Press Releases computer hard- and software,
http://www.medicine-news.com/articles/computer/arco98_1.html
PC Today's Hard-Hitting Product Reviews
http://www.pctoday.com/editorial/hardware/980416.html
In any event I'm not sure that the ~$200 you'd spend on one of these would really net you much advantage over Linux's built-in md (multi-device) drivers (which implement striping, mirroring and RAID 5 in software).
These devices don't give any performance advantage over a single disk drive (as mentioned in their FAQ). Even the Linux software driver gives some performance edge over a single disk (by interleaving read and write requests among the available drives and resynching the devices asynchronously through its caching mechanisms).
You should also consider the nature of the risks which the Duplidisk addresses vs. the actual risk profiles that are present. Duplidisk only protects you from a single drive failure (per controller). It doesn't address accidental deletion, damage due to software bugs (data corruption, etc) or deliberate sabotage due to failures in your security measures (including crackers, trojans, viruses, etc).
Drive failure is currently one of the less common causes of data loss under Linux (although the rate of damage caused by PC virus infection is probably even lower than that of disk failure under Linux).
Overall, I think you're much better served by using an extra hard drive (the one you'd have connected to the Duplidisk) and just performing nightly snapshots to it using 'cp -pax' and/or 'cpio -p' or 'tar cf ...' piped into a 'tar xf ...'. The "snapshot" method protects against several different threats --- particularly accidental deletion, which is the most common cause of data loss. (If you have a 'cron' job which makes your snapshots in the middle of the night, you'll usually have half a day or so to realize that you've accidentally removed or damaged some of your files.)
Personally I think that's a better way of spending your money. (Heck, you can use the extra two hundred bucks to put in a third drive --- and use a combination of md/RAID-1 --- mirroring across a pair of drives and using the third for snapshots).
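A sketch of the snapshot idea, using scratch directories in place of the real /home and the spare drive's mount point (all the paths here are stand-ins):

```shell
set -e
mkdir -p home snap                 # "home" = your data; "snap" = the extra drive
echo "important notes" > home/todo.txt

# archive-mode copy into a per-weekday directory: up to seven rotating
# snapshots, so yesterday's files survive today's accidents
DAY=$(date +%a)
cp -pax home "snap/home.$DAY"

cmp home/todo.txt "snap/home.$DAY/todo.txt"
```

From 'cron' the real thing might be a line like `10 3 * * * cp -pax /home /snap/home.$(date +\%a)` (note that a literal % must be escaped as \% in a crontab entry).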

(?) Searching for Days for a Linux Modem: The Daze Continues

From Chris on Tue, 08 Jun 1999

i have been searching for days trying to find a Linux compatible modem. could you recommend a decent one?

thank you chris

(!) Any external modem should work. Some internal modems will --- but I don't recommend internal modems as a rule.
Unfortunately modems are one of those classes of PC components where the specific models and brands change so often that I just can't keep track of the "good ones."
I personally like Zyxel and Practical Peripherals. I don't like the USR "Sportster" line (cheap) and find that their "Courier" series is of high quality but also inordinately high price for home applications. (ISPs see very high duty cycles on their modems --- the extra money spent on an industrial grade modem for them is usually well spent; home and most office system modems get only intermittent use and light duty cycles).

"Linux Gazette...making Linux just a little more fun!"


More 2¢ Tips!


Send Linux Tips and Tricks to gazette@ssc.com

New Tips ] [ Answers to Mail Bag Questions ]


"." in root's path is unsafe!! (was: a.out binaries not working)


Date: Sat, 12 Jun 1999 14:58:28 -0400
From: "Peter V. Inskeep"

In Linux Gazette, Issue # 42, I provided an answer to the question of getting an a.out binary to run. I suggested that typing "./a.out" (sans quotes) would do the trick. I went on to suggest that the "current directory" be added to the path to avoid the bother of typing ./ before the name of the binary to be run.

Several have written to me to point out that adding ./ to the path is not good practice from a security viewpoint. Therefore, I urge anyone who has added the ./ to their path after reading my answer, remove it. Instead, just get in the practice of typing ./myprogramname when one wants to run a binary in the current directory.

I'd like to take this opportunity to thank Alex B., Art W., and Pete in the UK for taking the time to write me notes explaining the pitfalls of putting dot slash (./) in one's PATH. This is especially dangerous for root, but apparently not good practice for anyone. As I understand it, a transgressor could easily put a program with evil intentions, but with a common name, such as "ls", in one of your commonly used directories. The next time you typed ls on the command line, the evil program would run rather than the real ls directory-listing program --- that is, if you had modified your PATH to include ./. If you must include ./ in your path, make sure it is at the end of the PATH statement.

Thanks for giving me this opportunity to correct the bad information I presented. Also, thanks again to those who took the time to write to me to explain the consequences of adding ./ to the PATH statement.

Pete, NO2D

Date: Thu, 10 Jun 1999 13:19:05 +0100
From: Pat Neave

> Try running the a.out binary with the command line: ./a.out I recently
> installed RedHat 5.2 and found that its $PATH statement does not include
> a path of " ./: " ./ is the path of the current directory that you are
> in. Remarkably, RedHat does not set up paths so that your current path 
> is looked at to execute a file. 

There is a good reason for RedHat (and hopefully all the Linux distributors) not to include '.' in your PATH: it's a security risk. Now, you may be OK on a non-networked system, but I don't think it is a good habit to get into. The following is quoted from the Path Mini HOWTO:

12. Security concerns

The path is sometimes a big security problem. It is a very common way to hack into a system using some mistakes in path settings. It is easy to make Trojan horse attacks if hacker gets root or other users to execute his versions of commands.

A common mistake in the past (?) was to keep '.' in the root's path. Malicious hacker makes program 'ls' in his home directory. If root makes

	     # cd ~hacker
	     # ls
	
he executes ls command of hacker's.

--
Pat

Date: Mon, 14 Jun 1999 19:31:48 +0100
From: Jeffrey Voight

If you find it absolutely necessary to include . in your path, at least put it as the last entry in your path so that the system binaries are searched before '.' is.

Date: Tue, 15 Jun 1999 10:34:52 +0100

From: Alexander Thorp, athorp@lucent.com

Peter Inskeep writes:

"Remarkably, RedHat does not set up paths so that your current path is looked at to execute a file", i.e. does not include the directory . in $PATH."

This would only be remarkable to a DOS user. The inclusion of . in $PATH exposes the user to trojan horses. It should never appear in root's $PATH, and I don't like it in mine either.

Alex Thorp

(This is a sample of many letters received on the dangers of '.'. I don't use it in the root path, but I like it in my path. :) --Ed.)


More Vi .exrc stuff


Date: Wed, 9 Jun 1999 15:18:11 -0400 (EDT)
From: Matt Boutet

When setting up your .exrc file for vi you can use the map command to map the function keys in addition to the few unassigned 'normal' keys. Example:

	map #1 :set nu^M
This makes F1 turn line numbering on. (The ^M at the end is a carriage return; enter it by typing Ctrl-V then Ctrl-M. Without it, pressing F1 just types the command on the status line and leaves you to press Enter yourself.)

Matt


gzipping TWHT-1 (unzipping UNIX files on Windows)


Date: Fri, 25 Jun 1999 10:24:41 +0530
From: "Nagesh S K"

I am using WinZip for Windows from http://www.winzip.com; it can handle various archive formats including gz, tar, zip, arj, etc. Hope this helps.

Subject: Linux / Windows

Date: Tue, 29 Jun 1999 17:22:36 +0200

From: Peter Van Rompaey

maybe you already know this, but .tar and .gz files can be unpacked under Windows using Winzip 7

All the README-files (and every other plain-text file for that matter) can be opened with Notepad/Wordpad.

Also, if you use a Windows filesystem on your floppies ( vfat ), you can read 'em under Windows, but you can also mount them RW on any Linux which has vfat support compiled into the kernel (most distributions have, trust me :-)

If you use StarOffice 5, then you can use Office 97 files to exchange data, cuz SO5 uses an O97-compatible file format.

Hope this helps ya and feel free to DMAL for comments/questions,

greetz,
Blacky
Undernet - #Supportline #Groningen


Deleted web pages


Date: Thu, 3 Jun 1999 15:02:06 +0200
From: "Martin Skjoldebrand"

This is not exclusively a Linux trick of course but here it goes:

If you've changed your web site structure and thereby removed a previous entry page (the first page a visitor comes to) which may have links to it you could link that page to your current entry page.

I removed my foreword.htm some time ago and later found a stale link on a foreign page leading to the missing page. So I simply created a link to my current toc.html. Now whenever someone follows the original link, instead of getting a 404 they get the toc.html.
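If the server happens to run Apache, a server-side redirect does the same job without keeping a stub page around. A sketch, assuming mod_alias is available (/foreword.htm and toc.html are from the example above; your.site is a placeholder):

```
# in httpd.conf (or an .htaccess file, if overrides are allowed):
Redirect permanent /foreword.htm http://your.site/toc.html
```

Visitors following the stale link then get an HTTP redirect to toc.html instead of a 404.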

HTH someone ...
M.


Make modem ignore funny dial tones


Date: Fri, 4 Jun 1999 14:24:18 -0400 (EDT)
From: Matt Willis

On my telephone line, I have voice mail. The dialtone is different when there is a message waiting. This causes my modem (USR 56k) to get confused and quit, saying "No dialtone". Effectively, this breaks any automated dialup routines, such as a cron job to fetch mail in the middle of the night. To make the modem ignore dial tones, I added ATX3 to my modem codes:
#!/bin/sh
#
# This is part 2 of the ppp-on script. It will perform the connection
# protocol for the desired connection.
#
exec /usr/sbin/chat -v                                          \
        TIMEOUT         23                              \
        ABORT           '\nBUSY\r'                      \
        ABORT           '\nNO ANSWER\r'                 \
        ABORT           '\nRINGING\r\n\r\nRINGING\r'    \
        ''              ATZ                     \
        'OK'            ATL0M0                  \
        'OK'            ATX3 \
        'OK-+++\c-OK'   ATH0                    \
        TIMEOUT         50                              \
        OK              ATDT$TELEPHONE          \
        CONNECT         ''                              \
        rname:--rname:  $ACCOUNT                        \
        assword:        $PASSWORD


Tips in the following section are answers to questions printed in the Mail Bag column of previous issues.


ANSWER: Network boot disk for i386 without hd


Date: Fri, 4 Jun 1999 11:13:43 +0100
From: Wim Lemmers

http://www.psychosis.com/linux-router/

Hi, I think this comes close to what you're looking for.

wim


ANSWER: Question about 2 GB max?


Date: Fri, 4 Jun 1999 18:33:15 -0400
From: "Steven G. Johnson"

Deirdre Saoirse wrote in the June $0.02 tips:
Traditionally, there has been a 2GB partition size limit (not just a FILE size limit) on PowerPC Linux partitions. I don't know if that will continue to be true with newer versions but it is true of LinuxPPC up to revision 4 and DR3 of MkLinux. I haven't checked if there's a YellowDogLinux specific answer however.

This is no longer true for LinuxPPC (including revision 4) or YDL, although it's still true for MkLinux. The partition limit was due to a kernel problem that disappeared somewhere in the 2.1.x series...use a 2.2.x kernel and you'll be fine. (I am using a 4GB partition quite happily with LinuxPPC R4 right now, with an uptime of several months.)

Cordially,
Steven G. Johnson

Date: Fri, 4 Jun 1999 18:37:26 -0400

From: "Steven G. Johnson"

Whoops, I read further in your June 2 cent tips, and I see that someone else has already replied to her message...although they claim the problem is with e2fsprogs, which I didn't touch on my machine. (Although perhaps there was an upgraded version in the installer image that I downloaded along with the new kernel.)


ANSWER: FTP access methods


Date: Sat, 05 Jun 1999 13:27:44 +0200
From: Ben De Rydt
Subject: RE: FTP access methods
And I finally have a good question: In both Window$ and O$/2 I had apps that would treat ftp sites as folders (directories). It worked real well with keeping data in sync off-site. Is there a tool that will allow an FTP site to be mounted under Linux? It seems fairly useful to me, but freshmeat and other resources turned up nada.

Midnight Commander allows you to show an FTP site in one pane and your local file system in the other. You can access the FTP site like you would a local directory (i.e. F5: copy, F6: move/rename, etc...)

Greetings,
Ben


ANSWER: Any inetd wizards out there?


Date: Mon, 14 Jun 1999 18:26:16 +0200
From: Ton Nijkes

On Mon, 03 May 1999 16:33:32 -0500, Pete wrote:

I have been digging for the past several months to try and find any way to bind inetd to one IP / interface. I have a machine with several virtual hosts, and had originally intended for only the main IP / interface to respond to telnet, ftp, etc. The virtuals would only respond via httpd. Unfortunately, this doesn't seem to be the way it's working - not only can I telnet / ftp to all addresses, it seems like every inetd connection shows up on the LAST IP interface for some reason.

I've looked thru manpages, NAG, websites, and while I know a lot more than when I started looking, I was never able to solve this binding problem.

Anyone have the answer?

Pete,

I think the tcp wrapper daemon (tcpd) should do the trick. In /etc/hosts.allow and /etc/hosts.deny you can use constructs like daemon@host that will accomplish what you need (sort of).

Try:

      man tcpd
      man 5 hosts_access (look for 'SERVER ENDPOINT PATTERNS')
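For illustration, a sketch of what those "server endpoint patterns" might look like (the addresses are made up): a daemon@host pattern matches on the local address the connection arrived at, so telnet and ftp can be made to answer only on the main interface.

```
# /etc/hosts.allow  (10.0.0.1 stands in for your main IP address)
in.telnetd@10.0.0.1 : ALL
in.ftpd@10.0.0.1    : ALL

# /etc/hosts.deny  (everything not explicitly allowed is refused)
ALL : ALL
```

This requires that telnetd, ftpd, etc. be invoked through tcpd in /etc/inetd.conf, which most distributions already set up.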

Greetings,
Ton.


ANSWER: Direct Win95-Linux connection


Date: Sun, 27 Jun 1999 14:43:43 -0800
From: Ramon Gandia
Subject: Direct Win95-Linux connection

michael@cimmj.freeserve.co.uk wrote in LG #42:

Just read issue 41 and read the great article about direct cable connections between Win95 and Linux, I tried implementing this method but came across a couple of problems running Windows 98. (4.10.1998)

I can get terminal emulation (using HyperTerminal) running at 38400 baud but 115200 crashes at the password prompt. (115200 works with xon/xoff using kermit as the terminal program).

Can't figure out how to get Windows to dial out over the serial line as in your article. I tried creating a new modem using the modems wizard in the control panel using 'standard serial between 2 PC's' and it goes through the process reporting success at the end but no device appears anywhere.

The problem is in Win95/98. It does not come with a null modem driver. Windows assumes that you are using a REAL modem complete with AT commands, etc. If all you have is a null serial cable between the Win95 box and the Linux PPP server, then Win95 cannot be used because it cannot be set up unless you use a modem and a phone line.

However, there IS a null modem driver. You install this driver by copying it to c:\WINDOWS\INF (a hidden directory). You can then install a new modem. Select not to detect, but you will pick it from a list. When you get to the list, it will be at the top of the list of manufacturers, and you can select the generic null modem driver.

This driver has been around the internet for years, but I have put it up on my ftp server. ftp://ftp.nook.net/pub/unix/mdmcisco.inf

I have no problem then using my Win95 computers with terminal servers such as my Livingstons. It works a LOT faster than using a modem, and the connection typically runs at 115,200 bps.

--
Ramon Gandia ================= Sysadmin ================ Nook Net
http://www.nook.net rfg@nook.net


Published in Linux Gazette Issue 43, July 1999


"Linux Gazette...making Linux just a little more fun!"


PLIP: Laplink Cable Install of Debian 2.1

By Bill Bennet


This installs linux via the parallel port

     Have you got a machine that is ready for linux but it has no CDROM drive? Is it also missing a modem or a network card? If you have a running linux box (with CDROM drive) available to you, then you can easily install linux via the good old laplink cable that plugs into the parallel port. Even old neglected machines come with a parallel port somewhere, so this method will guarantee that you can walk into any installing situation and be ready to roll.

Debian 2.1 can do this thing

     Just about all linux distributions can do this little PLIP console connection. There is a lot of good information available for you as well, but if it is only for RedHat, then as a Debian user you may feel left out. Even worse, too many times an article will assume that you know what you are doing.

     Heck, I do not know what I am doing, but I got this PLIP thing to install Debian 2.1 because I needed a good peer to peer network game player that fits on a small hard disk. Good old Debian has the small purposeful ".deb" packages that you can put in one at a time to make a very small yet powerful server. This article fires up Debian 2.1 and installs a PLIP peer to peer network system with X Windows so we can play netmaze head to head versus a small, bloodthirsty nephew or niece.

     The target system is a "Frankenstein"; a 486/DX66 with no CDROM, no modem, no network card, 16 MB RAM, a 1.44 MB floppy and two tiny hard disks, 110 and 170 megabytes small.

We begin at the running linux box that will serve us our CD through the laplink cable.

Server for the PLIP

We need to login as root and edit these files:


Scripting is the mojo for linux

     Those wacky HOWTO writers are always letting you copy their hard work by giving you a little script to run on your machine.

We need to create this little executable script:


#!/bin/sh
killall -HUP /usr/sbin/rpc.mountd
killall -HUP /usr/sbin/rpc.nfsd
echo re-exported file systems

     Fire up mcedit (it comes with Midnight Commander. You do have it, yes?). Go to the NFS-HOWTO and block out the exportfs script with the F3 toggle and then F9 copy it to the ~/.cedit/cooledit.clip file. Exit with F10.

     Then type mcedit your-new-file-name. It will give you a nice blank page to F9 insert the ~/.cedit/cooledit.clip file. Edit it in your personal way and F2 save it, or F9 save it as your-new-script-command-name.

     Then type chmod 755 your-new-file-name to make it executable. Copy it to /usr/sbin just for fun, but also to make it live in the path.

     For easy mnemonics and to honour the author, we should call it exportfs. Done.

     Huh? What does it do? It exports the currently mountable NFS directories in case you make a change.

/etc/init.d/network

We simply need to tell the machine about the plip1 device and who it connects with via pointopoint networking.


#! /bin/sh
ifconfig lo 127.0.0.1
route add -net 127.0.0.0
IPADDR=192.168.1.5
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
GATEWAY=
#ifconfig eth0 ${IPADDR} netmask ${NETMASK} broadcast ${BROADCAST}
#route add -net ${NETWORK}
[ "${GATEWAY}" ] && route add default gw ${GATEWAY} metric 1
ifconfig plip1 CHGUY pointopoint Salma up
route add -net ${NETWORK} netmask ${NETMASK} dev plip1

/etc/networks

The networks your machine knows about:


loopback 127.0.0.0
localnet 192.168.1.0

/etc/hosts

Well? Who is on this network anyway?


127.0.0.1	localhost
192.168.1.5	CHGUY.chguy.net	    CHGUY
192.168.1.3     Salma.chguy.net     Salma

/etc/hosts.deny

Kind of like being doorman at 54, eh?


ALL: PARANOID: DENY

/etc/hosts.allow

Salma gets in every time.


ALL: 192.168.1.3: ALLOW

/etc/exports

The NFS server will only give out the directories in the exports file.


/cdrom          *.chguy.net

/etc/fstab

This will let ordinary users mount the CDROM drive, which is not a problem at a home LAN (Local Area Network. Yes, that is right, your linux CD gives you a replacement for NT).


# (file system) (mount point) (type)   (options)     (dump) (pass)
/dev/cdrom      /cdrom        iso9660  users,exec,ro   0      0

Turn it on

Time to turn on PLIP on this CD server machine:


ifconfig plip1 CHGUY pointopoint Salma up
route add Salma dev plip1

All is ready for the installation.

  • we will allow the new machine onto the system
  • we have assigned a network address to each machine
  • we will export the /cdrom directory when the installer mounts the CD

Client for the PLIP

     The target for the installation is the client of your server. However, this is PLIP peer to peer networking, so both of the machines act as a server. For this installation we will refer to the target machine as the client, code named Salma.

Make two install diskettes

     Back at the server, mount the B1 CD (B2 for a laptop) and make your two installation floppies.

Simply cd deep into the CD:


cd /debian/dists/slink/main/disks-i386/current
dd if=resc1440.bin of=/dev/fd0
(swap in the second blank floppy before the next command)
dd if=drv1440.bin of=/dev/fd0

Salma Laptop

     The resc1440.bin is the generic PC booter; resc1440tecra.bin is made for laptops.

     Place your floppy in the drive and power up Salma for the installation.

She boots your machine

     You get a nice menu of jobs with the Debian installer. Please read all of your options and stay loose!

     The Debian installation will go through the usual motions of assigning mount points for "/" (root), "/swap", and your custom placements.

     Now it is time to "Install Operating System Kernel and Modules" and that is how we will enable PLIP on Salma.

     After the kernel is ready to PLIP we can install the base system, which is a nice little ten MB file that fits on seven floppies. Yes, that is right, you can just make seven more floppies and get it on, no need to use PLIP.

     We are here to play netmaze and make a peer to peer network, so read on.

From the floppy

     The installer will ask you how you are going to install the kernel and modules, and even though you are going to use the CD in a minute, you must tell it you are going to install from a floppy.

     In almost every case you will be selecting /dev/fd0: it is the first floppy drive.

     Then it will ask for the resc1440.bin diskette so that you can make a live file system to work from. It booted, so it is already in the drive.

Kernel Modules

You have landed on Community Chest: FOLLOW INSTRUCTIONS ON TOP CARD. Your card says a driver is a kernel module. You no longer will ask for drivers. You will now ask for kernel modules. Do not pass GO.

     When the resc1440.bin diskette is done loading, the installer will ask you to place the Drivers floppy in the drive. It is the drv1440.bin diskette.

Configure for PLIP

     Now the installer wants to jump ahead to Make Linux Bootable Directly From Hard Disk. This is a confounder and we need a volunteer to fix it.

     You need to go down the menu and select "Configure Device Driver Modules". You need your set of modules to be installed in your kernel so that you can network to the CD server.

The set of modules

     There is a basic set of modules needed for your machine. It varies from user to user of course, and the following list is only a suggestion.

    Group     Module

  • block----no, you do not need paride right now. Compile it into your custom kernel in future if you want to use external devices, like CDROM drives, etc.
  • cdrom---no, you will not be controlling the server's CDROM. Get the right one from here if you buy a proprietary CDROM drive in future.
  • fs------YES, please get nfs so you can NFS and export files to your network.
  • fs------YES, please get nls_iso8859_1 for the nice character set to read.
  • fs------YES, please get vfat so you can muck about in the DOS file system and see long filenames.
  • ipv4----YES, please get rarp just in case your network tools need it. I do not know; better safe than sorry.
  • misc----no, you do not use lp right now. Compile it into your custom kernel in future if you want to switch from PLIP to printing. If you have two parallel ports then go ahead.
  • misc----YES, please get serial so you can use the serial port for an external modem in future.
  • misc----YES, please get psaux if you are trying to use USB and it craps out as usual. Then you can still use your ps/2 mouse.
  • net-----YES, please get dummy just in case your network tools need it. I do not know; better safe than sorry.
  • net-----YES, please get plip so that you can do pointopoint networking via the laplink cable.
  • net-----YES, please get ppp for future hookups to your ISP.
  • scsi----no, you might not have any scsi devices. Here is where you get that ppa module for your external Zip drive.

     When you install the modules and the installer gives you a new screen for parameters, you can usually just press Enter to go ahead.

     The plip module will already have the io port address and irq of the parallel port assigned to it, so just press Enter at the parameters page.

Configure the Network

     The installer wants to make linux bootable again, but ignore it and select "Configure the Network".

     It wants the host name of your system, so type in Salma to match the examples in this article.

     Now it wants to know if you are on a network, so answer yes. The domain name is "chguy.net", just like in the examples.

The IP address

     The IP address can be the default (automatic) numbers thrown up on the screen. These are the numbers for a Class C network, the type you have at home.

Your IP address, according to the examples, is 192.168.1.3 for your hostname of Salma.

     The rest of the numbers are automatic and good ones to use for your network. We have used them for the example and it saves you a lot of typing and checking.

Netmask = 255.255.255.0

IP Broadcast address = 192.168.1.255

Gateway = whatever... If you have a modem on the other machine, then make a gateway. Or a Dell, or a Netwinder, your choice.

     Normally from home you will be making a gateway to the internet through your ISP.

Domain Name Service = your ISP nameserver. Here at home, there is no modem on the two PLIP machines, so I have each machine look at its own local address and also at the other machine's nameserver.

     So, for the example, all we put in the nameserver places are the two IP addresses of the two PLIP machines.

     We tell the installer that "Another system will be the DNS server".

Nameserver = 127.0.0.1   192.168.1.5
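For reference, those answers land in /etc/resolv.conf. With the example values it should come out roughly like this (only a sketch; once you have a modem, your ISP's nameserver goes here instead):

```
# /etc/resolv.conf on Salma, built from the example values
search chguy.net
nameserver 127.0.0.1
nameserver 192.168.1.5
```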

Execute a Shell

     A LeftAlt-F2 will light up a new console for you. A LeftAlt-F3 will show the error messages from your plip attempts. You could also just select "Execute a shell" from that wacky menu.

     At the prompt you can immediately type in the ifconfig command to see what is running.

     Again with the ifconfig, you set up PLIP on Salma with this:


ifconfig plip1 192.168.1.3 pointopoint 192.168.1.5 up

     Wait, you are not set up yet. You better set a route to the CD server machine.


route add 192.168.1.5 dev plip1

Install the Base System

     The menu wants to make linux bootable again. Please select "Install the Base System".

Choose Network Interface

     Now we can select "plip: Parallel-line IP" from the menu of interfaces.

     Yikes! The friendly installer now tells us that it will not create a complete PLIP configuration.

     The installer does not know that you just did it with the ifconfig and route commands. Maybe we can get another volunteer to set up the script to see this step.

     Now we are asked to select the installation medium. Yes, it is "nfs: NFS (network filesystem)".

     The installer will ask you for the address of the server and the directory where the debian archive lives.

"push de button and make it go"

  • the CD is going to be NFS mounted on Salma on /instmnt
  • you have already mounted the CD on /cdrom on the server
  • you have /cdrom listed in /etc/exports and now the installer can find it and use the CD.
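On the server side, the /etc/exports entry behind that last point might look like this (the read-only option is my assumption; the address is Salma's from the examples):

```
# /etc/exports on the CD server: export the mounted CD read-only
/cdrom    192.168.1.3(ro)
```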

     Type in the IP address of the CD server with a colon and the NFS export directory of the mounted CDROM. The screen prompt says "Choose Debian NFS filesystem":


192.168.1.5:/cdrom/debian

We are going live to chguy.net

     On your screen is a little window called "Choose Debian archive path". It only shows up when you are connected and online with the CD server machine. The installer is asking you for the location of the Debian archive directory in the mounted NFS filesystem.

     The answer is /debian. Always is.

Reference reading and links:

BootPrompt-HOWTO - required reading for all linuxians

PLIP-mini-HOWTO - you need the kernel configuration tips and troubleshooter

Kernel-HOWTO - you might need to make a custom kernel with a PLIP module

NFS-HOWTO - you absolutely need the exportfs script and this HOWTO

The Installation Guide for Debian 2.1 - good basic stuff here

"Loadlin.exe Installer", LG#34 - step by step for booting from a logical drive

linux on CD

     Your CHGUY Debian 2.1 CD set is labeled B1, B2, S1 and S2. The B1 CD (#1 of 4) is the installer for a regular desktop machine. The B2 CD (#2 of 4) is slightly tweaked for installing Debian 2.1 on your laptop machine. For installing via dselect, either one can go in the CDROM drive. Those wacky hackers are ready for anything.

X Windows is compact

     When you install the Standard Server package it only fills up 50 megabytes on your hard disk. Add in 32 megabytes for a swap partition and you have only used 80 megabytes from a small, old hard disk. That will leave you enough room for the xbase ".debs" plus your video-card-specific xserver and the rest of the XFree86 window system.

     Both 486s have Apache webserver, anonymous FTP, the TrueType fontserver, full networking, full use of the video card, blazing fast ping times on the PLIP cable (5.2 milliseconds) plus my choice of hot window managers like AfterStep and the icewm.

     Total installed ".deb" space taken up on either machine is 127 megabytes, including sharp little network games like Freeciv, netmaze and crossfire. Who needs bloated systems? Certainly not the linuxians.

New PLIP commands

You can make this into an executable script called plipon with:


chmod 755 plipon

This is the new command plipon:


#! /bin/sh
ifconfig plip1 CHGUY pointopoint Salma up
route add Salma dev plip1

The above script runs on the CD server CHGUY. Reverse it for Salma.
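Note that these scripts use hostnames instead of raw addresses, so both machines need matching /etc/hosts entries. A sketch using the example names and numbers from this article:

```
# /etc/hosts on both machines (names and addresses from the examples)
127.0.0.1      localhost
192.168.1.3    Salma.chguy.net    Salma
192.168.1.5    CHGUY.chguy.net    CHGUY
```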

The new plipoff command:

#! /bin/sh
ifconfig plip1 down

     You can copy it to /usr/sbin just for fun and so that it lives in the path.

Bloodthirsty Midgets

     My six-year-old nephew Brady "the Mighty Naturta" and his almost-twin cousin Jesse "the NO-baby" absolutely love to play netmaze. It installs in seconds through the laplink cable and it plays for hours. The package is called netmaze and all you do is run the server on one machine and the netmaze client on both. You can even add a few robots to chase you down. The players you see are smiley face spheres. Heaps of fun!

Three primary and one extended

     Please use GNU/linux fdisk to make your new primary partition for "/" (root). Too many of you are too excited to get going and you place linux on an extended partition. Just a reminder: the PC design allows up to four primary partitions per hard disk drive, or three primary and one extended (which you can load up with lots of logical partitions). Do NOT use DOS fdisk if you have just one hard disk. DOS fdisk is limited so that it will only make one primary partition when you have just one hard disk.

Laplink vs USB

     Get this: A recent advertisement for USB home networking between two machines was crowing over the fact that all you do is plug it in and you can enjoy full networking of two machines. It was "only" $85.00 for your set of newfangled doodads. *Limit of twelve feet of cable.

     I began to think that this was madness! You can get a laplink cable and hook up your two machines for full network connection; plus it is probably a "free" cable that was paid for a long time ago. *Limit of fifty feet, unless you begin to pick up radio signals.

     You plug the laplink cable into the parallel port of each machine; yes, the same place where your printer should go. Your laptop can use this PLIP method to share files with your desktop at work and at home. Plus it is hard-boiled, rock-solid networking through a cable; completely under your control. Methinks this USB thing was just a ploy to sell new machines.


made with mcedit on an i486
running Debian 2.1 Linux 2.0.36
No systems were frozen or crashed during the testing of these procedures.
All references to Salma Hayek are purely lascivious.


Copyright © 1999, Bill Bennet
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


An xdm Session

By Chris Carlson

[Revised to fix a few HTML tags. Originally published in issue #42.]


So, you've got X Windows working on your system, you've set your system to automatically start xdm by setting the default run state to 5 and now you want to customize your personal windows session by having certain applications start automatically after you log in.

At work, I like to log out of my system every evening before I go home so that others may log in when I'm not there. It doesn't happen often, but I don't want someone coming into my office and using a window logged in as me. [You never know when someone gets curious and starts wandering through my saved mail messages.] The problem is, I have certain applications that I want brought up automatically, like my list of things to do and my calendar program.

In this article, I'm going to explain an X Windows session, how it is started and what you can do to customize it. It will show you how to automatically start the window manager of your choice, have applications start automatically and customize colors and fonts to your liking. Since X Windows is pretty much identical on all platforms, much of what I am going to explain can be used on platforms other than just XFree86 on Linux. As a matter of fact, I will make some comparisons between the version of XFree86 that comes with Red Hat 5.x and what comes with Silicon Graphics IRIX®. You may note that the files I discuss on both systems have the same names but are usually just in different directories.

I realize that other articles have been written about X Windows configuration, for example Jay Ts' fine article in the December issue entitled ``X Window System Administration.'' X Windows is an extremely versatile windowing environment and, because of this, can be very complex. For this reason, I believe it will require many articles that might overlap but each will provide information from a different perspective. This article is intended to be from a user's perspective, rather than from an administrator's.

To start off with and to keep my article from becoming a book in itself, this article is written with the following assumptions:

  1. That you are working with the default configuration of xdm as it is installed by Red Hat (see Footnote). This means that you haven't changed any of the files found in /etc/X11/xdm. (Since I don't have an installation of any of the other Linux vendor releases, I'm presuming their default configuration is identical or similar enough that it won't cause any problems.) With this in mind, I will refer to filenames that are used and referenced by xdm (and their contents) as specified in the installed configuration file. It should be noted, however, that almost all of these filenames can be changed by modifying /etc/X11/xdm/xdm-config or by specifying a different configuration file on the command line when starting xdm. (On the SGI, the configuration file is /var/X11/xdm/xdm-config and I have seen some installations use /usr/lib/X11/xdm/xdm-config.)
  2. That you have a basic understanding of the server/client concept used by X Windows. i.e. The X server handles the display and keyboard and runs as an application. User's applications are clients that request services from the X server to display things and provide input.
  3. That you have some familiarity with X resources and how they are used in the X environment.

User Session Initialization and Termination

When the X server is started automatically via xdm, the user is presented with a login screen. When a user successfully logs in via this screen, xdm starts the ``user session''. This session is a shell script which, when it terminates, ends the user's session and xdm resets the X server and returns to the login screen.

Prior to starting a session, xdm runs a small startup script with root privileges to perform any user initialization that may be required. Currently, this file, /etc/X11/xdm/GiveConsole, changes the ownership of /dev/console to that of the user so messages sent there can be displayed on a window in the user's environment.

In like manner, when the session ends, xdm runs another small exit script with root privileges to clean up anything that might have been set up by the startup script. Currently, this script, /etc/X11/xdm/TakeConsole, changes the ownership of /dev/console back to root.

Note that these two files are /var/X11/xdm/GiveConsole and /var/X11/xdm/TakeConsole on the SGI.

The step of interest to this article is the actual starting of the user session itself. Here, xdm starts a subprocess running the script /etc/X11/xdm/Xsession (/var/X11/xdm/Xsession on SGI) and waits for it to exit. When it does, xdm processes the exit script and returns to the login screen. This session script is run with the user's privileges.

A resource has been set for xdm which causes the parameter ``failsafe'' to be passed to the user session if the user uses the F1 key rather than the Enter key to complete his/her login. This can be very useful if the user makes a mistake in his or her customized session script which makes it impossible to log in. How this feature is taken advantage of is discussed below. It should be noted that I found this resource defined for both Linux and SGI, and it is used in an identical manner on both.

The Xsession File

The /etc/X11/xdm/Xsession file provided by Red Hat is quite simple, especially when compared to the /var/X11/xdm/Xsession file provided with the SGI. This file is a standard Bourne shell script which performs all the user startup and initialization that the system administrator wants done for all users.

As described above, if the user logs in and presses F1 rather than the Enter key, the parameter ``failsafe'' is passed to the session file. The first thing the /etc/X11/xdm/Xsession file does is check if this parameter exists and, if it does, exec's an xterm. This bypasses all other initialization and provides the user with a terminal window to work with. Notice that this is a good method of logging in if the user has done something to his/her personal session file that otherwise prevents logging in.

For those that don't understand the function of exec, this is a builtin command provided by all the standard shell programs. It causes the current running shell to be replaced by the exec'd program. Thus, the current running shell never returns from an exec (unless the program referenced fails to start for some reason) and the parent process is not aware of any change in the child process. The exec'd program retains the process ID of the shell and, when it terminates, it is as if the shell terminated and the user session ends.
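As a quick illustration (a throwaway sketch, not part of any Xsession machinery), you can watch exec at work from any shell prompt:

```shell
# Run a disposable shell whose process is replaced by echo.
# The second echo never executes, because exec never returns.
result=$(sh -c 'exec echo "replaced by echo"; echo "never reached"')
echo "$result"
```

Only the first message comes back: once exec runs, the disposable shell is gone, and everything after it in the command line is unreachable.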

Presuming ``failsafe'' is not a parameter passed to Xsession, the script continues by redirecting stderr to an error file. If it can write to it, this file will be .xsession-errors in the user's home directory. If the session can't write to the user's home directory or this file is write protected for some reason, the script will attempt to use /tmp/xses-$USER, where $USER is the user's login name.

This error file is useful for determining problems during the user's session. Any errors generated by applications that are started (including the window manager or applications started by the window manager) will be sent to this file. If the user has problems starting a user session after logging in, he/she can perform a ``failsafe'' login (as described above) and look at this file. The error messages may be of some help in determining the problem.
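The selection logic is simple enough to paraphrase. This is a sketch of the stock script's behavior, not a verbatim copy:

```shell
# Paraphrase of the stock Xsession fallback: try the home directory
# first; if the file cannot be written, fall back to /tmp/xses-$USER.
# (The real script then redirects with: exec > "$errfile" 2>&1)
errfile="$HOME/.xsession-errors"
if cp /dev/null "$errfile" 2>/dev/null; then
    :  # home directory is writable; use it
else
    errfile="/tmp/xses-${USER:-unknown}"
    cp /dev/null "$errfile"
fi
echo "session errors would go to $errfile"
```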

Finally, the standard Xsession file transfers control to one of a set of shell scripts, depending on their existence and if they are executable. It does this with the exec command which means that, whichever program is run, it replaces the Xsession process and becomes the new user session. The shell scripts are:

1. $HOME/.xsession
2. $HOME/.Xclients
3. /etc/X11/xinit/Xclients

There are some interesting notes about this compared to the script used on an SGI computer. SGI does not require the scripts to be executable but will run /bin/sh against them if they aren't. Also, SGI only looks for $HOME/.xsession. If this file doesn't exist, the system Xsession file sets up the default user environment provided by SGI. Red Hat chose to break the default user session into two steps, since the standard installation will provide /etc/X11/xinit/Xclients.

If none of the three files above exists and is executable, then the user's .Xresources file is loaded (if it exists) and the program xsm is exec'd. xsm is the X session manager provided with Red Hat Linux.

User Customized Xsession File

As you may have guessed from the above explanation of the system's Xsession file, the user can create his/her own shell script which will be processed as the user session. This is a very powerful capability and provides each user the ability to do whatever processing they want each time they log in via the X login. In this script, the user can start various applications, set root window resources, set one-time environment variables, change default keyboard definitions and select a window manager.

The easiest way to set up your own personal Xsession file is to copy the system /etc/X11/xinit/Xclients file into your home directory as .xsession or .Xclients (which, from here on, I will refer to as the user's Xsession file) and then edit it as desired. I'm not going to step through the contents of the /etc/X11/xinit/Xclients file; you can do this on your own. I'm going to just explain some of the things one might want to do.

One important thing is to load desired resources into the root window. This is usually done with the following commands:

	resources=$HOME/.Xresources
	if [ -f "$resources" ]; then
	    /usr/bin/X11/xrdb -load "$resources"
	fi
Another thing that the user may wish to do is set the root window background to something different. This is done with the /usr/bin/X11/xsetroot command. For example, I have my background defined as follows:

        /usr/bin/X11/xsetroot -solid DarkSeaGreen4
Note that this command can also be used to set the default cursor and cursor color for the root window, a two-tone plaid pattern for the background or an X bitmap to be used as a pattern.

Also, the command /usr/bin/X11/xset can be used to set the desired bell volume, key click, DPMS (energy saving) features and mouse parameters. This command can also set autorepeat and screensaver parameters.

If you want to define special keys, you can run /usr/bin/X11/xmodmap from this script. For example, I like to be able to access the full ISO 8859-1 character set and insert internationalized characters in my documents. Also, Linux likes to define <Shft>F1 to be F11 and <Shft>F2 to be F12. Since my keyboard has an F11 and F12, I prefer these keys to be set to F13 and F14 respectively. To handle this, I have defined $HOME/.xmodmaprc to contain the following:

	keycode 113 = Multi_key
	keysym F1 = F1 F13
	keysym F2 = F2 F14
	keysym F3 = F3 F15
	...
	keysym F10 = F10 F22
	keycode 95 = F11 F23
	keycode 96 = F12 F24
Then, in my $HOME/.xsession file I have the following:

	if [ -r $HOME/.xmodmaprc ]; then
	    /usr/bin/X11/xmodmap $HOME/.xmodmaprc
	fi
Finally, the most important step is running a window manager. Red Hat likes to run fvwm because it can be set up to look a lot like Windows 95®. Since I use SGI computers a lot, I prefer Motif (which costs money and doesn't come with Linux normally). There are also xsm and twm available. You might want to read the man pages for each to determine which window manager you prefer.

If it is desired, the user can exec the window manager as the last thing in the Xsession file. This will mean that the user has to end the window manager to end their session and return to the login screen. I prefer to run the window manager as a background process and exec an xterm as the last thing. This way, when I exit the xterm session, the user session will end and the login screen will be brought up. Note that the window manager and any window applications will be terminated because the X display will be closed. Any non-window applications started as a background process will not be terminated automatically and could continue after the user's session ends.

I start the Motif window manager as follows:

	/usr/bin/X11/mwm
I start the final xterm with:

	exec nxterm -geometry 80x50+10+10 -ls
This creates a version of the xterm that supports color. It will be 80 characters wide and display 50 lines. The window will be positioned in the upper left corner of the screen (at pixel position 10x10). The last option forces nxterm to run the shell as a login shell.

From within the user's Xsession file, you can run a number of xterms, xclock or whatever, all of which will start automatically when you log in. Be sure to specify a geometry (with the -geometry option) to get each application positioned on the screen where you want it.

Also, remember to run the applications in the background (by terminating the line with ``&''); otherwise, the user Xsession file will wait until that application terminates before continuing.
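Putting the pieces together, a minimal user Xsession file along the lines described in this article might read as follows. This is only a sketch assembled from the examples above; the xclock geometry is my own invention, and you should adjust programs and positions to taste:

```
#!/bin/sh
# sketch of a user Xsession file built from this article's examples
[ -f "$HOME/.Xresources" ] && /usr/bin/X11/xrdb -load "$HOME/.Xresources"
/usr/bin/X11/xsetroot -solid DarkSeaGreen4
[ -r "$HOME/.xmodmaprc" ] && /usr/bin/X11/xmodmap "$HOME/.xmodmaprc"
xclock -geometry 100x100+10+300 &      # background applications end with &
/usr/bin/X11/mwm &                     # window manager in the background
exec nxterm -geometry 80x50+10+10 -ls  # exiting this xterm ends the session
```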

Important Tricks

Here I want to discuss some more interesting and important tricks that can be done from the user's Xsession file.

All window managers can execute programs from a pulldown menu. Sometimes these programs need special environment variables defined prior to their execution (for example, Netscape may need SOCKS_NS to be defined). Since the user's environment variables are not usually set until a shell is started, the window manager and any programs started from the window manager will not have the user's environment defined. Trying to set them in $HOME/.cshrc, $HOME/.profile or $HOME/.login won't do any good.

One trick is to define these environment variables in the user's Xsession file. It is necessary to set these environment variables before you start the window manager.

Another trick that I like to do is define XUSERFILESEARCHPATH in my user Xsession file. Most applications look for and use an application resource file, usually found in /usr/lib/X11/app-defaults. For example, Netscape uses the file /usr/lib/X11/app-defaults/Netscape for its application resource settings. If you want to change any of these settings for your personal environment, you can copy this file into your home directory and modify it. Next time you run Netscape, it will find the one in your home directory first and use it.

I have found my home directory cluttered with application resource files and wanted to put them into my own private app-defaults directory. I did this by creating the directory and copying all the resource files into it. Then, I set XUSERFILESEARCHPATH to the following in my user Xsession file:

	/home/carlson/app-defaults/%N:/usr/lib/X11/%L/app-defaults/%N:/usr/lib/X11/app-defaults/%N
This makes the application search in /home/carlson/app-defaults for application resource files before going to the default locations under /usr/lib/X11.

One last trick is for those of you that have multiple computers all running X servers. Here at home, I have an SGI O2 and my Linux machine. When I log in remotely to my O2, I want to be able to run X applications and have them use the display on my Linux box. In order to do this, I need to run xhost each time I log in to my Linux box to allow remote logins to access the X server.

As part of my user Xsession file, I have the following line:

	/usr/bin/X11/xhost +moonlight
This sets the X server on my Linux box to allow access from moonlight, the name of my O2.

Conclusion

I hope you have found this information useful and interesting. I've tried to show you how to create your own user Xsession file to start applications, set a special environment and run your own window manager. I'm sure you can come up with many more ideas.

One useful tool that I wrote, based on a similar application provided with SGI, is userenv. This application creates a login shell as a child and has it print its environment. This environment is collected and then printed to stdout in a form that can be executed to create the same environment by a shell.

In my user Xsession file, I have the following line:

	eval `userenv`
This computes my user environment and echoes it in a form that the shell can execute to recreate the same environment. The eval command causes the output to be processed by the shell.
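Since userenv is my own tool, here is the mechanism demonstrated with a stand-in. fake_userenv is hypothetical; it just prints shell assignments, which is all eval needs:

```shell
# A stand-in for userenv: any command that prints shell assignments
# can be eval'd to import those settings into the current shell.
fake_userenv() {
    echo 'EDITOR=vi; export EDITOR'
    echo 'PAGER=less; export PAGER'
}
eval "$(fake_userenv)"
echo "EDITOR=$EDITOR PAGER=$PAGER"
```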

You are welcome to a copy of the source for this program from my web site, http://members.home.net/cwcarlson/files/utilities.tar.gz.

Footnote

I am running Red Hat 5.1 but it appears that it hasn't changed significantly for a few years. Also, I find the configuration almost identical with other Unix platforms such as Silicon Graphics IRIX®. The only differences appear to be in which directories the files are maintained.


Copyright © 1999, Chris Carlson
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


Getting Involved in Open Source

By Andrew Feinberg


How to join and render help to the Linux community.

Linux has always been maintained by volunteers. In fact, the ``gift culture'' of the Open Source community has always been one of its strong points. However, the majority of users who would like to contribute do not know how to get involved. This article will discuss aspects of becoming active in the Open Source community and contributing to the Linux kernel and other projects, including my experiences with becoming involved in the Debian project.

The Kernel

The contributors file in /usr/src/linux on my home system is huge. My linux-kernel mailing list folder is always full of mail from people, eagerly discussing the ins and outs of improving this operating system. Many people assume that Linus is the sole author of Linux. Not true, I tell them. Linux is the prime example of the ``benevolent dictator'' model of open-source development. A prospective developer submits code to Linus or one of the few ``lieutenants'' such as Alan Cox. They decide what will go into the kernel.

Another scenario is that certain parts of the kernel, such as the kernel NFS system, have a maintainer. Code is submitted to them, and they decide what goes into their part of the kernel. Occasionally, Linus or someone will ask for a person to take over a part of the kernel. If you volunteer, make sure you know the code and can handle the responsibility of maintaining it and accepting patches. Be prepared to handle loads of mail if something breaks. Also, make sure you are on the linux-kernel mailing list.

GNOME

One of the more exciting developments in the past year has been the effort to provide Linux with an easy to use desktop. One of the two front-runners in that effort has been GNOME: the GNU Network Object Model Environment. Unlike the kernel, GNOME uses CVS, a version control system, to keep track of code submitted by developers around the world. This eliminates the need for someone to patch sources by hand to create an upgrade. To get CVS access to GNOME, send mail to Miguel de Icaza (miguel@kernel.org). Include a description of what code you will be writing, along with an encrypted password. More information is available at http://www.gnome.org/.

Debian

Debian GNU/Linux is unique in that, unlike most other distributions, it is maintained entirely by a team of volunteers from all over the world. Becoming a Debian developer entails maintaining a package; that is, you will make sure the latest version is on the Debian FTP site and that bugs get fixed as soon as possible. Fixes are done by you, if you package your own software, or by the upstream maintainer, if you package someone else's software or the software of a project such as GNOME or Mozilla. Because developers can place packages into the distribution tree, Debian is rather strict on security issues, especially when it comes to letting new developers into the project. A PGP key (or, coming soon, a GnuPG key) is a must, and this key must be signed by another Debian developer. This may seem Draconian, but it is imperative that they be sure that the developers are who they say they are. After they receive your signed PGP/GPG key, someone may call you for a telephone interview. This will consist of a few questions, generally about the package(s) you intend to maintain. The whole process takes time, but it ensures the distribution is secure.

Wrapping It Up

The open-source development model allows talented people to collaborate on projects from across the world. If someone feels they have something useful to contribute, they can. This article only touches on three projects. The Open Source movement is truly a ``gift culture''. You are judged by what you have contributed and the quality of your code. Countless projects are out there that can benefit from the assistance of the community. With your help, they can flourish, and you may be ``known by your initials''.

Resources

GNOME:
http://www.gnome.org/

Debian GNU/Linux:
http://www.debian.org/
http://www.debian.org/devel (Developer Information)

The Linux Kernel Mailing List: http://www.tux.org/hypermail/linux-kernel/ (archives)
To subscribe, send mail to majordomo@vger.rutgers.edu with subscribe linux-kernel in the body.


Copyright © 1999, Andrew Feinberg
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


Better Web Page Design Under Linux

By Chris Gibbs

Note: The author does not have regular Internet access at this time and may be slow in responding to e-mails.


Contents

Wysiwyg Editors

The Advantage of Linux

Setting up Apache

Starting and Testing Apache

Search Engines

SGML Support

Introduction

Recently an article was published in Linux Gazette entitled Web Page Design Under Linux. This article produced some criticism in later issues. The main criticism seems to have been of the author's preference for hand-coding HTML rather than using an HTML editor like the Windows HotDog editor. This is an argument I do not really want to get involved with. Neither do I want to spend much time on style. Whilst in most cases users want simple, fast-loading, clear pages, there will always be a place for garish eye candy, huge graphics and all kinds of complexities that take forever to download on a 28k modem. What I do want to address are the great things that linux offers. Great things that are free and would cost a fortune to implement on other operating systems. In particular I shall explain how to set your linux box up to be your own intranet server, and thereby fully exploit the abilities Linux offers for designing applications for the Web.

One point I think needs making, and which does not fit in with the rest of this article, is the Plugger plug-in for Netscape Navigator. In the past many people have complained that Netscape plug-ins are not generally available for Linux. Plugger, from http://www.infovav.se/~hubbe/plugger.html, seeks to address this by providing support for many audio/video/image types. [Ed. note: This domain name has disappeared. The author is looking for an alternate URL.]

Wysiwyg Editors

By way of introduction though, I will put my two penny worth into the 'editor argument'. I have never yet found a HTML editor that I like! I am writing this article in StarOffice 5.0. I have never used it to write HTML so this is something of a test. I expect I'll have to edit the source when I finish writing. Another editor that seems as good as any other I have tried is the composer part of Netscape Communicator. I find this irritating, very very irritating. Why? Because I like my text to be fully justified. OK I know that some people think that full justification 'goes against the spirit of HTML', but personally I would rather read text that is fully justified than text which is not. I do not believe I am alone in this preference.

What happens with Netscape is that after I have spent a couple of hours designing some pages until I am happy with them, I load them all into vi and change every occurrence of <P> into <P align=justify>, which can take some time if I've written a lot of text. Now a little later I want to make some changes, so I load the pages into Netscape Composer and make them. But whilst Communicator understands <P align=justify>, Composer does not. In fact Composer does not allow <P align=justify> and changes each occurrence back to <P>.... Bummer... I have to re-edit all the source by hand again. If I thought there were some advantage to using Composer rather than hand-writing my HTML, I guess I would write a little program to search HTML files for <P> and replace it with <P align=justify>. But this is not the only shortcoming of wysiwyg HTML editors. They just don't seem able to do exactly what I want, how I want.
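That little search-and-replace program is easy to sketch in shell. This is only a sketch: the justify function name is mine, it assumes GNU sed (for the -i in-place flag), and the scratch file exists purely for demonstration.

```shell
# Hypothetical helper: rewrite every bare <P> tag as <P align=justify>
# in the files given on the command line.  Assumes GNU sed's -i flag.
justify() {
    for f in "$@"; do
        sed -i 's/<[Pp]>/<P align=justify>/g' "$f"
    done
}

# Demonstration on a scratch file:
printf '<P>Hello</P>\n<P>World</P>\n' > /tmp/justify-demo.html
justify /tmp/justify-demo.html
cat /tmp/justify-demo.html
```

Closing </P> tags are untouched because the pattern matches only a bare opening tag, and paragraphs that already carry attributes are left alone too.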

OK in fairness I am now impressed with StarOffice! Although there is no button to give full justification, it is easy to edit the Text Body style so that full justification is automatic. It is also easy to automatically indent the first line of a paragraph, set double line spacing etc. etc. Maybe I will be converted to using a wysiwyg editor for my HTML after all.

One feature that seems to be missing from StarOffice 5.0, is any easy way to define lists. Tables are well supported, but lists are not. I guess that it should be possible to define some new styles to allow the use of different kinds of list, but one would have thought that a button should be available for them. Also given the different kinds of list available for HTML, one might find that the styles menu becomes cumbersome and more difficult than it should be.

OK, simple layouts are quicker with an HTML editor, but if you want full control you have to hand-edit at some point. So to my way of thinking, if you want to write good HTML you must learn HTML. It is a very bad idea to think you can skip learning HTML by getting an editor that works like a word processor: you will not have the skills you need to produce good web pages. HTML is very easy to learn. Once you know it, then you might find that Netscape or StarOffice provides useful tools to help you. But please do not think such tools replace the need to be able to hand-code HTML.

The essential document to read if you want to produce great web pages efficiently is the W3C's HTML 4.0 Specification, which includes the full Document Type Definition for HTML. For once I have taken my own advice and read it! The problems I mention above regarding text formatting have all been solved for me. Looking at the HTML source StarOffice has produced, whilst I am impressed, I am not happy. Again I think that an editor like vi or emacs really is better and more efficient than a wysiwyg editor.

The reason is that HTML 4.0 allows the use of style sheets. This article depends upon the use of a style sheet, special.css. This is a document that says how a browser should render my document. An important feature is that browsers that cannot display certain things (e.g. graphics) are not disadvantaged: all browsers can access this page in the way I intend them to. In the past authors have been forced to use techniques to format their pages that cannot be displayed correctly on all browsers. Proprietary HTML extensions, the conversion of text into graphics, the use of images for white-space control, the use of tables for layout and even the use of programs have all been used to format text. All these methods cause difficulty for users and extra work for developers. The correct use of style sheets avoids these problems.

Once you are familiar with the use of style sheets, it will not matter how badly Netscape Composer performs, or how unfamiliar you are with StarOffice, using an editor like vi, really can be simpler than using something like Hotdog. Load my style sheet into your favorite editor and see for yourself how easy it is to change the look and feel of this document.
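As a minimal sketch of what this looks like in practice (the page content here is invented; special.css is the style sheet listed at the end of this article), a single LINK element in the HEAD is all that ties a page to its style sheet:

```html
<HTML>
<HEAD>
<TITLE>A page styled by special.css</TITLE>
<!-- One line replaces every per-tag align attribute -->
<LINK rel="stylesheet" type="text/css" href="special.css">
</HEAD>
<BODY>
<P>This paragraph is indented and fully justified by the P rule in
special.css, with no align attribute on the tag itself.</P>
</BODY>
</HTML>
```

Change the rule in special.css and every page that links to it changes at once, which is exactly the maintenance win described above.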

STOP PRESS.....

Even as I am writing this document, I have found yet another web browser for Linux! This one is worth some attention since it is produced by the W3 Consortium, the same people who define the HTML specification. In fact this is the browser they use to test their specification. The following text is displayed when you start it for the first time:-

Amaya
is a Web client that acts both as a browser and as an authoring tool. It has been designed with the primary purpose of demonstrating new Web technologies in a WYSIWYG environment. The current version implements HTML, MathML, CSS, and HTTP.

Main Features
With Amaya, you can manipulate rich Web pages containing forms, tables and the most advanced features from HTML. You can create and edit complex mathematical expressions within Web pages. You can style your documents using Cascading Style Sheets. You can publish documents on local or remote servers with the HTTP Put method.
Browsing and authoring are integrated seamlessly. You can browse and edit Web pages at the same time. For that reason, a simple click just moves the caret to allow text editing; to follow a link, you have to double click.

Online Manual
A User's Manual is available online. You can browse it with the Help menu, which displays each section separately. You can also print it: just follow the Online Manual link below. You'll get the front page. Then build the whole book with the "Make book" entry from the Special menu and print the result.

This browser certainly has some advantages. The version I have is still beta (1.3b), so there are some shortcomings. I found that the File - Open Document dialog can resize its file box so that it is non-functional. Also, for some reason, not all directories appear in the directory box. At least one can specify the required file in the URL box! The fact that the manual does not come with the package is a definite minus for me.

What is nice about the browser is the pleasant way it renders pages. This page, for instance, uses full text justification, and Amaya can actually split words in the traditional manner when required.

The really nice thing about this browser is the fact that you can edit files as you browse them. So if you are creating a document with many pages it is easy to switch between them. The down side of this is that there seems to be no way to edit or view the document source. Something that I would like to see in other browsers is the ability to create a table of contents: with Amaya you can generate one based on the <H...> elements in your document. This pops up as a separate window and allows you to easily navigate through a document that has no links of its own.

At about 4.5 Megabytes, this is probably a very good alternative if you do not have the disk space StarOffice requires. I am certainly interested in seeing how this browser develops in the future. If you want to give it a try you can obtain it from the Amaya homepage. Additionally, there was a review of an earlier release of Amaya in Linux Gazette some time ago; see issue 15. All I have to add to that review is that improvements must have been made. It looks the same as the screen shots show, and Amaya displays the old style of Linux Gazette contents pages quite well, but the new style in the last three or four issues is completely garbled. When Amaya starts up it no longer looks for a page on its home site, and I have not seen it seg fault as described. On the whole it does a very good job.

The Advantage of Linux

Now I've got that out of my system, I'll get on to my main point. Drum roll please..... With Linux it is simple to build a system you can gain http access to. Trumpet fanfare please.

Why is http access to your machine important?

Even if, like me, you have a standalone machine with no kind of network, it is easy to start up your favorite browser and http:// yourself. This means you can get into the wonderful worlds of CGI scripts, client/server applications, Java, etc. Without the need to access a 'real' network you can test any network application you care to develop for the Internet. You can test every aspect of your web design without wasting a phone bill. You can test applications safe in the knowledge that no matter what mistakes are in your code, only the machine you are using will be at risk; the 'real' network will be unaffected until you decide your code is working correctly.

Web page design is not just about putting text, graphics and links onto the Internet. Increasingly it is about providing good user interfaces to network applications and providing an efficient means of communication. In the past only the largest corporations could afford to implement a WAN (wide area network). Today anybody with a modem and a PC can join the Internet, or implement their own intranet (a private network that acts in the same way as the Internet).

To illustrate my points, consider the following scenario. You own a small tobacconist's and live in a village called Tiny. Because the village is small you do not have many customers, so you don't sell items in vast numbers. That means you do not buy in large quantities from your suppliers, and you cannot get the kind of discounts larger shops would get. But you have many relations and friends in other, similar villages who also run small tobacconists. If you all clubbed together and ordered your supplies as one entity, you could take the discount advantages of bulk buying from your suppliers. The only problem is knowing which shop needs what items at any given time. You know that the discounts you would get would allow you to employ a van driver to deliver to all the shops and still leave each shop a significant saving.

How can web design under linux help you solve this problem?

The 'man with a van' needs information: what to buy, in what quantity, and where to deliver it. This sounds like a classic database application. Linux offers many SQL database solutions. We want to keep costs to a minimum; we also want to maximize security and reliability. So good choices might be Ingres or PostgreSQL. If we look at these DBMSs we find that PostgreSQL comes with a Java interface. So let's say we design a suitable database with PostgreSQL. This database will be held on a box that will be our server.

What we need is the ability for each shop to communicate with the server to tell it what stock it needs to buy in. Shopkeepers do not have to be computer literate. They also do not want to spend much money on computer systems. At least at this time it is unlikely that they could be persuaded to learn a UNIX-like operating system such as Linux, and cheap boxes already come with Windows. An ideal solution is one where each shop can dial into the server, and the manager can start up his/her favorite browser and use it to enter information into the server. It should not matter what operating system each shop uses.

What does our server need to do?

The first thing is to get Apache set up and running. Apache is a web server and comes with most, if not all, Linux distributions as standard. What is not always clear is how to set it up correctly. This is something an installation program cannot (easily) do, and it needs to be done by hand. It is Apache that allows us to http ourselves. Of course, we will also need to allow remote machines to dial into our server, but that is a matter outside the scope of this document.

Once Apache is running we can design a Java application to act as a user interface to our database.

We can test both the client and the server parts of our application on our server until we are certain it performs as required.

Then all we need to do is allow the shopkeepers to dial into the server and gain access via their browsers to the Java database interface.

The wonderful thing is that at the test stage we only need to use one linux box which acts as both client and server at the same time.

Setting up Apache

If you do not already know, Apache is one of the most common http servers in existence. A great many ISPs (Internet Service Providers) use Apache to give their clients (i.e. you) access to the World Wide Web.

This document does not attempt to address the requirements of a true Internet or intranet server. All I am concerned with here is getting Apache up and running on a standalone machine so that client/server software can be tested. In particular I am not concerned with security issues here. If you do not intend to have a permanent network connection then all should be well. If you intend other machines to have access to your http server then you should read all the relevant documentation. Complete configuration of Apache can be a very complex issue which does not fall within the scope of this document.

Modern Linux distributions, such as S.u.S.E., have special requirements for setting up Apache correctly. To avoid confusion, please read the documentation that came with both your Linux distribution and your Apache distribution. The following steps will work for any Linux distribution, but be warned: if your distribution has special requirements, I cannot be responsible for getting your system startup files in a mess.

For instance, I shall describe how to start Apache automatically at boot time by adding a line to your /etc/inittab. Whilst some Slackware users will benefit from this approach, S.u.S.E. users should find it better to edit their /etc/rc.config file in the appropriate manner.

Preparing your machine for Apache

These steps will prepare your machine for the installation of Apache. You might find that Apache is already installed; following these steps will not hurt such installations.

  1. Make certain you have set your /etc/HOSTNAME correctly. I call my machine Hawklord.

  2. Create a new account for the httpd administrator. I use the user wwwrun, whose primary group is nogroup (65534).

  3. Edit your /etc/hosts to reflect the name of your machine. I have the entries
            127.0.0.1 localhost   
            127.0.0.2 Hawklord.Varteg    Hawklord 
  4. Edit your /etc/hosts.allow I have
            ALL:    127.0.0.1  
            ALL:    0.0.0.0
            ALL:    localhost
            ALL:    Hawklord.Varteg
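The steps above can be sketched as shell commands. This is only a sketch: it writes to scratch copies under /tmp rather than the real /etc (so it can be run harmlessly), and the useradd step is shown as a comment because it needs root. Hawklord.Varteg is the example name from the text.

```shell
# Scratch directory standing in for /etc:
ETC=/tmp/fake-etc
mkdir -p "$ETC"

# Step 1: the machine's name.
echo 'Hawklord' > "$ETC/HOSTNAME"

# Step 2 (root only, shown as a comment):
# useradd -g nogroup wwwrun

# Step 3: hosts entries for the machine.
cat > "$ETC/hosts" <<'EOF'
127.0.0.1 localhost
127.0.0.2 Hawklord.Varteg    Hawklord
EOF

# Step 4: hosts.allow entries.
cat > "$ETC/hosts.allow" <<'EOF'
ALL:    127.0.0.1
ALL:    0.0.0.0
ALL:    localhost
ALL:    Hawklord.Varteg
EOF

# Sanity check: the short hostname must appear in the hosts file.
grep -q "$(cat "$ETC/HOSTNAME")" "$ETC/hosts" && echo "hosts file OK"
```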
    

If Apache is not already installed, find a pre-compiled version and install it as per the instructions. You should find that configuration files are placed under /etc/httpd, and other files are installed under /usr/local/httpd.

The directory /usr/local/httpd/htdocs should contain the Apache user manual in HTML format. Actually, this directory will become the root directory of our http site, so you may want to move this documentation elsewhere, e.g. /usr/doc/Apache.

Plan your http site

When you visit an http site, e.g. http://linux.org, you find yourself at the root of what can be a very complicated directory structure. You can think of an http site as a file system just like your own root file system. Whilst it is true that to a user the http site will look like a regular file system, the reality on the server's hard disk(s) can be very different. It is important to understand the differences and use them to your advantage.

On my system the document root is at /usr/local/httpd/htdocs, and this is the directory a user lands in when they access http://Hawklord.Varteg. But there is only one file and no sub-directories on my hard disk. I only keep index.html in the physical location /usr/local/httpd/htdocs. All the documentation users can access is held in other locations on my hard disks.

Looking again at /usr/local/httpd you should find other sub-directories, in particular cgi-bin and icons. These directories should appear to be located under your document root because they contain files that should be available to any HTML file on your site that requires them, though a user should not be able to access these directories directly. Much of my documentation is under /usr/doc, so I make that directory appear as /docs to the http server.

What this means is that you can store all your documentation on the server in locations that seem logical to you; you do not need to copy files or even make symbolic links to /usr/local/httpd/htdocs. Instead, plan how you want your documentation to appear to a user. You can also have directories that users cannot directly access, but which HTML documents can.

For instance, the directory /usr/doc/ contains

   Linux_gazette    Howto    Ldp    java-documentation

I also want to access files under /usr/hobbies/literature and /usr/src/java/applets

I want my site to have the following structure:

     /    --->    cgi-bin   
                  docs   --->    Linux_gazette 
                                 Howto 
                                 Ldp 
                                 java-documentation  
                                 literature 
                  icons   
                  java_applets
      

Planning your http site in this way will save you headaches in the future!

httpd.conf

/etc/httpd/httpd.conf is the main configuration file for Apache. Some versions of Apache and/or Linux distributions recommend that all configuration information be kept in this file. Other versions recommend that you use all three files I shall mention below. If you want to keep everything in one file, simply put all the information in one file; there is no real difference between the two methods. You will find that the example files contain sufficient comments to enable you to make the best choices for your system. I am only going to describe the changes you need to make to get Apache to work for you. Careful reading of the files will let you configure Apache better for your needs.

I am aware that a Tcl configuration utility called Comanche exists for Apache. However, it is still at an early stage of development, so I do not recommend it for beginners. I found in practice that the utility would not function correctly if you use only httpd.conf to configure your system. However, it could prove useful for experimenting with different configurations.

For each line in the configuration files you can assume that your example file has a correct or sensible entry, unless I specifically mention it. Back up the examples before you make any changes!

ServerType standalone
Please use standalone unless you know exactly what you are doing.

Port 80
Unless you have changed something this is correct, so do not change it.

HostnameLookups on
Again, it is probably a mistake to change this unless you know otherwise.

User wwwrun
This entry should refer to the user we set up above to be the httpd administrator.

Group nogroup
This entry should refer to the primary group you defined for the httpd administrator (nogroup in the example above).

ServerAdmin root@localhost
This is the address Apache will use to send e-mails with details about problems with the server. Using localhost rather than Hawklord.Varteg seems to be more reliable.

ServerRoot /usr/local/httpd
This should point to the location where you installed Apache's main files. By default this is /usr/local/httpd.

ServerName Hawklord.Varteg
This should be the fully qualified domain name of the server. It should be the same as the entry you made in /etc/hosts.allow and /etc/hosts above.

Logs
Entries concerning log files should probably be left as they are until you feel confident about changing them. Though you might want to experiment with the loglevel entry if you experience problems.

srm.conf

This file contains site specific information. It is where we define how our site will look to a user.

DocumentRoot
should refer to the directory on our hard disk that will be the root directory of our site. For our example this is /usr/local/httpd/htdocs

DirectoryIndex
is the name of the file that should be loaded by a browser when a user enters a directory without specifying a filename, e.g. http://Hawklord.Varteg/ or http://Hawklord.Varteg/docs/. index.html is a sensible default.

Alias .....
Each line starting Alias will define a virtual directory on our system. For the example above this should include:
      Alias /cgi-bin/                  /usr/local/httpd/cgi-bin/
      Alias /docs/                     /usr/doc/
      Alias /docs/Linux_gazette/       /usr/doc/Linux_gazette/
      Alias /docs/Howto/               /usr/doc/Howto/
      Alias /docs/Ldp/                 /usr/doc/Ldp/
      Alias /docs/java-documentation/  /usr/doc/java-documentation/
      Alias /docs/literature/          /usr/hobbies/literature/
      Alias /icons/                    /usr/local/httpd/icons/
      Alias /java_applets/             /usr/src/java/compiled/ 
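To see what these lines buy you, here is a toy illustration (plain shell, not real Apache code) of how the table maps a URL path onto the filesystem. The longest matching prefix wins, and anything unmatched falls through to the document root; the paths are the article's examples.

```shell
# Toy URL-to-path mapper mirroring the Alias table above.
# Apache does this internally; this function only illustrates the idea.
map_url() {
    case "$1" in
        /cgi-bin/*)         echo "/usr/local/httpd/cgi-bin/${1#/cgi-bin/}" ;;
        /docs/literature/*) echo "/usr/hobbies/literature/${1#/docs/literature/}" ;;
        /docs/*)            echo "/usr/doc/${1#/docs/}" ;;
        /icons/*)           echo "/usr/local/httpd/icons/${1#/icons/}" ;;
        /java_applets/*)    echo "/usr/src/java/compiled/${1#/java_applets/}" ;;
        *)                  echo "/usr/local/httpd/htdocs$1" ;;
    esac
}

map_url /docs/Howto/some.html       # -> /usr/doc/Howto/some.html
map_url /docs/literature/poem.txt   # -> /usr/hobbies/literature/poem.txt
map_url /index.html                 # -> /usr/local/httpd/htdocs/index.html
```

Note that /docs/literature/ must be tested before /docs/, just as the more specific Alias line must win over the general one.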
      
ErrorDocument
Error documents are the responses the server gives when the user types a wrong URL, tries to access a restricted file or directory, etc. Apache provides good default error documents, but you can override this behavior and provide your own responses. I keep my error documents in the directory /usr/local/httpd/error.
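By way of a hedged example (the status codes are standard HTTP; the file names are my own invention, kept under the /usr/local/httpd/error directory mentioned above, which needs its own Alias so the URLs resolve), such overrides look like this:

```
Alias /error/ /usr/local/httpd/error/
ErrorDocument 404 /error/notfound.html
ErrorDocument 403 /error/forbidden.html
ErrorDocument 500 /error/oops.html
```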

access.conf

This file contains permissions for our site's directories. If, when you test your configuration by starting httpd and pointing your browser to (e.g.) http://Hawklord or http://localhost (both will work for the above example), you get a file access error, you will need to alter this file. Each directory in your site should have its own entry.

By default Apache has a very restricted set of permissions for the root directory, I have found that changing to:

   <Directory />
       Options All FollowSymLinks
       Order allow,deny
       Allow from all
   </Directory>

solved some problems for me. It is important to realize that a directory inherits its permissions from its parent directory. So if you want to allow outside access to your site you need to take great care when setting up your directory permissions.



Starting and Testing Apache

Once you are satisfied that you have correctly installed and configured Apache, you will want to test it! Log into your machine as root. At the prompt type:

     #:  httpd &

Now you can log into your machine as any user, start your favorite browser and enter the URL http://localhost. If all goes well you should load the Apache site file index.html. That is unless you moved the Apache documentation and provided your own index.html in /usr/local/httpd/htdocs

Once you are satisfied that all is well, you will want to have httpd start at system boot time. Some Linux distributions, such as Red Hat or S.u.S.E., will have a script to start Apache in their init.d directory. If this is the case then you just need to enable the script for SysV init in the normal manner.

As an alternative you can put the following line in your /etc/inittab

      ap:45:once:/bin/su --command=/usr/sbin/httpd

'ap' must be a unique identifier. '45' refers to the runlevels for which the command will be executed. 'once' is probably safer than 'respawn', since if there is a mistake in this line 'respawn' will give you a lot of error messages ;-(

The final part of the line, '/bin/su --command=/usr/sbin/httpd', is intended to start up Apache so that its server processes run as wwwrun (the User directive above takes care of the privilege drop). It would be wise to test this command before you put it in your /etc/inittab.
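There is no harm in rehearsing that check on an ordinary process first. The sketch below uses sleep as a stand-in for /usr/sbin/httpd (the variable names are mine): it starts a background process and asks ps which uid owns it, exactly the check you would make on httpd after running the su line.

```shell
# Start a stand-in daemon in the background.
sleep 30 &
pid=$!

# Ask ps for the numeric uid that owns the process.
owner_uid=$(ps -o uid= -p "$pid" | tr -d ' ')
echo "process $pid runs as uid $owner_uid"

# Clean up the stand-in.
kill "$pid"
```

After starting httpd for real, the same ps check against its child processes should report wwwrun's uid rather than root's.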

Search Engines

If you have Apache running and a large Linux installation, then you might want to consider implementing a search engine. S.u.S.E. Linux provides htdig; in fact, to gain full benefit from the S.u.S.E. Help System you need to use something like htdig. The only problem is the disk space you will need. I have a 1 Gig partition devoted to documentation, which may seem a lot to many users! I have a lot of personal documentation, program documentation (increasingly this is HTML), all issues of Linux Gazette, Gimp documentation, Java documentation etc. This takes about 500 Meg. The database htdig uses is between 200 and 300 Meg on my system. To update the database I need another 200 - 300 Meg spare under /tmp; actually, when I update the database I change the location of /tmp, since I do not have enough space on my root partition.

Since I have arranged for all the documentation to be available to Apache, it is all referenced in htdig's database. If I have a question about any aspect of Linux, or any of my personal subjects, all I have to do is formulate a suitable search pattern. I cannot adequately describe the savings in time this has given me! In the past I would have needed to access newsgroups to find answers to my problems. With htdig I can avoid this 99.9% of the time! Given the low cost of hard disk space, the fact that current program documentation is usually supplied as HTML, and that most documentation of any kind is available as HTML, it makes good sense to use Apache in conjunction with a search engine in order to have a most efficient information retrieval system.

Htdig may not be perfect. If you are used to Infoseek or Lycos, it is a bit annoying that you cannot search for a phrase, e.g. "starting the x server"; rather, documents are found that contain all the words you enter. An advantage is that related words are searched for as well: e.g. if you search for 'god' you can also get results for 'gods' and 'godly'. Once you get used to htdig it becomes an indispensable tool. The time it saves you in looking for information is well worth the cost in terms of disk space (on my system the real cost is about 250 Meg, though I need another temporary 250 Meg when re-building the database).

SGML Support

Finally, I shall mention Linux's SGML (Standard Generalized Markup Language) support. This is not normally considered part of web page design, since most home users will simply want to create their own HTML home pages and have no other use for such documents.

However, a great many people will want to produce documents in many formats. The same document might need to be available for publication as a book, or as an info page, as well as being available as web pages. The Linux Documentation Project contains many documents that are available in different formats according to users' needs.

SGML allows a single source to be used to produce many different kinds of text format. The following package descriptions are taken directly from the S.u.S.E. 6.0 distribution, though they should all be available for other distributions:


Package "sgmltool"

SGML-Tools - a text-formatting package
SGML-Tools is a text-formatting package based on SGML (Standard Generalized Markup Language), which allows you to produce LaTeX, HTML, GNU info, LyX, RTF, and plain ASCII (via groff) from a single source.

This system is tailored for writing technical software documentation, an example of which are the Linux HOWTO documents. It should be useful for all kinds of printed and online documentation.

SGML-Tools is not able to process arbitrary SGML documents; in such a case, give jade_dsl a try and write your own DSSSL scripts (take the docbk30 package as an example).
Package "jade_dsl"

DSSSL-Engine for SGML documents

Jade is an implementation of DSSSL (Document Style Semantics and Specification Language); pronounce it as "dissl" -- it rhymes with whistle.

It has backends for SGML, RTF, MIF, TeX, and HTML.

The parser "nsgmls" and helper tools like "sgmlnorm", "spam", "spent", and "sx" are now included in the separate package "sp".

You'll find the documentation at /usr/doc/packages/jade_dsl/.


Package "sp"

SGML parser tools

The tools in this package make it possible to manage SGML and XML documents.

It contains the parser `nsgmls' and the supporting programs `sgmlnorm', `spam', `spent', and `sx'. `sx' is useful as a tool for converting SGML to XML, the coming WWW standard. You'll find the documentation for all the programs under /usr/doc/packages/sp/.


Package "sp_libs"

Libraries required for sp and jade


Package "gf"

A "general formatter" for SGML documents

`gf' from Gary Houston is short for "general formatter", i.e., it can work on documents which use the ISO "general" document type definition (DTD). It can convert SGML documents conforming to a small number of DTDs into various output formats: LaTeX, ASCII, RTF and Texinfo. However, not every output format can be generated for every DTD.

Apart from the general DTD, gf supports the HTML DTD used in the WWW project and Gary's Snafu DTD. `gf' is not intended as a flexible system for hacking up a formatter for a random DTD, but as a usable document production system for a few DTDs.


Package "jadetex"

JadeTeX - LaTeX macros to process TeX output from Jade (jade_dsl)

With Sebastian Rahtz's macro package `jadetex' it is possible to process the output of the TeX backend of Jade (jade_dsl). The resulting DVI files are viewable, e.g., with `xdvi', or printable like any other DVI file.

I have no real experience with SGML, so I will leave the appraisal of these packages to the reader. For some people these will prove indispensable tools for producing HTML pages.


Copyright © 1999, Chris Gibbs
Published in Issue 43 of Linux Gazette, July 1999

THIS IS THE STYLE SHEET Special.css, with <PRE> tags around it so the contents can be viewed in HTML.

P { text-indent: 1.00cm; text-align:justify }
H1 { border: solid green; background: blue; color: cyan; text-align: center }
H2 { border: solid brown; background: oldlace; text-align: left }
H2.name { border: none; background: white; text-align: center }
OL { border: solid orange; background: oldlace; list-style-type: lower-roman }
DL { font-weight: bold; background: oldlace; border: solid orange }

END OF STYLE SHEET Special.css

"Linux Gazette...making Linux just a little more fun!"


Graphics Muse

By Michael Hammel



muse:
  1. v; to become absorbed in thought 
  2. n; [ fr. Any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration
© 1999 by mjh


Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. 


This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.
I've actually had a fun time putting this month's column together.  In the past I had been trying to find technical issues to talk about from a layman's point of view - graphics for the masses.  This month, I just sat down and thought about the way I do things.  I play.  I find something new and fiddle with it.  If it's easy to learn and I can do something useful with it in a few minutes, I keep fiddling.  If not, I lose interest and come back some other time, hopefully when the application has evolved a bit more.

This month I started out by looking for video editing software for Linux.  Now, don't get your hopes up.  As with many good ideas, it started in one direction and headed slightly off center - I didn't do a write up on video editing software.  Instead, I looked at video viewing software.  This is something I thought the average user might have real use for.  But if you're still hoping to find out what's in store for the video editing world, don't lose hope.  I plan on visiting that arena soon.  We just need the tools that are currently available to mature a little more, and we also need a few more options to choose from for our video editing needs.

So, in this month's column you'll find:

  • Interactive Management of Image Maps
  • Linux Video Choices: A review of Xanim, MainView, MpegTV and RealVideo.
The companion site to
The Artists' Guide To The Gimp.
edited by
The Graphics Muse - Michael J. Hammel.
The Artists' Guide to the Gimp
Available online from Fatbrain, SoftPro Books and Borders Books.  In Denver, try the Tattered Cover Book Store.


Other Announcements:
Recent Blender News from June 6 1999
MpegTV Player (mtv) 1.0.9.8
gPhoto 0.3.3
GIMP Imagemap plug-in 1.1.1
< More Mews >
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.

NY Times: Linux Takes Prize - In an Art Competition
"One of the top prizes in a prestigious electronic art competition has been given to a deliberately unusual choice: the Linux computer operating system."   (free registration required)
http://www.nytimes.com/library/tech/99/mo/cyber/articles/01linux.html


ACIS First 3D Modeling Engine To Offer LINUX Port
  LinuxPR

Spatial Inc., a developer of open, component 3D modeling technology and product data access, exchange, and sharing solutions, today announced the availability of ACIS® 3D Toolkit[tm] on Red Hat® Software, Inc.'s LINUX[tm] operating system. This port will arrive in conjunction with Spatial's scheduled release of ACIS 3D Toolkit 5.2 in mid-June.
http://linuxpr.com/releases/32.html



XScreenSaver 3.16
  jwz - June 20th 1999, 20:49 EST

XScreenSaver is a modular screen saver and locker for the X Window System.  It is highly customizable and allows the use of any program that can draw on the root window as a display mode. More than 100 display modes are included in this package.

Changes:  Added new demos webcollage and petri, and made it possible to use the vidwhacker demo in a pipeline.
Also: a new version of shadebobs; improved image selection in webcollage (and sped it up slightly); made configure find the right version of Perl; fixed `make clean' deleting some things it shouldn't; and fixed a typo in the default programs list.
http://www.jwz.org/xscreensaver/



Swift Generator 0.9
  Olivier Debon - June 20th 1999, 20:40 EST

Swift-Generator is a utility à la Macromedia Generator. It aims to dynamically replace text, fonts, sounds, images and movie clips in either Template Generator files or standard Flash files. This allows Webmasters to create dynamic content such as stock tickers, news tickers, weather forecasts and the like.

Changes: Text alignment support has been added.
http://www.swift-tools.com/



gd 1.4
  NevaLabs (Claudio Neves) - June 20th 1999, 20:31 EST

gd is a library used to create .GIF images. It has many nice features and can be used in scripts (e.g. PHP) for dynamic image generation.
http://www.boutell.com/gd/



HP Introduces Linux based HP VISUALIZE Personal Workstations
  From NewsAlert

The HP VISUALIZE PL450 and XL550 Personal Workstations will ship with Linux and deliver leading application performance for popular Electronic Design Automation (EDA) software solutions from Avant!, Mentor Graphics and Synopsys, as well as for other technical applications.

Full Story



tkxanim 0.43
  AaronA - June 23rd 1999, 16:36 EST

tkxanim is a Tcl/Tk front end to xanim which aims to provide a graphical interface that allows the user to configure most, if not all, of xanim's options available from the command line. Since the program is in early alpha development, only a handful of xanim's options are present for configuration.  However, more will be added with each new release. Despite the lack of options for the time being, the program is still very usable and visually appealing.

Changes: Added a couple minor features (Debug Level and Animation Loops entry fields). Also cleaned up the options box a bit.
http://members.yourlink.net/aaron/tkxanim.html



Wacom Driver for XFree86 alpha 7
  Fred - June 22nd 1999, 17:01 EST

This is an XFree86 XInput driver for Wacom tablets. It handles the Wacom IV and V protocols.

Changes: Corrected the init problem on PenPartner models.
http://www.lepied.com/xfree86/


Did You Know?

...you can create maps using an online tool?  Check out Online Map Creation (http://www.aquarius.geomar.de/omc/).  You can generate a map, download its Postscript version and/or view and download its GIF version in your browser.  Equidistant Cylindrical Projections are reported to map very well onto spheres after a little trimming.

...more information on map projections can be found at http://www.ahand.unicamp.br/~furuti/ST/Cart/CartIndex/cartIndex.html.

...you can use the Iomega Buz with Linux?  Take a look at http://www.lysator.liu.se/~gz/buz/.  The Buz is a multimedia box that allows you to connect video and audio inputs directly into your computer.  At about $200, this is a pretty inexpensive way to get into video editing.  The bad news is that getting it working on Linux requires some fairly technical understanding and a willingness to use command line tools (no graphical editing tools yet).  This is not for the faint of heart; the drivers required are somewhat bleeding edge.  You'll need to know how to compile kernels and install driver modules.

...there is a good article on producing movies on LinuxPower.org.  This article is apparently going to be the start of a series of articles on producing movies on Linux.  I'll be interested to see what they say about transferring the images to film/video (something I haven't figured out how to do on Linux yet).  This first article is fairly introductory and regular readers of the Muse should be able to follow it quite easily.  The good news:  it talks about all the tools we've talked about here in the past - so you should already have the tools you need to get started!

...3D Life is a site devoted to 3D character design and animation, linking many sites of artists who deal in 3D characters.  Very good gallery!  http://www.danbbs.dk/~thomcold/3dlife/3dlife.htm

Q and A

Q:  Anyway, I've been experimenting with BMRT and it seems much slower than POVRay, even without using radiosity.  Using BMRT's area lights is really slow (but probably more accurate) compared to POV's, although the difference doesn't seem noticeable.

A:  BMRT renders with 2x2 forced oversampling by default; the adaptive oversampling it uses is not very useful except for very high numbers of samples, because it uses stochastic sampling.  2x2 oversampling is usually sufficient, but slows it down a lot.  As to radiosity, you can start out by setting rsamples to 1; most of the time, 20 or fewer iterations are more than enough.  So try

rendrib -samples 1 1 -radio 10 -rsamples 1 -res 640 480
for a test image.  Or use the non-standard Options
Option "radiosity" "steps" [10]
and
Option "radiosity" "minpatchsamples" [1]
If you notice radiosity artifacts (heavy banding) on large uniformly colored areas, increase the rsamples value (this chops each face into at least this number squared patches).  If your modeller supports this you may also set the subdivision on a per object basis using the non-standard attribute
Attribute "radiosity" "patchsize" ps "elemsize" es "minsize" ms
For details see the BMRT documentation.

Bernd Sieker <bsieker@techfak.uni-bielefeld.de>
From the IRTC-L mailing list

Q.  I have a simple image I made with BMRT and would like to see how it would look illuminated with radiosity.  Does anyone have any tips on using the radiosity settings with BMRT?

A:  For simple scenes radiosity is quite quick, and remember that it's not dependent on the image size.  If it takes too long you can exclude certain objects from the radiosity calculations using the non-standard attribute

Attribute "radiosity" "zonal" zonalval
Bernd Sieker <bsieker@techfak.uni-bielefeld.de>
From the IRTC-L mailing list

Reader Mail

Seth Burgess wrote:
Regarding the user question [in last months TheGimp.com], there was one:
2. can the space [that Gimp Swap files] consume be limited?

You answered:
2. Reduce the number of levels of undo.  I'm not sure if they can be turned off or not - check the Preferences dialog.

However, if the user has plenty of RAM, upping the tile cache size from 10MB to something larger (say 64) should drastically reduce the swap file size as well.

And there's the obvious - work on smaller images.

Seth
sjburges@gimp.org

'Muse:  Thanks Seth.  I'm not sure why I didn't include that, but that's exactly what I've done on my system.  It's certainly faster working in memory than with disk swap files.
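For anyone who wants to make the change by hand rather than through the Preferences dialog, the tile cache can also be set in the gimprc file.  A minimal sketch follows - the file location and the exact option syntax are what I'd expect for a typical Gimp 1.x setup, so treat them as assumptions and check your own installation:

```
# In $HOME/.gimp/gimprc (assumed location for a Gimp 1.x per-user setup):
# raise the tile cache from the 10MB default so more image data stays
# in RAM instead of spilling to swap files on disk.
(tile-cache-size 64m)
```

Restart the Gimp after editing the file for the new value to take effect.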

Regarding the GRiNS port to Linux (GRiNS is a Graphical SMIL editor - see Did You know in the June 1999 Muse), I asked Jack Jansen:  are there any plans for a Linux port at this time?  I'd like to point my readers to resources on SMIL for which they could make some use, and this would be an interesting start.

Jack replied:

There are definitely plans for a Linux port, but no firm dates yet. The basic functionality is indeed reasonably easy to port, but handling of audio and video is something that still needs some investigation. And given that we have  only limited resources we have to prioritize the things we take on.

Jack Jansen
Jack.Jansen@oratrix.com

Paul Took wrote
My name is Paul Took, in Melbourne, Australia. I recently started a course with Interim Technology (formerly Computer Power) and came across Graphics Muse. I'm considering doing a second course at another college which involves web page design (HTML/Javascript etc.) and graphic design/animation (use of Adobe Photoshop etc.)

In your expert opinion: is it worth doing a structured course like this or buying a couple of web design books and learning at home??

'Muse:  This is highly dependent on your own motivation and learning habits.  I, personally, learn much more on my own than in a class, but often take a class when just starting a new topic to get me pointed in the right direction (like photography, which I just got into recently).

HTML is easy enough to learn on your own - there really isn't that much to it.  If you need to learn a slew of specific applications it often helps to take a class (it's often harder to learn to use the applications than to write the HTML yourself).  Java is a language unto itself and I'm certain a structured course would help.  Design and animation as a class covers a very broad range of topics - colors, structures, paint and animation techniques, procedural animation, etc.  That's not a class, really - it's a whole degree.  Using Photoshop or some other specific tool is like what I said previously: it helps to take a class if the tool is sufficiently complex.  I don't think Photoshop is hard to learn (the Gimp is easier - you could always buy my book on how to use it, of course).  But learning which buttons perform which functions is only a small part of the job.  The bigger part is learning to use those buttons creatively to produce interesting effects, sometimes to the point of being able to reproduce an effect quickly (like drop shadows for logos, a very common requirement from clients) and in the same manner each time.

If you're just learning web design for fun, or even for your business, and are confident in your own ability to teach yourself new topics, then skip the class.  But if, like me, you find a little push in the right direction helps, then take the structured class.

Of course, if it makes any difference, I've never taken any classes on HTML or computer graphics.  It's all self taught (except for some minor OpenGL experience, but I never really used what I learned).

Hope that helps.

Now, on the subject of image resolution and printing, I found this post from Brian Reynolds on one of the Gimp mailing lists:

David Fokos has written a very good paper on creating half-tone digital negatives for contact printing.  You can find it at Bostick & Sullivan's web site at:
http://www.bostick-sullivan.com/Technical%20papers/Digital%20Info/Dave_Fokos/davetech.htm

Besides discussing all the details about making negatives for contact printing, this paper has a very good explanation of the resolution metrics (dpi, ppi, lpi) for the various types of equipment used for digital input and output and how they relate to each other.  The paper assumes you are using Photoshop, but gives general enough descriptions that you aren't tied to it (as opposed to another book on digital negatives that assumes Photoshop is the only software available).
Brian Reynolds
reynolds@panix.com

I read the paper and Brian is right - you can apply the digital techniques David discusses to the Gimp just fine.  You might need a bit of background on photography for this paper, but it's well worth the read.
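One piece of the dpi/ppi/lpi relationship is easy to sketch in numbers.  A commonly cited rule of thumb - an assumption on my part, not a quote from David's paper - is to work at an image resolution of roughly twice the output device's line screen:

```shell
#!/bin/sh
# Rule of thumb (assumed, not from the paper): image ppi ~= 2 x output lpi.
# A 150 lpi halftone screen, common for magazine-quality output, would
# therefore want image data at about 300 ppi.
lpi=150
ppi=$((lpi * 2))
echo "$ppi ppi"   # prints "300 ppi"
```

The paper itself goes into far more detail on when and why that ratio holds, so read it before trusting the shortcut.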



Interactive Management of Image Maps

One of the tools lacking from the Linux arsenal these days is a really good interactive Web page builder.  I use Netscape Composer for all my pages, but this lacks any sort of integrated graphics editor.  You can configure it to launch an external editor, however, and this is where the ever popular Gimp comes in.

The Gimp is, of course, the best raster image editor on Linux.  Not only does it have support for many different effects and filters, it also has a dynamically extendable interface through the use of plug-ins.  One of the latest plug-ins to gain popular attention is the Image Map plug-in from Maurits Rijk.
The Image Map plug-in, shown with a sample image and the
Areas List (the list of URL links) window disabled.
Image Maps, for those unfamiliar with their use, are an HTML construct that allows a Web page author to specify regions within a single image to be used as links to different URLs.  Regions can be specified using rectangular, oval and polygonal coordinates.  Both server and client side maps are possible, although client-side image maps are the more popular of the two types.  This column uses a client-side image map for navigation at the top of the page (upper left corner of the page, just below the Graphics Muse logo).  By providing a method of mapping the single image into multiple links, image maps reduce the overhead that multiple images positioned using tables would require.
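As a quick illustration, a client-side map boils down to an img tag pointing at a map of area regions.  The file names and coordinates below are made up for illustration:

```html
<!-- A minimal client-side image map.  The image name, map name,
     coordinates and link targets are all hypothetical. -->
<img src="navbar.gif" usemap="#nav" width="240" height="40" alt="navigation">
<map name="nav">
  <!-- rect: x1,y1 (upper left) to x2,y2 (lower right) -->
  <area shape="rect" coords="0,0,120,40" href="mews.html">
  <!-- circle: center x,y and radius -->
  <area shape="circle" coords="180,20,20" href="musings.html">
  <!-- polygon: a list of x,y vertex pairs -->
  <area shape="polygon" coords="200,0,240,0,240,40" href="resources.html">
</map>
```

The Image Map plug-in generates exactly this sort of HTML for you, which is the whole point - hand-computing polygon coordinates gets old fast.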

The current version of the Image Map plug-in is 1.1.1.  This version adds support for the HTML onBlur and onFocus attributes.  Although the interface is fairly well designed, the program currently provides no documentation.  Building from source (which is how this plug-in is distributed) is simple enough:  just unpack it and type make.  No editing of Makefiles or other configuration files should be necessary.  After compiling you can either do a make install or simply copy the binary (named imagemap) to your $HOME/.gimp/plug-ins directory and restart the Gimp.  Once installed, the plug-in can be accessed via the Filters->Misc submenu of the Image Window menu.

The interface consists of a scrollable window on the left and the set of URL links on the right.  The scrolled window is a full size copy of the original image.  Two menu bars are provided - one using traditional pull down text menus and the other an icon based version of the same features.  An additional icon based menu of region shapes (rectangular, oval, polygonal and so forth) runs along the left side.  The icon menus are all detachable - you can click on the rough edged left side of each and drag it out of the main window, although what advantage this might provide I don't know.
Grid
Icon
In order to start specifying regions for the image map, you might first consider turning on the grid lines.  This can be done quickly using the Grid icon in the icon menu bar, but you'll probably also want to adjust the granularity of the grid.  This can only be done by selecting Goodies->Grid Settings from the text menus.  This will open a dialog box where you can specify the width and height of the grid boxes, the method for displaying the grids (lines, crosses or hidden), an offset from the upper left corner in which to begin the grid and, most importantly, whether region shapes are snapped to grid intersections.  This last item is what will make creating your image maps rather quick and painless.
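Snap-to-grid is just rounding to the nearest grid intersection.  A quick sketch of the arithmetic - the 15-pixel grid size is only an assumption, since the plug-in lets you pick your own:

```shell
#!/bin/sh
# Round a pixel coordinate to the nearest multiple of the grid size.
# Integer trick: add half the grid size, then truncate by division.
snap() {
  coord=$1
  grid=$2
  echo $(( (coord + grid / 2) / grid * grid ))
}

snap 37 15   # prints 30 (37 is below the 37.5 midpoint between 30 and 45)
snap 41 15   # prints 45
```

Every corner you drag gets this treatment in both x and y, which is why snapped regions line up so neatly.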

[ More Web Wonderings ]


Impress Follow-up

Linux Video Choices: A review of Xanim, MainView, MpegTV and RealVideo.

I don't do much video work on Linux yet.  I have a sufficiently fast box for it; I just haven't had much more than a passing interest since there aren't many video editing tools available yet.  Still, viewing animations (in something other than GIF format in a browser) or streaming video has become an important part of the Internet in the past few months.  So I thought I should at least take a look at what tools are available from a viewer's perspective.

Now, there are probably a couple dozen projects underway for viewing video and animations on Linux.  I can't review all of these; there just isn't enough time in the day to do them justice.  So I've chosen four viewers that I think represent varying aspects of digital video as well as varying support for different video formats.  The four tools are Xanim by Mark Podlipec, MainView by MainConcept, MpegTV by MpegTV, and RealVideo from RealNetworks.

In order to test these I decided to download a series of RealVideo, MPEG, and Quicktime files, both with and without audio, and see how each tool that supports them performed.  For RealVideo and MpegTV, I used appropriate URLs.  The test system was configured with 256Mb of memory using a TrueColor visual under the Xi Graphics Accelerated X server with a Matrox Mystique 4Mb video card and the commercial Open Sound System drivers for a Generic MAD16 Pro (OPTi 82C929) soundcard.  For animations or streaming video/audio used in these tests for which I know a URL, I have provided links to the test files.  I can't post the video files here since the Linux Gazette (which is the main location for the Muse column) gets distributed to a lot of places that wouldn't be happy downloading 2Mb+ video files.

A note about file types

If you're not familiar with the codec types, just look for animation files with suffixes like .mov and .anim (both are versions of Quicktime, I believe), .fli (FLI/FLC), .ram, .rm and .rv (RealVideo files), and .mpg (MPEG animations).

Xanim
Latest version: 2.80.1

Long before the others arrived on the scene, Mark Podlipec's xanim was serving up video files to the masses.  Supporting AVI, Quicktime, FLI/FLC, Amiga, and JFIF file formats along with GIF and DL animations as well as a number of audio formats, the X Window System based xanim can play just about any popular animation file you might find on the Internet.
 
 
Xanim, playing an E! Quicktime interview.
Xanim is provided in source format for the main engine, with binary dynamically loadable libraries (DLLs) provided for various codecs for which the copyright owner would only provide information if Mark signed an NDA.  In a sense, I think Mark's solution to the proprietary vs. open problem is probably not a bad compromise.  In any case, the source is portable to many Unix (and other) platforms.  Building the source is fairly easy on Linux systems.  Unfortunately the package doesn't support autoconf-based compilation, but I'm not one to complain much about that (considering my own XNotesPlus doesn't support it either - who has time to learn all these tools?).  Mark provides a build based around imake, which isn't too bad a substitute for autoconf.  The Imakefile needs only one modification for building on Linux - in section IVb add this line:

EXTRA_DEFINES = -I/usr/X11R6/include/X11
This is necessary, even though the Imakefile says it shouldn't be required, because Mark doesn't prefix his use of the X header files with "X11/<header file>" but the standard imake templates assume that applications do so.  Since Mark apparently does his builds on Linux too, the rest of the Imakefile should probably work just fine as it is.  You then run "xmkmf; make xanim" to build the program.  Then just copy it to an appropriate directory, such as /usr/local/bin.  Installation, from build to running my first animation, took about 10 minutes.

The interface for xanim is rather small, but it supports starting, stopping, rewinding and audio levels.  You can step through a video by clicking various mouse buttons in the display window.  Most of the options supported by xanim are accessible only from the command line.  You can find what options are available using the traditional --help command line option.  There is a remote interface that allows external programs to control xanim, and I believe there are GTK, Tk and KDE based front ends to xanim now, although I didn't specifically look for them.  Note that there is no built-in help facility in xanim.  You'll need to read the documentation or visit the Web site for details.  But for most animations, especially on systems with TrueColor visuals (i.e. 16.7 million color displays), you simply run "xanim <filename>".  Pretty straightforward, really.

Xanim played all 14 of the videos I tried with absolutely no problem.  I tested Quicktime, MPEG, FLI and IFF animations.  There was little jitter and no obviously skipped frames, and the sound was perfectly synchronized on the animations that came with audio (which, as it turns out, was just the Quicktime files).  Tests were run in both TrueColor and 256 color modes.  Xanim had no problems mapping the full color videos to the lower bit planes.  In fact, it did a better job of it than I could manage using various command line color related options.  By default xanim will loop through the animation indefinitely.  You can change this behavior using command line options.

By supporting dynamic loading of video codecs, Mark has made it easy for end users to add support for any new codecs that might come along.  Now you can simply download the appropriate binary codec from his site, unpack it, and restart xanim.  Recompilation is no longer necessary.  Despite its apparent simplicity, xanim is still the best all-around video player for Linux.

MainView
Version 2.06

In trying to figure out a topic for this month's Musings, I started to look around for video editing software.  I'd heard a few packages were available, but had never tried any of them.  One package I did run across was a new commercial package (currently freely available as a beta distribution) from a German company called MainConcept.  This package included a video display tool called MainView.
MainView, running an MTV 
sponsored clip of a Garbage 
video for their single "Happy".

MainView is actually an external viewer application for the larger MainActor video editing system.  It can, however, be run independently of MainActor.  The interface is even more sparse than xanim's, but doesn't appear as cramped.  Run time options can be accessed through a menu which you open by right clicking with your mouse over the animation window.  Options include changing the speed of the animation and various audio options.  Audio, unfortunately, didn't work at all on any of the animations I tried.  It always played very loud and completely distorted audio.  After testing audio support on all the files, I ended up turning audio off so I could continue testing video playback.

Video support is much better than audio, fortunately.  All 14 of the animations I tried played flawlessly under a TrueColor display.  When I started MainView the very first time, I noticed that it complained about requiring the XFree86 DGA (Direct Graphics Access) extension, but started anyway.  The extension, it turns out, is only needed if you want to run in full screen mode.  As long as you're not trying to do that, the video portion of MainView works fairly well.

One nice feature of MainView is that it remembers the last directory you were in between sessions.  I like this because I can launch MainView from my FVWM2 GoodStuff bar and have it start in the directory where I save animation files.  MainView starts by providing a file browser window from which you can select an animation to view.  It then closes the file browser and opens the video playback window.  There doesn't appear to be a way to return to the file browser, however.  That sure would make it easier to browse through multiple video files without having to restart MainView each time.  MainView also doesn't automatically loop through videos.  In fact, I couldn't find a way from within MainView itself to get a video to loop.  MainActor does allow you to set a repeat count, but not an infinite loop.
 

The test frame displayed by 
xanim.   The picture here  is
a little less grainy than the
MainView display.

MainView's version of the test frame.  The contrast is a little better
here - you can make out  more detail, but at the expense of image
quality, I'd say.
MainActor, the video editor package with which MainView works, does attempt to provide online help, which it tries to launch in a Netscape window.  The HTML help files had been installed with the RPM distribution, but MainActor failed to get Netscape to open them.  It simply started a new instance of Netscape (even if you already had one running).  I had to give a file: URL to open the files manually.

Comparing MainView and Xanim under a 256 color display

Here are screenshots of both xanim and MainView displaying the same frame of the Garbage video under a 256 color display.  The xanim version appears to have a little better dithering than MainView, but if you watch the entire video with both players you can hardly tell the difference.

Although you can currently download this product for free, MainView and MainActor are commercial products.  The price listed on the company's Web site could only be found in a press release - $80US for the package without documentation, $115US with documentation plus some other extras.  The current version is distributed in binary form and is available for Linux on x86 platforms only, and only in RPM format.  Recently, MainConcept announced that MainActor would be bundled with the Linux Media Labs LML33 video capture card.  To my knowledge, this is the first bundling of a Linux oriented hardware peripheral with a Linux specific application.  Things are looking up for off-the-shelf solutions.
 
 

MpegTV
Version 1.0.9.4

As the name implies, MpegTV only plays MPEG animation files.  However, unlike the previous two applications, MpegTV can handle both static and streaming files, both locally and across a network.  This program actually comes in two pieces - the command line oriented mtvp program and its GUI interface, mtv.  The latter requires the XForms library, which is not currently shipped with any Linux distribution but is free for private use and can be downloaded from the XForms Web site.  For certain features you may also need the SDL library.  Both can be found via links on the MpegTV download page.  Installation instructions are not included with the downloaded package; you have to go to the MpegTV Web site to get them.
 
The MpegTV UI.  The 
control panel's volume 
controls work well with 
the OSS sound system I 
have installed.  The video 
playback, however, was 
a bit grainy.  This was 
probably the fault of the 
recording and not the 
player, since the other 
MPEG files I tried didn't 
seem to have this 
problem.

MpegTV is shareware for personal use, with a shareware price of $10.  It requires a commercial license for commercial use.  The version I downloaded would pop up the usual annoying "please register" window common for shareware applications.  Personally, this doesn't bug me much since I don't have any problem with people trying to sell their software.  If it's worth it, I pay for it.

Unfortunately for MpegTV, this dialog did pose a problem.  Halfway through the Star Wars trailer (which I downloaded from their site as part of my testing) the Registration dialog popped up.  At that point the sound quit, and the main control window wasn't redrawn and no longer accepted user input.  The video, however, kept playing.  The only way to exit the program after this was to use "kill -9" on the mtv and mtvp processes.  Since I had to run the program multiple times to try to get screen shots and try various features, this bug became a real annoyance.  I'm hoping that the registered version doesn't do this (since you should never see the registration screen).

SDL - the Simple DirectMedia Layer - is the same library used by Loki for their port of Civilization: Call To Power.  It provides a layer between X applications and various low level multimedia APIs, including XFree86's DGA extension.  I suspect you'll be seeing this library being used, and required by, quite a few applications in the future.  For MpegTV, SDL is only required to run MpegTV in full screen mode.  One problem I had with this was that the SDL installation tool installs the library under /usr/local/lib by default (you can change this during the installation process).  MpegTV requires that the library be installed under /usr/X11R6/lib.  I installed the library under /usr/local/lib and added symbolic links under /usr/X11R6/lib.  This should have worked, but for some reason MpegTV failed to load the libraries.  As far as I can tell, there is nothing wrong with the symbolic links, so I suspect that the library must really live under /usr/X11R6/lib in order to work with MpegTV.

The company's Web site offers two test animations: a short animation of bouncing boxes and an old Star Wars trailer.  Both of these played just fine.  There is also a link to a site with more links to MPEG animations on the net.  I bounced around a few of those but couldn't find anything more interesting than the three other MPEG animations I already had.  MpegTV played them all just fine (if you ignore the Registration dialog problem).  Additionally, MpegTV can also play Video CDs, such as the video portion of music CDs.  It doesn't play DVDs, however.  Although my RH 5.2 system appears to have the VCD patch applied, and xreadvcd does appear to read the Video CD contents, I couldn't get MpegTV to read the CD, nor could I get xreadvcd to write the MPEG stream to a file.  There is apparently something wrong with my kernel configuration, so I couldn't really test the Video CD support in MpegTV.

Interestingly enough, after downloading the two test MPEG files from the MpegTV site and trying them with mtv, I then went back and tried them with xanim.  I couldn't play either of them correctly with xanim.  I then tried some of the other mpg files I had used with xanim under mtv.  They all played about the same except for one - monopoly.mpg.  Under xanim this played rather slowly, with distinct pauses between frames.  Under mtv it played just fine.  The frames flowed by seamlessly.  So mtv appears to deal with MPEG files better than xanim, although mtv appears to have some nasty bugs, at least in the unregistered version.

MpegTV will play MPEG streams direct from the Internet if you supply a URL on the command line or through the Play From URL option from the File menu in the control window.  I tried this with one site but found the stream to be too slow to play interactively.  After the 20 minute download, which did play while it was being downloaded even though it looked like only one frame every so often was playing, I tried to replay it and save it.  I could do neither.  I don't know if this is a limitation in the unregistered version or not, however.
 

RealVideo
Linux G2 Beta version

One of my favorite tools to be ported to Linux is the RealVideo G2 player from RealNetworks.  While working for Samsung in Dallas, and forced to use an NT box for email, I got hooked watching and listening to Bloomberg TV financial reports.  I was able to do this only because I was stuck with that NT box (which sat to one side and collected dust most of the time) and G2 didn't run on my Solaris box.  Now that I'm working from home, I'm thrilled to be able to view this same content from my Linux box.
The G2 Player, running a clip from the Wild, Wild West.  Note that the video 
window includes links to other movies.  These are all part of the new SMIL 
(Synchronized Multimedia Integration Language) page design supported by the G2.

The G2 player can play any of the streaming video and audio formats from RealNetworks.  This includes the older .ram and .rm audio files as well as files in the new Synchronized Multimedia Integration Language (SMIL, yet another of the HTML-style formatting languages), suffixed with .smi.  It doesn't play MPEG or any of the formats the other players support, however, so you need to find sites that support the RealNetworks formats.   Fortunately, these sites abound on the Internet.  RealNetworks was one of the first to provide a usable streaming media format for the Internet, and it caught on very fast.  Many news sites support RealVideo these days.
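For the curious, a SMIL file is just a small HTML-style document that lays out and synchronizes media streams.  A minimal sketch of what one looks like (the server, clip names and region size here are made up for illustration, not taken from any real site):

```html
<smil>
  <head>
    <layout>
      <!-- one region where the video will be drawn -->
      <region id="video" width="176" height="128"/>
    </layout>
  </head>
  <body>
    <!-- <par> plays its children in parallel: picture plus soundtrack -->
    <par>
      <video src="rtsp://example.com/clip.rm" region="video"/>
      <audio src="rtsp://example.com/clip.ra"/>
    </par>
  </body>
</smil>
```

The player reads this, fetches each stream, and keeps them synchronized, which is how those clickable links can share the video window with the movie itself.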

The Linux version is still in beta, at least to my knowledge.  I've had no serious problems with it, although sometimes the video window can pick up visual artifacts when you switch sites.  It also had a few problems refreshing the video window after another window had partially hidden the G2 player and was then moved away.  These problems only happened with static parts of the video window; any animation forced window updates, so those areas appeared to work just fine.

Playback of the streams has been pretty good.  I think I have more problems with network delays than with playing the streams.  The G2 player comes with a host of options to configure it for best performance.  It can work behind firewalls if your network administrator permits passing the right port numbers.

The player itself is made up of a primary video display window surrounded by associated controls.  The NT version includes a scrolling, icon-based playlist to the left of the video window, but the Linux version currently lacks this.  I don't think there are any serious technical reasons they can't add it in the future, though.  Information about the clip currently playing can be scrolled through the Clip Info window, or this can be disabled to help increase performance just a bit.  The audio support is very good: RealNetworks chose to allow skipped video frames in exchange for fluid audio performance.  I find that appealing, as I often just listen to the streams while doing other work.

Streaming video is still a jumpy affair.  You don't get the smooth frame-by-frame animation you get when playing an MPEG or Quicktime file directly from your hard disk.  But the format does support moving anywhere within the stream at any time.  I can jump to the middle and pick up playing from that point if I choose, or rewind, or start over at any point within the currently playing stream.  And I don't have to wait for the entire file to download in order to do this.  I still think streaming audio is a better medium for this technology, due to general limitations in bandwidth to the user, but once we all have higher-speed connections, streaming video will offer choices that TV and cable never could.

Other tools

One other player I tried was XMovie.  This is a program built on a library for playing Quicktime movies; it's part of a series of tools that includes a video editor called BCast2000.  However, there are licensing limitations with Quicktime that XMovie can't get around.  I don't know if that was the reason or not, but XMovie couldn't play any of the animations I tried.  Whatever codecs it supports, they're not the ones being used in the video files I found on the Internet.

Places to find video files online

You can always check many of the entertainment sites online, such as E! Online and Comedy Central's download site.  Additionally, you can find clips and links to other online sources of video files at Jesse's Movies and Yahoo!'s set of movie clip links.  Sites with streaming MPEG and downloadable MPEG files can be found through MPEG.org's MPEG Bitstreams page.  RealVideo clips can be found at the RealVideo Showcase site and their Real Guide site.

Keep in mind that playing movies like this doesn't require huge amounts of hardware: a 32Mb Pentium 133 should work just fine, although some animations may play a little slowly and audio might not sync all that well.  But you certainly don't need the 256Mb of memory I used, nor do you need the latest CPU.  And you certainly don't need a 3D-accelerated video card.  These animations are basically just a series of individual raster images played very fast.  It's like using a flip book of pictures: the faster you can flip through the pages, the faster the animation appears to run.  Except that on a computer, with the right player, you have more control over the speed.

I have to admit, I'm still a big fan of xanim over any of the other players I've tried.  For 95% of the animations out there it's just the right tool for the job.  But it doesn't, to my knowledge, support streaming video/audio.  Since I don't have cable television anymore (what a waste of money that is), I get my news and information online.  I find myself listening to and even watching streaming audio and video with RealVideo quite often these days.  Since the information streams, I can leave it running while I work and just listen to the bits and pieces of whatever interests me.

Like whether or not my shares of Disney are ever going to go back into positive territory.  Maybe if they released all their films as streaming MPEGs....

Since I've always been a fan of visual media, I find the opportunity to build my own webcasts rather enticing.  Streaming video and audio are the best future for online video because they don't require the user to download the entire file to play it.  At some point in the future, I hope to put together some live interviews for webcasting, but I have to investigate what that will cost and where it can be hosted.  In the meantime, at least I have the right tools to view other webcasts and online video.

[ More Musings ]

A TrueColor visual is just X Window System terminology for a display that can handle up to 16.7 million colors.  Most modern video cards can handle this, especially if you have 2Mb or more of video memory on the card.
 

The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems. If you have some application-specific information for me, I'll add it to my other pages, or you can contact the maintainer of some other web site. I'll consider adding other general references here, but application- or site-specific information needs to go into one of the following general references rather than be listed here.
 
Online Magazines and News sources 
C|Net Tech News
Linux Weekly News
Linux Today
Slashdot.org
TheGimp.com

General Web Sites 
Linux Graphics
Linux Sound/Midi Page
Linux Artist.org

Some of the Mailing Lists and Newsgroups I keep an eye on and where I get much of the information in this column 
The Gimp User and Gimp Developer Mailing Lists
The IRTC-L discussion list
comp.graphics.rendering.raytracing
comp.graphics.rendering.renderman
comp.graphics.api.opengl
comp.os.linux.announce

Future Directions

Next Month:  A return to 3D Modellers.

Let me know what you'd like to hear about!



Copyright © 1999, Michael Hammel
Published in Issue 43 of Linux Gazette, July 1999
 Recent Blender News from June 6 1999
 MpegTV Player (mtv) 1.0.9.8
 gPhoto 0.3.3
 GIMP Imagemap plug-in 1.1.1
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.



Recent Blender News from June 6 1999

- Blender at SIGGRAPH 1999
NaN will give the first public presentation of Blender at the world's most famous CG show, this year in LA, August 10-12.   Visitors will get a sneak preview of the 2.0 Game-Blender.
http://www.siggraph.org/s99/

- Complete Key features
Blender 1.63 now has radiosity, environment mapping and DXF output included as C-key features. Version 1.64 will have a text editor (free versions too).  We expect to introduce Python scripting in V1.65.
Check out http://www.blender.nl/complete/

- Linux 3D h/w acceleration coming?
We hope to demonstrate the first accelerated Linux Blenders at SIGGRAPH '99.  There's still no working version here... patience!

-Ton-



MpegTV Player (mtv) 1.0.9.8
  Kerberos - June 27th 1999, 15:06 EST

MpegTV Player (mtv) is a real-time software MPEG-1 Video+Audio Player and VCD Player. It supports full screen mode, can play from file, pipe, network URL, or Video CD.
http://www.mpegtv.com/download.html



gPhoto 0.3.3
  Paul S Jenner - June 28th 1999, 12:40 EST

gPhoto enables you to take a photo from any digital camera, load it onto your PC running a free operating system like GNU/Linux, print it, email it, put it on your web site, save it on your storage media in popular graphics formats or just view it on your monitor. gPhoto sports a new HTML engine that allows the creation of gallery themes (HTML templates with special tags) making publishing images to the world wide web a snap. A directory browse mode is implemented making it easy to create an HTML gallery from images already on your computer.

Changes: Various bug fixes
http://www.gphoto.org/



GIMP Imagemap plug-in 1.1.1
  Maurits Rijk - June 28th 1999, 12:38 EST

The GIMP Imagemap plug-in enables The GIMP (GNU Image Manipulation Program) to create clickable imagemaps in CSIM, CERN or NCSA format.

Changes: Fixed serious bug that made version 1.1 segfault at start-up.
http://home-2.consunet.nl/~cb007736/imagemap.html
© 1999 by Michael J. Hammel
Impress Follow-up
more musings...



Impress Follow-up

Chris Cox dropped me a note to clarify a few points from my Impress article last month:

My permanent email is cjcox@acm.org, not ccox@acm.org (hope ccox@acm.org doesn't get too much mail).

The Dinosaur is Steve Cowden's dinosaur he did using Aldus Freehand.  It's copyrighted....I tried to contact him about using it....hope there's no problem.  In the impress_complete package there's a tool called epsfilt and one called pstoedit (you can actually download the latest copy from its home site....it has my mods already).  You can take EPS to Tk using these tools:

$ epsfilt <myfile.ai >myfile.eps  # imperfect tool, may have to hand edit!
$ pstoedit -f tk myfile.eps myfile.tk
Then just open myfile.tk in ImPress.  ImPress doesn't handle clipping regions, though....so the neat dithering stuff you find in commercial packages won't work.

Larry Ewing's penguin was converted to PostScript by somebody (?).  I have since imported a better one by using Adobe Streamline running under Wine to convert the actual raster penguin to vector format.  The PostScript one scales better than the one converted from the raster image, though.

Another thing you might want to try (though people question its practical implications) is to download the Tcl/Tk plugin from www.scriptics.com and install it into a Netscape browser under Linux.  Then edit the plugin.cfg file as indicated in the documentation to allow loading a tclet from www.ntlug.org, and you can then run the demo showing a document being retrieved off the web for editing inside a web browser.  It can be saved to any local disk (which could be a Samba-mounted area, for example).  (There are some Netscape bugs which prevent "saving" from working really well.)

Thanks for taking the time to look at it,
Chris Cox
© 1999 by Michael J. Hammel
With the grid turned on you can create rectangular, oval or polygonal areas that are snapped to the grid.  With any of these shaped areas you can edit the attributes of the area using the Edit->Edit Area Info menu option.  This option opens a dialog with three pages:  Link, Shape, and JavaScript.  The Link page allows you to specify what URL the region should link to.  With the 1.1 release, you can now drag from Netscape's Location icon directly into the URL field of this page and the link will be dropped in for you.
The Selection portion of the main
dialog.  Notice the URL #'s.  The
higher the number, the higher the
precedence that area takes for
regions that overlap.
The Shape page lets you edit attributes of the shape itself: height, width, location of vertices, radius and so forth.  The JavaScript page allows you to specify four types of JavaScript event handlers:  onBlur, onFocus, onMouseOver and onMouseOut.  The text input regions for these are a little small, only one text line, but the line can (apparently) go on indefinitely.
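To give an idea of where those handlers end up, an image-map AREA tag with event handlers attached looks something like the following in plain HTML.  This is a hand-written sketch, not the plug-in's actual output, and the URL and handler bodies are invented for illustration:

```html
<!-- A rectangular hotspot carrying two of the four supported handlers.
     window.status puts a message in the browser's status bar while the
     mouse is over the area, then clears it on the way out. -->
<AREA SHAPE="RECT" COORDS="15,15,285,135"
      HREF="http://blah.org"
      onMouseOver="window.status='Visit blah.org'; return true"
      onMouseOut="window.status=''; return true">
```

Whatever text you type into those one-line input fields ends up as the value of the corresponding attribute, which is why the line can run on as long as it needs to.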

To create a new rectangular or circular area, click once to start dragging the shape, then click again to anchor it.  Doing this opens the area settings dialog so you can specify the URL to associate with the area.  The polygonal area tool works similarly, except that a double click is required to end the shape and open the attributes dialog.  Additionally, with polygonal shapes a left mouse button click anchors a new vertex for the shape, and a right mouse button click deletes the vertices in reverse order.

To edit a shape, click on its URL in the Selection window, or click on the shape's outline while the Select Existing Area button (the button with the arrow along the left side of the dialog window) is enabled.  Note that when you edit a shape's size by dragging one of the handles on its outline, the snap-to-grid function is no longer in effect.  I don't know if this was intentional or not; don't be surprised to see this change to match the current grid settings in later releases.

Once you've set the URLs for shapes, you can change their positions in the map.  The lower an area sits in the list, the higher the precedence it takes, which matters for areas that overlap.  Notice in the example of the Selection window that there are four areas defined: area 4 will take precedence over area 3 any place the two overlap.

Now that you have a number of areas defined, what does the HTML code for this image map look like?  Here is the source for the example at left:

<IMG SRC="/home/mjhammel/src/graphics/scenes/stock/ttu-ontour-97.pnm" WIDTH=300 HEIGHT=359 BORDER=0 USEMAP="#">

<MAP NAME="">
<!-- #$-:Image Map file created by GIMP Imagemap Plugin -->
<!-- #$-:GIMP Imagemap Plugin by Maurits Rijk -->
<!-- #$-:Please do not edit lines starting with "#$" -->
<!-- #$VERSION:1.1 -->
<!-- #$AUTHOR:Michael J. Hammel -->
<AREA SHAPE="RECT" COORDS="15,15,285,135" HREF="mailto:thisguy@home.org">
<AREA SHAPE="CIRCLE" COORDS="75,135,67" HREF="http://blah.org">
<AREA SHAPE="RECT" COORDS="60,285,255,345" HREF="http://blah.blah.net">
<AREA SHAPE="POLY" COORDS="285,15,195,15,195,60" HREF="ftp://somewhere.com">
<AREA SHAPE="DEFAULT" HREF="http://www.graphics-muse.org/blah/ttu-ontour-98.jpg">
</MAP>

Notice that the source image's URL is taken to be the path of the image I've opened in the Gimp.  I looked around the plug-in but couldn't find a way to change this; you apparently have to edit the HTML code manually.  There is an option to set a default URL, using the Info icon, but you can't change the source reference for the image.  Once you have this HTML output to a file, you'll need to find some way of importing it into your real HTML file.  The Image Map plug-in does not currently allow you to place this code directly into an existing HTML file.
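Until the plug-in grows an option for this, a one-line sed pass can patch up the hard-coded path in the saved file.  This is just a sketch; the file names and the web-relative replacement path are examples I made up, not anything the plug-in itself produces:

```shell
# The plug-in wrote my local Gimp path as the IMG SRC; rewrite it to the
# path the web server will actually use.  (Example file and paths only.)
cat > imagemap.html <<'EOF'
<IMG SRC="/home/mjhammel/src/graphics/scenes/stock/ttu-ontour-97.pnm" WIDTH=300 HEIGHT=359 BORDER=0 USEMAP="#">
EOF
sed 's|SRC="[^"]*"|SRC="images/ttu-ontour-97.jpg"|' imagemap.html > imagemap.fixed
cat imagemap.fixed
```

The same sort of editor pass works for pasting the &lt;MAP&gt; block into the real page afterwards.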

Unfortunately, along with quite a few bugs in the 1.1.1 release, I found that the Image Map plug-in doesn't do many of the things I would like.  First, the polygonal area tool is really difficult to use.  For some reason, it won't accept my double clicks to end a polygon shape definition unless I double click and then don't move my mouse for a split second.  If I double click and move too quickly, I get all sorts of line drawings from the last vertex created to the current mouse location, something like a starburst shape.  In any case, it's not what I expected.  The unwanted starburst will go away if you cover and then uncover the Image Map window with another window (forcing a window update for the plug-in).  However, getting out of that mode seems to require backing completely out of the polygon by right-clicking until all vertices have been removed.  At that point, the extra line drawing stops.

Another nitpick of mine: the up and down buttons for URLs move the selected item to the top or bottom of the list, not up or down one list item.  I feel a little like I'm playing the Towers of Hanoi trying to arrange the URLs.  And I can't specify the path to the image source without hand-editing the saved HTML that the plug-in outputs to file.

Zoom takes a little while to work; it zooms the whole image and doesn't appear to let you specify the area to focus in on.  I suspect it uses its own zooming algorithm and not the Gimp's, since the Gimp's zoom is considerably faster.  Or maybe it just takes longer to recompute the image map area shapes.  I also don't like the fact that changes to width, height or other shape attributes don't automatically update the display.  I think automatic updates should probably be a user-configurable preference.

As for bugs, there were a number that were difficult to recreate.  Aside from the problem with the polygons and double clicking, I noticed at one point that the height of my first rectangular selection, as displayed in the settings dialog, was only 1 pixel high.  But most of the bugs I found were minor and self correcting.  None seemed to cause the final HTML output to be corrupted in any way.

Although I might prefer changes to certain aspects of the interface, I find Maurits' Image Map plug-in for the Gimp stable and easy to use, and it produces syntactically correct HTML.  If you do a fair amount of image map work, or even if you'd just like to create one for grins, you owe it to yourself to take a look at this plug-in.  It may just ease your life a bit.
© 1999 by Michael J. Hammel



Into the Belly of the Beast

By Norman M. Jacobowitz


ESR goes to Microsoft ... and lives to tell about it!

It was not a normal day here in Seattle. Eggs were balancing on end. The city was shrouded in a most un-summerlike mist and fog. And ESR was speaking at Microsoft.

That's right. Eric S. Raymond was the invited guest of Microsoft Corporation, and gave a speech to their research group. June 21st was indeed a freaky Summer Solstice day here in the Northwest.

Eric went into the belly of the beast ... and lives to tell about it. He was kind enough to share his impressions of what went on, via this e-mail interview.

Q: Can you give us a general overview of how and why you came to be invited to speak at Microsoft's Redmond campus?

A: I was invited there by a member of one of Microsoft's research groups that I met at PC Forum 99. She seemed OK, and offered an inducement far more interesting than a speaker's fee (about which more below) so I accepted.

Q: Were you offered a tour of the campus, and/or were you introduced to any of the "big name" executives of Microsoft?

A: No campus tour, no big names. Though I suppose they might have been watching the video feed....

Q: What was the venue like, and how many people showed up for the event?

A: It was a small auditorium. It looked to me like about 200 people showed up; it was standing room only, with people stacked against the walls and sitting in the aisles.

Q: What were the general themes of your speech/presentation? How were they received?

A: All the usual ones for anyone who has heard my talks. Better reliability through peer review, how Linux beat Brooks's Law, open-source project ownership customs and the reputation incentive, the eight open-source business models, scaling and complexity effects.

Q: A confidential informant tells me the event was broadcast to all 20,000-plus Redmond employees of Microsoft over their internal network. This same informant also says a fair percentage of those in actual attendance became somewhat belligerent towards you and your Open Source message. Is this true? If so, would you mind elaborating on which parts of your presentation they took issue with? For example, were they most perturbed at the insinuation that Open Source products like Linux are better in the long run than proprietary systems like MS Windows 2000?

A: Yes, there were a few belligerent types. Typical was one guy who observed that Oracle has a partial open-source strategy, then triumphantly announced that Microsoft's earnings per employee are several times Oracle's, as though this were a conclusive argument on the technical issues.

It was kind of amusing, really, fielding brickbats from testosterone-pumped twentysomethings for whom money and Microsoft's survival are so central that they have trouble grokking that anyone can truly think outside that box. On some subjects, their brains just shut down -- the style reminded me a lot of the anonymous cowards on Slashdot.

One of the Microsoft people, who knew the faces in the audience, observed to me afterwards that the people from the NT 2000 development group were particularly defensive. So, yes, I think my insinuations were perturbing.

Q: Did you notice an overall "mood" or general level of receptivity held by attendees towards what you had to say?

A: More positive than I had expected. The flamers were a minority, and they occasionally got stepped on by other audience members.

Q: Anything else interesting to report from your Microsoft visit?

A: Yes. One of its co-authors gave me an autographed copy of "The UNIX-HATERS Handbook" :-) But that doesn't quite mean what you think it does -- I had been one of the manuscript reviewers.

Q: Of course, many may gather that perhaps the most fun and exciting aspect of your visit was your dinner with science/speculative fiction authors Greg Bear and Neal Stephenson. Was that as fun as it sounds to the rest of us?

A: Sure was. Those dinner plans were what seduced me into going to Redmond, and I wasn't disappointed. George Dyson (author of "Darwin Among the Machines", and Esther Dyson's brother) was there too. We spoke of many things; science fiction and AI and Turing-computability and cryptography. Oh, and Neal solicited my advice on the proper firearm for dealing with cougars while hiking with his kids.


Belligerent Win2k developers. An outspoken advocate of Open Source. Put them together in a room, and what do you get? Rumor has it there were fireworks. Who knows what galactic alignments were knocked off kilter -- it was the Solstice, after all. We'll never know exactly what happened over there, at least until a sympathetic mole over in Redmond e-mails us a RealVideo/MPEG copy of ESR's speech. Illiad's User Friendly offers us some food for thought.

Thanks very much to Eric S. Raymond for sharing his Microsoft experience with the Linux/Open Source community.


Update!

Matthew Dockrey offers his eyewitness account of ESR's Microsoft speech.

Monday morning, a friend of mine at Microsoft mentioned he got a mailing about the ESR presentation and thought he would swing by. Being an opportunist, I convinced him to sneak me in. Luckily, they weren't checking badges at any point. Considering how much they value trade secrets, their security is really quite lax.

The presentation was in a conference room in Building 31 (Research). It was far too small for the turnout, although my friend reminded me that this was supposed to be for just the research group. Getting there 20 minutes late after missing the bus, we were left trying to catch a peek through the crowd. There was a live video feed as well, and we ended up watching the first half from 10 meters down the hall on someone's computer.

The audience was a very odd mix. Most of the people seemed very serious and were even taking notes. I did notice someone with a KMFMS t-shirt, though. Some were very obviously hostile towards the Open Source approach, but not all. (Not everyone who works at Microsoft actually uses their products at home, remember.) On the way to the presentation, I saw an office with Linux Journals and O'Reilly Linux manuals lying about, so not everyone there is ignoring us.

Overall, it was a good presentation. I was generally impressed with ESR's skills as an orator. He spent most of the time giving a sociological explanation for why OSS works, or exists at all. Nothing all that revolutionary (to us): Open Source is a variant of the "gift-culture" that often forms when groups of people are not greatly bounded by material limitations (such as coastal Pacific Native Americans and really rich people) and therefore take to giving away wealth as demonstration of their worth. He also detailed the culture of Open Source projects, the general patterns and taboos (a project is owned by someone; you don't fork the project unless you have very good reasons, etc.) and compared this to territoriality, especially the way we view land ownership. You can homestead land (start a project), buy land (have the previous project owner give it to you) or squat on unused land (take over a long-idle project).

I felt the presentation lost a bit of its focus when he moved from the abstract sociological viewpoint to actual justification for Open Source in a business model. I think this was largely because he based some of his arguments on sweeping claims about OSS being generally better than proprietary, and the audience challenged this. His point would probably have been better made without being quite so confrontational here.

He did make a very good point that 95% of software development is for internal use only, although there was an amusing moment when his survey of this particular audience did not reflect this. He also touched on the fact that most revenue from software is based in support, not the original sale. He mentioned what happened with Zope, but failed to pursue it. Of all the business arguments for OSS (and I admit I lean towards RMS's moralism over ESR's practicality), this seems to be the most relevant.

Overall, it was a very good presentation, and the audience seemed generally receptive to his ideas. There were some good-natured laughs on both sides, such as ESR admitting that most of the gift cultures had been destroyed by disease, or ESR stating a desire to live in a world "where software doesn't suck" as a valid reason for working on an OSS project. I found it particularly amusing when, halfway through the presentation, someone started handing out freshly printed copies of Sunday's User Friendly comic.


Copyright © 1999, Norman M. Jacobowitz
Published in Issue 43 of Linux Gazette, July 1999



XFce3: Now 100% Free Software!

By Norman M. Jacobowitz


The long-awaited and much-needed "third choice" in Desktop Environments for X ...

One of the biggest debates in the Free/Open Source Software community over the past year has been over KDE and GNOME. Perhaps the major bone of contention between the two camps was the issue of licensing, specifically the proprietary nature of the Qt library used by KDE.

During these debates and outright flame wars, an alternative was lurking in the background. Called XFce, it was a lighter-weight desktop environment. One could reasonably consider it a middle-ground solution: more configurable than running a window manager such as FVWM, but not the behemoth of KDE or the then-nascent GNOME. Unfortunately, XFce suffered from the same flaw -- a fatal flaw in the eyes of many -- as KDE: XFce was based on the Xforms library, a proprietary widget set for the X Window System.

Well, there is now some very good news for Free Software enthusiasts! Olivier Fourdan, author of XFce, has taken the dramatic step of rewriting the whole project, using the GIMP toolkit. Finally, we have what many consider the "holy grail" of desktop environments for X: a lightweight, highly configurable, reliable, attractive and 100% free alternative to KDE and GNOME.

Recently, Olivier was kind enough to agree to an e-mail interview and discuss these important developments.

Q: When and why did you first decide to write XFce?

A: In late 1996, I started to work as a help desk analyst. As part of this job, I was working with HP X terms running CDE. I really loved that environment, and tried to find something similar on Linux. Unfortunately, the only thing I found was the commercial port of the real CDE to Linux, and it was really much too expensive for me.

Then in early 1997, I started to play with XForms and fdesign, the GUI designer. One real cool thing about fdesign is its ability to generate compilable C code from scratch. The XFce project had started, but as usual, I really didn't think it could go that far! I just started coding a very basic toolbar with Xforms, and when I released the first version on SunSITE (now called Metalab), people started asking for more and more features.

Initially, XFce was just the toolbar, without the window manager and all the goodies. In 1998, I released XFce 2.x with xfwm, the window manager. The rest of the goodies came from release to release ...

Q: What compelled you to rewrite XFce using the GIMP toolkit? Was it for technical reasons, licensing reasons, or some mix of both?

A: I was thinking of porting XFce to GTK+ (the GIMP toolkit) for a long time. When the GNOME project started, somebody from the team sent me a mail from Mexico telling me they were starting a new desktop project with the GIMP toolkit and were looking for such a toolbar. Unfortunately, I did not know anything about GTK+ at that time and my skills in X programming were not as good as they are today.

Last year, when I released XFce 2.x, I talked with the people from Red Hat to see if they could use XFce in their distribution, but they did not want Xforms-based applications because of the license the library uses (it's free for private use and free applications, but the source code is not available).

As time passed, more and more projects were being based on the GIMP toolkit. I had to make something really new with XFce, include drag and drop, native language support, improve configurability, etc. So, at the end of March 1999, I decided to start XFce 3.0 and rewrite it entirely from scratch with GTK+.

Now I'm really glad I did: XFce 3.0 is still fast and stable, and it features everything I wanted for XFce, under the GNU General Public License, based exclusively on GNU tools (NLS, autoconf, automake, etc.)

Q: Do you know which Linux distributions ship with XFce? And do you know of any that will now ship with XFce3?

A: I think Red Hat and SuSE both ship XFce 2.x on their additional software packages, and Kevin Donnely has made a package for Debian. But still, as XFce 2.x was based on Xforms, none of these distributions include XFce in their base system. I know FreeBSD also provides XFce 2.x as an additional package.

XFce 3.0 is now all GPL, but I guess it is still much too recent to be included in any distribution -- although I really hope some distribution will include XFce 3.0 in their base packages, among other choices for the user.

Q: What do you think of GNOME and KDE, in general? Can you briefly summarize the relative merits of each versus XFce3?

A: KDE is the first attempt to provide Linux with a fully integrated desktop environment. I've been impressed by KDE 1.1! Unfortunately, KDE is too close to Microsoft Windows; I really don't like the "Start menu" style. Sometimes you have to go through several submenus to launch what you want (but this is a matter of taste). Moreover, KDE uses a lot of system resources. For example, I was not able to use KDE on an X terminal over a 10Mbit local network, whereas XFce works like a charm in such a configuration.

I don't know much about GNOME, as I could not make it work on my computer. But what I saw of it was very close to KDE, so the same remarks apply to GNOME. It seems so close to KDE that I don't understand the need for two similar environments on Linux.

I believe the desktop environment should be made to increase user productivity. Therefore, the goal is to keep most system resources for the applications, not to consume all the memory and CPU with the desktop environment itself. For example, does KDE or GNOME fit on a 1.44 MB floppy?

GNOME and KDE both provide a lot more integrated tools than XFce (although most of the time, standalone tools are more powerful than the integrated ones; for example, I believe NEdit is better than KEdit or any of the others). The exception is KFM, the KDE File Manager, which is far and away the best program in KDE, in my opinion.

Some people say XFce is for the little systems, while GNOME and KDE are for bigger ones. I don't agree; the more memory and CPU you save for your applications, the better it is. And if you still want to use KDE and GNOME tools, because they are convenient for you, you can use them under XFce, as its window manager is supposed to be compatible with these applications, too.

Q: XFce3 -- could it be your new desktop environment?

A: If you are looking for an alternative to KDE or GNOME, I strongly recommend investigating XFce3. It's small and efficient. It's functional and attractive. And now, it's 100% GPL software. Olivier has just completed an upgrade to the XFce main page, which includes links to download mirrors and to the HTML on-line manual for XFce3.

Hopefully, all of the distributions will start shipping XFce as an optional desktop environment. It would be even better if at least one of the distributions would ship XFce3 as the default option. The Free Software community is famous for giving users a choice. XFce3 is now a fantastic choice for people who want a free option other than KDE or GNOME. By shipping XFce3 as a default desktop, perhaps one of the smaller, more up-and-coming distributions could make itself stand out from the KDE/GNOME crowd. XFce is a natural fit for any distribution trying to make itself known as a faster, lighter-weight Linux option.

In any case, XFce3 is worth a look.

XFce3 Resources

Olivier Fourdan's XFce3 home page, with the download section and on-line manual. Available are sources and pre-compiled binaries for Linux and other platforms.

The GIMP Tool Kit home page, with important information about this free library.

Special thanks go to Chuck Mead of Moongroup Consulting, who hosts and maintains XFCE.org and the XFce Mailing List. Highly recommended whether you are a novice or expert user. To subscribe, email xfce-list-request@xfce.org with the word "subscribe" (no quotes) in the subject line. Chuck Mead is also a board member of the Linux Professional Institute.


Copyright © 1999, Norman M. Jacobowitz
Published in Issue 43 of Linux Gazette, July 1999



Mark's Kickstart Examples

By Mark Nielsen


If this document changes, it will be available at The Computer Underground: http://www.tcu-inc.com/mark/articles/Kickstart.html.
  1. Resources.
  2. What is kickstart?
  3. Gripes and Complaints
  4. Cdrom upgrade example.
  5. Advanced -- Ftp example
  6. Conclusion -- it is good
  7. My perl script

Resources.

The Kickstart HOWTO is very very good.

  1. On the Redhat 6.0 cdrom, look at /doc/README.ks
  2. man mkkickstart
  3. http://www.redhat.com/corp/support/manuals/RHL-6.0-Manual/install-guide/manual/doc129.html [31-Jul-2001: document has been removed. -Ed.]
  4. http://www.redhat.com/mirrors/LDP/HOWTO/KickStart-HOWTO-6.html
  5. http://redhat.google.com/redhat?q=kickstart&search=redhat

What is Kickstart?

One note: I think it would be good if someone made a follow-up article covering other Linux distributions. I would be more than happy to help.

KickStart for RedHat lets you do a quick installation of RedHat 6.0 without going through all the installation menus. It automates the process from start to finish (assuming nothing goofs up in between), which is handy for multiple installations. All you have to do is put a "ks.cfg" file on the RedHat 6.0 boot disk, boot off of the disk, and type "linux ks=floppy" at the first screen; if you are lucky, when the installation process starts, it will grab that file and do the whole installation for you.

Here are the key points you need to be aware of:

  1. If you don't have a boot disk for RedHat 6.0, you need to make one. Either use "rawrite" (which comes on the RedHat 6.0 cd) from DOS, or do it in Linux: assuming the cd is mounted at /mnt/cdrom and you have a floppy disk in the drive (one without anything important on it),
    cd /mnt/cdrom/images
    dd if=boot.img of=/dev/fd0
    ## OR if you are installing off the net
    dd if=bootnet.img of=/dev/fd0
  2. The RedHat 6.0 boot disk is formatted as an msdos disk, thus you can copy the "ks.cfg" file to it in Windoze or in Linux. If you need to mount the floppy disk in Linux, try,
    mkdir -p /mnt/floppy
    mount -t msdos /dev/fd0 /mnt/floppy
  3. Kickstart can install from a cdrom, or over the net through ftp or nfs, and for multiple computers you can use DHCP. It has other features as well. For this example, we will use a static ip address.
  4. It is assumed we are using standard IDE hard drives and cdroms. If you have SCSI devices, or other weird devices, you will have to modify the kickstart file so that it will work.
  5. Something weird happened on one of my installs. After installing RedHat 6.0, I tried to do an upgrade, and somehow it found over 100 megs of programs to install as an upgrade even though I hadn't done anything to the system. Weird. I did it again, and it installed 115 packages totaling 112 megs. I don't think it really needed to install those rpms again; I think it is just automatic with the upgrade option, whether or not you already have those rpms installed.
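For convenience, the boot-disk steps in points 1 and 2 can be collected into one shell function. This is only a sketch (the function name is my own) using the usual RedHat 6.0 device and mount-point names; double-check that /dev/fd0 really is your floppy drive, because dd overwrites the disk without asking.

```shell
#!/bin/sh
# Write the RedHat 6.0 boot image to the floppy, then copy ks.cfg onto it.
make_ks_floppy() {
    image=${1:-/mnt/cdrom/images/boot.img}   # use bootnet.img for net installs
    dd if="$image" of=/dev/fd0 &&
    mkdir -p /mnt/floppy &&
    mount -t msdos /dev/fd0 /mnt/floppy &&
    cp ks.cfg /mnt/floppy/ &&
    umount /mnt/floppy
}
```

Run it as "make_ks_floppy" for a cdrom install, or "make_ks_floppy /mnt/cdrom/images/bootnet.img" for a network install, with ks.cfg sitting in the current directory.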

Gripes and Complaints

Well, I have some gripes and complaints. Overall, the kickstart disk is pretty cool and has improved since 5.2, but still, it seems a little braindead, or at least it doesn't always give you the absolute power you need. I want the ability to shoot myself in the foot if I want to. It might sound weird, but here are my points:
  1. I couldn't find a "select all rpms" option, which is annoying since I have to list all the rpms. Perhaps I was blind, but I didn't find that option.
  2. I can't force particular graphics card and monitor settings with a certain resolution. This is annoying: Xconfigurator doesn't work for me half the time, and I need to configure Xwindows after an installation. An easy way to do this in a manual installation is to set the computer to VGA 16 and install the drivers for the true card later. But if you set it to VGA 16 and kickstart thinks the card is something else, it will error out, stop the installation process, and ask you for more info.
  3. I wish I could force the name of the computer, so that it doesn't stop and ask for a name when it can't find one in the DNS server. I would like the ability to shoot myself in the foot and put in incorrect settings just because I want to.
  4. It is pretty stupid about making Linux partitions. It only wants one primary partition, putting everything else into logical partitions. This is silly, especially on multi-boot systems: it just grabs the rest of the hard drive space after the primary partition and puts it all into one extended partition. I would like the ability to specify primary or logical partitions. It would be nice if someone made it possible to supply a script to automate the fdisk commands. Perhaps there is a way and I just don't know about it.
  5. Is there a way to tell it to use a partition that is already defined? It seems it can only use partitions it creates itself. It would be nice if I could define the partitions ahead of time and, in the kickstart file, tell it which partitions go to which directories.
Don't get me wrong, kickstart is a cool way of doing things; I just would like to see it brought to a different level where it is "very" cool. Cool = give me all the power to shoot myself in the foot just so I can see what happens. A little bit of pain never hurt anyone. Also, I am not a real expert at Kickstart, so perhaps I just didn't catch some of the options I am griping about. I had only been using it for about 2 months at the time of this document.

Cdrom upgrade example

Copy what is between the lines to a file called "ks.cfg" on your boot disk. Then, when you get to the first screen of the Linux installation, type "linux ks=floppy". Also, before you do this, make sure the BIOS in your computer is set to boot off of the floppy drive.

#	Copy this file as "ks.cfg" to the boot disk that comes with
#	RedHat 6.0, or make your own boot disk. The boot disk is
#	formatted msdos, so you can use a Windows computer to copy
#	this file to it. When the prompt comes up after you boot off
#	of the floppy disk, type
#		linux ks=floppy
#	and press enter.

#  This is just the configuration you need to do an upgrade using a cdrom. 
#
#

	### Choose the most popular language in the world.
	### This always upsets the French. 
lang en

        ### Tell it to use the cdrom to get the rpms
cdrom

        ### Tell it to use a us keyboard
keyboard us

       ### Tell it this is an upgrade and not a regular install 
upgrade

	### Tell it to install Lilo at the master boot record
lilo --location mbr

%post

echo "Hey dude, this is an example of a post install command using echo."
echo "You probably won't be able to see it though if perl isn't executed."

PATH=$PATH:/usr/bin:/sbin:/bin:/usr/sbin:/usr/X11R6/bin:/root/bin
export PATH

ldconfig

perl -e '$A = 50; print "\nSleeping for 20 seconds. Test variable = $A\n"'
perl -e 'sleep 20'



An installation over ftp

Well, this configuration uses ftp; if you wanted to use nfs, you could comment out the ftp option and uncomment the nfs option. All you need is a floppy drive and a network card, which makes things a little easier since you can leave the cdrom drive out of each of your computers. Also, use a 100 Mbit network. My installations over ftp were actually faster than installing off of the cdrom when I had a 100 Mbit network; a 40x cdrom still isn't as good as a fast network shared by a few computers, and a 100 Mbit hub is really cheap these days.

#----------


#	Copy this file as "ks.cfg" to the boot disk that comes with
#	RedHat 6.0, or make your own boot disk. The boot disk is
#	formatted msdos, so you can use a Windows computer to copy
#	this file to it. When the prompt comes up after you boot off
#	of the floppy disk, type
#		linux ks=floppy
#	and press enter.

	### Choose the most popular language in the world.
	### This always upsets the French. 
lang en

	### Tell it what the ethernet card's settings should be
	### If your nameserver 10.0.1.15 doesn't exist or doesn't
	### have 10.0.1.21 listed in it, it will ask you for a name.
network --bootproto static --ip 10.0.1.10 --netmask 255.255.255.0 --gateway 10.0.1.15 --nameserver 10.0.1.15

	### Uncomment this if you have nfs setup on the server
        ### If so, comment the "url" line. 
	### If you decide to use an nfs server, make sure you
        ### put "/home/ftp/RedHat60" in your /etc/exports file.
# nfs --server 10.0.1.15 --dir /home/ftp/RedHat60

   ### Make sure /home/ftp/RedHat60 exists on the server
   ### Also, have the cdrom mounted to /home/ftp/RedHat60 on the server.
   ### You can do this with 
   ###          mkdir -p /home/ftp/RedHat60 
   ###		mount /dev/cdrom /home/ftp/RedHat60  
   ### In case you don't have a nameserver, use numbers for the url
url --url ftp://10.0.1.15/RedHat60

        ### Tell it to use the cdrom to get the rpms
# cdrom

	### Tell it to use an Intel 10/100 EtherExpress ethernet Card
device ethernet eepro100
	### Tell it to use a us keyboard 
keyboard us
	### Tell it to blow away the master boot record on the hard drive
zerombr yes
	### Tell it to do a dumb move and blow away all partitions
clearpart --all
	### Make a swap partition, and unfortunately, this will go on a
	### logical  partition.
part swap --size 100
	### It will make a primary partition for root, 2 gigs 
	### I couldn't find a command to check for bad sectors. Is there one?
        ### I don't think "grow" will work here, unfortunately 
part / --size 2000 
	### Make a directory /OtherDir which is at least 1 gig and grows
	### to fill out the rest of the hard drive
	### Since "Kickstart" is literally an unintelligent program, 
	### it will put this partition on a logical partition
	### even though there are plenty of primary partitions to use. 
part /Backups --size 1000 --grow

	### Tell it to use a MicroSoft compatible mouse  
mouse --kickstart microsoft --device ttyS1
	### Let it know we are doing an install and not an upgrade	
install 
	### Tell the timezone we are in
timezone --utc US/Eastern
	### Tell it to use standard VGA 16, but sometimes it croaks
	### at this point and asks you anyways to select a card
	### I really wish you could force to install for a particular
	### card so that you can "configure" Xwindows after installation
	### Xconfigurator gets it wrong 50% of the time for my cards
xconfig --server XF86_VGA16
	#### Give it a dumb root password 
rootpw MyPassword
	#### Tell it to use shadow passwording
auth --useshadow
	### Tell it to install Lilo at the master boot record
lilo --location mbr

	### Now let us install packages, is there a simple command for all?
%packages

	### Select the packages to install; I think this is all of them.
        ### Unlike the installation program, which only installs one X server,
	### my list installs all of them in case you switch video cards,
	### which I do a lot.
@ Base
@ Printer Support
@ X Window System
@ GNOME
@ KDE
@ Mail/WWW/News Tools
@ DOS/Windows Connectivity
@ File Managers
@ Graphics Manipulation
@ Console Games
@ X Games
@ Console Multimedia
@ X multimedia support
@ Networked Workstation
@ Dialup Workstation
@ News Server
@ NFS Server
@ SMB (Samba) Connectivity
@ IPX/Netware(tm) Connectivity
@ Anonymous FTP Server
@ Web Server
@ DNS Name Server
@ Postgres (SQL) Server
@ Network Management Workstation
@ TeX Document Formatting
@ Emacs
@ Emacs with X windows
@ C Development
@ Development Libraries
@ C++ Development
@ X Development
@ GNOME Development
@ Kernel Development
@ Extra Documentation
AfterStep
AfterStep-APPS
AnotherLevel
ElectricFence
GXedit
ImageMagick
ImageMagick-devel
MAKEDEV
ORBit
ORBit-devel
SVGATextMode
SysVinit
WindowMaker
X11R6-contrib
XFree86-100dpi-fonts
XFree86
XFree86-3DLabs
XFree86-75dpi-fonts
XFree86-8514
XFree86-AGX
XFree86-FBDev
XFree86-I128
XFree86-ISO8859-2
XFree86-ISO8859-2-100dpi-fonts
XFree86-ISO8859-2-75dpi-fonts
XFree86-ISO8859-2-Type1-fonts
XFree86-ISO8859-9-100dpi-fonts
XFree86-ISO8859-9
XFree86-ISO8859-9-75dpi-fonts
XFree86-Mach32
XFree86-Mach64
XFree86-Mach8
XFree86-Mono
XFree86-P9000
XFree86-S3
XFree86-S3V
XFree86-SVGA
XFree86-VGA16
XFree86-W32
XFree86-XF86Setup
XFree86-Xnest
XFree86-Xvfb
XFree86-cyrillic-fonts
XFree86-devel
XFree86-doc
XFree86-libs
XFree86-xfs
Xaw3d
Xaw3d-devel
Xconfigurator
adjtimex
aktion
am-utils
anonftp
apache
apache-devel
apmd
arpwatch
ash
at
audiofile
audiofile-devel
aumix
authconfig
autoconf
autofs
automake
awesfx
basesystem
bash
bash2
bash2-doc
bc
bdflush
bin86
bind
bind-devel
bind-utils
binutils
bison
blt
bootparamd
byacc
bzip2
caching-nameserver
cdecl
cdp
chkconfig
chkfontpath
cleanfeed
comanche
compat-binutils
compat-egcs
compat-egcs-c++
compat-egcs-g77
compat-egcs-objc
compat-glibc
compat-libs
comsat
console-tools
control-center
control-center-devel
control-panel
cpio
cpp
cproto
cracklib
cracklib-dicts
crontabs
ctags
cvs
cxhextris
desktop-backgrounds
dev
dhcp
dhcpcd
dialog
diffstat
diffutils
dip
dosemu
dosemu-freedos
dump
e2fsprogs
e2fsprogs-devel
ed
ee
efax
egcs
egcs-c++
egcs-g77
egcs-objc
eject
elm
emacs
emacs-X11
emacs-el
emacs-leim
emacs-nox
enlightenment
enlightenment-conf
enscript
esound
esound-devel
etcskel
exmh
expect
ext2ed
faces
faces-devel
faces-xface
faq
fbset
fetchmail
fetchmailconf
file
filesystem
fileutils
findutils
finger
flex
fnlib
fnlib-devel
fortune-mod
freetype
freetype-devel
ftp
fvwm
fvwm2
fvwm2-icons
fwhois
gated
gawk
gd
gd-devel
gdb
gdbm
gdbm-devel
gdm
gedit
gedit-devel
genromfs
gettext
getty_ps
gftp
ghostscript
ghostscript-fonts
giftrans
gimp
gimp-data-extras
gimp-devel
gimp-libgimp
gimp-manual
git
glib
glib-devel
glib10
glibc
glibc-devel
glibc-profile
gmc
gmp
gmp-devel
gnome-audio
gnome-audio-extra
gnome-core
gnome-core-devel
gnome-games
gnome-games-devel
gnome-libs
gnome-libs-devel
gnome-linuxconf
gnome-media
gnome-objc
gnome-objc-devel
gnome-pim
gnome-pim-devel
gnome-users-guide
gnome-utils
gnorpm
gnotepad+
gnuchess
gnumeric
gnuplot
gperf
gpm
gpm-devel
gqview
grep
groff
groff-gxditview
gsl
gtk+
gtk+-devel
gtk+10
gtk-engines
gtop
guavac
guile
guile-devel
gv
gzip
hdparm
helptool
howto
howto-chinese
howto-croatian
howto-french
howto-german
howto-greek
howto-html
howto-indonesian
howto-italian
howto-japanese
howto-korean
howto-polish
howto-serbian
howto-sgml
howto-slovenian
howto-spanish
howto-swedish
howto-turkish
ical
imap
imlib
imlib-cfgeditor
imlib-devel
indent
indexhtml
inews
info
initscripts
inn
inn-devel
install-guide
intimed
ipchains
ipxutils
ircii
isapnptools
isicom
ispell
itcl
jed
jed-common
jed-xjed
joe
kaffe
kbdconfig
kdeadmin
kdebase
kdegames
kdegraphics
kdelibs
kdemultimedia
kdenetwork
kdesupport
kdeutils
kernel
kernel-BOOT
kernel-doc
kernel-headers
kernel-ibcs
kernel-pcmcia-cs
kernel-smp
kernel-source
kernelcfg
knfsd
knfsd-clients
korganizer
kpilot
kpppload
kterm
ld.so
ldconfig
less
lha
libPropList
libc
libelf
libghttp
libghttp-devel
libgr
libgr-devel
libgr-progs
libgtop
libgtop-devel
libgtop-examples
libjpeg
libjpeg-devel
libjpeg6a
libpcap
libpng
libpng-devel
libstdc++
libtermcap
libtermcap-devel
libtiff
libtiff-devel
libtool
libungif
libungif-devel
libungif-progs
libxml
libxml-devel
lilo
linuxconf
linuxconf-devel
logrotate
losetup
lout
lout-doc
lpg
lpr
lrzsz
lslk
lsof
ltrace
lynx
m4
macutils
mailcap
mailx
make
man
man-pages
mars-nwe
mawk
mc
mcserv
metamail
mgetty
mgetty-sendfax
mgetty-viewfax
mgetty-voice
mikmod
mingetty
minicom
mkbootdisk
mkdosfs-ygg
mkinitrd
mkisofs
mkkickstart
mktemp
mkxauth
mod_perl
mod_php
mod_php3
modemtool
modutils
mount
mouseconfig
mpage
mpg123
mt-st
mtools
multimedia
mutt
mxp
nag
nc
ncftp
ncompress
ncpfs
ncurses
ncurses-devel
ncurses3
net-tools
netcfg
netkit-base
netscape-common
netscape-communicator
netscape-navigator
newt
newt-devel
nmh
nscd
ntsysv
open
p2c
p2c-devel
pam
passwd
patch
pciutils
pdksh
perl
perl-MD5
pidentd
pilot-link
pilot-link-devel
pine
playmidi
playmidi-X11
pmake
pmake-customs
popt
portmap
postgresql
postgresql-clients
postgresql-devel
ppp
printtool
procinfo
procmail
procps
procps-X11
psacct
psmisc
pump
pwdb
pygnome
pygtk
python
python-devel
python-docs
pythonlib
qt
qt-devel
quota
raidtools
rcs
rdate
rdist
readline
readline-devel
redhat-logos
redhat-release
rgrep
rhl-alpha-install-addend-en
rhl-getting-started-guide-en
rhl-install-guide-en
rhmask
rhs-hwdiag
rhs-printfilters
rhsound
rmt
rootfiles
routed
rpm
rpm-devel
rsh
rsync
rusers
rwall
rwho
rxvt
sag
samba
sash
screen
sed
sendmail
sendmail-cf
sendmail-doc
setconsole
setserial
setup
setuptool
sgml-tools
sh-utils
shadow-utils
shapecfg
sharutils
slang
slang-devel
sliplogin
slocate
slrn
slrn-pull
sndconfig
sox
sox-devel
specspo
squid
stat
statserial
strace
svgalib
svgalib-devel
swatch
switchdesk
switchdesk-gnome
switchdesk-kde
symlinks
sysklogd
talk
taper
tar
tcl
tclx
tcp_wrappers
tcpdump
tcsh
telnet
termcap
tetex
tetex-afm
tetex-doc
tetex-dvilj
tetex-dvips
tetex-latex
tetex-xdvi
texinfo
textutils
tftp
time
timeconfig
timed
timetool
tin
tix
tk
tkinter
tksysv
tmpwatch
traceroute
transfig
tree
trn
trojka
tunelp
ucd-snmp
ucd-snmp-devel
ucd-snmp-utils
umb-scheme
unarj
units
unzip
urlview
urw-fonts
usermode
usernet
utempter
util-linux
uucp
vim-X11
vim-common
vim-enhanced
vim-minimal
vixie-cron
vlock
w3c-libwww
w3c-libwww-apps
w3c-libwww-devel
wget
which
wmakerconf
wmconfig
words
wu-ftpd
x11amp
x11amp-devel
x3270
xanim
xbanner
xbill
xboard
xboing
xchat
xcpustate
xdaliclock
xdosemu
xearth
xfig
xfishtank
xfm
xgammon
xinitrc
xjewel
xlispstat
xloadimage
xlockmore
xmailbox
xmorph
xntp3
xosview
xpaint
xpat2
xpdf
xpilot
xpm
xpm-devel
xpuzzles
xrn
xscreensaver
xsysinfo
xtoolwait
xtrojka
xwpick
xxgdb
yp-tools
ypbind
ypserv
ytalk
zgv
zip
zlib
zlib-devel
zsh
	### Anything after %post gets interpreted as a post install command
	### and will be chrooted to the mount point of "/" for the
	### new installation 
%post 

# add another nameserver
echo "nameserver 10.0.1.10" >> /etc/resolv.conf
# and give it a hosts entry (this line belongs in /etc/hosts, not resolv.conf)
echo "10.0.1.10		server.local	server" >> /etc/hosts


Conclusion -- it is good

I think the RedHat Kickstart stuff is good. I never did find out whether the other distributions have something similar; I didn't have enough time. Oh well. It still needs a lot of work to become a really cool way of doing installations. The thing is, instead of just bitching about it: if it needs work and you like it, contribute to the cause and help develop kickstart for RedHat or for any other Linux distribution. I wrote this article; now you help out and contact the RedHat folks or other people to make the process even more cool!

Overall, I give the Kickstart method a "B" for 6.0 and a "C" for the 5.2 version. It is cool compared to the corrupted commercial alternative, which forces you to reboot 10 times before it is done installing. The potential of Linux is amazing compared to commercial closed-source alternatives. When software is written not for profit but because it is cool and/or because people want it done right, the long-term potential far outweighs the short-term profit-minded mentality of people who cannot write good software or who just want things done "just to get it to work". Kickstart has the potential to reduce the overall cost of consulting and the time it takes to install Linux systems on a mass scale (which benefits both the novice and the expert computer consultant). It is funny to watch commercial companies trying to emulate what Linux and other UNIX systems can do. I see commercial products out there which attempt to do mass installations for closed-source corrupted operating systems, and I laugh at all the hard work they have to do. I want to stress something: the method I have shown today is not the easiest way to install Linux. I will explain that in another article someday. It will make any commercial or closed-source operating system look like garbage compared to Linux.


My perl script

I used this perl script to extract a list of rpms from the /RedHat/RPMS directory on the cdrom. I only had about 6 corrections to make, and figured it would take me longer to fix the perl script than it would to fix the list. There has to be a simpler way of getting the list with exact results, but often it is just a decision between what you know works and taking the time to figure out a better way. It only took me 5 minutes to write this script. Most of the time wasted was booting off of the kickstart disk to see how many errors I got.
#!/usr/bin/perl

my @RPMS = </home/ftp/RedHat60/RedHat/RPMS/*.rpm>;
my $Dest = "/tmp/List_2.txt";

open(FILE,">$Dest") or die "Cannot write to $Dest: $!";

foreach $Rpm (@RPMS)
  {
    ## Would be easier if I had used ? instead of /
  $Rpm =~ s/\/home\/ftp\/RedHat60\/RedHat\/RPMS\///;

  if ($Rpm =~ /\-[0-9]+\-[0-9]+\./) 
    {($Rpm,$Junk) = split(/\-[0-9]+\-[0-9]\./, $Rpm,2);}
  elsif ($Rpm =~ /\-[0-9]+\./) 
    {($Rpm,$Junk) = split(/\-[0-9]+\./, $Rpm,2);}
  else {($Rpm,$Junk) = split(/\-[0-9]/, $Rpm,2);}

  print FILE "$Rpm\n";
  }

close(FILE);
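As an aside, the same trimming can be sketched as a sed one-liner. It assumes every file is named NAME-VERSION-RELEASE.arch.rpm with a version that starts with a digit; anchoring the pattern at the end of the line keeps digits inside package names (such as XFree86-ISO8859-2) intact:

```shell
#!/bin/sh
# Strip "-VERSION-RELEASE.arch.rpm" from each filename, leaving the bare
# package name.  The substitution is anchored at the end of the line.
trim='s!.*/!!; s/-[0-9][^-]*-[^-]*\.[^.]*\.rpm$//'

# On the real cdrom you would run:
#   ls /home/ftp/RedHat60/RedHat/RPMS/*.rpm | sed "$trim" > /tmp/List_2.txt
# Demo on a couple of awkward names:
printf '%s\n' XFree86-ISO8859-2-100dpi-fonts-3.3.3.1-49.i386.rpm \
              bash2-doc-2.03-8.i386.rpm | sed "$trim"
```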

Mark works as JALG hardware assistant (shorts and tee-shirt) under Mike Hunter at The Computer Underground and as a professional (suit and tie) consultant at 800linux.com. In his spare time, he does volunteer stuff, like writing these documents.


Copyright © 1999, Mark Nielsen
Published in Issue 43 of Linux Gazette, July 1999



Configuring XDM -- a graphical login interface for Linux or UNIX

By Mark Nielsen


If this document changes, it will be available at The Computer Underground: http://www.tcu-inc.com/mark/articles/Kickstart.html.
CHANGES:
  1. Changed the Xservers file located at /etc/X11/xdm/Xservers by adding these lines, to get 4 xdm sessions running so that 4 different people can log in. It seems like all the neat stuff only happened on the last session. It is probably easy to fix; perhaps I will mention it in the next article.
    :0 A local /usr/X11R6/bin/X :0
    :1 B local /usr/X11R6/bin/X :1
    :2 C local /usr/X11R6/bin/X :2
    :3 D local /usr/X11R6/bin/X :3
    

  1. Resources
  2. What is XDM? -- the graphical login interface
  3. My configurations
  4. Conclusion

Resources

  1. Chris Carlson's article in the Linux Gazette.
  2. man xdm
  3. My old xdm cheat sheet from back in 12/1996, when I was just a hacker (in the good sense). I guess that was around when the Gazette got started? How time flies. I should have posted it there at the time.

What is XDM?

To put it simply, xdm is just a graphical login screen, so you can impress your boss or friends by not having some boring console to look at when your computer starts up. It just makes Linux a little bit cooler than someone might have previously thought.

In theory, most of the configurations here should work for any Linux distribution, though everything is configured for RedHat 6.0. RedHat 6.0 uses gdm instead of xdm when it starts its graphical login screen. However, I haven't gotten gdm to work exactly the way I want, even though it seems much better than xdm. Once I figure out a few things, I will write a brief article on gdm also.

Here are some things to note:

  1. If you want xdm (or gdm) to start when your computer starts, you need to make sure this line
    id:3:initdefault:
    looks like this
    id:5:initdefault:
    in the file "/etc/inittab". Please, whatever you do, get Xwindows working before you set xdm to start at bootup. If Xwindows doesn't work, xdm won't work, and that can cause problems.
  2. RedHat 6.0 switched from xdm to gdm, which is apparent at the bottom of /etc/inittab in the line
    x:5:respawn:/etc/X11/prefdm -nodaemon
    Change that line to this:
    x:5:respawn:/usr/bin/X11/xdm -nodaemon
  3. All the files I am changing are in "/etc/X11/xdm".

My config files

I am only interested in a few files, namely
/etc/X11/xdm/Xsetup_0
/etc/X11/xdm/Xresources
/etc/X11/xdm/GiveConsole
/etc/rc.d/rc.change_graphic
/etc/rc.d/rc.local
/etc/inittab <-- this was explained above
and the gif files in /etc/X11/xdm/graphics/

Here is the goal: I want to change xdm so that it has xeyes, santa, a clock, a graphics image, and my choice of background color on the desktop before someone logs in. After they log in, I want santa to die. Cruel, huh?

Okay, let us do this in order:

  1. Copy my graphics perl script to "/etc/rc.d/rc.change_graphic". This changes the image which appears on the screen. Images are stored in /etc/X11/xdm/graphics as gif files.
  2. Copy my kill santa perl script to "/etc/X11/xdm/KillXsnow". This kills santa. Santa slows down the desktop.
  3. Copy my Xsetup script to "/etc/X11/xdm/Xsetup_0". Programs to run with the graphical login screen.
  4. Copy my Xresources script to "/etc/X11/xdm/Xresources". How the setup of xdm should look like.
  5. Copy my GiveConsole script to "/etc/X11/xdm/GiveConsole". Stuff to get executed before Xwindows is handed over to the user. Also, changes the background image for xdm.
  6. Add this command to "/etc/rc.d/rc.local". Make sure we get a graphics image to look at when we boot up.
  7. Copy my gif files to "/etc/X11/xdm/graphics/" and issue this command on the file
    tar -C / -zxvf xdm.tgz
    These are just my silly images I use.

Here are the rest of my config files:


Change graphics perl script

Located at "/etc/rc.d/rc.change_graphic". Issue the command "chmod 755 /etc/rc.d/rc.change_graphic" after it is copied.
#!/usr/bin/perl

@Files = </etc/X11/xdm/graphics/*.gif>;

#print @Files;

$Length = @Files;
$Seconds = `date +%S`;
chomp $Seconds;
$Frac = $Seconds/60;
if (!($Frac > 0)) {$Frac=1}

$Random = $Frac*$Length;
($Random,$Junk) = split(/\./, $Random,2);

if (($Random < 1) || ($Random > $Length -1))  {$Random = 1} 
$File = $Files[$Random]; 

$Rand2 = rand $Length;
($Rand2,$Junk) = split(/\./, $Rand2,2);

$Random = $Random + $Rand2;
if ($Random > $Length - 1) {$Random = $Random - $Length + 1;}

#print "$Length $Random $File\n";

if (-e "/etc/X11/xdm/xdm_front.gif") {system "rm /etc/X11/xdm/xdm_front.gif"}

if (@Files < 1)
  {
  ## Some sort of error message should be here.  
  } 
else
  {system "ln -s $File /etc/X11/xdm/xdm_front.gif";}
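As an aside, the same pick-a-random-gif trick fits in a few lines of Bourne shell. This is just a sketch (the function name is mine), seeding the choice from the clock the way the perl script does:

```shell
#!/bin/sh
# Link $2 to a pseudo-randomly chosen *.gif from directory $1.
pick_random_gif() {
    dir=$1 link=$2
    set -- "$dir"/*.gif
    [ -e "$1" ] || return 1          # no images found; leave the link alone
    shift $(( $(date +%s) % $# ))    # clock-seeded choice, like the perl above
    ln -sf "$1" "$link"
}

# Real use:  pick_random_gif /etc/X11/xdm/graphics /etc/X11/xdm/xdm_front.gif
```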

Kill Santa perl script

Located at "/etc/X11/xdm/KillXsnow". Issue the command "chmod 755 /etc/X11/xdm/KillXsnow" after it is copied.
#!/usr/bin/perl

     ### I had to add the -a option between RH 5.2 and 6.0
@Temp = `cd /proc; grep -a ^/usr/X11R6/bin/xsnow /proc/[0-9]*/cmdline`; 

if (@Temp > 0) 
  {
  $Xsnow = shift @Temp;
  ($Junk,$ProcJunk,$No,$RestOfJunk) = split(/\//,$Xsnow);

     ## I am really paranoid that I want to kill the right pid 
  if (($No > 0) && ($Xsnow =~ m{^/proc/$No/cmdline:/usr/X11R6/bin/xsnow}))   
    {
#    system "echo \"Killing pid $No for Xsnow.\n\" > /tmp/1.txt\n";
    system "kill $No";
    }
  }

Xsetup_0 script

Located at "/etc/X11/xdm/Xsetup_0". Issue the command "chmod 755 /etc/X11/xdm/Xsetup_0" after it is copied.
#!/bin/sh
# $XConsortium: Xsetup_0,v 1.3 93/09/28 14:30:31 gildea Exp $
/usr/X11R6/bin/xconsole -geometry 480x130-0-0 -daemon -notify -verbose -fn fixed -exitOnFail
#/usr/X11R6/bin/xbanner

    ### Santa and snowflakes
    ### On some installs, I am missing xsnow for some reason. 
/usr/X11R6/bin/xsnow -snowflakes 50 -santa 2 -unsmooth &

    ### Load the random image 
/usr/bin/X11/xloadimage -onroot -at 1,210 /etc/X11/xdm/xdm_front.gif -border brown  &

    ### A clock would be nice to see
/usr/X11R6/bin/xclock -digital -update 1 -fn -adobe-times-medium-i-normal--34-240-100-100-p-168-iso8859-1 -geometry +410+1 &

    ### Let us turn on xeyes
/usr/X11R6/bin/xeyes -geometry +410+100 &



Xresources file

Located at "/etc/X11/xdm/Xresources". Issue the command "chmod 755 /etc/X11/xdm/Xresources" after it is copied.
! $XConsortium: Xresources /main/8 1996/11/11 09:24:46 swick $
xlogin*login.translations: #override\
        Ctrl<Key>R: abort-display()\n\
        <Key>F1: set-session-argument(failsafe) finish-field()\n\
        Ctrl<Key>Return: set-session-argument(failsafe) finish-field()\n\
        <Key>Return: set-session-argument() finish-field()
xlogin*borderWidth: 3
xlogin*geometry: 400x200+1+1
xlogin*greeting: CLIENTHOST  
xlogin*namePrompt: login:\040
xlogin*fail: Login incorrect
#ifdef COLOR
xlogin*greetColor: CadetBlue
xlogin*failColor: red
*Foreground: black
*Background: #fffff0
#else
xlogin*Foreground: black
xlogin*Background: white
#endif
XConsole.text.geometry: 480x130
XConsole.verbose:       true
XConsole*iconic:        true
XConsole*font:          fixed

Chooser*geometry:               700x500+100+100
Chooser*allowShellResize:       false
Chooser*viewport.forceBars:     true
Chooser*label.font:             *-new century schoolbook-bold-i-normal-*-240-*
Chooser*label.label:            XDMCP Host Menu  from CLIENTHOST
Chooser*list.font:              -*-*-medium-r-normal-*-*-230-*-*-c-*-iso8859-1
Chooser*Command.font:           *-new century schoolbook-bold-r-normal-*-180-*

GiveConsole file

Located at "/etc/X11/xdm/GiveConsole".

All you have to do is add "/etc/X11/xdm/KillXsnow &" as the first command in the file. Mine looks like this:

#!/bin/sh
# Assign ownership of the console to the invoking user
# $XConsortium: GiveConsole,v 1.2 93/09/28 14:29:20 gildea Exp $

# By convention, both xconsole and xterm -C check that the
# console is owned by the invoking user and is readable before attaching
# the console output.  This way a random user can invoke xterm -C without
# causing serious grief.

/etc/rc.d/rc.change_graphic &
/etc/X11/xdm/KillXsnow &

chown $USER /dev/console
/usr/X11R6/bin/sessreg  -a -w "/var/log/wtmp" -u "/var/run/utmp" \
-x "/etc/X11/xdm/Xservers" -l $DISPLAY -h "" $USER


/etc/rc.d/rc.local file

Add this to the /etc/rc.d/rc.local file.
/etc/rc.d/rc.change_graphic

Conclusion

XDM is pretty cool, but it is the old way of doing things; I recommend moving to gdm or something else. I give XDM a B-. It just lacks some of the features I always wanted to see that gdm has.

I will explain GDM next time. Gdm has the nice capability of letting you choose which desktop environment you want: in Red Hat 6.0, you can choose KDE, GNOME, or other desktop environments when you log in, which is pretty cool. Overall, I give gdm a B+, and if it becomes better documented, an A. Again, I hope to write an article about gdm for the August issue.


Mark works as a receptionist (shorts and tee-shirt) under Mike Hunter at The Computer Underground and as a professional (suit and tie) consultant at 800linux.com. In his spare time, he does volunteer stuff, like writing these documents.


Copyright © 1999, Mark Nielsen
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


Syslog-ng

By Balazs Scheidler


1. Introduction

One of the most neglected areas of Unix is the handling of system events. Daily checks of system messages are crucial for the security and health of a computer system.

System logs contain a lot of "noise" (messages which have no importance) as well as important events that should not get lost in the flood of messages. With current tools it is difficult to select only the messages we are interested in.

A message is sent to different destinations based on its assigned facility/priority pair. There are 12+8 predefined facilities (12 real ones, such as mail, news and auth, plus 8 local ones), and 8 different priorities (ranging from emerg down to debug).

One problem is that some facilities are too general (daemon, for example), and such facilities are used by many programs, even ones that are not related to each other. It is difficult to find the interesting bits in the enormous amount of messages.

A second problem is that very few programs allow you to set the facility they log under; at best it is a compile-time parameter.

So using facilities as a means of filtering is not the best way. Making it work well would require both a runtime option for all applications that specifies the facility to log under, and the ability to create new facilities in syslogd. Neither of these is available, and the first is not really feasible.

One of the design principles of syslog-ng was to make message filtering much more fine-grained: syslog-ng can filter messages based on their contents in addition to the priority/facility pair, so that only the messages we are really interested in reach a given destination. Another design principle was to make log forwarding between firewalled segments easier, using a long hostname format that makes it easy to find the originating host and the chain of forwarding hosts even if a log message traverses several computers. The last principle was a clean and powerful configuration file format.

This article tries to give you an overview of syslog-ng's internals; for more detailed information, see http://www.balabit.hu/products/syslog-ng and select the documentation link.

2. Message paths

In syslog-ng, a message path (or message route) consists of one or more sources, one or more filtering rules and one or more destinations (sinks). A message enters syslog-ng through one of its sources; if it matches the filtering rules, it goes out through one of the destinations.

2.1. Sources

A source is a collection of source drivers, each of which collects messages using a given method. For instance, there is a source driver for AF_UNIX SOCK_STREAM style sockets, which is used by the Linux syslog() call.

Different platforms use different means of sending log messages to the logging daemon, and to be useful on all operating systems, syslog-ng supports the most common methods. Tested support exists for Linux and BSDi; experimental support exists for Solaris (as of version 1.1.22).

2.2. Destinations

A destination is a message sink, where log messages are sent if the filtering rules match. Similarly to sources, destinations may include several drivers which define how messages are dispatched.

For instance, there is a file driver, which writes messages to the given file, but support exists for sending messages to unix, udp and tcp sockets as well.

2.3. Filters

Filters perform log routing inside syslog-ng: you write a boolean expression using internal functions, and the expression has to evaluate to true for a message to pass.

An expression may contain the operators "and", "or" and "not", and the following functions:

  • facility()
  • level()
  • program()
  • host()
  • match()

Each of the above functions checks the corresponding field of a log message (e.g. program() checks whether the given program sent the message or not). You can use extended regular expressions for matching.
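As a sketch, a single filter could combine several of these functions with the boolean operators; the filter name and the exact arguments below are illustrative assumptions, not taken from a real configuration:

```
filter f_urgent { level(crit) or (program("sshd") and match("Failed password")); };
```

A message passes this filter if it is of critical priority, or if it was sent by sshd and its text matches the given regular expression.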

2.4. Log statements

Now you have sources, destinations and filters. To connect these together you need the log statement:

log { source s1; source s2; ... 
      filter f1; filter f2; ... 
      destination d1; destination d2; ... };

Messages coming from any of the listed sources, and matching against all the listed filters (which effectively ANDs them) are sent to all of the listed destinations.

3. Example configuration

This configuration file shows the possibilities and features of syslog-ng. It receives messages from the network, and also handles local messages. Three distinct output files are used: one for the messages from sendmail, a second for messages coming from host1, and a third for messages coming from host2.

options { long_hostnames(on); sync(0); };

source src { udp 0.0.0.0,514; unix-stream /dev/log; internal; };

filter f_sendmail { program("sendmail"); };
filter f_host1 { host("host1"); };
filter f_host2 { host("host2"); };

destination sendmail { file /var/log/sendmail; };
destination host1 { file /var/log/host1; };
destination host2 { file /var/log/host2; };

log { source src; filter f_sendmail; destination sendmail; };
log { source src; filter f_host1; destination host1; };
log { source src; filter f_host2; destination host2; };

4. References

Syslog-ng is a product of BalaBit Computing, and is distributed under the GPL. If you are interested, please visit http://www.balabit.hu.


Copyright © 1999, Balazs Scheidler
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


Artificial Intelligence on Linux

By Anderson Silva


Artificial Intelligence is a very controversial subject, but the way I approach it in this article is simple and practical. I have been approaching AI not through its philosophical or biological aspects, but simply as a computational subject. When humans want to fly, they don't need to study birds to learn how to do it; they just get into an airplane. This is my way of approaching AI: we want to solve puzzles and games with a computer, without really comparing how differently a human accomplishes the same tasks.

For the first time in the history of my school, an Artificial Intelligence (AI) class was going to be offered. I was very excited about this class, because you hear a lot about AI, but you don't really see much material on it in magazines and online articles.

Probably the greatest example of an AI benchmark is Turing's Test. The test consists of a person in a room with a computer terminal, chatting through it. At the end, the person has to decide whether there was a real person or a computer program at the other end of the terminal. If the user cannot tell the computer from a person, the program is said to have reached AI.

At LU, we chose Prolog as the implementation tool for AI. Our labs at school are Windows NT based, and we have only one Linux machine, which is designated for students. But I have been a Linux user for almost 2 years, and I wanted to implement all my Prolog assignments on Linux.

I did some research on the web and found a great Prolog compiler for Linux. Prolog is like Linux in a certain way: there are several flavors you can pick from. The one I chose was SWI-Prolog (http://www.hio.hen.nl/faq/SWI-Prolog.html). Prolog is a very flexible language. Unlike languages such as C, C++ or Java, Prolog is based on formal mathematical logic, in this case predicate calculus. A Prolog program is normally made of facts together with a set of rules; to reach the final solution, the program has to satisfy this set of rules. Interpreting these rules allows the computer to deduce the solution by itself. In Prolog, the facts are normally stored in a separate file called the knowledge base, and the rules in another file that is the actual program.

Allow me to show a very basic search algorithm known as the Depth First Search (click for image).


The Program below is the representation of the graph above in Prolog.

% Name:   Anderson Silva
% Date:   March 10, 1999

% ================================
% A graph that will be used for a
% Depth First Search Algorithm
% Knowledge Base.
% ================================

% linked/2
% A node and its children

linked(a, [b,c,d]).
linked(b, [e,f]).
linked(c, [g,h]).
linked(d, [i,j]).
linked(e, [k,l]).
linked(f, [l,m]).
linked(g, [n]).
linked(h, [o,p]).
linked(i, [p,q]).
linked(j, [r]).
linked(k, [s]).
linked(l, [t]).
linked(m, []).
linked(n, []).
linked(o, []).
linked(p, [u]).
linked(q, []).
linked(r, []).
linked(s, []).
linked(t, []).
linked(u, []).

% arc/2
% A rule that checks to see if
% there is an arc between two given nodes.

arc(X,Y):- linked(X,L), member(Y,L).

The algorithm that searches the graph for a specific goal:

% Name:   Anderson Silva
% Date:   March 10, 1999
% ================================
% This is the Depth First Algorithm
% implemented in Prolog that will
% use the graph.pl knowledge base
% ================================

% reverse_write/1
% Inverts the order of the stack.

reverse_write([]).
reverse_write([H|T]):-reverse_write(T), write(H), nl.

% solve/2
% Gives the path in the reverse
% order since dfs is implemented as
% a stack

solve(INode, Solution):- consult('graph.pl'),
                         query_goal,
                         dfs([], INode, Solution),
                         reverse_write(Solution).

% query_goal/0
% Creates the goal to be reached
% during execution
% We start with abolish, so that if solve is run
% more than once, it forgets the old goals
% and only looks for the new one.

query_goal :- abolish(goal/1),
              write('Goal? [Followed by a period]'),
              nl,
              read(Node),
              assert(goal(Node)).


% goal/1
% When the program runs for the first time
% query_goal needs to abolish at least one goal
% and that is why goal(standard) is used.

goal(standard).

% dfs/3
% The Actual recursive algorithm for the
% Depth First Search

dfs(Path, Node, [Node|Path]):- goal(Node).
dfs(Path, Node, Sol):- arc(Node, Node1),
                       not(member(Node1, Path)),
                       dfs([Node|Path], Node1, Sol).


Copyright © 1999, Anderson Silva
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


IP MASQ Setup with Ipchains Quick Start

By Terry 'Mongoose' Hendrix II and Anderson Silva


Last month, my brother and I decided to set up a small network at my house, so that we could connect more than one computer to the Internet with only one modem and one phone line. My machine is the one with the modem, and it is also running Linux (the server). My brother's machine is running Windows 95 (the client). I did some research, found some documentation about private networking on the web, and decided to try the technique of IP masquerading with our little network at home.
IP masquerading is a technique that lets you assign your computers internal IP addresses (in my case 10.0.0.1 for the server and 10.0.0.2 for the client) and share one machine's Internet connection with the other clients, without having to assign them external IP addresses. I read a lot of the documentation and actually understood the whole process, but I could not get it running right on my computer. So, I entered the #Linux IRC channel on Undernet.org and found a guy nicknamed Mongoose to help me.
He gave me a link to a quick tutorial he had written to get IP MASQ running with ipchains* in no time.

* Ipchains is a program that is bundled with RedHat 6.0 and is used to set up firewalls and ip masquerading.

After reading Mongoose's tutorial I had my private network running in less than 10 minutes. That is why I got in touch with him and he agreed to let me publish his tutorial to the Linux Gazette.

Below is the tutorial:
 

----------------------------------------
NOTES
----------------------------------------
The following example has:

  0.0.0.0 the IP of the gateway to the internet.
 10.0.0.1 the IP of the ip masq gateway's eth0.
 10.0.0.2 the IP of the ip masq client0's eth0.
 10.0.0.3 the IP of the ip masq client1's eth0.
 

NETWORK IP MASQ GATEWAY SETUP
----------------------------------------
1. Load ethernet card modules ( if needed ).

        /sbin/modprobe ne2k-pci   (each card has a specific name)

2. Bring up the device.
   ( add to /etc/rc.d/rc.local if you don't have standard interface scripts)

        /sbin/ifconfig eth0 10.0.0.1 netmask 255.255.255.0 up
        /sbin/route add -net 10.0.0.0 netmask 255.255.255.0 eth0
        /sbin/route add default gw 0.0.0.0 eth0

3. Allow your IP MASQ clients to use your inet.
   A. Add this to /etc/hosts.allow at the end:

       ALL:10.0.0.2
       ALL:10.0.0.3

   B. Add the ips to any other configs it requires.
      i. I suggest you use the squid ftp/http proxy for speed.
 

NETWORK CLIENT SETUP ( 10.0.0.2 client0 )
----------------------------------------
1. Load ethernet card modules ( if needed ).

        /sbin/modprobe ne2k-pci

2. Bring up the device. ( add this to /etc/rc.d/rc.local if you don't have standard interface scripts)

        /sbin/ifconfig eth0 10.0.0.2 netmask 255.255.255.0 up
        /sbin/route add -net 10.0.0.0 netmask 255.255.255.0 eth0
        /sbin/route add default gw 10.0.0.1 eth0
 

TESTING NETWORK
----------------------------------------
1. Ping 10.0.0.1 from the clients and vice versa.

2. Use /sbin/ifconfig to see packet traffic from each host.

3. You should be able to use telnet/ftp between machines now.
   A. If you can't telnet from clients to gateway, then check hosts.allow.
 

IP MASQ GATEWAY IP MASQ SETUP
----------------------------------------
1. IP forwarding setup.
   A. Enable ip forwarding for the IP MASQ gateway.

         echo "1" > /proc/sys/net/ipv4/ip_forward

   B. Make ip forwarding enabled every boot:
      i. For RedHat modify /etc/sysconfig/network as follows:

         FORWARD_IPV4=true

     ii. For other distros add this to /etc/rc.d/rc.local at the end:

         echo "1" > /proc/sys/net/ipv4/ip_forward

   C. To make sure no one smurfs your network add this to rc.local:

         echo "1" > /proc/sys/net/ipv4/tcp_syncookies
 

2. Now setup routing.  You can add these to rc.local to load every time.
   A. Deny all ip forwarding by default.

         /sbin/ipchains -P forward DENY

   B. Allow ip forwarding for your IP MASQ machines 10.0.0.2 and 10.0.0.3.

         /sbin/ipchains -A forward -s 10.0.0.2/32 -j MASQ
         /sbin/ipchains -A forward -s 10.0.0.3/32 -j MASQ

   C. Add any masq modules you'll need.

         /sbin/modprobe ip_masq_ftp
         /sbin/modprobe ip_masq_quake
         /sbin/modprobe ip_masq_irc
         /sbin/modprobe ip_masq_user
         /sbin/modprobe ip_masq_raudio
         ...


If you follow this tutorial, your network should work just fine. One other problem I encountered after setting up IP MASQ was that my client could only access servers on the net by their IP addresses. So, I set up DNS on my Linux box so my clients could do domain lookups. All you need to do is put your nameservers in /etc/resolv.conf and make sure the named daemon is running. That should solve the problem.
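As a sketch, the /etc/resolv.conf just needs a nameserver line per DNS server; the addresses below are placeholders, so substitute the ones your ISP gave you:

```
nameserver 192.0.2.1
nameserver 192.0.2.2
```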

And if you have done all of these steps, you should be all set to run your private network. If you want to learn more about IP MASQ and firewalling, please refer to the HOWTO documentation at: http://metalab.unc.edu/linux/HOWTO/HOWTO-INDEX-3.html#ss3.1


Copyright © 1999, Terry 'Mongoose' Hendrix II and Anderson Silva
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


Using Linux to Teach Kids How to Program

By Anderson Silva


I was in 5th grade when I took my first computer class, in Rio de Janeiro, Brazil: a course in LOGO running on a Commodore 64. Soon enough, LOGO was known to us as the "turtle game".

LOGO is a programming language developed at the MIT labs in the late 60's, designed primarily as a programming language for children. LOGO has its own syntax and semantics, but what really makes it fun for kids is its graphical environment.

LOGO has a "programmable" cursor that draws on the screen whatever you program it to do. That cursor is known as the turtle.
With the turtle you can make animations and draw houses, cars, or any of the primitive geometrical figures.

For example:
To make the turtle go forward 10 pixels you would give the command:
FD 10

To make the turtle go backwards 120 pixels:
BK 120

To turn right or left by 90 degrees:
RT 90
LT 90

To make a simple arc or circle, use the ARC command with the degrees of the circumference and the radius:
ARC 360 120

To set the color of the background and of the turtle's drawing:
SETBG Color#  (sets the background color; the color number varies from system to system)
SETPC Color#  (sets the color of the turtle's drawing)

To clean (i.e. clear) the screen:
CLEAN
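Putting these commands together, a classic first exercise is a procedure that draws a square. This sketch uses Berkeley LOGO syntax, where TO and END define a procedure and REPEAT runs a list of commands several times:

```
TO SQUARE :SIZE
  REPEAT 4 [FD :SIZE RT 90]
END

SQUARE 100
```

Calling SQUARE 100 makes the turtle trace a square 100 pixels on a side.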

Today I am 21 years old, and I still like playing around with LOGO, and I will use it to teach my son a little bit about programming and discrete math concepts. My challenge, then, was to find a version of LOGO for Linux. I have been using Linux for about 2 years now, and just a couple of weeks ago I started searching the web for a free version of LOGO for Linux. It took me about 30 minutes, but I was able to find the Berkeley LOGO version, which is really good. It runs on many Unix flavors and works with MS operating systems as well.

I was able to compile it with Red Hat 5.2 pretty fast and it worked great, but I did encounter several errors with Red Hat 6.0, and I am still trying to figure out some incompatibilities. The Berkeley distribution of LOGO can be downloaded at: ftp://anarres.cs.berkeley.edu/pub/ucblogo/

The distribution comes with a pretty good user manual, and at least 3 different versions are available for download. I encourage any programmers who have children to download LOGO and start having fun with them; they will have a blast, and you can even take the opportunity to teach them some basic Linux tasks.


Copyright © 1999, Anderson Silva
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


Setting Up Mail for a Home Network Using Exim

By Jan Stumpel

[Revised at author's request. Originally published in issue #42.]


1 Introduction

Setting up a home network with Linux and Win95, using Samba, IP masquerading, and diald, has been described many times, including in the Linux Gazette, but so far I have not found a recipe for setting up mail on a small network with only one dial-up e-mail account. In this article I want to explain how I did it. With this system:

  • users on the network can send local mail to each other, and reply to it, also locally.
  • outgoing mail has a proper From: address, so the outside world can reply to it.
  • the e-mail account is shared by the users, but each only receives his/her personal mail.
  • users on the network receive a notification (a pop-up window) when personal mail for them arrives.

This is realized on my system (running Debian Linux 2.1) using the following programs:

  • exim as the mail transfer agent (it is much easier to configure than sendmail).
  • fetchmail for collecting the mail from the ISP.
  • pine as the mail client on the Linux side (but other clients can be used as well, including mail).
  • Microsoft Internet Mail on the Windows side (but other clients can be used as well).
  • qpopper as the POP3 server, for moving mail from the Linux system to the Win95 machine.
  • smbclient and Winpopup for mail notification.

I have this set up for two machines (1 Linux + 1 Win95) but it will probably also work for a somewhat larger network, and may be sufficient for a small office. Note: this article is Debian-oriented. If you use another distribution, change where appropriate!

2 The network and the names

For this article I assume the following names (change these to correspond with your own situation):

  • the owner / system administrator is called Joe Bloggs.
  • the Linux machine is called heaven.
  • the Win95 machine is called earth. It is mostly used by Emily Bloggs.
  • Joe's user name on heaven is joe.
  • Emily's user name on heaven is emi.
  • Emily's user name on earth is also emi; her Linux password on heaven and her 'password for Microsoft networking' on earth are the same.
  • Joe has a dialup account (dynamic IP address) with an ISP called isp.com. Mail from the ISP can be collected using POP3.
  • Joe's account name at the ISP is jbloggs.
  • Joe's e-mail address (also used by Emily) is joe.bloggs@isp.com.
  • Joe's password for collecting POP3 mail is zaphod.
  • The ISP's mail server (for sending mail) is smtp.isp.com.
  • The ISP's POP3 server (for collecting mail) is pop3.isp.com.
  • heaven and earth belong to a domain called home. This domain name is meant for use only inside the home network; Joe has not registered his domain name and it cannot be recognized by the outside world.
I also assume that the local networking works, and that there is on-demand dialup access using diald. There is no name server on heaven. /etc/resolv.conf contains the addresses of two name servers supplied by the ISP. These same addresses are entered into the TCP/IP configuration on earth.

/etc/hostname on heaven is

heaven

/etc/hosts on heaven is

127.0.0.1 localhost
192.168.1.1 heaven.home heaven
192.168.1.2 earth.home earth

On earth there is a file c:\windows\hosts with the same contents as /etc/hosts.

3 Mail addresses

Mail messages can have more than just the address in the 'To:' and 'From:' lines, for instance :

To: Emily Bloggs <joe.bloggs@isp.com>

'Emily Bloggs' in the above example is the 'real-name part'. It is set in the e-mail program which composes the message. This 'real-name part' can be used for delivering Emily's mail to her. Note: if the 'real-name part' has dots in it, it must be quoted using " characters ("Joe C. Bloggs"). See also man mailaddr.

4 Configuring exim

On a Debian system this is done by running eximconfig. It asks a number of questions which you can answer as follows:

  • your system is an Internet site using smarthost.
  • the 'visible mail domain' is home
  • other names apart from home and heaven.home: answer heaven:localhost
  • you don't want to relay for any non-local domains.
  • you want to relay for the local network 192.168.1.0/24
  • RBL (spam filter database): whatever you like. I said n
  • The smarthost, handling outgoing mail, is smtp.isp.com
  • System administrator mail should go to joe (not to root!)
In MS Internet Mail (or whatever mail client you use on Win95) heaven must be entered both as the SMTP server and as the POP3 server. Under 'pop3 account' and 'pop3 password', enter the username emi and her Linux password. Enter the name, Emily Bloggs, and the e-mail address, emi@home, in the appropriate place. Note that the e-mail address must be in the local domain!

On the Linux side, nothing special has to be set. /etc/pine/conf and the users' ~/.pinerc can be used 'out of the box'. The mail client (pine) constructs local addresses using the hostname together with user information from /etc/passwd.

With the above setup, local users can happily send mail to each other and reply to it. For instance, in pine at heaven, user joe sends mail to user emi. Automatically, pine changes this to:

To: Emily Bloggs <emi@heaven.home>

The message is delivered immediately (as you can see if you run eximon, the exim monitoring utility). emi (should she log in to heaven) would see the message as coming from

From: Joe Bloggs <joe@home>

So home really functions like a local domain within which messages can be exchanged. The problem is sending messages to the outside world. A From: address like <joe@home> is no good because nobody on the outside could reply to an address in the non-existent domain home.

5 Fixing the From: address

We must change the local From: address into a valid e-mail address (the e-mail account at the ISP), but only in the case of outgoing messages. With exim, we can do this by means of a 'transport filter'. The outgoing mail passes through this filter, and the From: address is changed. Local mail will not be affected.

The following filter will do the trick, provided we are sure that the address that we want to change is always between < and > signs. This is not guaranteed, but very common: pine, mutt, and mail, as well as MS Internet Mail all generate such addresses.

#!/usr/bin/perl
$address = 'joe.bloggs@isp.com';
while (<>) {
    if (/^From: /) { s/<.*>/<$address>/; print; last; }
    print; }
while (<>) { print; }
Don't forget to change the e-mail address to yours! Call this program outfilt, do chmod +x outfilt and put it in /usr/local/bin. Now we must add a line to /etc/exim.conf, so the last lines of the TRANSPORTS CONFIGURATION section read:

remote_smtp:
   driver = smtp
   headers_remove = "sender"
   transport_filter = "/usr/local/bin/outfilt"
end
Actually, we added two lines. The headers_remove line is also new. This prevents exim from adding a Sender: header to the message (as it would do with this setup, if you use pine). The Sender: line can cause trouble with some (badly configured) mail destinations.

With these changes to /etc/exim.conf, whenever anyone sends an e-mail message to the outside world it is now delivered properly by exim. Exim (through diald) opens the outside line at once. In a home situation this is probably what you want. In a small office, with a lot of e-mail traffic, you may want to defer messages and send them as a bunch at certain times, to save phone costs. This is possible, but I don't need it myself and have not looked into it. You could look at the 'Linux Mail-Queue mini-HOWTO'.

6 Fetchmail configuration

At the command fetchmail, diald opens the line and the mail from the ISP is collected (and passed to exim for local delivery). Only users who have a .fetchmailrc, owned by themselves, in their home directory can run fetchmail. This file can be created using the configuration tool fetchmailconf. You get something like:

# Configuration created Sun Mar 28 03:15:20 1999 by fetchmailconf
set postmaster "postmaster"
poll pop3.isp.com with proto POP3
       user "jbloggs" there with password "zaphod" is joe here options fetchall warnings 3600
The .fetchmailrc files belonging to the various users could all be copies of each other, but with the ownership set to the user concerned. It is not so nice that every user has the password in plain view. Maybe there is a better way, but in a home situation it does not matter.

The main point is that whoever runs fetchmail, the mail must always be delivered to the same user mailbox (joe's mailbox in this case).

7 Removing exim's delivery limit

Exim by default does not deliver more than 10 messages at a time. I am sure there are circumstances where this makes perfect sense, but having a dialup account is not one of them. To get rid of this restriction, put the following line into the MAIN CONFIGURATION section of /etc/exim.conf, before the end statement:

smtp_accept_queue_per_connection = 0

8 Delivering personal mail

Through fetchmail and exim, all mail from the outside is by default delivered to Joe's mailbox (/var/spool/mail/joe) at heaven. In Joe's home directory he puts a file called .forward, containing the following text:

# Exim filter
if $header_to: contains Emily then deliver emi endif

The .forward file must have permissions -rw-r--r--. If you're not sure, give the command chmod 644 .forward.

If mail contains 'Emily' in (the 'real name part' of) the To: address (and this will almost always be the case when her friends send her mail) it will now go into her mail account on heaven, not into Joe's. She can move the mail to her own machine using POP3 (see below).

Delivery to users other than Emily can be arranged with elif ... then clauses in the .forward file. Actually, exim's .forward files can perform a lot of complicated functions. See the text "Exim's user interface to mail filtering" which comes with the exim docs.
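For example, a .forward that also routed mail for a second, hypothetical user fred might look like this (fred is not part of the setup described above; he is only here to illustrate the elif clause):

```
# Exim filter
if $header_to: contains Emily then
    deliver emi
elif $header_to: contains Fred then
    deliver fred
endif
```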

9 Transferring mail with qpopper

To let heaven act as POP3 server for earth, qpopper can be installed. I installed the Debian package qpopper_2.3-4.deb. Installation is automatic; no configuration is necessary. If Emily presses 'get/send messages' in MS Internet Mail, the contents of her mailbox on heaven get transferred to earth (and all mail, local or outside, which she has written gets delivered).

10 Automatic mail notification

Emily likes to be notified if mail arrives for her at heaven. If Samba is installed on heaven and Winpopup on earth, this is easy. Mail notification on earth can be done using smbclient (a program which comes with Samba).

Joe (as root) has put a Perl program called mailwarn into /usr/local/bin:

#!/usr/bin/perl
open  POPUP, "| smbclient -M $ARGV[0] >/dev/null 2>&1";
print POPUP "YOU'VE GOT MAIL! GO AND CHECK IT!\n";
print POPUP "It's from $ARGV[1]";
close POPUP; 

This file was, of course, first made executable using chmod +x mailwarn.

In Emily's home directory at heaven there is now also a .forward file (emi is the owner; permissions are 644):

# Exim filter
unseen pipe "/usr/local/bin/mailwarn earth $header_from:"

If mail (local or from the outside) for Emily arrives at heaven, a window will now pop up on earth telling her this.

11 Manually collecting the outside mail

Thanks to a 'shortcut' on earth's Win95 'desktop', which does a telnet to heaven, Emily can log into heaven and start fetchmail by hand. That is, if she does not want to wait for the scheduled cron times when fetchmail runs. After the mail has been transferred from the ISP, she can press 'get/send messages' to move any mail from her heaven mailbox into the earth one.


Copyright © 1999, Jan Stumpel
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


Creating A Linux Certification Program, Part 6

By Dan York


It's been quite a crazy ride during the last few weeks within the community effort to build a certification program known as the Linux Professional Institute (LPI). There are a great many news items to pass along. This month's article will address:

If you are interested in reading previous articles in this series, please see the links at the bottom of the page.


What You Can Do

There are a great number of ways you can help make the LPI project a reality. Here is a quick list:

1. HELP ANALYZE DIFFERENCES BETWEEN LINUX DISTRIBUTIONS

We need some more help documenting the detailed characteristics and differences between the distributions of Linux. Faber Fedor <revf2@interactive.net> has agreed to coordinate a project to document the distributions. In particular we are looking for someone to complete and maintain the SuSE document, and an additional few hands to help complete the Slackware document are welcome too. Visit:

http://userweb.interactive.net/~revf2/LPI/

to see the work that has been done to date by Faber and other volunteers. Please send any feedback you can to Faber (preferably as an edited HTML document, with the changes highlighted). You can also visit the archives for the "linux-cert-program" list at:

http://lists.linuxcare.com/linux-cert-program/threads.html

to see messages relating to the project. All of this work will go into creating the distribution-specific exams that are part of our first level of certification.

2. WRITE QUESTIONS FOR OUR EXAMS

Very shortly, Scott Murray, our director of exam development, will be putting out a call for "item writers" to write questions for our first level exams. Item writers will receive some instruction in the types of questions we are seeking and then will write items that will be submitted to an extensive technical review and evaluation process. Writers of items that survive the initial screening and alpha testing will receive compensation for their work.

If you have ever had concerns that people who completed other certification programs weren't really qualified, then please join with us and help make sure our program is of the appropriate quality.

Please contact Scott Murray (scott@lpi.org) now if you would be interested in being an item writer when we begin the process.

3. HELP FIND FINANCIAL SPONSORS

Creating a high-quality certification program such as this costs a significant amount of money. We recently unveiled our sponsorship program for both businesses and individuals at:

http://www.lpi.org/sponsorship.html

We've already had several major sponsors come on board (Caldera Systems, Linuxcare and SuSE are the largest) and we are actively speaking with others. If your company, or you individually, would be interested in donating, we would certainly like to talk to you. If you think your company might be interested but you are uncomfortable bringing the matter up, please feel free to contact us and we'll be glad to make the approach.

Please direct all sponsorship inquiries to either myself (dan@lpi.org) or Chuck Mead (chuck@lpi.org).

Please note that while we are incorporating as a nonprofit corporation, we are not (yet, anyway) a "tax-exempt" entity under U.S. IRS guidelines. Donations to LPI are not tax-deductible in the U.S. as charitable contributions. (There may be other ways to deduct such contributions; contact an accountant for details. And hey, it's for a good cause anyway!)

4. HELP DEVELOP A COURSEWARE APPROVAL POLICY

As we've had a great number of courseware vendors come to us asking about "approving" their materials, we are considering implementing such a program. Chuck Mead has just kicked off a discussion in our Corporate Relations committee. Join the list, or view the web archive at:

http://lists.linuxcare.com/linux-cert-corprel/threads.html

The CorpRel committee will be working on a proposal to send to the Board.

5. HELP WITH OUR PUBLICITY WATCH

With so many information technology publications out there, both in print and on the web, it's next to impossible to stay up-to-date on what's being said about Linux certification. We'd like your help. When you see an article about Linux certification, either in print or online, please send us the info (article title, publication, date, and URL if on the web). Either send it directly to "linux-cert-pr" if you are a member of that list (if not, consider joining!) or email it to Evan (evan@lpi.org).

You can see the list of what has been posted at:

http://lists.linuxcare.com/linux-cert-pr/threads.html

Messages sent to linux-cert-pr appear on the archive within an hour.

6. WRITE OR SPEAK ABOUT LPI AND LINUX CERTIFICATION

To date, much of the writing and speaking about LPI and our efforts to create a Linux certification program has been by members of the Steering Committee such as Evan and me. But please know that we are not at all exclusive about that, and we encourage others to help out with writing and/or speaking about LPI and Linux certification.

Many web sites are looking for people to write articles and we'd love to see more articles out there about Linux certification and LPI. If you know of a site looking for articles, please go ahead and write one. We'll be glad to provide some information or assistance if you need it. Also, if you'd like to write articles, drop us a note and we may be able to steer you to places where they are looking for writers (usually the online sites are looking for writers who will write for free).

Print publications are good, too. For instance, I had an article in the June ";login:" published by USENIX and then an article in the July "Linux Journal." Scott Murray & Alan Mead are planning to submit an article on their survey process to a psychometrics journal. Articles might be appropriate for local or regional newsletters for training organizations, user groups, etc.

Also, if you're interested in speaking to local groups, we will soon be making some presentations available online (in Applixware format so far, probably HTML, too) that you could use. For instance, Faber Fedor recently spoke to a local association of technical educators in his state.

If you are interested in writing or speaking, please feel free to contact either Evan (evan@lpi.org) or me (dan@lpi.org). Feel free, too, to go ahead and just write... we'll be glad to look over articles if you'd like us to.

7. IF IN GERMANY, HELP WITH THE NEW GERMAN "CHAPTER"

As described further down in this article, there is now an effort underway to create a German "Chapter" of LPI that would help translate informational materials into German and also write/speak about LPI certification within Germany and in the German language. More details will be available soon, but in the meantime, please contact Mark Semmler at: mark.semmler@frontsite.de

Please read the text below for more information.

8. JOIN OUR MAILING LISTS

If you haven't joined one of our committee mailing lists, where more of the work is going on, visit:

http://www.lpi.org/involved.html

and sign up to help us out!


Board Membership Changes

After being on our Steering Committee list since its formation, Josh Arnold recently indicated he needed to step aside because of other commitments. He intends to stay subscribed to the mailing lists and hopes to be able to contribute to the Program Committee on an ongoing basis. We thank Josh for his willingness to step forward and help lead, and we do hope he can continue to be involved as our efforts move on.

With Josh's departure, the Steering Committee/Board took some time to identify what roles we needed to fill. We next considered who had contributed to the LPI effort in some meaningful way and spoke to a couple of people about joining the Board. At this time, we are pleased to announce Jared Buckley's addition to the LPI Board. Jared stepped forward to lead the Naming committee and has been an active contributor to several of our mailing lists. During the day, Jared works for Texas Instruments in Dallas, TX, where he supports the WAN and administers several covert Linux servers.

Please join us in welcoming Jared to the Board. His new responsibilities will include helping create newsletters such as this and generally helping to coordinate volunteer participation. He may be reached online at "jaredb@ti.com". (He will soon have an "lpi.org" address as well.)


Completed Level 1 Objectives

Over the last month, a dedicated group of participants worked to finalize the objectives for the first exam (T1) and the generic portion of the second group of exams (T2). These objectives were finalized in early June, allowing courseware developers and publishers to know what they should orient their materials toward.

We will soon have a web page online providing a simple list of the final objectives. In the meantime, you can visit our management system at:

http://www.lpi.org/cgi-bin/poms.py


Advisory Council Expansion

We are pleased to announce our expanded Advisory Council. We continue to receive great interest from a wide range of organizations. Our current Advisory Council membership includes (listed alphabetically by company, asterisk (*) indicates recent addition):

Jim Higgins, Caldera Systems, Director of Education Services
*Chris Tyler, Canadian Linux Users' Exchange, Certification Representative
*Jim Lacey, CompUSA, Director of Operations
*Fiaaz Walji, Corel, Certification Program Manager
*Stephen Solomon, Course Technology, Senior Acquisitions Editor
*Phil Carlson, ExecuTrain, VP of Business Development
*Katalin Wolcott, IBM, Mgr of Linux Services Development, IBM Global Services
*R.J. Bornhofen, Global Knowledge Network, Linux/Web Curriculum Manager
David Mandala, Linuxcare, VP of Education and Certification
Mark Bolzern, LinuxMall, President
Jim Dennis, Linux Gazette, "The Answer Guy"
Jon "maddog" Hall, Linux International, Executive Director
Phil Hughes, Linux Journal, Publisher
*Stuart Trusty, Linux Labs, President
*Julie Rowe, New Horizons Computer Learning Centers, VP of Products & Programs
Nancy Maragioglio, New Riders Publishing, Editor
Lonn Johnston, Pacific HiTech, VP North America
Donnie Barnes, Red Hat Software, Director of Technical Programs
*David Conran, SAGE (USENIX) Certification Committee
*Anita Booker, SGI, Global Customer Education Manager
Patrick Volkerding, Slackware, principal developer
Marc Torres, SuSE, Inc., President
*Dr. Lindsay F. Marshall, UK Unix User Group, Chairman
Deb Murray, UniForum, VP Professional Training & Development
Ken Kousky, Wave Technologies, CEO

We keep this list up-to-date on our web site at:

http://www.lpi.org/ac.html

We had a great meeting with our Advisory Council members at LinuxExpo (you can see the pictures at http://www.lpi.org/expopix.html ) and appreciate all their support!


German "chapter" forming

When I knew I was going to be in Munich teaching some classes (yes, Linuxcare expects me to actually do some work for them, too, and not just work on LPI!), I sent out a note asking if anyone would be interested in meeting. Juergen Off from frontsite AG contacted me and we arranged to meet.

Juergen and his colleague Mark Semmler brought me to a Biergarten in the beautiful Englischer Garten part of Munich, and we had a very enjoyable evening eating, drinking beer and flipping somewhat randomly between English and German.

Along the way, Juergen and Mark commented that there really wasn't much discussion of Linux certification within the German media and Linux community. They asked, "what can we do to help spread the word here in Germany?" As we talked, Juergen suggested the idea of having a local German "chapter" of LPI... all three of us were intrigued and discussed the idea at length.

After that night, we exchanged more email and also shared the idea with others we knew in Germany. The others at frontsite AG were quite interested, as were the folks at SuSE with whom I communicated.

The LPI Board considered the subject and we, too, thought it was a great idea and gave the go-ahead to the frontsite AG folks to start a discussion and make it happen. Since there was so much interest, we thought this German chapter would be a great pilot program to see how this idea can work.

Mark Semmler wasted little time and sent off a note to the "linux-cert" mailing list asking if people would be interested. His full message, which includes his German text, can be found at:

http://lists.linuxcare.com/linux-cert/msg00038.html

Since this newsletter is going to a global audience and is written in English, I'm including only Mark's English text below so that you get a sense of what he and the others are proposing.

I already know there is great interest in meeting at the upcoming LinuxTag in Kaiserslautern, Germany, on June 26th & 27th. It will be great to see what comes out of the discussions there.

Please contact Mark at "mark.semmler@frontsite.de" if you are interested in becoming involved with this effort.


------ Begin Included Message ------
Subject: LPI Germany?! 
Date:    Thu, 24 Jun 1999 01:32:19 +0200 
From:    Mark Semmler <mark.semmler@frontsite.de>
<German text snipped>
Hello everybody!
We have been following the mailing lists of the LPI with great interest 
for a couple of weeks now, and we are just fascinated by the thinking and the dynamics of this project.
During a meeting with Dan York early this month in Munich, the idea was born 
of creating a German-speaking chapter of the LPI.
The goals of this "chapter" should be:
 - promote the idea of LPI Linux certification through local
      media, conferences, publications, etc.
 - maintain German-language mailing lists and a web site
       (in German) to promote discussion of Linux certification issues
 - translate English LPI marketing and information materials into
      German for distribution within Germany
 - assist in identifying people/companies that can perform
      the translation of exam items (questions) into German
 - organize discussion groups and local meetings that bring
       together key players within the German training and larger
      information technology industry to move Linux certification
      forward
 - communicate ideas from the German-language discussion groups/lists
      back to the English-language discussion groups/lists
 - translate LPI news releases into German and distribute them
      to appropriate German news media
I'm sure there will be other tasks which develop as well.
This chapter should be organized like the LPI itself: 
non-profit and independent.
We offer to host and maintain the mailing lists and the web space for 
such a project on our servers. The domain name could be "(www.)de.lpi.org" and/or "(www.)german.lpi.org".
Who is interested?
Juergen Off, Mark Semmler, Jens Kiefer, Heiko Franssen, Thorsten Linstead
------ End Included Message ------



Logo Contest Results

Those of you who have followed our web site and the past newsletters are aware of the Logo Contest that we ran for quite some time. There were some pretty amazing entries submitted by some very talented people. We set up a poll on our web site to get some idea of what viewers thought. In the end, the simplicity of Jorge Otero's design seems to have captured people's attention. You can check out all the designs at:

http://www.lpi.org/logo-results.html

The LPI Board has not made a formal decision yet, but is leaning strongly toward using Mr. Otero's design.

Thank you to all the people who took time out of their day to create artwork for us. It's all great!


Development Plan

On our web site, you can now find our plan for the development and implementation of our first level of certification. The plan, developed primarily by Scott Murray and Tom Peters, is available at:

http://www.lpi.org/public_plan.html

The document will be updated over time as our plans evolve. Please check it out and send any feedback to Scott (scott@lpi.org) and Tom (tom@lpi.org).


Mailing List Archives

FYI, archives of all LPI mailing lists can be found at:

http://lists.linuxcare.com/

Messages posted to an LPI list are posted to the web archives within an hour. Note that we are still working on restoring the historical archives after a server crash, so they only contain recent information.

There is, however, an archive of the "linux-cert" mailing list going back to last November when this all began. It is still active and can be found at:

http://linux.codemeta.com/archives/linuxcert_archive/

but again, because most of the activity has moved to the committee mailing lists and the web site, the archive does not reflect the full range of LPI activity going on today.

This second archive is now searchable at:

http://linux.codemeta.com/archives/archive_search.html

On the page, you must choose "Linux Certification" from the selection list in order to search our archive.

We also just recently added our mailing lists to the archives at http://www.mail-archive.com/ - all of which are searchable archives.

linux-cert: http://www.mail-archive.com/linux-cert%40linuxcare.com/

linux-cert-program: http://www.mail-archive.com/linux-cert-program%40linuxcare.com/

linux-cert-corprel: http://www.mail-archive.com/linux-cert-corprel%40linuxcare.com/

linux-cert-pr: http://www.mail-archive.com/linux-cert-pr%40linuxcare.com/

Thanks are due to Matthew Rice for pointing out the availability of mail-archive.com.



Final Thoughts

We're nearing the end of the first phase of our development. By the end of this month, people should be taking our first beta exam in VUE testing centers all around the globe. Our psychometricians will be analyzing data. Others will be finalizing our second (T2) set of exams. Another group will be working on translation of the exams into other languages. Yet another group will be starting work on the objectives for the Level 2 exams. Marketing programs will be underway... we'll be gearing up for LinuxWorld in San Jose in August... it's going to be a crazy and exciting time!

I hope you'll visit our web site at www.lpi.org and join in the fun and excitement. It's only through the power of MANY people working together that we've been able to make this happen!

Thank you all for your continued support.



Previous ``Linux Certification'' Columns

Linux Certification Part #1, October 1998
Linux Certification Part #2, November 1998
Linux Certification Part #3, December 1998
Linux Certification Part #4, February 1999
Linux Certification Part #5, Mid-April 1999


Copyright © 1999, Dan York
Published in Issue 43 of Linux Gazette, July 1999

"Linux Gazette...making Linux just a little more fun!"


AbiWord's Potential

By Larry Ayers


Introduction

There is a tension in the Linux community between developers, who tend to be comfortable with their text editors and mark-up formatting systems, and users who want the sort of word processor common in the Mac and Windows worlds. This tension periodically sparks discussions in newsgroups and mailing lists, but an Open Source project has yet to produce a finished, fully usable word processor. The release of GPLed source last year for the Maxwell word processor failed to draw enough programmer interest to result in an ongoing and dynamic project to complete the program, possibly because of Maxwell's reliance on the Motif widget set.

Of course, the commercial products StarOffice, WordPerfect and Applix Words are available for Linux. These are large applications; I'm under the impression that many users want something quicker to load and less complex: a word processor suitable for formatted business letters and other short documents. Another factor militating against the above commercial applications is the lack of community involvement. I've noticed that closed-source applications don't seem to generate mailing list and newsgroup postings as readily as do various free software projects. I rarely write directly to developers involved in the various free software projects I follow, but I know who they are and if the need happened to arise I wouldn't hesitate to make contact. Free software projects typically attract a secondary level of co-developers and skilled users who often frequent the various net forums answering questions and providing assistance.

It has been suggested that writing a good word processor is such a difficult task that it is beyond the capabilities of an Open Source development process. More likely, I think, is that a large enough group of programmers ardently desiring such an application just hasn't ever coalesced. Perhaps this sort of project is suited for a hybrid approach, one involving both a commercial firm and independent free-software programmers. AbiSource, Inc. is giving this idea a try.

AbiWord So Far

Is it possible for an ambitious Open Source project to thrive and produce useful results under the sponsorship of a for-profit corporation? The Mozilla project is one such undertaking. After over a year of source availability much has been accomplished but the current binary releases, while intriguing, aren't yet as usable as the current releases of Netscape Communicator. The bulk of the new code still seems to be primarily coming from paid Netscape programmers. This might indicate that free software programmers prefer working on projects which aren't under a corporate aegis; another possible reason is the sheer size and complexity of the Mozilla code-base. Many programmers might lack the time and/or skill to comprehend such a project, and starting from scratch with a relatively new widget-set (GTK) must further increase the difficulty.

The programmers who started AbiSource, Inc. don't seem daunted by how few projects have mixed business with Open Source from the very beginning. Mozilla already had a massive source tree when its development was opened to the outside world last year, while Eric Allman's Sendmail business followed years of non-profit and open development; Eric had written a proven and widely-used piece of software before he formed a company to provide service for corporate users of Sendmail.

It should be kept in mind that these are still early days in the intersection of the free software and business worlds. Another year or so of experimentation with the various trials and ventures ought to make evident which approaches have managed to make money without driving away the developers and users in the free software community. AbiSource is a new company gambling that its ideas will prove viable and useful.

AbiSource's goal is to provide basic Open Source business applications for Linux, Windows, and BeOS users. Their idea is to give their applications away and charge for service and customization. Abi's first product is a GTK-based word processor, AbiWord. Outside programming help is welcomed and all of the usual paraphernalia of an Open Source project, such as mailing lists, CVS servers, and bug-reporting mechanisms, are available from the AbiSource web-page, http://www.abisource.com. The number of non-Abi volunteer programmers contributing code isn't mentioned on the site, but I believe that the completion of the BeOS port was largely due to outside Be programmers.

It's interesting that while the source code is under the GPL and thus freely available and modifiable, the names AbiSource and AbiWord are trademarked. This is intended to protect whatever reputation and name-recognition the company might gain if their services become popular.

The most significant difference between AbiWord and nearly every other word processor available is the nature of the native file format. An *.abw file is written in XML and is thus plain ASCII text; the files can be read by any text editor. This is quite a break with word processor tradition and ensures that when you write a document with AbiWord you don't run the risk of being strictly tied to one particular word processor, which may not even run on machines five years from now. AbiWord can also save in the HTML and RTF formats, both of which are accessible with word processors such as MS-Word and WordPerfect. Due to limitations of HTML and RTF some formatting information is lost (such as the specific fonts used), but attributes such as bold and italic font styles and tab-settings are retained. If XML really does become a widely-used and open data-format (as its proponents predict), AbiSource might be in a good position to gain users and clients.

Many Linux users would like to be able to read MS-Word files with a Linux word processor. StarOffice, Applix Words, and WordPerfect all come with filters for the ubiquitous format; these filters usually work well with simple documents but more complex documents with embedded macro routines are another matter. AbiSource has chosen to avoid this particular can of worms; the RTF support should ensure that simply formatted files can be exchanged with Word users.

Linux users and developers in academia, with its strong unix traditions, have less of a need to be able to deal with MS-Word files than do the growing numbers of users coming to Linux from the "real" world, the larger world of commerce and corporations. Until the nearly universal usage of the MS-Word format for even the simplest documents begins to decline, alternative word processors will have to struggle to gain market-share. The fact that AbiWord is free should be of some help, though there still exists a common idea that free software is somehow suspect.

With the release of 0.7 (and most recently 0.7.1) AbiSource began to make binaries freely available on their web-site and have even pressed CDs which are available at a nominal price. This would seem to indicate that the program has reached a state of usability. I've been trying out the latest release; it's serviceable but basic and seems to be stable. I've not had it crash once. Few of the paragraph and document formatting functions have been enabled at this point, but font-changes and tab-settings work well. Zooming (enlarging the apparent size of the document on the screen) is enabled. The fonts can be changed either from a drop-down selector or with the spiffy GTK font-selector dialog-box. Here is a screenshot of version 0.7.1:

AbiWord window

Looks like a normal word processor, doesn't it? Notice the red squiggly lines beneath certain words; this is supposed to indicate misspelled words. I have yet to find a way to turn it off. AbiWord comes with its own dictionary, but there doesn't yet seem to be a way to spell-check a document. Many of the menu-items are non-functional. Clicking on one of these summons a message-box stating that "the [function] dialog hasn't been implemented yet" followed by a pointer to lines in the source file which need the work, a thoughtful hint to a prospective code contributor.

If you give AbiWord a try, create a new file with a few lines of content, save it, then examine the resultant *.abw file with a text editor. Your content will be readable in this file, with surrounding XML tags indicating formatting specifications. As an example, here is the last line of the file used in the above screenshot:

<p props="line-height:1.5; margin-right:1.8125in">
<c props="font-family:Century Schoolbook; font-size:14pt; font-style:normal;
font-weight:normal">Variable line-spacing is now working.  This is set now for
one and one-half rather than single-spacing.</c></p> 

As you can see, the formatting tagging is comprehensible and could even be modified "by hand", in an editor rather than in the word processor. The actual content is accessible, a welcome difference from the usual binary word processor format in which the content is immersed in a sea of unreadable binary symbols.
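Since the format is plain text, a document's words can even be recovered on a machine without AbiWord installed. A minimal sketch follows; the sample file and the sed expression are illustrative only (sed is no XML parser, but it suffices for simple, well-formed *.abw markup):

```shell
# Create a small sample in AbiWord's XML-based format (hypothetical content).
cat > sample.abw <<'EOF'
<abiword>
<section>
<p props="line-height:1.0"><c props="font-weight:bold">Hello</c> from AbiWord.</p>
</section>
</abiword>
EOF

# Strip the markup and drop the now-empty lines, leaving the bare text.
sed -e 's/<[^>]*>//g' sample.abw | grep -v '^$'
```

Running this prints "Hello from AbiWord." with all formatting tags removed.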

The source distribution contains some interesting examples of *.abw files but these files were omitted from the binary packages.

In the Linux version, and I assume in the Windows and BeOS versions as well, printing is handled by the existing print system. On my system the file seems to be converted to PostScript format, then passed to Ghostscript for processing by my print filter. AbiWord uses standard PostScript Type 1 fonts, but for some reason they need to be located in an Abi-specific directory. Several standard fonts are supplied with AbiWord, but more can be added as long as both the *.afm and the *.pfa files are supplied for each font. As in standard X font installation, the index file fonts.dir must be updated as well.
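The steps above can be sketched as a short shell session. Everything here is hypothetical: the directory, the font name and the XLFD entry are placeholders, and the touch line merely stands in for font files you would really obtain from a vendor; substitute AbiWord's actual font directory on your system.

```shell
# Hypothetical font directory; on a real system this would be
# AbiWord's own font directory, not a local one.
FONT_DIR=./abi-fonts
mkdir -p "$FONT_DIR"

# Placeholder files stand in for a real font in this sketch; normally
# you would copy the vendor's metrics (*.afm) and outlines (*.pfa).
touch garamond.afm garamond.pfa
cp garamond.afm garamond.pfa "$FONT_DIR"/

# Update the index file, as in standard X font installation.  The XLFD
# name below is illustrative; a tool such as type1inst can generate it.
echo 'garamond.pfa -adobe-garamond-medium-r-normal--0-0-0-0-p-0-iso8859-1' \
    >> "$FONT_DIR/fonts.dir"
```

After this, both font files and a matching fonts.dir entry are in place, which is all the article's recipe requires.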


Conclusion

In its current state AbiWord is useful for writing short, simply formatted documents, but lack of paragraph and document formatting templates, as well as the lack of functional image insertion, limit its scope. It seems to me that AbiSource has developed the base structure of the word processor solidly, and the hooks for completion of the feature-set are in place in skeletal form and just need to be fleshed out. The decision to use an XML file format should appeal to users who would like to use something other than the exclusive binary file-formats of typical word processors. Whether AbiSource will be able to keep the development process alive until revenue is generated remains to be seen, but at least the source code will remain available should they fail.


Copyright © 1999, Larry Ayers
Published in Issue 43 of Linux Gazette, July 1999

Linux Gazette... making Linux just a little more fun!

Published by Linux Journal


The Back Page


About This Month's Authors


Stephen Adler

While not building detectors in search of the quark gluon plasma, Steve Adler spends his time either four-wheeling around the lab grounds or writing articles about the people behind the open source movement.

Larry Ayers

Larry lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms. He is also struggling with configuring a Usenet news server for his local ISP.

Chris Carlson

Chris has been developing software for various systems and hardware since 1973. He worked for 8 years as a Developer's Support Engineer for Silicon Graphics, Inc. based in Southern California. He is now working for DataDirect Networks assisting in the development and test of SGI and Linux device drivers. He lives in Orange County, California.

Jack Coats

Jack is a consulting UNIX administrator for Collective Technologies. Personal activities include his family, church, leading a local UNIX users group in Houston (HOUNIX), and hacking computers.

Jim Dennis

Jim is the proprietor of Starshine Technical Services and is now working for LinuxCare. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/Peter Norton Group and McAfee Associates -- as well as positions (field service rep) with smaller VAR's. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the 2nd Edition for a book on Unix systems administration. Jim is an avid science fiction fan -- and was married at the World Science Fiction Convention in Anaheim.

Andrew Feinberg

Andrew has been using Linux for about three years and computers for even longer. He is a Debian GNU/Linux developer and an organizer of the High School Linux User Group site (http://hs-lug.tux.org/). He can be reached at andrew@ultraviolet.org.

Michael J. Hammel

A Computer Science graduate of Texas Tech University, Michael J. Hammel, mjhammel@graphics-muse.org, is a software developer specializing in X/Motif living in Dallas, Texas (but calls Boulder, CO home for some reason). His background includes everything from data communications to GUI development to Interactive Cable systems, all based in Unix. He has worked for companies such as Nortel, Dell Computer, and Xi Graphics. Michael writes the monthly Graphics Muse column in the Linux Gazette, maintains the Graphics Muse Web site and the Linux Graphics mini-HOWTO, helps administer the Internet Ray Tracing Competition (http://irtc.org) and recently completed work on his new book "The Artist's Guide to the Gimp", published by SSC, Inc. His outside interests include running, basketball, Thai food, gardening, and dogs.

Terry "Mongoose" Hendrix I

Terry has a web page at http://www.westga.edu:80/~stu7440/.

Norman M. Jacobowitz

Norman is a freelance writer and marketing consultant based in Seattle, Washington. Please send your comments, criticisms, suggestions and job offers to normj@aa.net.

Sean Lamb

[Sean wrote the Caldera review in last month's LG, issue #42.] I am a computer science major and LAN Admin at Lakeland College's Madison, WI, campus as well as a member of the Wisconsin DOT Help Desk and Server Backup teams. My previous Linux experience was solely with Red Hat until installing Caldera 2.2. I am a member of MadLUG (the Madison Linux User Group, at http://madlug.jvlnet.com) and an active contributor to the user group's web presence. When I'm not playing with Linux, I'm building and running my model railroad. I can be reached at slambo42@my-dejanews.com.

Mark Nielsen

Mark founded The Computer Underground, Inc. in June of 1998. Since then, he has been working on Linux solutions for his customers ranging from custom computer hardware sales to programming and networking. Mark specializes in Perl, SQL, and HTML programming along with Beowulf clusters. Mark believes in the concept of contributing back to the Linux community which helped to start his company. Mark and his employees are always looking for exciting projects to do.

Anderson Silva

Anderson is a Senior at Liberty University majoring in Computer Science. Originally from Brazil, he now works at the University's Information Technology Center. He is also a member of the Lynchburg Linux User Group in Lynchburg, Virginia.

Jan W. Stumpel

Jan lives in Oegstgeest, The Netherlands.

Jeff Wall

Jeff is Production Manager of Mahaffeys' Quality Printing in Jackson, Mississippi. He helped start the Linux Users Group of Jackson (http://www.lugoj.org) "because we didn't have one". Happily married, he has a yellow Labrador named Buckminster Fuller and entirely too many computers. He'll discuss his Linux performance testing at the drop of a hat; write him at jefferson1@linuxman.net.

Dan York

Dan has been working in the corporate training field for 9 years and is currently employed in the Education department of Linuxcare. He has been working with the Internet and UNIX systems for the past 13 years and with PCs since the first Apples in 1977. He is currently the Chair of the Board of Directors of the Linux Professional Institute and is very grateful that Linuxcare allows him to spend part of his day working on LPI issues. Dan is also the maintainer of www.linuxtraining.org. He enjoys spending his almost-non-existent free time with his wife and their greyhound and cat at home in New Hampshire.


Not Linux


I want to give a big thanks to our authors this month for giving us such a substantial issue. This issue has almost twice as many articles as last month, and I learned a thing or two from some of them.

This issue is a little milestone for me. It marks my first time publishing an ezine without outside help.

Dealing with the one letter where we didn't have a common language to communicate in got me thinking about my interest in Esperanto. If we all had a common, simple auxiliary language to fall back on (one much easier to learn than English!), these things wouldn't happen. Are there any Gazette readers who speak Esperanto? If so and you'd like to chat, write me. Cxu estas iuj ajn legantoj je la Gazette, kiuj parolas Esperante? Se jes, kaj se vi volus babili, skribu al mi.

Have fun!


Michael Orr
Editor, Linux Gazette, gazette@ssc.com




Linux Gazette Issue 43, July 1999, http://www.linuxgazette.com
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1999 Specialized Systems Consultants, Inc.