LINUX GAZETTE

September 2000, Issue 57       Published by Linux Journal

Front Page  |  Back Issues  |  FAQ  |  Mirrors  |  Search

Visit Our Sponsors:

Penguin Computing
Red Hat
Tuxtops
eLinux.com
LinuxCare
LinuxMall
VMware

Table of Contents:

-------------------------------------------------------------

Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Michael "Alex" Williams, Don Marti, Ben Okopnik

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-2000 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"


From the Editor


Our authors have been especially prolific this month. Fernando Correa contributed four articles, Pat Eyler two, and Mark Nielsen two. In addition, Shane Collinge contributed six HelpDex cartoons.


Linux Gazette is now under the Open Publication License (OPL)
( http://www.opencontent.org/openpub/). This will not limit anyone's ability to distribute the Gazette and its articles in the usual manner--via mirrors, LDP mirrors, FTP files, commercial and non-commercial CD-ROMs, user-group newsletters, xeroxed copies for your class, etc. We remain committed to allowing LG to be distributed as widely as possible.

The reason for this change is that we have been receiving a few articles under the OPL. In some cases, the author has been willing to switch to the traditional LG license. However, in two cases the author was under contractual obligation with a publisher to release only under the OPL. We at LG wish to use a single license as much as possible to make things easier on our republishers. Therefore, because OPL is (1) compatible with the traditional Linux Gazette license, (2) gaining respect in the publishing world, and (3) more precise than the old license, we decided to adopt it. This also affirms our support for the OPL and similar licenses, and we hope they get used more widely.

As always, each article is copyright by its author. If you copy an article, you must include its copyright notice. A link back to www.linuxgazette.com is requested but not required.

As the OPL states, if you modify an article to the point that its meaning is changed, you must clearly explain what you changed, who changed it, who the original author was, and how to obtain a copy of the unmodified original.

If you republish an article or a modified version, you may not impose additional restrictions on its distribution.

Note that the author, being the copyright holder, is not bound by the license. S/he is free to republish the article, or allow it to be republished, under any license s/he desires.

LG's official copying statement is at http://www.linuxgazette.com/copying.html. In addition, the LG FAQ has been updated to reflect the change.

If anybody has any questions or concerns, please contact the Editor at gazette@ssc.com.


www.linuxgazette.com will be switching to name-based virtual hosting in early September. This means older browsers (those that do not send the HTTP/1.1 Host: header) may have difficulty viewing it. If this happens to you, please tell us.


This page written and maintained by the Editors of the Linux Gazette. Copyright © 2000, gazette@ssc.com
Published in Issue 57 of Linux Gazette, September 2000

"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Contents:

Write the Linux Gazette at gazette@ssc.com. Send technical questions to the Answer Gang at linux-questions-only@ssc.com.


Help Wanted -- Article Ideas

These questions have been selected from among the hundreds the Gazette receives each month. Article submissions on these topics will be eagerly accepted at gazette@ssc.com, and posted in the next issue.

Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue -- in the Tips column if simple, the Answer Gang if more complex and detailed.

Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there. The AnswerGuy "past answers index" may also be helpful (if a bit dusty).



Article Idea

Tue, 22 Aug 2000 13:20:18 -0700
From: César A. K. Grossmann <ckant@fazenda.gov.br>

Cesar,

I'm trying to use netpipes to implement some file-transfer automation, but the documentation that comes with netpipes is beyond my techno-knowledge (no explanation of the options, only some examples). A quick search of the Internet (AltaVista, Google) turns up nothing. So I think a "Guide to NetPipes" would be a good thing. Or is it lacking an audience?

I think if something works well and there's no audience for it, it might be because of a lack of documentation. So yes, please write the article.
-- Don

Misunderstanding here... I need an article that helps me to use netpipes... I can write one, but it will take a lot of time until I'm ready to do that.

Thanks -- Cesar

Cesar, I'm cc'ing linux-questions-only@ssc.com. This is the place to send article requests. It will be published in the Mailbag, and hopefully a reader will see it and respond. -- Mike
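
For readers who want to experiment before such an article appears: netpipes is built around two small tools, faucet (the listening end) and hose (the connecting end). A minimal file-transfer sketch -- the option names are taken from the netpipes man pages, and "serverbox" is a placeholder hostname, so adjust to taste:

    # receiving machine: accept one connection on port 3000 and unpack
    # the tar stream that arrives on it
    faucet 3000 --in --once tar xf -

    # sending machine: connect to port 3000 and send mydir as a tar stream
    hose serverbox 3000 --out tar cf - mydir

A proper walk-through of the options would still make a fine article.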


Your OO Programming articles

Mon, 14 Aug 2000 15:30:54 +0100
From: "Sean Akers" <sean.akers@ntlworld.com>

I have just been looking at your OO programming articles on C++ and Python. Might I suggest an article on Smalltalk as well? As a Smalltalk programmer by profession, I cannot praise this language too highly. There are a couple of alternative Smalltalks for Linux, one being Visual Works (a non-commercial version is available for download at http://www.cincom.com/smalltalk/downloads.html) and the other being Squeak (http://www.squeak.org). I personally have not used Squeak, but I have heard it is very good, if not quite as polished as Visual Works. I do all of my personal code development on Linux using Visual Works. Having been a C++ programmer for over 5 years, I would hate to go back now.

You can obtain information on this excellent language at the Smalltalk Webring (http://www.webring.org/cgi-bin/webring?index;ring=smalltalk) and at the Smalltalk Industry Council site (http://www.stic.org).

I think it is worthy of serious consideration in your excellent magazine.

Sean Akers.


Free ISPs under Linux

Fri, 04 Aug 2000 19:36:50 GMT
From: "Richard Shores" <rick_shores@hotmail.com>

Windows users can have free ISP access from netzero.com, juno.com, altavista.com, excite.com, freeinternet.com, and others (see computerbits.com, latest edition). I use two free ISPs. They work just fine, considering they save me $240/year. For me and I'm sure others in this world, $240/year is important.

I like using Linux and would use it solely if I could get free ISP access with it. But the free-ISP world seems to be accessible only via Windows. So I have a dual-boot system: Windows 98 or Red Hat 6.2. I do my net surfing in Windows for downloads, then mount/copy my downloaded files to Linux later.

Is this the best I can do? Is anyone thinking of setting up a free ISP system, supported by advertising, for Linux? If not, why not?

Linux is a great server OS. It would become a more popular home OS if it could access free internet services.

It's worth noting that he used a free email service to send this in. There's clearly at least some market in Linux space for people who care simply that their ISP client is able to provide the basic services ... dialup, email ... and are glad to accept a binary solution "paid for" by their eyeballs on your ads alone. Is America Online or Juno listening?

This joins a request from last month for free ISPs. If anyone is interested in writing an article about them, we'd accept that too.

-- Heather.


Microsoft Reader books under Linux

Tue, 29 Aug 2000 10:06:55 -0700
From: Don Marti <dmarti@zgp.org>

We need an article on reading books in Microsoft Reader format under Linux. See: http://www.sjmercury.com/svtech/news/top/docs/ebooks082900.htm

-- Don Marti


Replacing an MS Exchange Mail Server with Linux

Tue, 1 Aug 2000 13:16:37 -0500
From: "Jonathan Hutchins" <hutchins@opus1.com>

This is a sort of follow-on to your discussion in Issue 56 of reasons not to migrate a Linux mail server to MS Exchange.

One feature that the MS Exchange Server/Outlook client (as well as the Lotus Notes server/client) offers is a centralized address book. If I want to send mail to Jim Smith, I just enter "Jim Smith" in the address line. The client software queries the server, which looks in whatever address books I've configured, and finds jsmith@region1.somewhere.com. When Jim transfers to the Tucson office, his address on the server is updated, and new messages addressed to Jim Smith will go to jsmith2@region3.somewhere.com.

This also works for outside addresses: the central address book can have one entry for linux-questions-only@ssc.com, instead of 10,000 entries in each of 10,000 address books on 10,000 workstations. If your address changes, it only has to be updated once on the server, not 10,000 times, and you don't end up having to write 10,000 people to tell each of them your address has changed. For small sites, this is the real advantage of this feature. Even on my home network, I can maintain a single address book, and when a relative changes their address, both Outlook and MS Word can look up the correct address in a single database.

I have yet to figure out a way to implement this kind of feature on Linux workstations. The internal address scheme could probably be handled using Netscape as a mail client and an LDAP server, but I don't know how we would handle the external address book.

The only possible solution I've found so far is IF IBM releases a Lotus Notes client for Linux, which they were supposed to do last year. I haven't heard anything beyond the rumor that they MIGHT release it some day.

Perhaps we could use an article about convincing large companies to release products for Linux. Or, alternatively, the travails of adjusting your company's infrastructure from developing only for Windows to a multiplatform release plan. -- Heather
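
To make the Netscape-plus-LDAP idea concrete: a central address book is just a directory that every mail client queries by name. A rough sketch of such a lookup from the command line -- the server name and base DN below are placeholders, and the option letters assume the OpenLDAP client tools:

    # look up Jim Smith's mail address in a shared LDAP directory
    # (ldap.example.com and the base DN are hypothetical)
    ldapsearch -h ldap.example.com -b "ou=People,dc=example,dc=com" "(cn=Jim Smith)" cn mail

Netscape Communicator's address book can be pointed at the same directory server, so an address updated once is updated for every desktop. The external (Internet-wide) address book is indeed the harder half of the problem.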


Slashdot -- win2000 doesn't support loadlin or umsdos?

Tue, 29 Aug 2000 23:27:18 -0700 (PDT)
Mike Orr

http://slashdot.org/askslashdot/00/07/30/078252.shtml Windows ME - The End Of UMSDOS And BeOSfs Over Vfat?

What does this mean to future LG readers who have Win2000 and want to dual-boot Linux without using lilo? Are they SOL?

I glanced at this thread (it's gotten huge). The consensus seems to be

  1. Easy solution: don't bother accepting an upgrade that's a downgrade.
  2. If it breaks loadlin, someone will patch the NT version so it works.

Wanted: articles about non-LILO boot loaders. -- Heather
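
For readers who haven't met it: loadlin boots a Linux kernel straight from a DOS prompt, so the whole "boot loader" is usually just a one-line batch file. A minimal sketch (the kernel path and root partition are placeholders for your own setup):

    rem linux.bat -- boot Linux from DOS; adjust the paths and root= to your system
    c:\linux\loadlin.exe c:\linux\vmlinuz root=/dev/hda2 ro

Whether this keeps working once Windows ME removes real-mode DOS is exactly the open question above.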


Xlib - example source code

2 Aug 2000 15:24:00 +0100
From: christophe.limbree.145@B-RAIL.BE

I would like to start writing software with Xlib. I would like to find basic source code files containing small examples (opening windows, drawing lines, writing text, …) with their makefile. My purpose is to convert a GUI written in C language (gcc) for OS9 into a GUI under Linux.

Limbrée Christophe


Linux, NDS and ncpfs

Thu, 24 Aug 2000 09:22:44 +1000
From: "Kirkham" <kirkham@uq.net.au>

Hi,

I understand that Linux 6.2 (and most likely any following versions) has support for Novell's NDS via ncpfs, and that the IPX-HOWTO explains how to configure the ncp client via ncpmount.

However:

  1. I could not determine how to configure the NDS tree or context. Is this possible? If so, how?
  2. Since our Novell file servers have been taken up to NDS Patch 8, the Linux boxes can no longer connect to them.
  3. Is there any documentation on how to configure the NDS side of ncpfs or IPX?
  4. Are there any configuration files that I can edit for the above?
  5. What documentation is there for ipx_configure and what it configures, and for the other IPX tools?

    George.
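
    For readers who haven't used ncpfs at all, the plain bindery-mode mount that the IPX-HOWTO describes looks roughly like the line below (the server, user, password and mount point are placeholders; the NDS tree and context are exactly the parts the HOWTO doesn't cover, which is why an article on this would be very welcome):

    # mount the volumes of NetWare server FS1 as user JSMITH under /mnt/novell
    ncpmount -S FS1 -U JSMITH -P secret /mnt/novell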


    General Mail



    ping ...

    Fri, 25 Aug 2000 15:00:37 -0700 (PDT)
    From: terry white <twhite@aniota.com>

    ... hello:

    couple of questions.

    Is this 'list' ongoing, or new?

    The TAG list at SSC (linux-questions-only@ssc.com) is new in that we have only instituted the Answer Gang within the last few months. It's ongoing, in that this is the way we will continue onward, to provide better technical answers.

    what should someone new to this list know ...

    Well, you would be someone who is willing to jump in and help other people. You would also be someone who is willing to visit some search engines and find people useful pointers to learn more about the subject you're helping them with. Hopefully you would be able to write clearly enough so that it is fun to read, rather than scary like certain of the HOWTOs I'm not going to name. You don't have to know HTML, though. That's my job. -- Heather


    Complement to Micro Publishing series

    Fri, 18 Aug 2000 16:01:24 -0300
    From: Bruno Barberi Gnecco <brunobg@psi.com.br>

    Though the series has covered the "hardware" part pretty well --- how to bind, etc. --- I have found the software part disappointing. I have always wanted to print some books, but basically out of laziness I ended up printing single-sided in A4 format and bringing it to the nearest copy place for a cheap plastic cover and spiral binding. Good for software documentation, bad for real books that I wanted to keep on the shelves instead of in the middle of one of the piles on my desk. Reducing the images by half made the letters too small, and it didn't look like a real book --- too many lines.

    I assume here that you want to print a book for which you have the TeX source, or some format from which you can output to PostScript while modifying the page setup and, therefore, the layout. You may have trouble with texts containing figures, since usually the author cared about their size and position. You'll have to follow the #1 law of laboratories: "If you don't know what's going to happen, protect your eyes and tell your buddy to do it".

    The idea is to print in A5 format. If you don't know, an A5 page is exactly half of an A4 page, cut parallel to the shorter side (this works for all A-series sizes: A3 is two A4 pages joined along the longer side). So you can print two A5 pages on one A4 page without reducing, and A5 pages are about the size of a book page. Talk about nice.

    The first thing to do is to get the PSUtils package, a nice set of utilities that will handle most of your needs for manipulating PostScript files. Get them at ftp.dcs.ed.ac.uk/pub/ajcd or ftp.tardis.ed.ac.uk/users/ajcd. Compile and install.

    Generate the PostScript file. If you're using LaTeX, you can do it using something like:

    \documentclass[a5paper]{book}

    I had a problem here: when I tried to generate in the A5 format, the page was cut in half. It turned out that the problem was in dvips. If you have this problem, find the file config.ps (probably in /usr/share/texmf/dvips/config) and add the following lines:

    @ a5 149mm 210mm
    @+ ! %%DocumentPaperSizes: a5
    @+ %%BeginPaperSize: a5
    @+ a5
    @+ %%EndPaperSize

    Alternatively, you can use the following trick:

    \geometry{verbose,paperwidth=149mm,paperheight=210mm}

    at the beginning of your LaTeX file (the \geometry command comes from the geometry package). Now convert your file to PostScript, and check it to confirm that it's really in A5 format and not cropped in the wrong place.
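
    In case it saves someone a trip to the man pages, the conversion itself is the usual two steps (a sketch; "book" stands in for your own file name, and -t a5 picks up the paper size defined in config.ps above):

    latex book.tex
    dvips -t a5 book.dvi -o book.ps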

    Now come the PSUtils. Though Mark Nielsen used mpage, it will not work well for this task, since it reduces the page. The PSUtils package includes a utility called pstops, which is very powerful. To do what Mark did with mpage, type the following commands:

    pstops "4:3L(21cm,0)+0L(21cm,14.85cm)" file.ps file1.ps
    pstops "4:1L(21cm,0)+2L(21cm,14.85cm)" file.ps file2.ps

    There is also psbook, which lets you print on large paper with a multiple of 4 pages per side, so you can fold it and it will really be like a book. The problem is finding a printer that accepts A0 paper. It's useful, however, if you can print on A3 paper, because you could print 8 pages on a single sheet (four A5 pages on each side).
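
    For a simple two-up booklet on A4, the combination usually suggested is psbook (to reorder the pages into signatures) followed by psnup (to place two A5 pages on each A4 sheet). A sketch, not tested against the files above:

    psbook book.ps | psnup -2 -pa4 > booklet.ps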


    Diamond Stealth Pro VL contribution in August Linux Gazette

    Tue, 01 Aug 2000 16:02:58 -0500
    From: Chris Gianakopoulis <acg009@email.mot.com>

    I must apologize for passing on misinformation about my Diamond Stealth Pro VL video card. I made the statement that the board uses an 80C929 device. I mistyped the device number! It was supposed to be an 80C928. I truly did proofread my mail before I posted it, but somehow I missed that important piece of information. I truly understand that incorrect information is more dangerous than no information!

    My apologies, Chris Gianakopoulos


    Linux Gazette article - Python

    Wed, 2 Aug 2000 19:36:16 -0700
    From: Jeremy Parks <parks@nortelnetworks.com>

    I was more curious as to why Python doesn't have self "built in" to the __init__ method, somewhat like C++ and Java do.

    Jeremy Parks

    1. Because the first versions of Python didn't have classes; they were added later. Languages such as Python and Perl, which had classes grafted onto them later, tend to have that 'self' argument explicit, so that they can leverage the existing function-call code, which expects all formal parameters to be explicit. ("Formal parameters" are the placeholder arguments in the function definition.)
    2. Because Python's author preferred to make it explicit in the syntax rather than hide it, so that people wouldn't forget it's there, I guess.

    -- Mike


    resolved problems ?

    Thu, 10 Aug 2000 12:18:58 -0400
    From: molly morris <sevenox@viaccess.net>

    Guys/Gals,

    Searched for info on how to make a Canon BJC-250 work under Corel Linux (1 found). Also found a $.02 tip re: Netscape.

    These are both dated 1998. I'm sure these issues have been resolved by now.

    Usually tips are posted to us because someone found or made a solution for themselves. That's why they're Two Cent Tips. Netscape has come a long way since then, but it still takes a command-line argument for printing - you can really use any application you want in the "print" dialog.

    Does the "Gazette" plan to link ancient history to today's solutions?

    One of the nicer things about Linux is that it often happens that even very old solutions still work... even when better ones become available. I've seen means for using bubblejet printers via apsfilter and magicfilter. There may be a few other things, and I'm certain there's at least one commercial-grade print queue program.

    I thought this new-tomorrow Linux community was going to be a learning-curve thing, but I've logged more web time on it in the last two weeks than on Win (God forbid) 95 in the last two years.

    Well, yes, that's a balance point - more community, so more scattered knowledge, meaning it sometimes needs to be chased down. Combining it back into a form usable by ordinary folk is the job of the Linux Documentation Project, which the Gazette is proud to be part of.
    Since Corel Linux is a Debian derivative, it should be possible to apt-get install magicfilter, then run magicfilterconfig.
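
    In practice that boils down to something like the following -- a sketch that assumes Corel kept the stock Debian magicfilter package and its magicfilterconfig setup script:

    apt-get update
    apt-get install magicfilter
    magicfilterconfig   # asks a few questions and writes printcap entries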

    I went with the Commodore Amiga (I still have a running box with Utah Word Perfect) in its early stages, and our user community makes the Linux groups that I've encountered so far look like Sandbox 101 for verbose Unix programmers.

    M.M.

    Anyone want to lead him to more sites or IRC channels that are specifically helpful to UNIX newbies, other than the few websites I can immediately think of, like linuxstart.com or linuxnewbie.org?
    Also: perhaps when you find the sites that work for you, you can pass them along so Corel can do a better job setting new users up with some good bookmarks to follow in their next version. We'd like to hear about it too. And, last but not least, Linux Journal is looking for some hard-nosed reporting on what's really good or bad in some of the latest distributions that are rolling out... if you're interested in reviewing them with a hard-hitting attitude, contact Don Marti, dmarti@linuxjournal.com. -- Heather


    Gazette Matters


     Wed, 02 Aug 2000 17:26:17 -0400
    From: Srinivasa A. Shikaripura <sas@lucent.com>
    Subject: Reg. the display of email address on the gazette columns

    hi,

    I have a suggestion about the open display of email addresses in the Linux Gazette columns "Help Wanted" and "2-Cent Tips".

    Currently the pages contain the email addresses in the open. This makes it very easy for email bots to scan the page for addresses and use them to build spam lists to sell.

    Could you please consider obfuscating the email addresses, as some other web newsletters have started doing?

    For example you could obfuscate: user@domain.com to user at domain.com or user @ domain.com.

    I know this has drawbacks: users can't click on the address in the article to reply directly. But this is a minor inconvenience, and once users are educated about it, it shouldn't pose a problem.

    I am writing this because once I posted to Usenet with my email address in the clear, and I suddenly started getting a lot of spam.

    Cheers
    -Sas

    [It's a tradeoff between spam obfuscation and clickable mailto links. For better or for worse, the tradition in LG has been clickable links, and reader requests have been to make more mail links clickable rather than fewer. -Mike.]


     Thu, 3 Aug 2000 20:23:53 +0200
    From: Matthias Arndt <matthiasarndt@gmx.net>
    Subject: Linux Gazette

    I'm using SuSE Linux 6.3 as my hobby OS. I do almost everything with it, and I'd like to ask whether you are still looking for authors of additional articles. I would really love to prepare a few articles for the Gazette, because I have always wanted to make an e-zine but do not have the resources to create such a project myself. Instead, I'd like to put my efforts into projects that are already running, and the Linux Gazette looks like an ongoing project.

    [We are always looking for new authors. Author information is in the LG FAQ at http://www.linuxgazette.com/faq/index.html#author. In fact, Matthias did send us an article about choosing a window manager, which you can read in this issue. -Mike.]


     Wed, 23 Aug 2000 21:25:29 +0200
    From: DESCHAMPS.terra.es <DESCHAMPS@terra.es>
    Subject: Felicidades, y gracias

    Ante todo enhorabuena por el gran paso que acabáis de dar, llevo esperando Linux Gazette en español desde hace mucho tiempo, es una revista auténtica y sin lugar a dudas con el mejor contenido.

    Enhorabuena, me habeis hecho feliz.

    Desde España, Javi.

    Un saludo.

    Translation by Felipe Barousse <fbarousse@piensa.com>:

    Before anything else, congratulations on the great step you have just taken. I have been waiting for Linux Gazette in Spanish for a long time; it is a truly authentic magazine and, without any doubt, has the best content.

    My best wishes, you have made me very happy.

    From Spain, Javi.


     Tue, 1 Aug 2000 12:30:17 +0200
    From: Juan Florido <krypto@elrancho.com>
    Subject: new translation to italian

    Dear Mike,

    I have received a new free translation of the Linux Gazette article from issue 55 about journal file systems.

    The translation was made by Alberto Marmodoro, who translated the article into Italian.

    The url is http://trieste.linux.it/~marmo/index.html

    If you want to include a link in the original article to this translation, follow that URL.

    [I told Juan to send it to our Italian mirror site also. -Mike.]


     Fri, 25 Aug 2000 17:17:24 +0200
    From: Jan Hemmingsen <janhem@get2net.dk>
    Subject: Linux Gazette Logo

    Hi

    I like the design of your logo very much. Did you use Gimp to create it?

    If yes, I would appreciate it if you could tell me how it was created.

    [Actually, the graphic designer used Photoshop. If he sends me the details sometime, I'll print them. -Mike.]


    This page written and maintained by the Editors of the Linux Gazette. Copyright © 2000, gazette@ssc.com
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    News Bytes

    Contents:

    LG volunteer opportunity!

    LG is looking for volunteers to help format this column. The work takes 6-8 hours per issue. We would send you an HTML file containing the raw entries, and you would decide which items to publish, move similar items together, strip marketoid verbiage, summarize press releases in one or two paragraphs, and turn URLs into hyperlinks. This can all be done with a text editor and a basic knowledge of HTML. The ability to convert MIME attachments (e.g., Word documents) to text is helpful but not required. If you would like to volunteer, e-mail gazette@ssc.com.

    Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! As always, a one- or two-paragraph summary plus URL is preferred over an entire press release.


     September 2000 Linux Journal

    The September issue of Linux Journal is on newsstands now. This issue focuses on embedded systems. Click here to view the table of contents, or here to subscribe.

    All articles through December 1999 are available for public reading at http://www.linuxjournal.com/lg-issues/mags.html. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.


     Embedded Linux Journal

    We're excited to introduce a Linux Journal supplemental issue which will hit the streets October 10, 2000: Embedded Linux Journal. In this upcoming special issue you can look forward to conversations about:

    • Industry news emphasizing Open Source software solutions.
    • Reviews of products to reduce development time and improve testing.
    • Case studies that will save you time.
    • Design solutions that show you why embedded Linux is the cost-effective answer.
    • Hardware vs. software considerations.

    Current Linux Journal subscribers who live within North America will receive this special supplement at no additional charge. This issue will also be heavily distributed at upcoming trade shows, other industry events, and to targeted mailing lists.

    We hope you enjoy this special issue. We welcome feedback.


    Distro News


     Caldera

    OREM, UT--July 25, 2000--Caldera Systems, Inc. today announced the availability of the Linux 2.4 Technology Developer Release Preview. This developer's preview enables early software development with a beta version of the new Linux 2.4 kernel, Sun Microsystems' Java HotSpot technology, glibc 2.1.91, and a KDE 2.0 development snapshot with the Konqueror web browser. Anchordesk UK's Evan Leibovitch reviewed the product.

    Caldera acquires SCO -- OREM, UT--August 2, 2000--Caldera Systems, Inc. and The Santa Cruz Operation, Inc., (SCO), today announced that Caldera Systems has entered into an agreement to acquire the SCO Server Software Division and the Professional Services Division. The Professional Services Division will operate as a separate business unit of Caldera, to provide services to meet the Internet and eBusiness infrastructure needs of customers. The new company will offer the industry's first comprehensive Open Internet Platform (OIP) combining Linux and UNIX server solutions and services globally. The OIP provides commercial customers and developers with a single platform that can scale from the thinnest of clients to the clustering needs of the largest data center.

    OREM, UT and SANTA CRUZ, CA-July 18, 2000-Caldera Systems, Inc. , and Tarantella, Inc. a wholly owned subsidiary of The Santa Cruz Operation, Inc., today announced the first bundling of Tarantella Web-enabling software in the Linux space. This solution, Caldera OpenLinux Application Server with Tarantella, provides centralized management and deployment of applications on a fast, stable and low-cost platform simplifying IT responsibilities while reducing business costs. OpenLinux Application Server enables authorized users with a Java technology-enabled browser to run existing Windows, Linux and UNIX applications through the company's local area network or remotely through the Internet - even on a dial-up connection. In addition, companies can instantly deliver new Web-based and existing legacy applications to their users without code rewrites.

    OREM, UT-July 18, 2000-Caldera Systems, Inc. today began shipping its first computer-based training (CBT) product - Quick Start to Linux. Quick Start is self-paced with hands-on, guided demonstrations including the preparation of a Windows-based machine for a Linux installation, the install itself and the navigation of Linux desktops. In addition, Caldera's Quick Start CBT identifies business solutions using Linux while providing historical Linux information.

    RESEARCH TRIANGLE PARK, N.C. (July 25, 2000) - InterLan Technologies, a managed server provider (MSP), announced today it has formed a strategic alliance with Caldera Systems. InterLan selected Caldera to provide the Linux operating systems in its state-of-the-art Internet Utility Center, as well as for its QuickStart(tm) program, an industry-first program that gets premium managed servers up and running the same day an order is placed.


     Debian

    Debian GNU/Linux 2.2, the "Joel 'Espy' Klecker" release.

    The Debian Project is pleased to announce the latest release of the Debian GNU/Linux Operating System. This release has been in development for approximately 18 months, and has been extensively tested by several thousand developers and end-users.

    With the addition of the PowerPC and ARM architectures, Debian GNU/Linux now supports a total of six architectures -- more than any other distribution. Packages for all architectures are built from the same source packages. Debian GNU/Linux now runs on iMacs and Netwinders, and of course Intel PCs, Sun SPARCs, Alphas, and older Macintosh and Amiga hardware are still supported.

    Debian GNU/Linux 2.2 features a more streamlined and polished installation, including automatic network setup via DHCP, a simplified software selection process (just indicate the tasks your Debian GNU/Linux system will be used for), and a simplified configurator for the X Window System. Debian GNU/Linux can be installed via CD, or from the network and a few floppies:

    For detailed documentation about installing and upgrading Debian GNU/Linux, please see http://www.debian.org/releases/2.2/.

    Debian GNU/Linux 2.2 is dedicated to the memory of Joel "Espy" Klecker, a Debian developer who, unbeknownst to most of the Debian Project, was bedridden and fighting a disease known as Duchenne Muscular Dystrophy during most of his involvement with Debian. Only now is the Debian Project realizing the extent of his dedication, and the friendship he bestowed upon us. So as a show of appreciation, and in memory of his inspirational life, this release of Debian GNU/Linux is dedicated to him.


     Kondara

    KONDARA MNU/LINUX 2000 claims to be the first and only multilingual distribution on a single binary. While other major Linux distributions have multilingual support, you have to completely reinstall Linux to switch to a new language. Kondara, on the other hand, lets you read, write, edit and print in Japanese, English, Chinese, Spanish or over 40 other languages all on the same desktop. "One World, One Version."

    Kondara also offers multi-platform support on a single source code. Now users can make a single change to the source code and have it affect both the Alpha and Intel platforms.

    The kernel is 2.2.16.

    www.df-usa.com


     Linux for Windows

    INDIANAPOLIS, Indiana-July 18, 2000-Macmillan announces the update to their popular Linux® for Windows® product is currently available in stores. With this software, first-time Linux users can try the Linux environment without losing the Windows® functionality that they are familiar with. Linux for Windows 7.1 includes the Linux-Mandrake 7.1 operating system, most noted for its ease of use and user-friendly tools.

    The technology included with Linux for Windows 7.1 removes the need for disk partitioning or reformatting, thus making an excellent gateway for new users interested in Linux. Free 24-hour technical support is also provided via Internet and fax for installation issues.


     Lute

    Argent Resources Ltd. has signed an agreement dated July 12, 2000, with Lute Linux.com Corp. that will result in Argent acquiring 100% control of Lute. Completion of the transaction is subject to shareholder approval on a majority of the minority basis, CDNX acceptance and the resulting company meeting minimum listing requirements as a Tier 2, Category 3 Technology Issuer. http://www.argentresources.com/news/071200.asp

    LuteLinux.com has been voted site of the month by InternetBrothers.com in the Helpware and Community category. According to InternetBrothers.com, the main criterion for this honour is a "sound functional interface". InternetBrothers.com strongly believes that "A primary key to any successful web site is the user interface."

    Features that make LuteLinux exceptional are their commitment to the Linux community, and their fostering of interactive information exchange with that community. They also provide useful tips and information for the non-Linux user, helping to open Linux up to a wider audience. LuteLinux.com is committed to bringing Linux knowledge to the businessman, hobbyist, and newbie alike.


     Mandrake

    San Jose, August 14, 2000 - MandrakeSoft announced that Linux-Mandrake has been ported to Sun Microsystems' SPARC(tm) and UltraSPARC(tm) platforms. Based on Linux-Mandrake 7.1, Corporate Server 1.0 provides all the tools needed to rapidly and easily set up the main server functions. WizDrake assistants will additionally guide the user in setting up a full range of services and applications such as e-mail, Web servers, firewalls and routers.


     Progeny

    Progeny Linux Systems is a new company headed by Ian Murdock (Debian founder) and Bruce Perens (former Debian head guy).

    Progeny is doing two things:

    a.) Progeny Debian - a stripped down version of Debian with some improvements like the installer, based on extensive testing of the latest version. Progeny Debian will have limited circulation in pre-loads, downloads, and (probably) with third party companies. Progeny won't be selling it directly in the stores, although the third parties might be. Basically, Progeny Debian's main use will be as a basis for Linux NOW.

    b.) Linux NOW (Network of Workstations) - an Open Source product.

    More information can be found at http://www.progenylinux.com/news, and in the /debian and /now pages.

    Debian Founder Launches Commercial Company (Linux Magazine article)


     Red Hat

    RESEARCH TRIANGLE PARK, N.C.--July 10, 2000--Red Hat, Inc. today announced the Red Hat High Availability Server 1.0, a specialized version of Red Hat Linux 6.2.

    Red Hat High Availability Server is an out-of-the-box clustering solution that delivers dynamic load balancing, improved fault tolerance and scalability of TCP/IP based applications. It lets users combine individual servers into a cluster, resulting in highly available access to critical network resources such as data, applications, network services, and more. If one server in the cluster fails, another will automatically take over its workload.

    The Red Hat High Availability Server has built-in security features designed to withstand common attacks. Systems administrators can set up sand traps, providing for redirection of IP traffic from a potential attacker to a secure address. Out of the box, finger, talk, wall, and other daemons are disabled or not installed. In addition, multiple traffic routing and scheduling techniques along with virtual IP addresses allow you to create a security barrier for your network.


     Rock

    Rock Linux is a distribution that's "harder to install" than the others.

    There, did that get your attention? If so, Rock Linux may be for you. It aims to be "sysadmin friendly" rather than "user friendly", to get out of the way as much as possible from between you and your applications. This means you configure the system using Unix's traditional command-line interface and shell scripts. Oh yes, X-windows is included.

    Serious geeks will love the fact that you get to compile your own distribution (optionally putting it on a CD) before installing it.

    http://www.rocklinux.org

    http://e-zine.nluug.nl/pub/rock/GUIDE.html

    [I'd really like to see an article or some Mailbag items about people's experience with this. Has anybody tried it? -Ed.]


     Slackware

    Slackware 7.1 is out at www.slackware.com and mirrors.


     Storm

    VANCOUVER, British Columbia--Stormix Technologies Inc., announces the launch of Storm Linux 2000 Starter Edition. The Storm Linux 2000 Starter Edition provides Windows® users who want to try Linux with a full featured, easy-to-install distribution at a low cost. This new product is intended for people who want to easily access the power of Linux without an abundance of third party software. The Storm Linux 2000 Starter Edition is the first member of the Storm Linux 2000 family based on Debian/GNU Linux 2.2 ("Potato").

    The Duke of URL has a review of Storm Linux 2000.

    August 10 -- Vancouver, British Columbia -- Stormix Technologies Inc., today announced the launch of Storm FirewallTM, a flexible, scalable network security solution targeted at small office and home office (SOHO) users.

    Storm Firewall is the first in a line of dedicated network products from StormixTM. Capitalizing on the natural protective features of Linux, it is one of the most powerful tools available for network protection. Storm Firewall uses a graphical user interface to simplify the once complex process of installing a Linux firewall. The choice of simple options or advanced setup gives users ultimate control over network traffic. In addition, users will benefit from IP masquerading which allows for multiple computers to share a single Internet connection.

    VANCOUVER, British Columbia - User Friendly Media Inc. and Stormix Technologies Inc. today announced an agreement for the sponsorship of the inaugural three month's publication of an online interactive newsletter for fans of the popular UserFriendly.org IT Web site. The e-mail newsletter, known as The Static Cling, offers subscribers access to several exclusive features, including unique cartoons and artwork, a trivia contest with prizes and links to a threaded comment section. UserFriendly.org is the leading online entertainment and community destination for the global IT community.

    www.stormix.com


     SuSE

    The German version of SuSE Linux 7.0 is out. The English version will ship in September.

    English version: http://www.suse.com

    German version: http://www.suse.de/ http://www.suse.de/de/produkte/susesoft/linux/index.html


     Trustix

    We are proud to announce the release of Trustix Secure Linux 1.1. You can download it at: http://www.trustix.net/download/ or directly from http://www.trustix.net/mirrors.php3.

    This is primarily a maintenance release, but some new features, like database support and improved mail filtering, have been added. In addition, we've added Lynx with SSL support and various other utilities to simplify the everyday maintenance of your server.

    Trustix Secure Linux is a Linux distribution aimed towards the server market.

    For a more complete list of features, please see http://www.trustix.net/products/trustix-1.1/

    Unix phoenix. On Unix openness, Unix history, and Linux. Another Anchordesk UK article by Evan Leibovitch.


    News in General


     Upcoming conferences & events

    O'Reilly's conference next year on emerging enterprise-class Java applications has a call for papers. Submission deadline is September 15, 2000.

    Linux Business Expo
    (co-located with Networld + Interop event)
    September 26-28, 2000
    Atlanta, GA
    www.key3media.com/linuxbizexpo

    Atlanta Linux Showcase
    October 10-14, 2000
    Atlanta, GA
    www.linuxshowcase.org

    ISPCON
    November 8-10, 2000
    San Jose, CA
    www.ispcon.com

    Linux Business Expo
    (co-located with COMDEX event)
    November 13-17, 2000
    Las Vegas, NV
    www.key3media.com/linuxbizexpo

    USENIX Winter - LISA 2000
    December 3-8, 2000
    New Orleans, LA
    www.usenix.org

    LinuxWorld Conference & Expo
    January 30 - February 2, 2001
    New York, NY
    www.linuxworldexpo.com


     $399 Linux computer with monitor

    July 17, 2000 - Introducing the PortalPC, our new solution for the computing and networking market. After thorough development and extensive testing, we are proud to bring the consumer this revolutionary new product. The unique design of the PortalPC gives users numerous options and versatility for their computing needs without the expense. With a starting price of $149.99, this computer is a perfect base for all of the functions a traditional PC can provide. Complete packages that include CD-ROM, floppy, 4.3 GB hard drive, 15" monitor, 32 MB PC100 RAM, keyboard, mouse and a free copy of Red Hat 6.2 start at $399.99. The on-board technology includes a 10/100 network card, dual USB, PS/2 port, parallel port, COM port, VGA display, DiskOnChip Socket 8 up to 144 MB, and a PC/104 16-bit expansion connector. This configuration enables the PortalPC to be uniquely small (approximately 12x5x9). Using the PortalPC to build your private network, you can have 6 workstations for the cost of a traditional tower computer.

    www.portalpc.net


     Linux Square

    ITsquare.com is pleased to announce the launch of Linux Square, a supplement to the existing b2b marketplace for IT services at http://www.ITsquare.com/. Linux Square provides access to scores of established Linux development firms. Through use of a web-based framework, clients are able to securely and efficiently procure the development of Linux applications.


     MaxSQL

    Helsinki, Finland and Carlisle, Massachusetts, USA--August 9, 2000--MySQL AB and Sleepycat Software, Inc. today announced the release of MaxSQL, a new Open Source, high performance, and fully transactional SQL server. MaxSQL, developed jointly by the two companies, combines the industry-standard interface of MySQL's data access language with the high-performance transaction services of Sleepycat’s Berkeley DB. MaxSQL is available immediately via download at no charge from www.maxsql.com.

    MaxSQL is the first Open Source relational database engine to offer the reliability, scalability, and performance that commercial users demand. The software provides the ability to perform transactions with full recoverability for committed changes. It manages databases up to 256 terabytes in size, accommodates many concurrent users, and survives power failure or system crashes without losing data. Those features, combined with MySQL's enormous popularity make MaxSQL a potent threat in the SQL marketplace.

    The full source code for MaxSQL is available for download at www.maxsql.com. The software is distributed under the GNU General Public License (GPL).


     The Napster battle has implications for free-software developers

    Salon article: "As the long arm of the law reaches Napster and its lookalikes, programmers could be held responsible for what others do with their code."

    "Free speech, copyright, piracy and the fundamental nature of source code -- ever since the Internet began its surge to cultural and economic prominence, these concepts have swirled around each other in a confusing and contradictory morass. Now, in courtrooms from coast to coast, judges are attempting to bring order to the burgeoning online chaos. And from the first indications, programmer freedom may end up coming under the most sustained assault yet seen."

    "We are on the verge of defining software and determining the responsibility of software developers to control the uses of their work."

    http://salon.com/tech/feature/2000/08/07/yoink_napster/index.html


     Linux Brains Sought for New Tech Support Site

    August 15, 2000, Vancouver, BC -- IQLinux.com has launched a new technical support website for the global pool of independent Linux experts, to meet the support needs of Linux users.

    "IQLinux.com is a combination of Ebay and Onvia.com for the Linux community. The site's potential is like a rocket and I encourage Linux enthusiasts to take a ride", said Mark Kuharich, publisher and editor of Softwareview.com.

    IQLinux.com membership is free. The site's seamless negotiation process fully manages the negotiation of open source products and their supporting services.

    IQLinux.com's combined features make it a unique opportunity for the Linux community to interact. It allows members to easily form business relationships, manage virtual consulting teams, negotiate agreements and contracts, set prices, define deliverables and carry out technical support transactions, while being assured of payment.


     Overseas internships for Canadian nationals

    Hello from Ottawa. I am the recruiting officer for NetCorps Canada with Voluntary Services Overseas (VSO Canada). Currently I have two 6-month internships open to Canadian citizens or landed immigrants between the ages of 19 and 30 to go overseas in September. One is for a Linux administrator to go to Jamaica for 6 months; the other is for someone with e-commerce skills to go to Guyana for 6 months.

    Interested applicants should forward their CV and brief cover letter to Wendy Street, NetCorps Program Officer, VSO Canada. Applicants who are accepted need to be prepared to leave no later than September 20th, 2000. wendys@vsocan.com, fax: +1 (613) 234-1444, http://www.vsocan.com


     "Dilbert Killer" on-line games

    User Friendly Media Inc., and SuSE Linux announced the launch of the first in a series of challenging and diverse on-line games currently under development for the new game section on UserFriendly.org.

    "This first game focuses on geek trivia and allows our very intellectual audience to compete head on in a variety of challenging information categories. The combatants move up the pyramid until there is only one... there can be only one", said Wildcard, lead coder for User Friendly Media Inc. and creator of "Pyramid of Trivia".

    The site also has an animated cartoon series now. This caused technicians to scramble for 300 GB of additional bandwidth for the debut. (Requires Flash player; episode 1 is 2.5 MB / 7 minutes.)


     Linux Links

    The age of cyborgs is coming closer. "Scientists are attempting to create molecular electronic circuits using DNA. They say that they could potentially create circuits 10,000 times smaller than with current technology." Yahoo via Slashdot.

    www.linuxlookup.com is "Your Source For Reviews, HOWTOs, Guides & Gear".

    A searchable distribution database.

    SiliconPenguin.com is an index of information on embedded Linux. The information is not gathered by web spiders, but by humans who evaluate each link.

    LinuxLinks.com now has a web-based calendar.

    The Duke of URL articles:

    Linux laptop info. Another laptop page. (Turn off Java for the second one to avoid Geocities popups.)

    Sony plans a notebook computer with Transmeta's Crusoe chip and a digital camera by year's end. http://news.cnet.com/news/0-1003-200-2523739.html

    Linux Means Business is a collection of articles on how businesses can make use of Linux.

    Hardware-Unlimited has lots of reviews of video cards and other hardware, including a price comparison guide for graphics cards and dealers. Here's a review of the D-Link MP3 player.

    How Linux's falling stock prices are only temporary. (Anchordesk UK article by Charles Babcock)

    Complete Reference Guide to Creating a Remote Log Server. LinuxSecurity article. Note that this is "log" in the sense of "logging error messages to a file", not "logging in remotely".

    The Right to Read by RMS is a short science-fiction story about the implications of widespread anti-piracy measures on electronic books.

    www.wininformant.com is a source of information on Microsoft's activities. The site has a pro-MS bent, but is honest about MS's shortcomings. One article covers the persistent rumor that MS is porting Office applications to Linux, which the company denies it is doing. Meanwhile, CNET news.com attempts to sort out the rumors.


    Software Announcements


     FIASCO

    I am pleased to announce the first public release of FIASCO.

    FIASCO is an image and video compression system for low bit rates, based on fractal coding, which outperforms the well-known JPEG and MPEG standards. License: GPL.

    FIASCO consists of command line applications (like cjpeg/djpeg) to encode images and videos and to decode and display generated FIASCO streams. Moreover, library functions are available for reading and writing FIASCO files.

    http://ulli.linuxave.net/fiasco


     Other software

    Linux for Astronomy is precompiled astronomical software.

    SMSLink 0.48b-2 implements a client / server gateway to the SMS protocol (the short messages sent to mobile phones). This version is mostly a bugfix release. License GPL.

    Corel PHOTO-PAINT is a photo-editing, image composition and painting application, now available for free download.

    Metrolink has released Open Motif with Metro Link's enhancements and bug fixes, available for FTP download.

    Sun's StarOffice 5.2 office suite is available for download in eleven languages.

    Proven CHOICE Accounting is a complete business accounting system for Linux. New is the time billing module.

    CRYPTOAdmin 5.0 protects Apache, iPlanet and Microsoft IIS Web servers from unauthorized access - right down to the page level. The new feature, WEBGuard, ensures access to protected pages is only permitted with the correct one-time challenge/response generated from a CRYPTOCard hardware or software token. Web servers communicate directly with CRYPTOAdmin, enabling ASP (Active Server Page) or JSP (Java Server Page) security. CRYPTOAdmin is included with industry-leading products such as Cisco Secure ACS and Red Hat Linux at no charge. Established in 1989, CRYPTOCard is a Canadian company based in Kanata, Ontario.

    Servtec's iServer is now available for Dallas Semiconductor's TINI board, an embedded Internet computer. iServer TINI Edition is a full featured Application/Web Server written entirely in Java running on TINI. 90-day free preview.

    Perl Builder 2.0 is a major upgrade to Solutionsoft's Perl IDE. Includes a CGI Wizard, debugger, and the ability to test CGI scripts on the desktop. 14-day evaluation copy available for download. (Note: "no route to host" error during final proofreading.)

    Helios PDF Handshake 2.0 is in beta. It allows PDF documents to be created remotely via a "Create PDF" print queue. A print preview queue is also available.

    The Linux Arabization project announces Aedit, an Arabic editor that supports bidirectional text and features specific to the Arabic language. Aedit is meant to be an international editor with support for other languages. Aedit is based on the internationalization support available in Gtk+ 1.3. License: GPL.

    Mainsoft has a porting application that turns Windows Code into native Unix.

    Loki news: Loki will maintain and support the Linux version of Unreal Tournament by Epic Games, Inc. Loki has also teamed up with BSDI to ensure Loki's games run on FreeBSD using its Linux-compatibility features. Certified games will be fully supported by Loki.

    Intel's Universal Plug and Play (UPnP) Software Development Kit (SDK) 1.0 is available through an open-source license.

    Magic Software news: Magic announced support for mobile e-business (mBusiness) in its web applications eService and eMerchant, allowing them to support remote transactions via the Wireless Application Protocol (WAP). Magic also announced an intention to acquire CoreTech Consulting Group, Inc., a provider of e-business professional services. This will enhance Magic's ability to offer consulting and professional services in North America.

    PerlMX, by ActiveState, embeds Perl in Sendmail as a mail filter engine. This allows you to use Perl code for spam control, keyword scanning, custom routing, etc.

    CURSEL is a freeware FMLI (Forms and Menu Language Interpreter) implementation for Linux and UNIX. CURSEL interprets menu description files, which are simple text files describing a character GUI (menus, forms, text files) for character terminals (xterm, vt100, etc.). Pipes, shell escapes, backquoted expressions, and file redirection are supported, and when compiled with ncurses 5, CURSEL supports color. CURSEL also supports coroutines, and descriptors to create, send data to, or receive data from another process via named pipes. License: GPL.


    This page written and maintained by the Editors of the Linux Gazette. Copyright © 2000, gazette@ssc.com
    Published in Issue 57 of Linux Gazette, September 2000

    The Answer Gang

    Contents:

    ¶: Greetings From Heather Stern
    (?)Linux 'read'
    (?)File with Device Information
    (?)10BaseT Connection
    (?)shell script
    (?)Telnet to linux box from NT workstation in NT LAN --or--
    Connection Refused
    (?)connecting red hat workstation to nt server --or--
    Linux in a Windows NT Domain (under a PDC)
    (!)ACLs on Linux

    (¶) Greetings from Heather Stern

    Well, folks, it's another month, and yet another hefty mailbag. We've got more people asking strange hardware questions (though you won't see them here) and a few more interesting Windows questions than we usually get. Like, what about that new version, Windows ME? Is the "Millennium Edition" really Win2K Lite (a bunch of "server" features stripped off), or Win98 with a Win2K GUI tacked on and the ability to reach the command line ripped out? Estimates are pointing more toward the latter, but really, we can't tell for certain except by behavioral analysis -- there's no source, not even a "non-free" source view like the Sun Community License.

    Honestly, what are you gonna do if you're stuck in the Windows way? The slashdotters think it's pretty easy - "If your system works, don't upgrade". Hmmm. Once upon a time rsh worked great. It's not even that it's broken - as an application, as a server, the "r" tools still run like they always did. But, the environment around them has changed; they are so inherently unsafe that I know few sysadmins who don't use ssh and the analogous family of tools instead, even if they have a captive lab such as the environment the "r" tools grew up in. Besides, I took a fresh look at the licenses as posted at MS.

    Let's say you don't want to upgrade, but you just hired 12 people, and you want Windows 95 SP2 for all their systems. For the purposes of argument, let us also say that you are not in Silicon Valley, so you cannot simply run down to the leftover-software shop and pick up a copy from last year -- those are long gone, even Win98 is hard to come by, and all you can buy is Win2K in one of its forms.

    You should read the EULA on the package of Win2K you'd like to get. Oh yeah. Forgot -- you have to buy one to get far enough into the package to read it. Doh! One copy can't hurt the ol' pocket too much, right? Maybe. So you read it with a magnifying lens, and maybe it says it's a license to this or any older version of Microsoft Windows (tm). Better check your lotto ticket at the same time; they have about an equal chance of being a winner.

    Probably no mention whatsoever of older versions. Okay. You visit Microsoft's website. The EULA posted there:

    http://www.microsoft.com/PERMISSION/copyrgt/cop-soft.htm

    ...says basically, if and only if you bought it in a store (you did! wow! full pop was worth something!) then you can write to them for permission to downgrade it. One copy at a time.

    Hoo boy. "Permission to downgrade" and they'll probably blow you off. You're not one of their "Select Customers" by a long shot. Even if they say yes, you'll have to figure out how to get copies from the media you already have to cooperate with the idea of having multiple instances in the office.

    You hardly have a choice, you think. It's an OS learning curve for the new version's differences, or a technical down-and-dirty to make the old dog play new tricks.

    They've certainly come a long way from when I wanted to upgrade a useless copy of Windows to one of their more helpful products like a mouse, a joystick, or a book.

    You might be right. You have no choice about some things in this world. But the choice you lack is control over your external environment -- you can still control your own response. You are Mr. Bill's external environment. He has fairly little power over you.

    So, if you're about to go through this headache, it should make Linux, FreeBSD, and BeOS look a lot friendlier. Sure, it will be different, but perhaps not as amazingly different as you'd expect. If you have anyone technical enough to take Microsoft on as a challenge, Linux with its source and a lot of people trying to make it easy might not be such a bad thing to try.

    Ooo, owie, no UNIX experience here! you cry. Ah, but there's this. There is a whole world of people - not all of them programmers - out there contributing something to make life easier. Not necessarily for you. Chances are they've never heard of your 40 person startup. But they're trying to make it easier for themselves... and then posting it for others.

    Now I'm not just talking about companies that make their living serving a niche specialty. I mean, sure, I even work for one (Tuxtops, selling laptops) and don't get me wrong, I think it's great that people can consult or sell products to meet specialty needs. But the companies that are doing the best are the ones whose confidence in their solution is so strong that they have no problem giving at least parts of it back to the community.

    It's plain folks and small groups that make the difference here. The more you resemble the other folks who might be working on something, the more likely it is that something that will work very well for you already exists in Linux. Or, the more likely that you'll see a bit of fame and friendship if you post your solution to that very same everyday problem first.

    Between this and more active local user groups, your chances of doing well with Linux really have improved quite dramatically in the last couple of years. It's possible to find Installfests with local gurus in a lot of places. It's our way of paying back those early days when we got some help after we realized we were lost. We all had to start somewhere.

    I'm still lost sometimes. That's why I love going to the trade shows, so I can meet all the cool folks with different specialties than my own. And if I can give a little back by being a local guru at the Installfests, I like that. A lot.

    (By the way, Tuxtops will be doing an Installfest for Laptop Users at the Atlanta Linux Showcase. If you want to meet me sometime, that's your best chance. I think it will be a lot of fun.)

    But that's what I'm really doing here, anyway. A few hours of my time every month can make a lot of people around the world happy, because we're "Making Linux a Little More Fun!"

    See you next time, everyone.


    (?) Linux 'read'

    From Curtis J Blank to tag on Mon, 21 Aug 2000

    Answered by: Jim Dennis, Dan Wilder

    I've run into a problem where Linux's 'read' is not reading input from stdin when it is piped to. Here's a quick example:

    (!) [Jim Dennis] Of course it is. Try:
    ps wax | while read pid tty x x cmd args ; do
        echo $pid $cmd $args
    done
    (Note that the whole while loop is done within the subshell, so the values are available to use between the do and the done tokens).
    In your example using awk, naturally the awk print function is being executed from within awk's process. So the variable being read is within scope in this case.
    
    #!/bin/ksh
    #
    dafunc()
    {
    echo 1 2 3
    }
    #
    # MAIN
    #
    dafunc | read a b c
    echo $a $b $c
    #
    

    Running this script produces a blank line instead of '1 2 3'.

    I also tried this command line and it did not work that way either:

    
    echo 1 2 3 | read a b c
    echo $a $b $c
    

    But piping to awk works:

    
    echo 1 2 3 | awk '{print $2}'
    2
    

    I've tried this using the 2.2.14 kernel, on both SuSE 6.4 and Red Hat 6.2. I've used this technique on Solaris UNIX and Tru64 UNIX just fine, but for some reason the Linux 'read' from stdin is not picking this up.

    Any ideas why or what I'm overlooking?

    (!) [Jim Dennis] When studying shell scripting it's also useful to learn that shell and environment variables are not the same thing. A shell variable is "local" in the sense that it is not "inherited" by children of that process. When teaching shell scripting one of the first concepts I introduce to my students is the memory map of their process. I point out that the shell is constantly spawning child processes (through the fork() system call) and that it is frequently executing external programs (through the exec*() family of system calls). I then explain how a fork() simply makes a clone or copy of our process, and how the exec() overwrites MOST of the clone's memory with a new executable. I draw pictures of this, and label the part that is NOT over-written as the "environment."
    The export command simply moves a shell variable and value from the "local" region of memory (that would get over-written by an exec() call) into the environment (a region of memory that is preserved through the exec() system call).
    Using this set of pictures (by now I've filled the whiteboard with a couple of copies of our hypothetical processes and their memory blocks) it becomes obvious why changing the value of an environment variable in a child process doesn't affect any copies of that variable in OTHER processes. Just to drive that point home I then write the following reminder in big letters:
    The environment is NOT a shared memory mechanism!
    (Then I might also explain a little bit about SysV shared memory --- generally pointing out that the shell doesn't provide features for accessing these IPCs).
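    A tiny demonstration of that distinction in bash (a minimal sketch, not from the original message; the variable names are made up):

    #!/bin/bash
    FOO="local value"              # plain shell variable: stays in this shell's "local" region
    export BAR="exported value"    # exported: copied into the environment, so children inherit it
    bash -c 'echo "FOO=[$FOO]  BAR=[$BAR]"'
    # the child shell prints:  FOO=[]  BAR=[exported value]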
    Incidentally, if you really want to do something similar to your examples but using bash, try this sort of command:
    read a b c < <( echo 1 2 3 )
    echo $b
    In this case we are using "process substitution" (and perfectly normal redirection). Since our read command is happening in the current process (and the echo 1 2 3 command is in a sub-process) the variable's value is accessible to us.
    I think process substitution is a feature that's unique to bash. Basically it uses /proc/fd/ (or /dev/fd/*) entries, or temporary named pipes (depending on the features supported by your version of UNIX) to provide a file name that's associated with the output of the sub-process. If you do a command like:
    echo <( echo )
    ... you should get a response like: /dev/fd/63 (On a Linux system using glibc).
    I suspect that process substitution could be used in just about any case where you would have used the Korn shell semantics.
    Nonetheless I would like to see the next version of bash support the Korn shell (and zsh) semantics (putting the subshell on the "left" of the pipe operator). I'd also like to see them add associative arrays (where the index to our shell variable arrays can be an arbitrary string rather than a scalar/integer) and co-processes (where we can start a process with a |& operator, which puts it in the background, and we can use a series of echo or printf -p and read -p commands to write commands to that process and read responses back). Co-processes are handy for shell scripts which need to access something like the bc command, feeding it operations, reading back results, and doing other work with those results; possibly in a loop.
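    For readers who haven't met Korn shell co-processes, here is a minimal sketch of the bc idea Jim mentions (ksh/pdksh syntax; treat it as illustrative rather than a recipe):

    #!/bin/ksh
    # start bc as a co-process; |& backgrounds it and connects a two-way pipe to it
    bc |&
    print -p "3 * 7"      # write a command down the pipe to bc
    read -p answer        # read bc's reply back into a shell variable
    echo "bc says: $answer"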
    (!) [Dan Wilder] I think you'll find this a ksh (or pdksh) problem, not a Linux problem.
    To quote the pdksh man page:
    BUGS
    [ ... ]
    BTW, the most frequently reported bug is
        echo hi | read a; echo $a   # Does not print hi
    I'm aware of this and there is no need to report it.
    (!) [Jim Dennis] Actually it's just a consequence of the way that pdksh, bash, and older versions of ksh (Korn's original and '88 versions) handle the pipe operator (|).
    Whenever you see a pipe in a command line you should understand that a subprocess has implicitly been created. That must exist in order for there to be an un-named pipe. Remember that the pipe is an "interprocess communication mechanism" (IPC). Therefore we have to have multiple processes between/among which to communicate.
    In most shells (including Bourne, older Korn, bash, and pdksh) the subprocess was created to handle the commands on the right of the pipe operator. Thus our 'read' command (in the examples below) is happening in a subshell. Naturally that shell exits after completing its commands; and the variables it has set are lost. Naturally the subshell can only affect its own copies for any shell and environment variables.
    With newer versions of ksh and zsh we find that the subshell is usually created on the left of the pipe. This allows us to use commands like "echo foo bar bang | read a b c ; echo $a $b $c" with the effect that most people would expect. Note that the following will work under bash, pdksh, etc: "echo foo bar bang | ( read a b c ; echo $a $b $c )" (We have to do everything with our variables within the subshell).
    All of this is really quite obvious once you realize that a | operator is necessarily creating a subprocess.
    (!) [Dan Wilder] Try #!/bin/sh.

    (?) File with Device Information

    System Inventory File?

    From Paul Haigh on Thu, 17 Aug 2000

    Answered By: Jim Dennis

    Hi

    I once looked at a file in Linux which had a listing of all devices found during the installation process. For example it listed the video card of the PC I had just installed. I was using Red Hat 6.0. What is the name of this file? Where is it? For the life of me I can't remember, nor find it. I thought it was in /proc but that isn't correct. Sorry to be so forgetful. Thanks, your help is appreciated.

    Paul

    (!) Hmm. The installation process is specific to each distribution. So that list would depend on whether you were using Red Hat, Debian, Mandrake, etc. I also don't know what filename it would be under, nor which distributions and versions store this information.
    Indeed the whole issue is rather more complicated than your question implies.
    A Linux kernel does a certain amount of probing to find devices. This depends on the list of device drivers that were linked into the kernel. Obviously if you leave a device driver out of a kernel, that kernel won't probe for those devices. It's not as obvious, but the kernel also won't probe for devices for which the drivers were compiled as modules. To be more precise the kernel won't probe for any device until its device driver is loaded.
    So, you may find that some devices are completely ignored (undetected) until you've loaded the appropriate kernel module, or rebuilt a kernel with the necessary support.
    Some devices may also go undetected because they are set at some set of addresses (I/O or memory mapping) that is unusual for them, or is likely to be in conflict with other devices. The kernel doesn't scan the entire I/O address space for each card. Not only would that be slow, it would probably hang the system. Devices must be accessed using the correct protocols --- and some of those will go into a catatonic state, or will lock up the whole system if they are accessed incorrectly. (The Linux kernel avoids most of these "dangerous regions" by default, and only looks for most devices in the common places).
    All of this is much less of a problem in recent years. Most platforms have adopted the PCI bus, which has standard methods for discovering and identifying devices, and for avoiding conflicts among them. In essence your PCI bus is a network of semi-intelligent adapter cards interoperating over the PCI "protocol." This has always been true of SCSI as well (though with SCSI we still need to manually set unique device IDs). USB and firewire are also much more intelligent and less problematic than the old ISA PC bus.
    That brings us back to the question at hand. How do we determine what hardware is installed in a PC without opening the case and getting the (all-too-often unavailable and/or inadequate) specification sheets?
    You can start with /var/log/dmesg. This file should have a copy of all the messages that your kernel printed during the initial boot process.
    Then take a look at /proc/pci. As you probably know, the /proc directory is usually a mount point for a special "virtual" filesystem. The various "files" and directories that appear under /proc don't exist as real files on any disk drive. They are sort of like a "RAMdisk" except that they don't take up memory in the same way. The "files" under /proc are actually a representation of the kernel's state or of specific data structures as they are maintained by the kernel. The entries under /proc are dynamic --- the contents of these "files" will appear to change as the state of the kernel changes. (In fact under the /proc/sys directory tree there are many nodes or "files" which can be modified by the system administrator to change the state of the kernel).
    After looking at /proc/pci, peruse /proc/interrupts and /proc/ioports and explore some of the other files thereunder. Note: all of the /proc/XXXX directories, where XXXX is a number, are "processes." These represent all of the state about each process that is accessible to programs like 'ps' and 'top'. The original purpose of the /proc directory in UNIX (and Linux) was to allow for a cleaner interface to process data and to allow programs like 'ps' to be run without requiring them to have 'root' access. The Linux /proc goes beyond that to contain lots of information about the process state.
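    In command form, the quick inventory tour described above looks something like this (paths as found on 2.2.x systems; a sketch, not an exhaustive list):

    less /var/log/dmesg      # kernel messages from the initial boot (or just run: dmesg)
    cat /proc/pci            # PCI devices the kernel has identified
    cat /proc/interrupts     # IRQ lines currently claimed by loaded drivers
    cat /proc/ioports        # I/O port ranges in use
    ls -d /proc/[0-9]*       # the numeric entries: one directory per running process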
    In the next version of the kernel (2.4.x) you'll see yet another way to discover hardware that's installed in your system. The 2.4.x kernels will support a feature called "devfs" (a "device filesystem"). This is similar to /proc in that it's virtual and that it dynamically represents the state of a system as the kernel "sees" it. There are significant differences. However, we'll skip further comparison of /devfs to /proc.
    What's more interesting here is a comparison of /devfs to the traditional /dev/ directory. The /dev/ directory normally contains a set of "nodes" (basically special empty files with funny numbers instead of a filesize). Those "nodes" have all the attributes of regular files (owners, group associations, permissions, and dates). They come in two types, character and block. On a typical UNIX or Linux system the /dev/ directory contains a list of all the common devices that might be on a system. This list can be quite large (over a thousand entries on my laptop). Obviously no system actually has all of those devices. However, most systems contain the entries for them as a bit of bookkeeping baggage.
    With /devfs we'll see only a list of those devices which were detected by the kernel. As we load kernel modules we'll see new nodes appear under /devfs. It's also possible to manually create nodes under /devfs. Those will persist until the next reboot. Thus it may be necessary for some systems to restore a list of device nodes under their /devfs directory every time you reboot. (That would probably be most easily done by simply adding an rc.* script to extract a .tar or cpio file into the newly mounted /devfs directory).
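    A hypothetical rc.* snippet along those lines (file names invented for illustration) might be as small as:

    #!/bin/sh
    # restore locally created device nodes after the device filesystem has been mounted
    if [ -f /etc/devfs-local.tar ]
    then
        ( cd /dev && tar xpf /etc/devfs-local.tar )   # or wherever devfs is mounted on your system
    fi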
    Of course this new model won't just appear overnight. It will be interesting to see how the distribution maintainers (Caldera, TurboLinux, etc) each choose to integrate this new feature into their offerings.
    Meanwhile there are things like Red Hat's "kudzu" package which tries to detect newly added hardware when it is first installed into your system (upon the next reboot). That may also help you.
    Sometimes you'll probably still have to grab a screwdriver and pop open the case. Worse, sometimes you will have no practical way of knowing about some of the hardware that's in your systems. PC manufacturers have gotten lax about providing technical documentation with their equipment.

    (?) 10BaseT Connection

    For more about this style of small networking, also see the Home Network article in this issue.

    From mercdragon on Mon, 28 Aug 2000

    Answered by: Don Marti

    (?) Why did you send the "connect a laptop" question off round the barn? It is far cheaper and much simpler to connect a hub to the desk unit and then plug into the hub. A small four- or five-port hub can be found at most computer stores for $50 (US) and the cables are not that expensive. The advantage of the hub is the visual connection and data transfer indicators, which tell you the systems are configured correctly and communicating with the hub.

    (!) This is the way I have my system set up.
    The original questioner had only old-fashioned 10Base2 network cards, and needed to get the appropriate coax cable and accessories. After I made sure this is what he had, he ended up getting it working just fine.
    I suggested the crossover cable for connecting two 10BaseT systems without a hub, which works. You can use the link lights to make sure you're hooked up correctly. Just be sure to label your crossover cables clearly so you don't try to use them where a straight-through cable is more appropriate.

    (?) I have four systems connected this way and it is much less hassle than trying to work through the crossover cable blues when I forget to set up a connection properly. A quick glance at the hub says it needs to be setup.

    (!) One advantage of a crossover cable over a hub: you don't need to rig up a battery for the hub if you want to play two-player deathmatch video games on airplanes.

    (?) shell script

    From Peter Truong on Mon, 28 Aug 2000

    Answered By: Ben Okopnik

    (?) I have a directory consisting of:

    
    test01.in   test05.in   test99.in  <-- in files
    test01.out  test05.out  test99.out <-- out files
    

    this is my code:

    
    infile=`test[0-9][0-9]*.in`
    outfile=`test[0-9][0-9]*.out`
    
    (!) This is at least one of the reasons it doesn't work. What you seem to be trying to do here is create a list of files under the "$infile" and "$outfile" labels - but by putting the right side of the equation in backquotes, you're asking the shell to _execute_ it. That won't work; in fact, you should get an error message when you run this.
    (?)
    for input in $infile
    do
      for output in $outfile
      a.out < $input > $output
    
    (!) What this will do is execute "a.out" and use the current value of "$input" as the file for it to process, then output the result into the filename that is the current value of "$output" (overwriting whatever was in there originally). You didn't mention this part of the script in your explanation of what you want the script to do, but this _will_ wreck your "*.out" files. This "double" loop will also repeat the above process as many times as there are output files (if the original "list of files" equation worked) for each input file, i.e., if you have 50 "*.in/*.out" pairs, the inside loop will execute 2500 times - and the end result will be the "processed" value of the last file in the "input" list written to every file in the "output" list.
    (?)
      cmp $input $output
    
    (!) This part, of course, becomes useless after the above procedure: either "a.out" changes "$input", in which case it will always be different, or it does not change it, in which case it will always be identical.
    (?)
      echo $?
    done
    

    but this however, doesn't work. what I want it to do is:

    • get each of the individual pairs of files (ie. test01.in & test02.out)
    • and compare each pair until there is no more to compare.

    (!) All right; try this -
    --------------------------------------------------------------------
    #!/bin/bash
    #
    # "in_out" - compares all <fname>.in to <fname>.out files in the
    # current directory
    
    for n in *.in
    do
      cmp $n $(basename $n in)out
    done
    --------------------------------------------------------------------
    
    It's basic, but worth repeating: the "hash-bang" line comes first in any shell script and must be contiguous (no spaces). If the script requires input, write an error-checking routine and print usage instructions on an error; otherwise, as in this one, comments will help you remember what it does (you may remember it today, but not 3 years and 1,000 scripts down the road.) Next, the loop - for each "in" file, we generate the "out" filenames via "basename", by chopping off the "in" extension and tacking on an "out". Then we perform the comparison and print a message for every pair that fails: "cmp" exits silently on a "good" comparison, and prints a message on a bad one. If you want numeric output, you can use "cmp -s" (for "silent" output, i.e., it only sets the status flag) and "$?" to print the status value.
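    If you prefer the numeric status Ben mentions, a small variant along the same lines (a sketch, not from the original reply) would be:

    #!/bin/bash
    # compare each <name>.in to <name>.out silently, and report cmp's exit status
    for n in *.in
    do
      out=$(basename $n in)out
      cmp -s $n $out
      echo "$n vs $out: status $?"
    done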
    Good luck with your shell-scripting,
    Ben Okopnik

    (?) Connection Refused

    From Yu-Kang Tsao on Wed, 26 Jul 2000

    Answered By: Jim Dennis

    Hi James:

    Now I am setting up a Linux Red Hat 6.2 server box in our NT LAN and I am trying to telnet to that box from one of the NT workstations in our NT LAN. But it gives me a "connection refused" message. Would you help me telnet to the Linux box? Thank you very much.

    Sincerely
    Nathan

    (!) You probably don't have DNS, specifically your reverse DNS zones (PTR records) properly configured.
    Linux includes a package called TCP Wrappers (tcpd) which allows you to control which systems can connect to which services. This control is based on the contents of two configuration files (/etc/hosts.allow and /etc/hosts.deny) which can contain host/domain name and IP address patterns that "allow" or "deny" access to specific services.
    You could disable this feature by editing your /etc/inetd.conf file and changing a line that reads something like:
    telnet	stream	tcp	nowait	telnetd.telnetd	/usr/sbin/tcpd /usr/sbin/in.telnetd
    
    to something that looks more like:
    telnet	stream	tcp	nowait	telnetd.telnetd	/usr/sbin/in.telnetd /usr/sbin/in.telnetd
    
    (Note: THESE ARE EACH JUST ON ONE LINE! The trailing backslash is for e-mail/browser legibility.)

    My processing script knows about these backslashes and restored them to a complete line. But it may be worth knowing that most versions of inetd these days will allow you to use \ at the very end of line to continue it onto the next. It will not work if you have a space after it though. Think of it as escaping the newline character. -- Heather

    (Some of the details might differ a bit. This example is from my Debian laptop, and Red Hat has slightly different paths and permissions in some cases.)
    You should search the back issues of LG for hosts.allow and tcpd for other (more detailed) discussions of this issue. It is an FAQ. Of course you can also read the man pages for hosts_access(5), hosts_options(5) and tcpd(8) for more details on how to use this package.
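    If tcpd is indeed what is refusing the connection, a gentler fix than bypassing it is to grant your LAN explicitly so the failed lookups stop mattering. A hedged sketch (the 192.168.1.0 network is only an example -- substitute your own):

    # /etc/hosts.allow -- let the local network reach the telnet daemon
    in.telnetd: 192.168.1.0/255.255.255.0

    # /etc/hosts.deny -- refuse everything not explicitly allowed above
    ALL: ALL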
    Note: You should also consider banning telnet from your networks. I highly recommend that you search the LG back issues for references to 'ssh' for discussions that relate to that. Basically, the telnet protocol leaves your systems susceptible to sniffing (and session hijacking, among other problems) and therefore greatly increases your chances of getting cracked, and greatly increases the amount of damage that an intruder or disgruntled local user can do to your systems. 'ssh' and its alternatives are MUCH safer.

    (?) Linux in a Windows NT Domain (under a PDC)

    From Maenard Martinez on Tue, 25 Jul 2000

    Answered By: Jim Dennis

    Is it possible to connect Linux Red Hat 6.0 (custom installed) to the network wherein the PDC is a Windows NT 4.0 Server? Do I need additional tools to connect it? Is it similar to UNIX X-windows?

    Thanks, Maenard

    (!) Basically all interoperation between Linux (and other forms of UNIX) and the Microsoft Windows family of network protocols (SMB used by OS/2 LANManager and LANServer, WfW, Win '9x, NT, and W2K) is done through the free Samba package.
    Normally Samba allows a Linux or other UNIX system to act as an SMB file and print server. There are various ways of getting Linux to act as an SMB client (including the smbclient program, which is basically like using "FTP" to an SMB server, and the smbfs kernel option that allows one to mount SMB shares basically as though they were NFS exports).
    Now, when it comes to having Linux act as a client in an MS Windows "domain" (under a PDC, or primary domain controller) it takes a bit of extra work. Recently Andrew Tridgell and his Samba team have been working on a package called "winbind." Tridge demonstrated it to me last time he was in San Francisco.
    Basically you configure and run the winbind daemon, point it at your PDC (and BDCs?) and it can do host and user lookups, (and user authentication?) for you. I guess there is also a libnss (name services selector) module that is also included, so you could edit your Linux system's /etc/nsswitch.conf to add this, just as you might to force glibc linked programs to query NIS, NIS+, LDAP or other directory services.
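    Purely as an illustration of where such a module would plug in (the exact module name should be taken from the current Samba documentation, not from here), an /etc/nsswitch.conf fragment might look like:

    # /etc/nsswitch.conf (sketch) -- consult the winbind docs for the real module name
    passwd:  files winbind
    group:   files winbind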
    Now I should point out two things about what Tridge showed me. First, it was under development at the time. It probably still is. You'd want to look at the Samba web pages and read about the current state of the code --- but it may not be ready for use on production systems. (I hear that some sites are already using it in production, but basically that's because it's their only choice). The other thing I should mention is that I got the basic "salesman's" demo. That's not any fault of Tridge's (he wasn't trying to "sell" it to me and he certainly can get into the technical nitty gritty to any level that I could understand). It's just that we didn't have much time to spend together. As usual we were both pressed for time.
    I'm writing this on a train, which is why I can't look for more details at the Samba site for you. So, point your browser at: http://www.samba.org for more details.

    (?) ACLs on Linux

    In reply to Ivan Sergio Borgonovo on the SVLUG list

    Answered By: Rick Moen

    I thought you might be interested in the thing that follows, because of what I've heard you say in the past about capabilities models.

    Jim Dennis has been quite verbose about the difference between the current Linux privileges model and true capabilities systems like EROS (eros-os.org). -- Heather

    (?) A guy posting to the SVLUG list from Italy, Ivan Sergio Borgonovo, asked whether there were any general summaries of ACLs on Linux.

    (!) I looked around, was astonished to find that there weren't any, and decided to write one. It follows -- used within VA Linux's Knowledgebase, now, but I see no reason it can't be used anywhere else, as well. I hope you'll find it of interest.

    And so, it's posted here for all of you, dear readers. -- Heather

    (?) Q: Is there support for ACLs (Access Control Lists) in Linux?

    (!) A: Yes, there is -- from multiple development projects, with divergent approaches, all aiming to allow the administrator some means of specifying what capabilities a process is to be allowed, and other fine-grained permissions (including Mandatory Access Control labels, Capabilities, and auditing information). At this time (August 2000), all require modifications (third-party, unofficial kernel patches) to the Linux kernel's filesystem and VFS code (umask and access-control modifications), which sometimes take a while to catch up with new kernel releases. The kernel maintainers have not endorsed any one approach. Thus, implementing any of these approaches remains an advanced topic using experimental code.
    Further, there is not broad agreement on what filesystem it is best to use with ACLs. The obvious choices are ext2 + extended-attributes extensions, Steven Tweedie's ext3 (ftp://ftp.linux.org.uk/pub/linux/sct/fs/jfs), the AFS implementations from IBM/Transarc (http://www.transarc.com/Product/EFS/AFS) or the Arla Project (http://www.stacken.kth.se/projekt/arla), GFS (http://www.globalfilesystem.org), or ReiserFS (http://devlinux.com/projects/reiserfs).
    Adding further confusion is that the leading candidate for an ACL standard, IEEE Std 1003.1e, was withdrawn by the IEEE/PASC/SEC working group while it was still a draft, on Jan. 15, 1998, and thus was never formally included in POSIX (http://www.guug.de/~winni/posix.1e). It nonetheless remains influential.
    Generic "capabilities" support is included in 2.2 and greater kernels, including a control in /proc called the "capabilities bounding set". Many "capabilities" operations will also require libcap, a library for getting and setting POSIX 1003.1e-type capabilities, which you can find at ftp://ftp.de.kernel.org/pub/linux/libs/security/linux-privs/kernel-2.2 . See also the Linux Kernel Capabilities FAQ: ftp://ftp.de.kernel.org/pub/linux/libs/security/linux-privs/kernel-2.2/capfaq-0.2.txt
    The VFS patches, filesystems extensions or other filesystem facilities to store ACLs, patches for fsck utilities (preventing them from "cleaning up" your extended attributes), patches for GNU fileutils, patches for the quota tools, and administrative tools must be provided by the various unofficial ACL-on-Linux projects, of which there are several.
    In addition to applying any applicable patches to your kernel, you will have to enable three kernel-configuration options (all in the "filesystems" section): "Extended filesystem attributes" (CONFIG_FS_EXT_ATTR), "Access Control Lists" (CONFIG_FS_POSIX_ACL) and "Extended attributes for ext2" (CONFIG_EXT2_FS_EXT_ATTR). In order to be offered these configuration options, you must also select "Prompt for development and/or incomplete code/drivers" (CONFIG_EXPERIMENTAL) in the code-maturity level options, towards the beginning of kernel configuration.
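    In .config terms, the options listed above come out as (a sketch derived directly from the names given):

    CONFIG_EXPERIMENTAL=y
    CONFIG_FS_EXT_ATTR=y
    CONFIG_FS_POSIX_ACL=y
    CONFIG_EXT2_FS_EXT_ATTR=y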
    The AFS distributed storage system, originally developed at CMU, generically has built-in support for ACLs. As such, it seems reasonable to suspect that IBM/Transarc's leading AFS implementation on Linux, due to have an open-source (GPLed) development fork in the near future, would include ACL support. We have been unable to confirm that from Transarc's documentation, thus far. This may change as Transarc completes its open-source rollout.
    The pre-existing Linux AFS project, the Arla Project, has reportedly been moving slowly. The quality of its ACL support is likewise unknown.
    The existing documentation for AFS on Linux, unfortunately, makes no mention of ACLs or capabilities support: http://www.rzuser.uni-heidelberg.de/~x42/linuxafs/linuxafs.html http://web.urz.uni-heidelberg.de/Veranstaltungen/LUG/Vortraege/AFS/AFS-HOWTO.html
    There have been two main attempts to implement POSIX ACLs on ext2 + extensions. One was the LiVE Project, at http://aerobee.informatik.uni-bremen.de/acl_eng.html . That effort appears to be now defunct.
    The other, and probably your best bet for ACLs on Linux today, is Andreas Gruenbacher's Linux ACLs project, http://acl.bestbits.at . Gruenbacher has a well-developed ACL implementation with storage extensions for ext2, linking the extended attributes to inodes, and with ACLs among the data storable in those extended attributes. He expects that porting his subsystem to ext3 will be easy.
    The Samba Project favours/encourages Gruenbacher's approach, and aims for Samba to directly support POSIX ACLs on Linux if they are ever incorporated into the standard Linux kernel source tree: http://info.ccone.at/INF . Separately, the InterMezzo filesystem (http://www.inter-mezzo.org) is expected in the near future to implement extended attributes similar to Gruenbacher's, making future ACL support on that filesystem (which is still in early beta) likely.
    The LIDS Project (http://www.lids.org) implements some "capabilities" ideas, but not ACLs.
    Last, Pavel Machek maintains an "ELF capabilities" kernel patch and matching utilities, which allow the admin to strip specified capabilities from binary executables at execution time. It does not provide other ACL-type capabilities. The information on what capabilities to drop for a given binary upon execution is stored inside the ELF header, in a manner compatible with ordinary Linux operation. The advantage to this approach is that it does not require extended-attributes support in the underlying filesystem. Full details are at http://atrey.karlin.mff.cuni.cz/~pavel/elfcap.html .

    "Linux Gazette...making Linux just a little more fun!"


    More 2¢ Tips!


    Send Linux Tips and Tricks to gazette@ssc.com


    2cent-tip: sendmail configuration

    Sat, 26 Aug 2000 13:00:28 +0200
    From: Matthias Arndt <matthiasarndt@gmx.net>
    Dear Editor,
    I've got another 2cent tip for the Linux Gazette.
    Many people have problems with the configuration of sendmail. As far as I have learned so far, you can use the file genericstable in the folder /etc/mail (or wherever your sendmail config files are) to map local email addresses to email addresses on other servers.
    #format of the file:
    #<local email adress> <real email adress>
    marndt@jerry.aknet.de matthiasarndt@gmx.net
    My file looks like the one above: all email written under my local account (marndt) appears to come from my Internet email address.
    A nice feature in sendmail... and it makes things easier to set up.
    hope this one helps, Matthias
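    For the map to actually take effect, sendmail generally also needs the hashed database built and the feature enabled in the m4 configuration that generates sendmail.cf. A hedged sketch, reusing the paths and domain from the tip above (details vary by distribution):

    # rebuild the database sendmail reads
    makemap hash /etc/mail/genericstable < /etc/mail/genericstable

    # and in the sendmail m4 source (then regenerate sendmail.cf and restart sendmail):
    #   FEATURE(`genericstable', `hash /etc/mail/genericstable')
    #   GENERICS_DOMAIN(`jerry.aknet.de')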


    2c tip followup

    Fri, 11 Aug 2000 19:28:04 -0400 (EDT)
    From: Matthew Willis <matt@optimus.cee.cornell.edu>
    Tip: 2up printing in netscape
    In the last 2c tip, Sudhakar Chandra (thaths@netscape.com) is indeed correct about
    psnup -c -n 2 | lpr -pprinter
    working as well as
    pstops -q -w8.5in -h11in -pletter "2:0L@0.7(8.in,-0.1in)+1L@0.7(8.in,4.95in)" | lpr
    Both will 2-up output from netscape.
    The only difference is in size of the output. In the former, the pages are 50% as large as the original, whereas in the latter the pages are 70% as large. I find the larger print (70% scaling) more readable than the 50% one, but your mileage may vary.
    Thanks,
    Matt Willis


    vim tip

    Thu, 27 Jul 2000 11:04:59 PDT
    From: Adam Monsen <meonkeys@hotmail.com>
    This is straight outta the vim manual, but is buried enough to possibly warrant a 2 cent tip.
    Dos files have <CR> <LF> at endlines,
    Mac files have <CR> at endlines,
    Unix files simply have <LF> at endlines.
    
    So who cares? Vim automagically understands any file upon loading. But... you're using UNIX. When you have to send a colleague a text file (and they have a Mac or Windows box), first issue the following command: :set fileformat=x where "x" can be "mac" or "dos". Then save the file.
    -Meonkeys
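    For example, to hand a copy to a Windows user, the two commands above boil down to (a tiny sketch of what the tip describes):

    :set fileformat=dos
    :w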


    256 is real

    Tue, 22 Aug 2000 08:16:35 -0400
    From: David Meyer <dlmeyer@pop.mindspring.com>
    You published the following: On Tue, Jul 18, 2000, James Strong wrote:
    In studying ip addressing I come across the reference of 255 and 256. if all ones (11111111) = ? if all 0s (00000000) = ?
    How does one come up with 256 sometime and 255 other times?
    -confused
    There are no "256"s in valid IP addresses.
    IP addresses are 32 bits, and are written in 4 octets of 8-bit numbers expressed in decimal form. The biggest possible 8-bit number is 255, which is 2^7 + 2^6 + ... + 2^1 + 2^0.
    A good explanation of IP addresses is in the Linux Network Administrator's Guide, available in your favorite Linux distribution or from linuxdoc.org.
    -- Don Marti
    You have gone way overboard to create a convoluted explanation where a simple one would do. There ARE 256 possibilities for an eight bit number. Not all are 'valid' for an IP address, but that is another question. The reason the highest number is 255 is because the lowest is -0- (ZERO), not -1- (ONE). When you start counting with zero, the amount of numbers counted is one more than the number reached. SO, you could do the 2^7+2^6+... bit, or you could reach the same valid answer with a simpler 2^8-1.


    Syslog for Linux

    Tue, 1 Aug 2000 19:22:17 +0200
    From: Anthony E. Greene <agreene@pobox.com>

    I am running Red Hat Linux 6.2 & am trying to get a syslog server running.

    I have limited Linux knowledge & just want to get it working to log messages from cisco devices. Do you know of the commands to get it working.

    Edit your init script (/etc/rc.d/init.d/syslog) so that syslogd starts with the option to allow remote logging. See the syslogd man page for details.
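    With the stock sysklogd shipped by Red Hat, the switch in question is -r (accept remote messages on UDP port 514). A hedged sketch of the edit and restart (your init script may look slightly different):

    # in /etc/rc.d/init.d/syslog, change the line that starts syslogd, e.g.:
    #     daemon syslogd -m 0
    # to:
    #     daemon syslogd -r -m 0
    # then restart the service:
    /etc/rc.d/init.d/syslog restart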


    Windows Rescues Linux?!

    Tue, 22 Aug 2000 19:07:16 -0700
    From: Mike <10ram@888.nu>
    Dennis,
    Somehow, while flailing about in a kernel upgrade, I managed to corrupt the boot.p? file. After 2 days of struggling and 8 hours of searching, I was finally able to correct the odious "hangs at LI" problem only by booting from a Windows 95 diskette and using 'fdisk /mbr' and then booting from a linux diskette and running 'lilo'.
    In the meantime I read through dozens of posts on the subject and worked all kinds of strong ju-ju:
    - I used 'linear'
    - I set the BIOS to 'AUTO' and then back to 'LBA'
    - I put 'disk=/dev/hda <tab> bios=0x81' in my lilo.conf
    - I ran Linux 'fdisk' and rewrote the partition info
    - on and on and on.
    I had seen the '/mbr' fix earlier, but discounted it because the instructions didn't mention that it could be used even if you're not dual-booting. I finally found the tiny bit of info I needed at
    http://www.wzz.org.pl/~lnag/en/FAQ.htm#LILO_stops
    Thanks anyway for all of your help and I hope you pass this on.
    Is there really no other way to fix a corrupt mbr?
    eek. Don't tell Linus. ;)
    MjM

    The debian package "mbr" is an MBR only, which defaults to booting your active partition, but press SHIFT and you can choose a partition, or to boot from floppy. Sweet -- Heather


    Well known Port numbers

    Mon, 31 Jul 2000 22:37:45 -0400 (EDT)
    From: Steven W. Orr <steveo@world.std.com>
    All services are listed in the /etc/services file. There's a man page for it, and it's the data file which provides the basis for the getservbyname(3) and getservbyport(3) calls.
    Specifically, port 109 is pop2 and port 139 is netbios-ssn. The other ports are not listed in my services file.
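    A quick way to look these up on your own system (a one-liner sketch):

    grep -w 109 /etc/services     # pop2          109/tcp
    grep -w 139 /etc/services     # netbios-ssn   139/tcp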


    Memory Holes ??

    Wed, 16 Aug 2000 21:49:25 +0200
    From: Kees van Veen (kvv@casema.net)
    Hello there. I already read in an earlier posting that somebody else also had problems with memory holes. I have a Compaq Proliant 2500 with 256 MB and it only reads the first 16 MB. If you know a little about Linux you probably would say add a kernel parameter with the amount of RAM and the job is done. But this doesn't work; the machine runs fine for half an hour, and later, when the memory use increases, the machine trips over the hole and crashes.
    I heard that there would be some kind of patch... Does anybody know something?

    Thanx, greetinx,

    Someone proposed a kernel patch to allow Linux to deal with bad memory because of damaged SIMMs. Check the Kernel Traffic (kt.linuxcare.com) archives for that. A great place to look for patches is linux-patches.rock-projects.com but I didn't see it there. -- Heather
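    For readers who haven't met it, the kernel parameter the writer refers to is mem=, normally set in /etc/lilo.conf. A sketch (it evidently was not enough in this particular case):

    # /etc/lilo.conf (excerpt)
    image=/boot/vmlinuz
        label=linux
        append="mem=256M"
    # then run lilo to install the updated configuration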


    Palm Desktop

    Wed, 16 Aug 2000 18:37:53 -0700
    From: Dennis Andrews <oligarch_one@yahoo.com>
    Is there, or is anybody working on, a port of Palm Desktop to Linux? I will be switching to Linux on my new system, then converting my old one to Linux (both Red Hat), but I can't get by without my Palm. What to do?

    There are lots of utilities to link to Palm Pilots, but if it's Palm's Desktop you definitely want to use, check out jpilot, and you should feel right at home. -- Heather


    sendmail/fetchmail Caldera 2.4 problem

    Wed, 02 Aug 2000 08:25:17 -0400
    From: Robert Findlay <fcsoft@attcanada.ca>
    My Caldera 2.4 installation is going to be used to test (all bash scripts) a web application which transmits a confirmation email. This Caldera 2.4 box is a citizen on a Microsoft Exchange email network. For the sake of argument let's say the relevant info is:
    hostname: caldera24.mydomain.com
    Microsoft mail server: 10.0.0.2
    DNS server: 10.0.0.18
    At present I can send a test email to bfindlay@mydomain.com using the text based mail program. When I view the /var/log/mail file I see that sendmail has correctly sent this mail on to the Microsoft Exchange Server at 10.0.0.2. I then want to use fetchmail to bring the mail back to my test box (I want to ultimately script the whole test so I need to stick with text based programs).
    However fetchmail and the log both report the following error:
    SMTP error: 451 bfindlay@mydomain.com Sender domain must resolve
    Any ideas are welcome. Thanks.

    This is an antispam feature. If a machine's number does not resolve in what is called "reverse DNS" and thus map to a name, it may not be a real host on the internet at all, so Exchange is ignoring the mail. Lots of companies have hosts that they don't want the whole world to know about, though they might want their inside servers to know them this way - for this, use an inside DNS server that contains more information for your zone than the one outsiders are allowed to see. It's often called "split DNS" because the early implementations of it involved hacking on the DNS software a bit.

    Or, much easier, you could send your test mail from a host which really reverse resolves correctly. -- Heather


    network and broadcast addresses

    Thu, 24 Aug 2000 15:03:56 -0400
    From: tarun pahuja <tpahuja@guesswho.com>
    If someone gives me an ipaddress and a subnet mask, what would be the easiest way to calculate the network and the broadast address for that subnet.
    thanks

    First apply the mask. The idea is that the mask is a binary number that results from having 1's all the way down to a certain point. If it's a normal class A, B, or C, it's easy - everywhere that the mask says 255, use the number from the IP address. Where it says 0, no bits are allowed to leak through, so use 0.

    The result is your network value. Your gateway is often the next address (1 greater than the network number). However you should check - some places swap broadcast and gateway (the broadcast is usually the highest legal address in the range). -- Heather
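    A worked example with invented numbers: for the address 192.168.5.77 with netmask 255.255.255.0, AND the mask with the address to get the network, and OR the inverted mask to get the broadcast.

    address    192.168.  5. 77  =  11000000.10101000.00000101.01001101
    netmask    255.255.255.  0  =  11111111.11111111.11111111.00000000

    network    192.168.  5.  0     (address AND netmask)
    broadcast  192.168.  5.255     (address OR inverted netmask)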


    overclocking

    Fri, 11 Aug 2000 23:43:22 +1000
    From: Greg Hand <greghand@tpg.com.au>
    Hi! These sites will give you more than you need.
    www.arstechnica.com bxboards.com www.overclockers.com www.overclockers.com.au www.tomshardware.com
    Regards
    greg

    This concludes our recent Danish translation thread. Thanks everyone! -- Heather


    ftp to restart failed transfers

    Mon, 31 Jul 2000 22:26:52 -0400 (EDT)
    From: Steven W. Orr <steveo@world.std.com>
    Get hold of lftp. (I happen to be running lftp-2.2.4-2mdk) It's a great improvement over regular ftp. But note that the regular ftp client would also solve your problem. The reget command will fetch a file from where a current version was left off. As far as what the RFC provides, the only thing that I am aware of that the regular ftp client does not provide is the ability to transfer a file from one remote machine to another remote machine.
    But once you use lftp you'll never go back. It uses the readline library, it does filename completion, it will background transfers, plus a lot more.

    I use ncftp. It automatically tries to recover the partial download if you issue the "get" command again. Start ncftp and type "help get" to see the options. -- Mike
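    For the plain ftp client Steven mentions, the resume looks roughly like this (file name invented for the example):

    ftp> get bigfile.tar.gz      # connection drops partway through
    ftp> reget bigfile.tar.gz    # reconnect later and continue from where the local copy ends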


    Common Modem Problem

    Tue, 8 Aug 2000 15:22:17 -0500
    From: "Jonathan Hutchins"

    The point here, is a solution for problems that look like it may be a winmodem, but it's not. -- Heather

    One common problem with Modems when migrating to Linux is IRQ conflicts. Many ISA Network Cards default to IRQ3, commonly used by the modem. Under DOS/Windows, this shows up as a non-working Network card, or may be overcome by a plug-and-play configuration utility for the network card.
    Under Linux, the Network card comes up first, and it's the modem that won't work. This can be very frustrating for a new user, since "everything works fine under Windows", and nothing indicates what the problem is with the modem (usually Linux utilities will just report that it's "busy").
    Make sure that the Modem and NIC are hard-configured or at least have their ROM permanently set to different IRQ's, and the light will begin to dawn.
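    From the Linux side, the collision is usually visible with a couple of commands (a sketch; the device name and IRQ numbers are only examples):

    cat /proc/interrupts          # is IRQ 3 already claimed by the network driver?
    setserial -g /dev/ttyS1       # which IRQ does the kernel think the modem port uses?
    setserial /dev/ttyS1 irq 5    # point the port at the IRQ the modem is actually set to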


    True Modems

    Sun, 06 Aug 2000 22:05:00 GMT
    From: carl smith <kershawsmith@hotmail.com>
    Just read a reply about a true modem. I've been searching for one ever since a friend mentioned it to me. He has an ISA True modem. And wouldn't you know it mine is not. It's a PCI and so far no luck finding a PCI True modem. Any ideas on where I might score one?
    Hoping for the right answer,
    KershawSmith

    There's a HOWTO on Winmodems!!! http://www.ssc.com/mirrors/LDP/HOWTO/Winmodems-and-Linux-HOWTO.html --Mike

    All the thing says is, "buggy proprietary drivers exist for _two_ modems. Any others, you're SOL." -- Ben

    That's less than exciting news; there's better and _far_ more informative info at http://www.linmodems.org and http://www.o2.net/~gromitkc/winmodem.html - and they provide links to quite a bit of very creative software that lets you get a number of other uses out of winmodems (e.g., DTMF enc/decoder). --Ben

    Okay, so maybe it should have been an Answer Gang thread, but it is rather short. The gromitkc site has an excellent set of guides about buying real modems in chain stores. It appears there may be support for 3 of these incomplete modems now, but it remains to be seen if any of them will port their efforts to the 2.4 kernel.

    The state of the art in forcing Lucent's modems to work is

    • fetch a raw-patched version of their module which doesn't steal "register_serial" and "unregister_serial" (you can use theirs straight, if you don't use other serial gadgets at all)
    • use the 2.2.14 version of the ppp support module even if you're in a later kernel. That will require also forcing the underlying slhc.o to load to make a complete ppp stack.

    In short, yuck. -- Heather


    re: HELP: Crontab not running nested executable

    Fri, 11 Aug 2000 09:44:59 -0500
    From: John McKown <JMckown@healthaxis.com>
    One question. In "file1", do you specify the entire pathname to "file2" in order to run it? The reason that I ask is the quite often "cron" does not have the PATH that you expect. This generally results in a "file not found" type error.
    Hope this helps some,
    John McKown


    And on the same topic, but a different tip...

    Crontab not running nested executable

    Wed, 2 Aug 2000 12:30:52 -0400
    From: Pierre Abbat <phma@oltronics.net>
    The most likely reason is that the path is different or something else in the environment. Stick an env command in File1. The output should be sent to you by email from the cron job. Compare it to the env when you run File1 yourself. I usually write full paths in cron scripts for this reason.
    phma
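    Combining both suggestions, a crontab sketch (paths and schedule invented for the example) would look like:

    # give cron an explicit PATH instead of relying on your login environment
    PATH=/usr/local/bin:/usr/bin:/bin
    MAILTO=you
    # min hour dom mon dow   command (full path, output captured for debugging)
    0 3 * * *  /home/you/bin/file1 >> /home/you/log/file1.log 2>&1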


    Port numbers

    Wed, 2 Aug 2000 22:34:46 -0400 (EDT)
    From: Kurt <khockenb@linux.cc.stevens-tech.edu>

    The proper place to find port numbers is the Internet Assigned Number Authority, at http://www.iana.org/

    The page you are looking for is http://www.isi.edu/in-notes/iana/assignments/port-numbers

    Chris Gianakopoulos <pilolla@gateway.net> adds:

    There exists a good list of well known port numbers for TCP and UDP. It is called (of course!) The Assigned Numbers RFC. Here's the easiest way to find it. Go to a site such as www.excite.com, and search for: RFC1340

    You will get lots of hits that reference RFC1340.html. This RFC (an acronym for Request For Comments -- I know, I know, you probably already knew that!) has your information and a ton of other assigned numbers such as protocol numbers, magic numbers, ........


    Rawrite script

    Thu, 3 Aug 2000 13:52:38 -0400
    From: <APeda@INTERPUBLIC.COM>

    I've been coming across those boot (raw) images quite a bit lately, and as I move toward an all GNU/Linux solution, I find that saving images of certain DOS formatted diskettes is quite useful. So, in part as an exercise in using getopt (1) , I decided to write a script wrapper around dd.

    Here it is, for what it's worth: rawrite.sh
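    The dd core of such a wrapper is small; a sketch with example names (double-check the target device before writing to it):

    # write a raw boot image onto the first floppy drive
    dd if=boot.img of=/dev/fd0 bs=18k

    # save an image of a DOS-formatted floppy to a file
    dd if=/dev/fd0 of=dosdisk.img bs=18k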


    255 or 256 IPs?

    Thu, 3 Aug 2000 12:05:20 -0700 (PDT)
    From: James Blackwell <jblack@insyncla.com>

    While you are of course correct regarding the fact that an octet can not exceed 255 for obvious reasons, what I think he was referring to is that some texts (particularly newer ones) refer to 256 possible values.

    While you are correct in stating 255 is the maximum, I think you forgot that the minimum isn't 1, it's 0. This leads to 256 possible numbers per byte.


    Reading Word files

    Fri, 4 Aug 2000 17:18:10 -0400 (EDT)
    From: Matthew Willis <matt@optimus.cee.cornell.edu>

    Tip: How to view microsoft word files

    You can use several programs to translate Microsoft Word "doc" files to some other format. There is word2x (which works for Word 6) or mswordview (which works for MS Word Version 8, i.e. Office97). Or, you can download the free version of WordPerfect, which can read many Word files. Another option is to download AbiWord, which can read Microsoft Word files. I have configured pine to automatically call abiword on MS Word files by editing /home/matt/.mailcap and having this line in it:

    application/msword;abiword %s
    


    Windows Install over Linux

    Fri, 4 Aug 2000 16:53:39 -0700 (PDT)
    From: adh math <adh_math@yahoo.com>

    Dear Mr. Train,

    This advice may be quite late (your original message is nearly a month old), but I hope you haven't had any problems...

    Typically, Windows overwrites the master boot record when it is installed, and I've had worse things happen (like the Windows installer corrupting the partition table). The standard advice is to install Windows before installing other operating systems. It sounds like this isn't an option for you, but please be aware that by installing Windows on top of Linux you're likely to have frightening (or infuriating, if you prefer) problems which may or may not require re-installing Linux and restoring all your user data.

    Best of luck, in any case!


    Linux-friendly ISPs

    Fri, 4 Aug 2000 17:02:27 -0700 (PDT)
    From: adh math <adh_math@yahoo.com>

    You posted to Linux Gazette about looking for ISPs that allow Linux connections. This isn't much (in fact, probably little more than moral support), but in the Pacific northwest, FreeI.net is Linux friendly, as is nocharge.com. I strongly prefer the former, because 1. They run FreeBSD (instead of Windows NT) and 2. Their modems are better configured (e.g., they answer without ringing four times, and are not always busy).

    I'm sure there are counterparts in other parts of the country...


    Passwords and SSH

    Fri, 4 Aug 2000 17:36:57 -0700 (PDT)
    From: adh math <adh_math@yahoo.com>

    Dear Mr. Benfell,

    Can't help you with POP over SSH, but can perhaps explain why you keep getting prompted for passwords (and why you should be *happy* about it:).

    If you're not prompted to enter a password to authenticate a connection, it's because your password is stored on a machine somewhere, often as plain text. In other words, storing your password is like writing your PIN on your ATM card. If you care about privacy enough to encrypt network transmissions with SSH (and you should, with good reason), you probably also care enough not to leave your password written on a scrap of paper next to your computer, or sitting unencrypted on a hard drive (possibly on a publically-accessible server, or on your laptop, where a thief could get access to it by booting with a rescue disk, thereby granting themselves root).

    Hope that makes the password annoyance more tolerable, and sorry I can't help you with POP/SSH.
    David Benfell <benfell@greybeard95a.com> replies: This problem was solved a long time ago. And I'm well aware of the security issues.

    What you can do is run ssh-keygen on each machine that you want to be able to communicate in this way. This produces two files: identity and identity.pub. identity.pub from each machine must be copied into the authorized_keys file of each of the others.

    The authorized_keys file can hold multiple keys. Each key takes one line. So you copy the identity.pub file from each machine with commands something like:

    ssh-keygen
     scp .ssh/identity.pub user@remote-machine-1:.ssh/1.identity.pub
     cat .ssh/1.identity.pub >> .ssh/authorized_keys
    

    Remember that for this to work, each machine must have a copy of the other machines' keys. So, you then log in to the remote machine and do something similar:

    ssh user@remote-machine-1
    (enter the password on the remote machine)
    ssh-keygen
    scp .ssh/identity.pub user@local-machine:.ssh/1.identity.pub
    cat .ssh/1.identity.pub >> .ssh/authorized_keys
    

    Why bother?

    So you won't have to type the password every five minutes for every POP account you're accessing this way. I have four e-mail accounts and collect several hundred e-mails per day. So I prefer to leave fetchmail more or less continuously running.

    I used to do this with fetchmail's daemon mode. But for pop via ssh, this won't work.

    So I need to change my .fetchmailrc so it looks something like this (for only one e-mail account):

     defaults
     protocol POP3
     is localuser here
     fetchall
     forcecr
    poll remote.server.org port 11110 via localhost user username pass ********
     preconnect "ssh -C -f username@remote.server.org -L 11110:remote.server.org:110 sleep 5"
    

    It needs the password in the .fetchmailrc file but if you have a reasonably secure system, this isn't a tremendous worry. .fetchmailrc cannot be world-readable (fetchmail will reject it if it is). I'm not that worried about people gaining access to my system as long as they can't sniff the password in plain text off the internet (which they can do with normal POP usage, and in my case, it was getting transmitted every five minutes, so the bad guy wouldn't even have to have been terribly patient).

    The password still has to be fed to the pop daemon, but this way, it isn't crossing the Internet in clear text for feeding.

    Next, create a script like:

    #!/bin/sh
    ssh-add
    while true; do fetchmail; sleep 5m; done
    

    I call mine "getmail".

    Then I can do:

    ssh-agent getmail

    It asks me for the passphrase once, then uses the keys to authenticate my access to the remote systems.


    Missing/duplicated keystrokes

    Sat, 5 Aug 2000 13:46:55 +0200
    From: Tom <tom.mattmann@gmx.net>

    after i work with my computer for about 20 minutes or so, i start missing keystrokes and sometimes keystrokes are duplicated

    Seems that your Keyboard-Controller is overclocked. If you have an AWARD Bios, enter it pressing DEL when the computer is starting. Select:

    -> Chipset Features Setup
     -> KBD Clock Src Speed
      -> 8 Mhz
    


    Regarding Dual-Boot Windows/Linux

    Sun, 6 Aug 2000 21:45:56 -0400
    From: Robert Day <zarin@support.drlogick.com>

    Well, your tips are helpful, I have noticed, but I do know one thing about Red Hat (in particular, it's my chosen distro):

    Use FDISK (Windows version is fine) to create a partition LESS than the full drive (Or two hard drives) - leave whatever you need for Linux EMPTY... (Partition Magic to shrink the partition is fine as well) and install Win9x/NT/2k

    Then, boot up with yer RedHat CD/Floppy, and install into the empty area... The LILO config will see the Windows install, and add it to LILO for you.. (Install LILO into the MBR - overwriting the DOS MBR) and voila, Dual Boot - It's simply simple..


    CB Radio Connection

    Tue, 8 Aug 2000 15:07:50 -0500
    From: Jonathan Hutchins <hutchins@opus1.com>

    You might be able to get a good idea of how to do this by studying the "Amateur Radio" guides for Linux.

    In any case, I don't think it can be done with one CB radio, but it could be done with two (at each end).

    Please note that doing this may be ILLEGAL. It also violates the FCC rule that requires you to include your license number in each transmission.

    One problem with this is that while telephony and modems are "full duplex", which means that both ends can both speak and listen at the same time, CB's are "half duplex", which means when one is "talking", the other must be "listening".

    First, you have to separate the "send" and "receive" or "mic" and "earpiece" channels. In a telephone, this is accomplished by having both mic and speaker "live", separated by a biasing transformer so that "your" mic is "louder" on the output to the phone line, and "their" mic is louder on "your" earphone. For CB purposes this would be easiest to accomplish with a modem that was set up for an acoustic adapter - one that you place the telephone receiver in so that it doesn't actually plug in to the phone but produces the equivalent tones through a speaker held next to the headset mic and vice versa.

    The send output (or mic output for the coupler) goes to the mic in on one CB, set to say Channel 10, with the Transmit switch strapped "On". At the other end, a CB would be set to listen to Ch. 10, with the output of the speaker or headphone jack going to the earpiece or receive circuit. Repeat the process in the other direction on channel 20.

    You now have monopolized two CB channels for miles around with earsplitting noise which will bleed across adjacent channels (hence the large interval between send and receive), but you may have reached something close enough to a telephone connection that doing ATH1 on one end and ATA on the other may get you a connection.

    If you had a couple of modems capable of doing a synchronous connection, it wouldn't be too hard to wire something up for single-channel use, but it would involve doing some interesting coupling between the sync signal or DSR/DTR pair and the "Send" switch.

    On the other hand, there are lots of possible problems here. How do you get that "send" output isolated if you don't have an acoustic coupler? Can you be sure that the modems will sync? What do you say to the guy from the FCC who says he's traced the signal that's jamming everybody's CB's to your rooftop? And it's very likely you can't do more than 2,400 baud because of the limited quality of the connection.

    You will find that unless you're a real handy electronics hacker who knows the guts of a telephone pretty well, this will be difficult enough that you'll want to buy the pieces ready made, which means buying the Amateur Radio gear if you can; which still means adapting it, and which means spending real money. Unless your time is pretty worthless, you and your boss would probably be better off purchasing a wireless networking solution from an existing vendor.


    Tree script

    Wed, 9 Aug 2000 12:49:30 +0200
    From: Matthias Arndt <matthiasarndt@gmx.net>

    This is a new version of the bash-based tree utility which was published in the Linux Gazette about 2 years ago. I've added a feature to display the files inside the directories. This tool displays the whole directory tree below the current directory (PWD). You may supply an alternate starting directory on the command line. This is a bash script, so it is not very fast, but it does its job.

    tree

    To use the script, just cut and paste the code into your favourite editor. Save it as tree and make the script executable using chmod u+x. To make it available to all users on your system, copy it to /usr/local/bin as root and chmod +x it there. The output goes straight to stdout, which means you can use I/O redirection to capture the resulting tree to a file.
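    If you just want the general idea before grabbing the full script, here is a minimal sketch of a recursive tree printer in bash (a simplified illustration, not the linked script itself):

    #!/bin/bash
    # minimal recursive tree printer - illustrative sketch only
    walk() {
        local dir="$1" indent="$2"
        local entry
        for entry in "$dir"/*; do
            [ -e "$entry" ] || continue          # skip if the glob matched nothing
            echo "${indent}+-- ${entry##*/}"     # print just the name, not the full path
            [ -d "$entry" ] && walk "$entry" "$indent|   "
        done
    }
    walk "${1:-.}" ""    # start from the directory on the command line, or the PWD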

    As with all of my software releases for Linux, the GNU General Public Licence (GPL) applies to this utility.

    [Matthias also wrote an article in this issue about window managers. -Mike]


    This page written and maintained by the Editors of the Linux Gazette. Copyright © 2000, gazette@ssc.com
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Choosing your Window Manager: a Matter of Taste

    By Matthias Arndt



    A short notice about this article

    This article should be of interest to all Linux users. Because almost every kind of computer user seems to expect a graphical user interface, it is essential to provide a pleasant GUI to the user.

    The problem (or the pleasing fact, if you want to call it that) with Linux is that you're not limited to a GUI provided by your OS manufacturer. In Linux (indeed, in any sort of Unix), you can choose how your desktop will look.

    In this article, I'll try to outline the several ways of providing a pleasant GUI using your favorite window manager.
    I'll provide a list of several window managers, outline their advantages and disadvantages, and tell you about the experiences I had while actually using the window managers mentioned.

    However, I won't cover configuring the various window managers. Check the manuals, or the articles in the Linux Gazette covering the configuration of a specific window manager. (As far as I remember, there were very decent articles about configuring fvwm in the first 8 or 9 issues of the Linux Gazette.)



    A short list of available window managers for Linux

    The following list gives you a short glimpse of this topic: the window managers available for Linux.
    This list is not complete, however, as the Linux world is, as always, on the move, and new products are likely to appear on the scene.

    • TWM - the default window manager provided by the XFree team; for purists only
    • FVWM - this has been the most used window manager in the Linux world
    • FVWM 2 - a modernized version of the good old FVWM, with themes and much colour (I haven't used this window manager yet)
    • FVWM 95 - a rewrite of the FVWM window manager to provide a Windows 95 feel
    • AfterStep - a window manager trying to emulate the NextStep feel; includes a Wharf, a sort of panel that swallows applications
    • Enlightenment - I haven't used that one; I have heard it should be very colorful. More a toy than a window manager? ;-)
    • KWM - the window manager provided with the KDE desktop environment; very easy to use and to configure without having to edit the config files with a text editor
    • IceWM - a window manager completely written from scratch; supports a Win95-like taskbar with a Linux icon, and themes; very fast, especially to load - KDE needs almost twice as long to launch. (My personal tip!)

    These are the most popular window managers in the Linux world. There are a few more, like olvwm, olwm, wm2 and Window Maker, but I haven't used them and I intend to focus on the window managers I know in this article.
    If you don't like that, what about writing an article about your favorite window manager? I'd love to read about other interesting window managers.



    My experiences with window managers in the last 2 years

    I've been using Linux for almost 2 years now - with breaks in between. A friend told me about Linux and I found the idea great: an OS you could copy and give away for nothing, an OS you could rewrite, and whose code you could look at. Wow, that seemed to be the future of home computing to me. So in December 1997, I bought a good Linux book,
    The Linux A-Z by Phil Cornes, and a CD containing the PTS Linux distribution. (I'm from Germany, you should perhaps know.)

    The first window manager I was confronted with was the clumsy TWM. I was happy to run X and to learn more about Linux, so I didn't care. TWM gave me a real Unix, high-end user feeling I had never encountered before.

    In the spring of 1999, I became interested in dial-ins, to run a better BBS than the old DOS BBS systems, perhaps running a PPP link and using Internet (i.e. TCP/IP) technologies for that.
    Linux seemed best suited for this, so I reinstalled it and learned more about it. At that time, I focused on system configuration and programming first (and I still do that).

    As I gathered more and more information about the Linux system, I managed to configure TWM to fit my needs. However, I started looking for a better suited window manager once I learned that the user can select which window manager is run. At first, I tried FVWM 95, because at that time I was still working with Windows 95.

    Linux was giving me more and more. In October of 1999, I got a copy of Debian and installed it. That was a more modern distribution, and it was the first with which I managed to get Internet access working. From then on, I have hardly used Windows 95/98 to connect to the Net. Linux is just much more stable and better suited for that.

    Debian came along with both KDE and GNOME, but both seemed to be broken, and at that time I could not get either of them to work.
    Under Debian, then, I started my adventure of exploring the advantages of the different standard window managers under Linux. I installed FVWM and AfterStep and tried both.

    I discovered that FVWM was superior to both FVWM 95 and TWM. However, I liked the look and feel of AfterStep, and the Wharf got my attention. I switched to AfterStep, and I still have that window manager on the machine I used at that time.

    Because of problems with disk space (I couldn't afford a second hard drive), I didn't install SuSE Linux; it consumed too much disk space.
    In the spring of 2000, this changed when I got my new Athlon 600 machine with a GeForce 256 chipset video card. At first I had problems getting a working X server for it, but I installed the new SuSE Linux 6.4 (and I'm still using it while writing this article). Disk space was not a problem, so I currently have almost all the window managers supplied with SuSE installed, even KDE and a small working copy of GNOME.

    At first I tried KDE, because all the Linux magazines (here in Germany) focused on it.
    I was amazed and puzzled: such a powerful desktop, far superior to all kinds of products from MS. Then I discovered the possibility of changing the window manager with the KDM login utility. OK, I was (and still am) able to use all of the window managers mentioned above. I played around with them a bit, and then I found IceWM.

    At that point, the world of Linux changed for me: a classical window manager for Linux, and it loads so d*** fast. I was amazed. I quickly decided not to use KDE anymore. KDE was far too close to Windows 98, while icewm provides a more Unix-like feeling.

    I can configure icewm in the good old Unix way, it is fast, and it provides almost all of the features I used under KDE. I now run icewm exclusively. However, I still have KDE installed, and I use kfm and a few of the KDE applications, especially the KMail e-mail client.

    I can tell you, I'm now pleased and satisfied with Linux all the way.
    The only reasons to load Windows 98 are to play games (I'm quite a gaming fan) and to run an Atari ST emulator - StonX is not the best ST emulator I know. All I can tell you is this: on 1 out of 20 occasions when I boot my machine, I boot Windows; the other 19, I boot into Linux with icewm.



    Tips concerning window managers

    I tend to sort the window managers into 3 classes:

    1. simple window managers like TWM
    2. feature-rich window managers that require user customization via one or more configuration files like FVWM, icewm and AfterStep
    3. feature-rich window managers that are configured using a GUI like the KDE window manager

    The first thing you must choose: do you only need a window manager that lets you move your windows around (and perhaps a menu to launch your favourite applications), or do you want a complete desktop environment with all components sharing the same user interface?

    For new Linux users, I'd suggest using KDE or GNOME, because they fit into the 3rd category mentioned above. KDE in particular can be configured like Windows and is therefore better suited for Windows fans or new Linux users.

    Even a Linux guru may use KDE, but most such people prefer to have control over all the config files themselves.

    If you like a colorful desktop, or you want to install desktop themes and sound sets, FVWM2 or AfterStep could fit your needs. Do you want a Windows 95 feeling but still want to be reminded that you are using Linux? Then I suggest either FVWM95 or icewm, because both of them provide a START menu in the lower left corner of the screen and a Windows 95-like taskbar. Icewm is even better than FVWM95 because it features several workspaces. Try both out and make your choice afterwards.
    Configuration of icewm is somewhat easier because it is split up into several files rather than a single one.

    If you like the classical Linux feeling, you should use FVWM1. It is a powerful window manager, and you can easily find help and tips for it in almost any Linux user group or on the net.

    If you're tight on memory, especially when running X with only 16 MB available, you should forget about KDE. In my experience, it is very memory-hungry. With only 16 MB of RAM, you should install one of the other standard window managers.
    A special tip for those of you with small amounts of RAM: icewm is very fast and does not use much memory. Give it a try.

    I haven't used GNOME yet but I think most of the things I said about KDE apply to GNOME as well.



    Advantages and Disadvantages of the window managers mentioned

    The following table gives you a short overview of the advantages and disadvantages of the window managers mentioned.

    TWM
    Advantages:
    • comes shipped with every Linux that features X
    Disadvantages:
    • rather clumsy interface
    • problems with large menus (at least on my PTS Linux system)
    • no workspaces
    Conclusion: for hardcore Unix users only

    FVWM
    Advantages:
    • a de-facto standard under Linux
    • support for it almost everywhere
    • great material about it in the Linux Gazette, including configuration and tips
    • pleasant look and feel
    Disadvantages:
    • all-in-one configuration file
    • no GUI-based configuration utility available
    Conclusion: if you cannot decide, choose this one; not recommended for complete Linux newcomers

    FVWM2
    Advantages:
    • modernized FVWM
    • themes
    Disadvantages:
    • no GUI-based configuration utility
    Conclusion: the modern variant of the FVWM above - for those who prefer a colorful desktop

    FVWM95
    Advantages:
    • FVWM based
    • taskbar
    • START menu
    Disadvantages:
    • no GUI-based configuration utility
    • as far as I can remember, only one workspace
    Conclusion: the variant of the FVWM above - for those who want a Windows 95-like appearance for their X sessions

    AfterStep
    Advantages:
    • the Wharf
    • NextStep look and feel
    Disadvantages:
    • Wharf hard to configure
    • as far as I know, no GUI-based configurator
    Conclusion: if you ever used a NeXT or a NextStep system, this is the right window manager for you

    KDE
    Advantages:
    • modern look and feel
    • themes
    • START menu
    • drag and drop on the taskbar
    • comes along with a complete set of applications
    • easy to use and to configure
    • up to 8 workspaces
    • GUI-based configuration
    • very configurable
    Disadvantages:
    • uses much memory
    • too close to Windows
    Conclusion: a complete desktop solution for Unix - recommended for novice users

    IceWM
    Advantages:
    • fast loading
    • small memory usage
    • GUI-based configurators available
    • taskbar
    • at least 4 workspaces
    • START menu
    • themes
    Disadvantages:
    • configured using files (though at least the menus are easy to configure)
    Conclusion: a powerful window manager - my personal tip; even novices should at least take a look at it

    Note: most distributions come with a utility that creates the menu entries for the various window managers.

    • SuSE Linux has this feature
    • Debian has it - it seems to work pretty well
    • I guess Red Hat has something like that, too



    The Dotfile Generator

    This is a program to create the so-called dotfiles - the configuration files.

    From what I've heard (via the Linux Gazette), it can also create the configuration files for some window managers.

    I've not used it yet and I do not plan to. Search the Linux Gazette website for more information.



    Conclusion

    I hope this article has helped you find a window manager to your taste.
    However, I cannot guarantee that all of the information provided is correct.



    Book tip

    The Linux A-Z
    written by Phil Cornes
    ISBN: 0-13-234709-1

    I do not guarantee that the information above is correct.
    This book covers almost every aspect of Linux, including system programming and configuration. However, it is written from a 1995 point of view, and some of the information, including the URLs, is outdated.

    This is a book about the use of Linux in general, not about window managers.
    However, a fairly small chapter deals with the configuration of FVWM.

    A note for my German readers: this book is in English.


    created using Bluefish
    [Matthias also submitted a 2-Cent Tip in this issue, a tree script -Ed.]


    Copyright © 2000, Matthias Arndt
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    HelpDex

    By Shane Collinge


    ouija.jpg
    reliable.jpg
    creepernet.jpg
    microwave.jpg
    abacus.jpg
    debugme.jpg

    Courtesy of Linux Today, where you can read all the latest HelpDex cartoons.


    Copyright © 2000, Shane Collinge
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Next Linux 2.4 Kernel: some tips by Alan Cox

    By Fernando Ribeiro Corrêa
    Originally published at OLinux


    Olinux: What is your main motivation for working with Linux?

    Alan Cox: I enjoy it. It also happens to make lots of other people happy which is even better.

    Olinux: Were there any technology events that had a strong impact on your life, in terms of changing your mind, tearing down old values, or raising new ideas, discoveries and announcements?

    Alan Cox: Having seen Linux from its early days as a fun toy through to the latest figures on its usage the one thing I have learned is that predicting the future in computing is not very practical.

    Olinux: What is your personal relationship with Linus Torvalds? Did he ever invite you to join Transmeta? What do you think about the Transmeta Crusoe chip?

    Alan Cox: Actually I don't know Linus that well. I guess it's more of a working relationship than anything else. Transmeta people have tried to get me to work for them but I'm not keen to work on what is effectively proprietary software in their CPU core. Not letting people program the CPU to emulate anything they want really is locking out a lot of potential clever usage like building a virtualisable 80x86.

    Olinux: How long have you been using Linux? When did you start with the kernel?

    Alan Cox: I ordered my first 386 PC when 386BSD was announced and 0.12/3 had just appeared in the Linux world. 386BSD needed an FPU which I didn't have, so I set off installing Linux (by then 0.95) from Ted's boot/root floppies.

    Olinux: What is your role inside kernel.org?

    Alan Cox: It is just somewhere I load kernels. H Peter Anvin and other folk manage the kernel.org setup.

    Olinux: Can you describe how kernel.org is organized? How is the workflow managed, and what are the meeting and decision-making processes?

    Alan Cox: People send me stuff, I put it together and test it a lot; when I think a new 2.2 is ready I send it to Linus, who then reviews the code to see if there are problems or changes he doesn't like. That sometimes catches bugs everyone else misses.

    Olinux: Why do you work on the Linux kernel instead of a BSD kernel?

    Alan Cox: Initially because it ran on my machine; nowadays the licensing also matters to me.

    Olinux: Is there any news about the kernel 2.4 release?

    Alan Cox: Right now I'm working on 2.2 stuff and Red Hat Pinstripe beta work. I am not currently following 2.4 in detail. It's getting there - slowly but surely. It would be nice if it was finished by now but alas it isn't.

    Olinux: What improvements are coming in the kernel in the future? Can you give some idea of the video, sound and other innovations in the kernel?

    Alan Cox: I am actually not sure what we will see after 2.4. Certainly it will be very interesting with people like SGI working on Linux for huge NUMA machines.

    Video and sound will both need work. Sound cards are getting more and more simplistic on the whole, while at the same time people want to expand up to very expensive high end music studio grade hardware while using linux. The big video challenge will be digital television.

    Olinux: What was your role on the version 2.0 release?

    Alan Cox: I took over Linux 2.0 maintenance later in the 2.0 cycle; before that I was working on the networking code and a lot of general bug fixing.

    Olinux: Why can't Linux take the UNIX server's place?

    Alan Cox: I see no reason Linux won't become the replacement for old unix OS's on most hardware, certainly for non specialist applications and environments.

    Olinux: What is your relationship with Red Hat? How important is Red Hat to the Linux world?

    Alan Cox: Red Hat pay me to do whatever I like to improve the kernel and help with their support needs. I get a lot of good input through Red Hat because their support staff are very aware of the things the OS needs to provide that mass market users really need.

    I think Red Hat is important to the Linux world as a major provider of the kind of corporate stability, training and support that Linux needs to make it in many markets. SuSE, Conectiva and others are just as important. In different ways Debian is probably even more important to the Linux World

    Olinux: What are the future plans for Building Number Three now that you have joined Red Hat?

    Alan Cox: Building Number Three is in the middle of ceasing to exist. Now that I work directly for Red Hat it really doesn't do anything.

    Olinux: In your opinion, what are the best advantages of Open Source programming?

    Alan Cox: The biggest advantage of all is it prevents you the end user from being screwed by a vendor. If you need a bug fix or a change then you can go to whomever you like. This also means you can provide support and services locally rather than having to pay a single US corporation for all your requirements.

    Local support also means support in native languages and the ability for people to customise Linux to their cultural needs. That is incredibly important.

    Olinux: What is the future of Linux? What improvements are needed for it to be more widely used? How far have projects such as GNOME or KDE gone in terms of building a friendly user interface?

    Alan Cox: Gnome and KDE are the beginnings of the right things. They are at best level with Windows, and IMHO Windows isn't good enough either. To create a really user friendly system it has to be usable by anyone, not just computer literate people. You shouldn't need to know about computing to use computers. That's why I think things like Linux in set top boxes and simple application servers is actually a very important market area. The PC desktop is too complex, too pricy and too time consuming to learn if all you want to do is send email, chat to friends and buy things on the internet.

    Olinux: There are a lot of companies and manufacturers, such as Lineo, Transmeta and IBM, that are betting on Linux's success in embedded devices. What is Linux's future in embedded systems?

    Alan Cox: I think we will see Linux as a huge success in the large embedded devices. I don't think it will ever be invading the really small embedded systems like car ignition and washing machines however. Linux is too big for such devices. Alan.


    Copyright © 2000, Fernando Ribeiro Corrêa
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Apache, the star of the Open Source World

    By Fernando Ribeiro Corrêa
    Originally published at OLinux


    Olinux: Tell us about your career: college, jobs, personal life (age, birth place)

    Brian Behelendorf: I went to the University of California, Berkeley, from 1991-1994. Didn't graduate; got distracted by a job at Wired Magazine in 1993 and later HotWired, where I was the Chief Engineer at their launch in 1994. Around the same time I co-founded Organic Online, a web site design shop. Left there in mid-1998 and then co-founded Collab.Net, where I now work.

    Olinux: Where are Apache's headquarters located?

    Brian Behelendorf: There really is no headquarters, we're distributed. We are officially incorporated in Delaware, and our fax line/phone # for our secretary is in Maryland, but there's only one guy there. We are truly distributed.

    Olinux: What are your responsibilities at Apache? Do you have any other jobs?

    Brian Behelendorf: My responsibility is to help speak for Apache to the outside world, to evangelize a bit and to help make sure things internally run smoothly. I'm on the board of directors.

    Olinux: How is Apache organized? Can you give us an idea of how Apache works?

    Brian Behelendorf: Politically, we are a membership-based organization, with an elected board of directors. The members are developers who are invited by existing members based on their contributions to the group.

    Olinux: How is the work coordinated and managed?

    Brian Behelendorf: The software development is done without much serious coordination; basically we all just share a CVS tree and check in changes and enhancements. We do split it up by project and module, and each small subgroup has their own way of deciding what new features to add (or remove). Again, very decentralized.

    Olinux: And the servers, directories...

    Brian Behelendorf: The server sits at my company's colocation facility in San Francisco. It chews up a small amount of bandwidth, and is actually rather fun to support.

    Olinux: What about funding?

    Brian Behelendorf: We don't require any money to operate; it's only recently that we've started getting donations, so we're still figuring out what we want to spend it on. =)

    Olinux: Are there any hired employees?

    Brian Behelendorf: No one is paid directly by the ASF to work on Apache. We may hire a system administrator at some point, and/or an admin for the non profit.

    Olinux: How many people are involved?

    Brian Behelendorf: About 150 people have CVS commit privileges, but there are probably a thousand or so who have actually contributed code at some point or another. The various mailing lists have somewhere ~40K subscribers on them.

    Olinux: What are the main problems?

    Brian Behelendorf: Main problems? Making the big-picture decisions, and sticking to them.

    Olinux: What is the hardware available for the project? Who donated it?

    Brian Behelendorf: The hardware is donated by me, it's just a rack-mount pentium server.

    Olinux: Does any private company support Apache?

    Brian Behelendorf: Lots of companies have donated money, but the most valuable donations are that of people's time; there are several companies, like IBM, Covalent, Apple, and others who have staff engineers who spend company time working on Apache, since those companies use Apache in commercial products.

    Olinux: How much is the Apache Foundation expected to raise in 2000?

    Brian Behelendorf: We have no specific target. Probably ~$50K.

    Olinux: Is everyone a volunteer?

    Brian Behelendorf: Yes.

    Olinux: Who are the main leaders of the project?

    Brian Behelendorf: We have no big heroes, or at least try not to. Each of our subprojects has lots of people who do a lot of hard work - Ryan Bloom on the core web server, for example, and Craig McClanahan on the Jakarta side of the house. But really we have no "main leaders", just the board of directors that sets policy decisions for the ASF as a whole. Roy Fielding is the Chairman of the Board, and I'm the President.

    Olinux: Is there a special ASF think tank that works on the project's macro architecture?

    Brian Behelendorf: Nope, it's all done over email, though we do meet in person from time to time.

    Olinux: Does the ASF have any key strategic alliances with companies such as O'Reilly, IBM or others?

    Brian Behelendorf: They're not official alliances, but these companies do dedicate resources in the form of engineers' time towards things.

    Olinux: How often, and where, does the group responsible for key decisions meet? Do those meetings take place in any specific place, or over the Internet?

    Brian Behelendorf: We have Apache conferences about twice a year now, once in the US, once abroad. The next one is coming in October in London.

    Olinux: What are the main projects involving Apache that are under way?

    Brian Behelendorf: See www.apache.org.

    Olinux: How is the development coordinated? How are deadlines and guidelines established? Are there special testing procedures before changes are added to the core code? Is there any special quality control or auditing of the code produced? What analysis and programming tools are used?

    Brian Behelendorf: Read my chapter in "Open Sources"; it'll answer a lot of these questions.

    Basically, each project (and each module under each project) have their own bylaws, but most run by consensus; and the more significant the change, the more discussion it generates. We don't do strict voting, though if one person feels that a particular solution is incorrect they can veto it. No special quality control or auditing or testing procedures, we let the public do that for us. =)

    Olinux: What operating system is used to run the ASF's servers? FreeBSD? Are there any situations where Linux is used?

    Brian Behelendorf: apache.org runs on FreeBSD. I'd say there are more Apache developers using Linux and developing on Linux than any other platform, though.

    Olinux: What are the main steps toward better software, as far as Apache development is concerned, that are still under way? Are there any expected turning points in terms of future technology or procedures? Say, some secret project that will replace Apache in the future?

    Brian Behelendorf: We do have an Apache HTTP Server 2.0 coming out soon - it's no secret, though, you can download an alpha from www.apache.org.

    Olinux: How did your group feel about being offered the ACM 99 software award? What did it represent? Did it change anything in terms of funding or international support for the ASF?

    Brian Behelendorf: It was very much appreciated - an award from the ACM means a tremendous amount. It made all the developers pretty happy, I think. It didn't change much in our public perception though.

    Olinux: Recently, the Apache website was hacked by well-meaning hackers; otherwise, it was said, they could have done a lot of damage. What happened: what security holes were exploited to break in? What were the main lessons learned? How has the ASF changed its security policy?

    Brian Behelendorf: The main lesson was that there was too much software installed on the system in an insecure way, partly because we'd given a few too many people root on the box without having a formal internal security policy. Now, only a few people have root.

    Olinux: Who is heading the organization of ApacheCon Europe 2000? Feel free to make any interesting comments about the event.

    Brian Behelendorf: Ken Coar (coar@apache.org) can be contacted about the conference. We're all very excited about it.

    Olinux: In your opinion, will Microsoft be broken apart? Will this remedy be enough to stop its monopoly?

    Brian Behelendorf: Clearly it sounds like they will, but I suspect that Microsoft will find a way to find another part of the software world to monopolize even after this. The .NET initiative sounds like it would be one way.

    Olinux: In your opinion, how much has the Linux/Open Source community grown, and how do you see its future?

    Brian Behelendorf: I think it will become ubiquitous; I believe it will provide serious competition to Windows, even on the average user's desktop.

    Olinux: What are the main Internet technologies that you consider extremely interesting, or a relevant advance for information technology?

    Brian Behelendorf: The whole peer-to-peer space is interesting, of course - what companies like PopularPower are doing seems pretty important. What else... not much new has happened in the last 5 years, really. All we ever needed was portable code (which we have now with Java and some other languages) and portable data (XML). Now, it's a matter of actually building interesting things with them.

    Olinux: Personally, what are your main plans for the future? Do you have any plans to start your own business or a new company?

    Brian Behelendorf: I founded Collab.Net about 18 months ago and it's been a lot of work, so I have no plans to start another =) Nope, I'll be sticking with it for several years at least, it's a lot of fun helping teach other companies what open source means and how to use it.

    Olinux: Could you send a short message to programmers in Brazil who work on Free Software/Open Source projects, and to OLinux users?

    Brian Behelendorf: I encourage every Linux advocate in Brazil to spend time learning about how Linux, Apache, and other open source tools work, and I urge them to consider helping write them too. We need more of these tools internationalized, we need fresh perspectives, and there is no better way to ensure that your technological future won't be dictated by some soulless single company in the US. =) But while Linux is free, it will only survive if more developers continue to help develop it... so be sure and help out where you can. Thanks, hope this works for you. Brian


    Copyright © 2000, Fernando Ribeiro Corrêa
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Arturo Espinosa and Red Escolar

    By Fernando Ribeiro Corrêa
    Originally published at OLinux


    ["Red Escolar" is Spanish for "school network". -Ed.]

    Olinux: Tell us about yourself: career, college, main jobs and positions, birthplace?

    Arturo Espinosa: Born on 14/11/76 in Mexico City. I started university studies in 1996 in the faculty of sciences, and changed to psychology last year (nothing fancy here).

    This describes my involvement with computers in a very general way:

    Arturo Espinosa Aldama is part of the generation that was exposed early to 8-bit computers and BASIC programming. He has been either an amateur or professional programmer ever since, and a GNU/Linux user since he started his computer science studies at UNAM, the National University of Mexico. Having worked on several institutional systems for the university and on the Red Escolar Linux project for the ministry of education, he now works for Helix Code, a leading Free Software development company.

    Olinux: Describe your relationship with Miguel de Icaza. Did you initially work for the GNOME project? Is there any relation between Red Escolar and GNOME?

    Arturo Espinosa: Miguel de Icaza and I have been friends since I got to the faculty. I saw the GNOME project be born, and have participated in it since. Red Escolar takes advantage of the GNOME project, using it as a desktop and as a platform for some user administration software we wrote.

    Olinux: How did you come up with Red Escolar? When did it start?

    Arturo Espinosa: Red Escolar is a big project the Mexican government has been working on for three years now. There are some big challenges in the realization of its goals, and Red Escolar Linux, which is the project I worked for, solves some of these problems. I spent some 6 months investigating the way GNU/Linux could help Red Escolar to be a more successful project, looking at what was available at the time and what would need to be developed.

    It all got started in October 1998, when my former boss, José Chacón, saw that Linux could fit into Red Escolar and was looking for someone who could take on the development challenge.

    Olinux: Was there anybody else helping you?

    Arturo Espinosa: I couldn't have made it without the alliances José built, or the efforts of Miguel Ibarra, who was hired to develop from scratch a dynamic DNS administration service and the end-user user-administration tool. Walter Aprile, now the telecommunications director of Red Escolar and also a Linux user, has supported us since the beginning and has been a positive influence inside Red Escolar too. Obviously, we are also indebted to all the big bosses who have been patient with us and who have had enough vision not to pull the plug on us.

    Olinux: What is the main objective of the project? How many volunteers are already involved? What were the main obstacles in the beginning?

    Arturo Espinosa: The objective of Red Escolar Linux is to reduce operating costs and help with the technical difficulties that Red Escolar may have. As the Red Escolar people install Windows computer labs in the schools, along with an Internet connection, we provide a lab server and a remote service infrastructure that allows them to have one e-mail account per user, managed by the person responsible for the lab (a teacher); to have all their Windows machines connected to the Internet through the server's modem; to publish documents immediately through a lab web service; and so forth (http://redesc.linux.org.mx/). We also offer a GNOME workstation, for maximum software cost reduction, but the teachers' dependency on certain Windows-only multimedia CDs has made this impossible to accept.

    We don't have any volunteers at this time.

    The biggest difficulties at the beginning were making the installation of the operating system completely automatic, and gathering all the configurations for the multiple services that get installed: we had to learn a lot of stuff.

    Olinux: How many schools have joined the project? And how many users will be involved? Once a school decides to join Red Escolar, what are the next steps?

    Arturo Espinosa: Take a look at that on Red Escolar's web page: Redescolar. As for Red Escolar Linux, we have 7 interested states in the Mexican republic, and one of them is using it extensively.

    Olinux: Did the Mexican government support the idea? What is the Mexican government's role? How is Red Escolar helping to transform the Mexican education system?

    Arturo Espinosa: The Mexican government has been giving economic support since the beginning, and Red Escolar has even figured in the president's own words. I hope this vanguard project opens the world to a lot of people: the teachers in the project are in general enthusiastic, as far as I know. Please take a look at Red Escolar's web page for more info on this.

    Olinux: What Open Source companies or organizations are helping the project, donating money or helping with people? How do they do that?

    Arturo Espinosa: seul/edu has us listed among their projects, their community has helped with ideas, and we are part of the emerging OFSET organization. None of them has provided the kind of help you mention, and we didn't expect it, due to their nature.

    In the case of OFSET, it is even the other way around: we are hosting them.

    Some Free Software companies have shown pride that we use their distribution, but never communicated with us or sent any kind of resources; that's stuff we never expected either: we got paid for our developments, and our bosses were responsible for providing us with whatever we needed.

    Some Mexican GNU/Linux Users Groups have shown their interest in participating in the project, and we may take up their offer of support in the future.

    Olinux: Why did you choose Linux and Free Software for Red Escolar? Is there a default distro?

    Arturo Espinosa: We have made our own distro, based on a mixture of Red Hat 6.2 and 6.0 (the installer). GNU/Linux was not chosen for Red Escolar by some general policy: it is an option that the technical people in the states of the republic decide whether to adopt.

    Olinux: Was there any kind of resistance to using Open Source software in Mexico?

    Arturo Espinosa: Yes, particularly from people from Intel who visited us, and from other people who saw their own initiatives kiss bye-bye, given the acceptance of our project.

    Olinux: What are the main software developments being worked on these days? Any special distance-learning software?

    Arturo Espinosa: We include certain educational free software, but there isn't much out there. All the distance education stuff is done through the web and e-mail , so there's no need for Red Escolar to develop any in-house software of that kind.

    Olinux: Tell us about the infrastructure: how many servers have already been installed? How do you manage to hook schools up to the Internet? How many labs have been installed and are currently maintained? Are there any installfests or install days?

    Arturo Espinosa: There are lots of servers installed these days, but I couldn't tell you how many. The Red Escolar and state government guys take care of all the hardware infrastructure, so I think you would have to ask them. In general, they make special deals with ISPs and phone companies for Net access.

    Olinux: Is there a site where people can get information about Red Escolar Linux? Are you going to try to expand this idea to other countries?

    Arturo Espinosa: redesc

    Use in other countries goes beyond my responsibility. I'm going to Brazil in September: the people from Rio Grande do Sul, where we gave a conference, want to adopt it, and I'm going to give them some training. This is free software: any government from any country is free to download, modify and use our CD according to their local needs.

    Olinux: In your words, what are the main achievements of Red Escolar? Can you give us some numbers showing those results? Are you satisfied with it?

    Arturo Espinosa: Red Escolar Linux made it all the way, in terms of development, and that's as far as my responsibility went. The installation phase of our project is too young to show any numbers: sorry. I'm satisfied with my labour, and expect the Red Escolar people to really take advantage of the solution Miguel Ibarra and I came up with.

    Olinux: What are your plans for 2000 concerning Red Escolar?

    Arturo Espinosa: I'm handing over any responsibility on my side to the Red Escolar people, as I am now working for Helix Code. Brazil will probably be my last move related to this project. Greetings, Arturo.


    Copyright © 2000, Fernando Ribeiro Corrêa
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Mathias Ettrich: founder of KDE

    By Fernando Ribeiro Corrêa
    Originally published at OLinux


    Olinux: How relevant is the KDE project in terms of personal professional accomplishment? How important is KDE for your professional career? And how proud are you of being KDE's founder?

    Mathias Ettrich: Without KDE, I probably wouldn't have been doing what I am doing now, that is, making a living from Unix programming. In that respect it was very important for my professional career. But writing free software in general is a good start into the software industry; KDE is no exception there. Take the big Linux distributors, for example. You'll find KDE Core Team members in leading positions at S.u.S.E, Caldera, Mandrake, even RedHat. Most of these people made their initial contact through KDE. It's a safe bet for an employer: they look at the code and know what these guys are capable of.

    When I initially proposed KDE, I was in the lucky position to be the first one to recognize that the time and technology were there to start such a project. If I hadn't done it, somebody else would have. The interesting part is the project itself, the great programmers it attracted and the way it grew without giving up its personality. KDE was the first free software project to address user friendliness and consistency on the desktop. This gave us lots of praise but at the same time also lots of criticism, like "copying MS-Windows", "not innovative", "destroying the soul of unix" and so on. We survived the criticism and proved our concept to be right. This is what makes me proud.

    Olinux: Please briefly evaluate KDE's evolution since it began. Can you describe something that really helped the project to succeed? Do you have any idea of the number of downloads of KDE 1 and its apps?

    Mathias Ettrich: In a very short phrase: when KDE started, nobody believed it was possible to build a free graphical user interface that could compete feature-wise with the offerings on other platforms. Unix was seen as a pure server operating system, especially after the commercial Unix players managed to even kill the few things that were there. Ever tried a new Sun workstation the way it comes from the vendor? Their desktop - although maintained by quite a large group of paid fulltime developers - is worse than twm plus xclock plus two or three xterms. And that was exactly what people were using instead. The Linux community reacted to this obvious shortcoming by praising the command line as the mother of all interactive computing interfaces and by calling everybody a wimp who wanted to use a mouse. A programmer who wanted to utilize component libraries instead of reinventing the wheel for every single program wasn't a real hacker. Being able to change this attitude within the Linux community, and turning Unix into a serious player on the desktop, were probably the biggest achievements of KDE.

    In fact, unlike CDE and other commercial offerings, KDE was so successful that the FSF not only started to bash it, but also to clone it. This brings us on par with Nextstep at its time and is quite a good indicator for good software. We kept the lead, though, and I don't see this changing anytime soon.

    The key to the success of KDE is the huge amount of code that is shared between applications. We implemented the basic idea of free software - code sharing - to a degree that was never done before. This was possible due to two reasons: a) the choice of an object oriented language and its sane use within the project and b) the concept of open source in general. Before I forget it: I said "Linux Community" on purpose, as the BSD people in general were much more open towards the idea of KDE.

    For the number of downloads: ask our ftp master Martin Konold (konold@kde.org), I can't remember all those big numbers ;)

    Olinux: Briefly, what are the main improvements in the KDE 2 release? How can the KDE 2 release help the spread of Linux on the desktop?

    Mathias Ettrich: The main improvement is certainly Konqueror, the integrated file manager and browser. Thanks to the KParts component technology, it's a very generic and powerful tool. Basically, it lets you browse and view almost everything, may that be web pages, directories, text documents, images, remote machines or whatever. Via our lightweight middleware, the Desktop Communication Protocol (DCOP), all services are easily accessible to all applications. Overall, KDE 2 is a much tighter integrated desktop compared to KDE 1. For a detailed list of technical changes, please point your browser to kde.

    KDE 2 may very well help spread Linux across desktops, for two very simple reasons: it's good and it's sexy. People will want to have it and play with it the moment they see it. It's also a big step towards an even more integrated Unix desktop. Examples of this are the full support for the upcoming common NET_WM window manager hints, XDND drag and drop, drop support for the obsolete Motif DND protocol, standard-compliant XSMP session management, full support for the new dotdesktop-file standard and, last but not least, a wide range of support for applications written with the Gimp Toolkit. Apart from the common DND protocol, we propagate color and font settings to non-KDE applications and make it possible to import legacy pixmap themes. For those who like it, KDE 2 also emulates GTK's native look and feel (a modernized motifish look with mouse-under effects).

    Another thing is support for commercial applications. In the libraries, we brought the Qt and the KDE API closer together. It's more straightforward now to port a pure Qt Application to KDE. After the KDE 2.0 release, we hope that some of the commercial Qt applications will also ship special KDE editions. The Opera browser or the excellent video software MainActor may be two great candidates for that.

    Olinux: Quickly, what are the best features KDE 2 will bring to users that Windows doesn't have?

    Mathias Ettrich: Users coming from Windows will gain much more sophisticated window management (snap to border, more mouse and keyboard shortcuts, different focus policies, shading), a nicer and more modern look and feel with lots of different window decoration and widget styles to choose from, a fancy desktop panel with applets for various tasks, a multi-session capable terminal window that is really usable, virtual desktops without the need to buy two screens and graphics cards, session management, network transparency, overall more configurability, a great mail client and many more neat applications. One of the most important features I almost forgot: a very stable, fast and secure underlying operating system, be it Linux or one of the free BSD derivatives.

    Olinux: What is your involvement with LyX? What about KLyX? Do you think LyX is a good paradigm for text editing?

    Mathias Ettrich: Unfortunately I don't have any spare time left for LyX development. Bad for me, but doesn't harm the project much as it's really well taken care of. The only support I currently provide is letting my employer host lyx.org.

    Regarding KLyX, Kalle Dalheimer and I eventually want to do another port of LyX to KDE 2.0, based on the newest LyX code. If we do that, it will happen together with the LyX Team and the result will be integrated into the LyX source repository. But nothing has been decided yet.

    For scientific writing, LyX is simply the best thing you can get. I couldn't possibly imagine having had to write a master's thesis with a standard word processor. In that case, I'd rather have chosen plain LaTeX and vi.

    Olinux: What are the KDE project's next steps after releasing KDE 2.0? And when is KDE 2 going to be released?

    Mathias Ettrich: If we stick to our schedule, the official release will be on the 4th of September this year. This requires that we manage to provide the final beta on the 18th of this month.

    Right after KDE 2.0, there will be development business as usual. Mosfet is working on another iteration of the style engine that unfortunately didn't make it into KDE-2.0 in time, the plain ANSI-C DCOP client will be finished (it works pretty well already) and we'll provide DCOP integration for the most popular scripting languages (we have proof-of-concepts for Perl, Python and TCL). There wasn't much point pushing this before KDE-2.0, because for scripting you need applications talking DCOP first. Still a lot of applications want to be ported to the new KDE 2.0 API and make use of the new features provided there.

    Olinux: Why was Qt chosen by the KDE team? Would you move from Qt to another graphics library like GTK? What is the difference between Qt versions 2.0 and 1.0?

    Mathias Ettrich: KDE is a true open source project that consists mainly of voluntary work. You can't compare this to commercial open source projects like Mozilla or the Eazel file manager. KDE gets written because its authors want to write it, while those commercial open source projects get written because somebody senses business and pays programmers to do the job. Don't get me wrong, that's a perfectly sane thing to do, but it also has an influence on the choice of tools. While you can easily make programmers work with inferior technologies and let them reinvent everything from scratch by giving them enough money, you cannot do that in a free project like KDE. Free programmers work for fun. Better tools promise more fun. Programming with Qt is extreme fun, as it lets you concentrate on what you really wanted to do: writing an application, not fighting a toolkit or a programming language. If Netscape had used Qt, they would have released a modern cross-platform browser two years ago. Now we are still waiting for a final release of Mozilla, and what we will get ships with its own middleware, a new component system and yet another widget set. Compare Mozilla with Konqueror, compare the sizes of the development teams, the time they used and the results. Then judge for yourself.

    If you compare Qt-2.2 (the Qt version used by KDE-2.0 final) with Qt-1.44 (the Qt version used by the latest KDE 1.x), the changes are endless: Unicode everywhere, the object property system, network classes, XML/Dom, highly optimized 2D graphics canvas, graphical user interface designer, generic table control, MDI workspace, generalized widget styles, rich text output and much more.

    Olinux: What is your job at TrollTech? What do you do?

    Mathias Ettrich: Together with Arnt Gulbrandsen, I'm leading the Qt Core Group within Trolltech. We are responsible for the technical aspects of further Qt development: what modules to develop, what classes to put in. As a KDE Core Team member, I'm most certainly taking KDE into consideration when making these kinds of decisions. My most recent project was the Qt Designer, the long-awaited graphical user interface designer from Trolltech. I'm quite proud that we managed to provide this technology to the other KDE developers under the GNU General Public License (GPL), and I'm looking forward to an integration of our GUI design component into the KDevelop IDE. Apart from that, many new developers need to be integrated into the Qt development team, which demands a good share of my time as well. But I guess that's simply how it is in a successful and fast-growing company.


    Copyright © 2000, Fernando Ribeiro Corrêa
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Variable Mangling in Bash with String Operators

    By Pat Eyler


    Abstract

    Have you ever wanted to change the names of many files at once? How about using a default value for a variable if it has no value? These and many other options are available to you through string operators in bash and other Bourne shell derived shells.

    String operators allow you to manipulate the contents of a variable without having to write your own shell functions to do so. They are provided through 'curly brace' syntax. Any variable can be displayed like this ${foo} without changing its meaning. This functionality is often used to protect a variable name from surrounding characters.

         bash-2.02$ export foo=foo
         bash-2.02$ echo ${foo}bar # foo exists so this works
         foobar
         bash-2.02$ echo $foobar # foobar doesn't exist, so this fails
    
         bash-2.02$ 
    

    By the end of this article, you'll be able to use it for a whole lot more.

    There are three kinds of variable substitution:

    • Pattern Matching,
    • Substitution,
    • Command Substitution.
    I'll talk about the first two and leave command substitution for another article.

    Pattern Matching

    There are two kinds of pattern matching available: matching from the left and matching from the right. The operators, with their functions and an example, are shown in the following table:

    Operator: ${foo#t*is}
    Function: deletes the shortest possible match from the left
    Example:  export foo="this is a test"
              echo ${foo#t*is}
              is a test

    Operator: ${foo##t*is}
    Function: deletes the longest possible match from the left
    Example:  export foo="this is a test"
              echo ${foo##t*is}
              a test

    Operator: ${foo%t*st}
    Function: deletes the shortest possible match from the right
    Example:  export foo="this is a test"
              echo ${foo%t*st}
              this is a

    Operator: ${foo%%t*st}
    Function: deletes the longest possible match from the right
    Example:  export foo="this is a test"
              echo ${foo%%t*st}
              (nothing is printed - the whole string has been deleted)

    Note: While the # and % identifiers may not seem obvious, they have a convenient mnemonic. The # key is on the left side of the $ key and operates from the left. The % key is on the right of the $ key and operates from the right.

    These operators can be used to do a variety of things. For example, the following script will change the extension of all '.html' files to '.htm'.

    #!/bin/bash
    # quickly convert html filenames for use on a dossy system
    # only handles file extensions, not file names
    
    for i in *.html; do 
       if [ -f ${i%l} ]; then
           echo ${i%l} already exists
       else
           mv $i ${i%l}
       fi
    done
    
    
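    These operators are also handy for pulling pathnames apart. For instance (a quick illustrative session; the path is just an example):

         bash-2.02$ export myfile=/usr/local/src/ngrep-1.38.tar.gz
         bash-2.02$ echo ${myfile##*/}   # delete the longest match of "*/" from the left
         ngrep-1.38.tar.gz
         bash-2.02$ echo ${myfile%/*}    # delete the shortest match of "/*" from the right
         /usr/local/src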

    Substitution

    Another kind of variable mangling you might want to employ is substitution. There are four substitution operators in Bash. They are shown in the following table:

    ${foo:-bar}
        If $foo exists and is not null, return $foo. If it doesn't exist, or is null, return bar:
            export foo=""
            echo ${foo:-one}
            one
            echo $foo
            (nothing; $foo is still null)

    ${foo:=bar}
        If $foo exists and is not null, return $foo. If it doesn't exist, or is null, set $foo to bar and return bar:
            export foo=""
            echo ${foo:=one}
            one
            echo $foo
            one

    ${foo:+bar}
        If $foo exists and is not null, return bar. If it doesn't exist, or is null, return a null:
            export foo="this is a test"
            echo ${foo:+bar}
            bar

    ${foo:?"error message"}
        If $foo exists and isn't null, return its value. If it doesn't exist, or is null, print the error message (if no error message is given, it prints "parameter null or not set").
        Note: In a non-interactive shell, this will abort the current script. In an interactive shell, it will just print the error message.
            export foo="one"
            for i in foo bar baz; do
                eval echo \${$i:?}
            done
            one
            bash: bar: parameter null or not set
            bash: baz: parameter null or not set

    Note: The : in the above operators can be omitted. Doing so changes the behavior of the operator to test only for the existence of the variable. In the case of ${foo=bar}, this will cause the variable to be created (and set to bar) if it doesn't already exist.
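    To see the difference the colon makes, compare the two forms on a variable that is set but empty (a quick illustrative session):

         bash-2.02$ export foo=""
         bash-2.02$ echo ${foo:-default}  # foo is null, so the default is substituted
         default
         bash-2.02$ echo ${foo-default}   # foo exists (even though null), so no default
     
         bash-2.02$ unset foo
         bash-2.02$ echo ${foo-default}   # now foo doesn't exist at all
         default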

    These operators can be used in a variety of ways. A good example would be to give a default value to a variable normally read from the command line arguments when no arguments are given. This is shown in the following script.

    #!/bin/bash
    
    export INFILE=${1-"infile"}
    
    export OUTFILE=${2-"outfile"}
    
    cat $INFILE > $OUTFILE
    
    

    Hopefully this gives you something to think about and to play with until the next article. If you're interested in more hints about bash (or other stuff I've written about), please take a look at my home page. If you've got questions or comments, please drop me a line.


    Copyright © 2000, Pat Eyler
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Using ngrep

    By Pat Eyler


    Using ngrep

    ngrep

    ngrep is a tool for watching network traffic. It is based on the libpcap library, which provides packet capturing functionality. ngrep allows regular expression style filters to be used to select traffic to be displayed.

    ngrep is the first utility we'll discuss that doesn't ship on most Linux systems. We'll talk about how to get and install it, how to start it up and use it, and more advanced use.

    Getting & Installing ngrep

    Source code for ngrep is available from http://www.packetfactory.net/Projects/ngrep/ as are some binary packages. I'll review installing from source, as the binary packages are fairly straightforward.

    On a Redhat 6.2 system, you'll need to install libpcap before you can install ngrep. This package is available from http://www.tcpdump.org/release. As of this writing, the most recent version is libpcap-0.5.2.tar.gz. Once this is downloaded (I put things like this into /usr/local/src), you should do the following:

    $ tar xvzf libpcap-0.5.2.tar.gz
    $ cd libpcap_0_5rel2
    $ ./configure
    $ make
    $ su
    Password: ********
    # make install-incl
    # make install-man
    # exit
          
    


    Your next step is to build ngrep itself. ngrep source can be downloaded from http://www.packetfactory.net/Projects/ngrep. After downloading it, do the following:

    $ tar xzvf ngrep-1.38.tar.gz
    $ cd ngrep
    $ ./configure
    $ make
    $ su
    Password: ********
    # make install
    # exit
          
    


    Congratulations, at this point you should have a working copy of ngrep installed on your system.

    Using ngrep

    To start using ngrep you'll need to decide what pattern you want to search for. These can be either libpcap style descriptions of network traffic or GNU grep style regular expressions describing the contents of traffic. In the following example, we'll grab any packet containing the pattern "ssword" and display it in the alternative format (which I think is a lot more readable):

    [root@cherry /root]# ngrep -x ssword
    interface: eth0 (192.168.1.0/255.255.255.0)
    match: ssword
    ################################
    T 192.168.1.20:23 -> 192.168.1.10:1056 [AP]
      50 61 73 73 77 6f 72 64    3a 20                      Password:
    #########################exit
    59 received, 0 dropped
    [root@cherry /root]# 
          
    


    Each of the hash marks in the above example represents a packet not containing the pattern we're searching for; any packets containing the pattern are displayed.

    In this example we followed the basic syntax of ngrep – ngrep <options> [pattern]. We used only the -x option, which sets the alternative display format.

    Doing More with ngrep

    There are a number of additional twists to the way that you can use ngrep. Chief among them is the ability to include libpcap style packet filtering. libpcap provides a fairly simple language for filtering traffic.

    Filters are written by combining primitives with the conjunctions 'and' and 'or'. Primitives can be preceded with the term 'not'.

    Primitives are normally formed with an id (which can be numeric or a symbolic name) followed by one or more qualifiers. There are three kinds of qualifiers: type, direction, and protocol.

    Type qualifiers describe what the id refers to. Allowed options are host, net, and port. If no type is given, the primitive defaults to host. Examples of type primitives are: host crashtestdummy, net 192.168.2, or port 80.

    Directional qualifiers indicate which direction traffic is flowing in. Allowable qualifiers are src and/or dst. Examples of direction primitives are: src cherry, dst mango, src or dst port http. This last example shows two qualifiers being used with a single id.

    Protocol qualifiers limit the captured packets to those of a single protocol. In the absence of a protocol qualifier, all IP packets are captured (subject to other filtering rules). Protocols which can be filtered on are tcp, udp, and icmp. You might use a protocol qualifier like icmp, or tcp dst port telnet.

    Primitives can be negated and combined to develop more complex filters. If you wanted to see all traffic to rose except telnet and ftp-data, you could use the following filter: dst host rose and not port telnet and not port ftp-data.
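    Putting a filter together with a match pattern works the same way on the ngrep command line: the pattern comes first, then the filter. For example, to watch for the string "ssword" only in traffic headed for rose (an illustrative command; the host name is the one used above):

    [root@cherry /root]# ngrep -x ssword dst host rose and not port ftp-data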

    There are some command line switches that are worth noting as well. The following table shows the command line switches likely to be of the most use. As usual, check the man page for more detail.

    Table 1. Command Line Switches for ngrep

    -e            Show empty packets.
    -n [num]      Match num packets, then exit.
    -i            Match the regular expression without regard to case.
    -v            Invert the match; show only packets not containing the regular expression.
    -t            Print a YYYY/MM/DD HH:MM:SS.UUUUUU timestamp on each matched packet.
    -T            Print a +S.UUUUUU (delta) timestamp on each matched packet.
    -x            Show the packets in the alternate hex and ASCII style.
    -I [filename] Read from a pcap style dump named filename instead of live traffic.
    -O [filename] Write output to a pcap style file named filename.
    -D            Mimic realtime by printing matched packets at their recorded timestamp.
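    As a quick illustration of combining a few of these switches with a filter, something like the following would print timestamped, case-insensitive matches for "pass" in FTP control traffic and quit after ten matching packets (an illustrative command; adjust the pattern and port to taste):

    [root@cherry /root]# ngrep -t -i -n 10 pass port 21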


    Wrapping up ngrep

    Using ngrep can help you quickly match and display packets during your troubleshooting. If you've got an application level problem, ngrep can help you isolate the problem.

    For example, if I was trying to make a connection from cherry (192.168.1.10) to cuke (192.168.2.10) and the connection was failing, I might troubleshoot the problem like this:

    Describe the symptoms: Cherry cannot make a connection to hosts on the remote network, but can connect to hosts on other networks. Other hosts on cherry's network can connect to hosts on the remote network.

    Understand the environment: The hosts involved are cherry, rhubarb (the gateway to the remote network), and cuke.

    List hypotheses: The problem might be a misconfiguration of cherry, or of an intervening router.

    Prioritize hypotheses & narrow focus: Because cherry seems to be the only host affected, we'll start looking there. If we can't solve the problem on cherry, we'll move to rhubarb.

    Create a plan of attack: I can try to ping cuke from cherry while running ngrep like this: ngrep host cherry, to see what traffic I am sending.

    Act on your plan: As we start pinging cuke, we see the following results in our ngrep session:

    [root@cherry /root]# ngrep -e -x host 192.168.1.10
    interface: eth0 (192.168.1.0/255.255.255.0)
    filter: ip and ( host 192.168.1.10 )
    #
    I 192.168.1.10 -> 192.168.2.10 8:0
      eb 07 00 00 31 86 a7 39    5e cd 0e 00 08 09 0a 0b    ....1..9^.......
      0c 0d 0e 0f 10 11 12 13    14 15 16 17 18 19 1a 1b    ................
      1c 1d 1e 1f 20 21 22 23    24 25 26 27 28 29 2a 2b    .... !"#$%&'()*+
      2c 2d 2e 2f 30 31 32 33    34 35 36 37                ,-./01234567    
    #
    I 192.168.1.1 -> 192.168.1.10 5:1
      c0 a8 01 0b 45 00 00 54    25 f2 00 00 40 01 d0 52    ....E..T%...@..R
      c0 a8 01 0a c0 a8 02 0a    08 00 dc 67 eb 07 00 00    ...........g....
      31 86 a7 39 5e cd 0e 00    08 09 0a 0b 0c 0d 0e 0f    1..9^...........
      10 11 12 13 14 15 16 17    18 19 1a 1b 1c 1d 1e 1f    ................
      20 21 22 23 24 25 26 27    28 29 2a 2b 2c 2d 2e 2f     !"#$%&'()*+,-./
      30 31 32 33 34 35 36 37    b4 04 01 00 06 00 00 00    01234567........
      00 10 00 00 01 00 00 00    e8 40 00 00                .........@..    
    exit
    2 received, 0 dropped
    [root@cherry /root]# 
        
    
    This shows two packets. The first is an ICMP packet of Type 8 and Code 0, a ping request. It is destined for cuke. The second is an ICMP packet of Type 5 and Code 1, an ICMP Redirect. This is coming from mango, the gateway to the rest of the world.

    Test your results: We didn't expect to see mango involved at all. If we look at the ICMP Redirects being sent (using the -v switch), we can see that we're being redirected to the 192.168.1.11 address, not rhubarb.

    Apply results of test to hypothesis: If we're not sending our traffic to the right gateway it will never get to the right place. We should be able to solve this by adding a route to the 192.168.2.0/24 network on cherry (a quick check of working hosts shows that this is the way they're configured). We'll probably want to fix the bad route on mango as well.

    Iterate as needed: Once we've made the change and tested it, we know that it works and don't need to go any further.

    Author and License

    This document is a draft release of a section from a book called "Networking Linux: A Practical Guide to TCP/IP" being published by New Riders Publishing. This section is released under the OPL with no further terms; the Free Software Foundation has agreed that this constitutes a Free license. (Please see www.opencontent.org and www.gnu.org for more information.)

    Because this document is a draft, there are likely errors, typos, and the like. If you see anything that you think should be changed, please let me know. I can be reached at pate@gnu.org.


    Copyright © 2000, Pat Eyler
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Easy Addition of an IDE CD-Writer to a Linux/Redhat PC

    By Daniel Feenberg


    The CD-Writing HOWTO at http://www.guug.de/~winni/linux/ and the official cdrecord site at www.fokus.gmd.de/research/cc/glone/employees/joerg.schilling/private/cdrecord.html offer a great deal of information about installing and using CD-ROM writers, certainly much more than you need to know for an ordinary Linux installation. If you don't want to support an obsolete drive, or an older kernel, or VAX VMS, then you probably don't need to recompile the kernel or make any devices. This short document should be sufficient to get you started burning disks. Once you are started, the official documents will guide you to such esoterica as audio, bootable, multisession and hybrid disks. After determining that RedHat 6.1 was so easy, I tried setting up several other distributions, including RedHat 6.0, SUSE 6.1, Debian 2.1, and Storm 2000. All of those were slightly harder because they didn't include a recent version of cdrecord, but none required a kernel rebuild; I have noted the differences along the way. You will need to have:

    • a supported IDE drive,
    • a Linux kernel that supports loadable device drivers,
    • LILO as your boot loader (the default for most distributions),
    • cdrecord 1.8 or better (included with RedHat 6.1).

    I believe any drive you bought in a store recently will qualify. The cdrecord docs say that all 1999 or later ATAPI drives support MMC, which is sufficient. Many earlier drives are supported also. A look on the shelves at the local computer superstore did not turn up any that mentioned MMC or Linux on the box. My first installation used an older Ricoh MP6200A CD recorder. I did more installations with the MagicWriter 4X4X24. This is a very cheap drive, but the manual had a 1999 copyright date and it did work as I expected.

    Physical Installation

    Perform the physical installation of the new drive just as you would any IDE drive. It can replace your original read-only drive or be added on. Make sure the drive jumper is set for master or slave as required, the power cable is connected and the data cable has the correct orientation. You shouldn't have to do any CMOS setup. I am told that keeping the CD-writer on a different cable from the hard drive speeds data transfers, but this is probably not significant with Pentium class machines.

    At this point, stop to check whether the BIOS mentions the drive during initialization. Not every BIOS does, but especially if yours doesn't, you might want to check the cables and the master/slave jumpers on both the CD-writer and any other drive on the same cable. The /var/log/messages file should have a line about the new "ATAPI" drive.

    SCSI Emulation Setup

    1. Find out the name of the physical CD-ROM device. This is probably /dev/hdc (the master device on the second IDE cable) but could be /dev/hdb (slave device on the primary IDE cable) or /dev/hdd (slave device on the secondary cable).

    2. Become root.

    3. Add the following line to the end of your /etc/rc.local: insmod ide-scsi

      RedHat seemed to be the only distribution with an rc.local file. For the others, you have to find some other mechanism for running this command before burning disks and after each reboot.

    4. Add the following line to your /etc/lilo.conf file: append="hdc=ide-scsi" where "hdc" might be "hdb" or "hdd" depending on where your drive is installed. This line should be inserted just after the image statement that boots Linux on your computer (see the sample lilo.conf stanza after this list). This instructs the kernel to access the CD-writer via the SCSI emulation driver.

    5. Reconfigure LILO by running the following command at the Unix shell prompt: lilo
    6. You need to ``install'' cdrecord and mkisofs. Version 1.8.1 of cdrecord is on the RedHat disk in an rpm file. Here are the commands for the software installation with RedHat:

       cd /cdrom/RedHat/RPMS
       rpm --install cdrecord*
       rpm --install mkisofs*

      The other distributions either included an older version of cdrecord (1.6), which did not support my recorders, or did not include cdrecord at all. If you don't have RedHat and need to compile these yourself, the most recent version of cdrecord can easily be found at www.freshmeat.org. I noted that the very extensive cdrecord instructions cover many operating systems, not just Linux (or even Unix). For the distributions considered here, all you need to do is:

       tar -xvf cdrecord*tar
       cd cdrecord-1.8
       make
       make install

      The cdrecord tarball includes mkisofs. Both packages get installed to /opt/schily/bin, so you will need to make links from a directory in the root path, such as /usr/bin. A very nice feature of cdrecord is that it auto-detects the drive characteristics at run time, so there isn't any configuration and you can replace or upgrade the drive without getting into trouble.

    7. If you want to read with the recorder, you will need to add or modify the appropriate line in /etc/fstab so that the drive is addressed through the ide-scsi interface. The following worked for me, but the device name may not be correct for all distributions (/dev/sg0 is an alternative), and in any case reading is beyond the scope of this document:

       /dev/scd0 /cdrom auto defaults,ro,noauto,user,exec 0 0

    8. Reboot the machine. The installation is complete. My impression is that if you did anything wrong, there won't be any error messages, so go back and check the spelling of the changes listed above before proceeding to actually test your work.

    9. Now you get to check if the installation was successful:

       cdrecord -scanbus

      One of the output lines should mention a Removable CD-ROM, and maybe even indicate ``-R'' or ``RW'' to indicate that it is a recorder. Something like this:

       0,0,0 0) 'Richoh' 'MP602A' '2.03' Removeable CD-ROM Drive

      Only the digit triple at the start is significant. It indicates controller, SCSI ID, and LUN, and is likely to be all zeroes, as shown, with SCSI emulation. The only error message I saw in my experiments combined the cases of (1) SCSI emulation not correctly installed and (2) drive not found or supported. At that point you might try getting reading to work as an IDE drive in Windows, then in Linux as IDE, then in Linux with SCSI emulation, before concluding that the drive is unsupported or broken. I often find this systematic approach to hardware debugging slow but sure.
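    For reference, here is roughly what the relevant piece of /etc/lilo.conf might look like after step 4. This is only a sketch; the kernel image path, label and root device are assumptions and will differ on your system:

       image=/boot/vmlinuz-2.2.12-20
           label=linux
           root=/dev/hda1
           read-only
           append="hdc=ide-scsi"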

    Writing Disks

    1. Writing disks is a two step process. First the ISO filesystem is created on your hard disk:

       mkisofs -v -o file.iso file...

      where file.iso is your output file and file... is the list of files and directories you want on the cd-rom. If you just list a single directory, the structure is maintained on the CD. Otherwise all the files and subdirectory files are dropped into the root directory with no subdirectory structure. There are a lot of options described in the man page. If you keep your filenames to 8.3 lower case, you won't need to be bothered with most of them. The "-J" option (for Joliet) will allow longer Windows style filenames, but if you actually use longer or case sensitive names your file names will look funny or not work in a minimal ISO9660 system.

    2. Then you actually burn the cd-rom:

       cdrecord -v dev=0,0,0 file.iso

      The ``dev=0,0,0'' specifies the output device, and might possibly be different on your system; check the cdrecord -scanbus output if in doubt. Because cdrecord wants to lock pages in memory, it has to be run as root. Making cdrecord setuid root is endorsed by Schilling's documentation.

      On my 1997-vintage 233MHz AMD with a 5400 rpm hard disk and a quad speed CD-ROM writer, the system had no trouble maintaining speed, and the 512K buffer was never less than 97% full. After initial success you might try combining the mkisofs and cdrecord steps:

       mkisofs file... | cdrecord -v dev=0,0,0 -

      where the hyphen indicates to cdrecord that it should take its input from the standard input. This worked on my system, even when the files were NFS mounted (on a 100BaseTX connection).

    Any corrections or suggestions should be sent to me. I am particularly interested in hearing which distributions will work with these minimal instructions and any variations. I don't want to encroach on existing documentation by covering enhanced capabilities - that is well handled already by the existing documentation.

    This page is kept at http://www.nber.org/cdrecord.html

    [There is another article about CD recording in this issue. -Mike.]


    Copyright © 2000, Daniel Feenberg
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Creating A Linux Certification Program: Part 10

    Around The World In Less Than Two Years

    By Ray Ferrari


    The Linux Professional Institute (LPI) has been very busy. With the help of volunteers from around the world, an advisory council which includes individuals from the top Linux-related companies, and sponsors who help promote its agenda, LPI is poised for global recognition. Since its inception two years ago, the mission has been simple and straightforward: to deliver "a standardized, multi-national, and respected program to certify levels of individual expertise in Linux".


    In less than two short years, the dream has become a reality for the founders of LPI. With a Linux Professional Institute Certification (LPIC) now positioned to become the de facto certification worldwide, the hard work and sponsorships are having their desired effect. Much work must yet be done, and volunteers and sponsors are always needed, as the non-profit organization plans to make its certification available to anyone around the world.


    Already, their web site http://www.lpi.org/ has been translated into Japanese and German, with French and Chinese to follow closely. Other languages are in the process of being added, with many volunteers assisting in the endeavor. Anyone wanting to volunteer to help the translation team is asked to contact Duane Dunstan by e-mailing him at pdunstan@pfeiffer.edu. An LPI-Japan has been established with directors from Turbolinux Japan, IBM Japan, and Linuxcare, and assistance from Fujitsu, Hitachi, NEC, Keio University and others.


    The organization has been contacted by interested parties about creating an LPI Greater China, with an official office in Hong Kong and representative offices in China and Taiwan. Invitations have been extended to attend and speak at the upcoming IDG shows in Beijing, China (Aug. 29-Sept. 1), with Jon "Maddog" Hall representing LPI; Taipei (Sept. 7-9); and Kuala Lumpur, Malaysia (Nov. 7-9), with Dan York speaking on certification.


    A book has been written specifically for the LPIC Exam 101 by the publishing giant MacMillan. IBM has embraced the LPIC, making it a technical prerequisite for their own "Solution Technologist" e-business certification. According to Dan York, president of the LPI board of directors, "This relationship with IBM provides a path for those professionals to further develop and validate their skills". IBM's Scott Handy declares that "Linux plays a critical role in today's world of e-business. Ensuring our partners have a strong, consistent knowledge of the Linux operating system ... is critical to us, our partners and our customers."


    A search for "Linux certification" on Yahoo (http://www.yahoo.com/) yields LPI and Lintraining. Anyone interested in Linux certification should visit both websites for information on testing centers, training materials, instructors or trainers. To visit LPI's exam preparation web site, go to http://www.lpi.org/c-preperation.html. There are currently 361 locations around the globe offering Linux training. For a comprehensive list, go to http://www.lintraining.com.


    The first web-based LPIC Level One course has been started through CyberState University. The cost for the course is currently set at $995 and qualifies the student to take the Level One exam from LPI. The V.P. of Curriculum Development for CyberState University, David Clarke, states, "CyberStateU.com's approach...is unique...it integrates a real-world hands-on lab environment rather than a simulated environment. Real world experience is a critical success factor in passing LPI's rigorous exams".


    Some of the events in which the LPI has participated since January of this year have taken place in France, Germany, Australia, England, Canada, the United States and Japan. Upcoming events this year will be in China, Malaysia, New Zealand and the United States (Atlanta, Sept. 26-28, Linux Business Expo; and Las Vegas, Nov. 13-17, Comdex).


    With a fury somewhat like that of a wildfire out of control, LPI has gone from infancy to a global organization in a relatively short time. LPI encourages participation and public involvement through its mailing lists and web site. A Japanese translation of the web site is available at http://www.yesitis.co.jp/LPI/. For information on taking the LPIC exams at an authorized testing center, visit the Virtual University Enterprise at http://www.vue.com.


    For Linux enthusiasts, there is finally a platform for advancing their knowledge and abilities. It now appears that Linux is here to stay, and for the Linux Professional Institute, the future looks long and bright. That a grass-roots effort has come so far in such a short time shows the true value of a worldwide community working together. Stay tuned and look for a lot more from this organization.


    *Linux is a trademark of Linus Torvalds

    *Linux Professional Institute is a trademark of Linux Professional Institute, Inc.


    Copyright © 2000, Ray Ferrari
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Tuxedo Tails

    By Eric Kasten


    [Cartoons: penguinstrike.png and inetrevolt.png]

    [I gave Eric a hard time about these cartoons not being about Linux, and he replied, "It's getting tough to say exactly what is and what isn't about Linux anymore, now that you've got Linux on embedded devices, desktops, servers, etc." Good point. Now, about my Java doorknob... -Mike]

    Eric also draws a strip called Sun Puppy about--you guessed it--puppies. Read it at http://www.sunpuppy.com.


    Copyright © 2000, Eric Kasten
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    CVS: Concurrent Versions System

    A source control program used by Mark

    By Mark Nielsen


    1. References
    2. Introduction
    3. Upload files to the CVS repository-- creating your first repository.
    4. Download files from CVS.
    5. Adding a file to the cvs repository "My_Files"
    6. Deleting a file from the cvs repository "My_Files"
    7. Changing a file and uploading changes to the repository "My_Files"
    8. Conclusion

    References

    1. www.cyclic.com
    2. Manual for CVS
    3. New user information.
    4. Support ----- I am not sure if this link is valid. There seems to be a company which will support CVS.

    Introduction

    CVS is a cool program that lets people control different versions of their software. It has saved my butt before. It is relatively easy to use, and everybody who is either new to programming, or who wants to stick to free software, should use CVS. Recently, CVS has become officially supported by a company.

    Getting Started

    First off, I assume you are using the BASH shell, which is standard on most intelligent Unix systems. I also assume you have root access to the computer you are using. There are simple ways to let various people have access to a CVS repository, but I will assume it will only be used by one person for now.

    Login as root, and execute the following. I assume you are the user "mark", but it can be any user on your system.
    mkdir /usr/local/cvs
    chown mark /usr/local/cvs

    Now login as "mark" and do the following.

    Edit your .bashrc file using vi or emacs or even pico, and enter these commands.
    CVSROOT=/usr/local/cvs
    export CVSROOT

    Save it and then execute "source .bashrc". Now when you log in, it will set up your environment to use this directory by default if you don't specify a directory to use.

    Make a directory, which is needed for cvs.
    mkdir /usr/local/cvs/CVSROOT

    Upload files to the CVS repository

    The purpose of this exercise is to upload files into the repository so that the files are under source control. Later, we will download the files into another directory, or the Working directory.

    In your home directory for "mark", make a directory called "Temp_Source" and put a few files in it, such as:
    mkdir Temp_Source
    ls > Temp_Source/File1.txt
    ls Temp_Source/* > Temp_Source/File2.txt

    Now we want to put the files in Temp_Source into CVS. To do this, enter the directory Temp_Source.
    cd Temp_Source
    then issue this command
    cvs import -m "Test Import" My_Files Revision1 start

    Now we are ready to make a working directory. We will forget about the directory Temp_Source and pretend it never existed. By the way, take a look at /usr/local/cvs and see what cvs has done to it. You can add more packages to the cvs repository if you wish.

    Download files from CVS

    Okay, now we want to download these files into a Working directory and we will pretend that our Temp_Source directory doesn't exist. This will often be the case where someone else enters files into CVS and expects you to maintain them.

    When we checkout a package from cvs, it will create a directory for us. The parameter "My_Files" that we specified when we uploaded the files into cvs will be the name of the directory created for us when cvs downloads the package for us.

    Now we need to get the cvs package.
    cvs checkout My_Files

    If we look, we now have a directory named "My_Files". Enter into the directory,
    cd My_Files
    and execute the "ls" command.
    ls
     

    Adding a file to the cvs repository "My_Files"

    I assume you are in the directory My_Files. In order to add a file to the repository, create one.
    ls /etc > File3.txt
    Now execute the command to set it up so that the file will be added to the cvs repository.
    cvs add File3.txt
    Now you need to actually upload the file. The previous command just set up the configuration to do it.
    cvs commit

    "cvs commit" will bring you into your default editor, vi or emacs or something else. Save the file, and when you quit the editor, cvs will ask you to continue, and select the option to continue. Now you have uploaded a file to the cvs repository "My_Files".
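    Incidentally, if you would rather skip the editor step, cvs lets you supply the log message on the command line (a small example; the message text is arbitrary):
    cvs commit -m "Added File3.txt"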

    Also, execute the "ls" command, and you will notice that you have a directory called "CVS". cvs creates a "CVS" directory in every directory where you have checked files out of a repository, and uses it to keep track of changes.
     

    Deleting a file from the cvs repository "My_Files"

    In order to delete a file from the cvs repository, do this
    rm File3.txt
    cvs remove File3.txt
    cvs commit

    The first step actually deletes the file in your directory. The second step removes it from the configuration of the current directory you are in. The third step commits this change to the cvs repository "My_Files". If you do not execute "cvs remove File3.txt", you will find it hard to execute "cvs commit" in the future and it won't update the repository correctly; at least, that has been my experience.

    Changing a file in the cvs repository "My_Files"

    I assume you are in the directory "My_Files". Let us add some content to the file File2.txt.

    ls /var >> File2.txt
    cvs commit
     

    Downloading updates that other people make

    If you have downloaded a package from a repository that someone else is maintaining and you wish to download all the changes, execute the following command:

    cvs update -dP

    The "d" creates any directories that exist in the repository but are missing from your working directory.
    The "P" prunes (removes) any directories that are left empty, for example because their files were deleted from the repository.
     

    Comments

    I have noticed that the "cvs commit" command should recursively go through all directories below where you currently are and list all the changes in one file. However, on some systems, it makes one file per directory, which means that right before it uploads the changes to the repository, cvs starts the editor for each directory that has changes, which is annoying. I have to figure out how to set it up to list all the changes in one file and not many.

    CVS is the best source control program I have used. It is the best because it comes by default on major Linux distributions, and it is relatively easy to use, unlike some other source control software I have seen. Major free software websites use CVS, which is another plus, because if they use it, then it will be an ongoing project. Also, you can download documents using cvs over the Internet. I downloaded all the sgml howtos from the Linux Documentation Project through their anonymous CVS server. This is very useful if you want to keep yourself up to date on various versions of documentation.
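    As a rough sketch of what an anonymous checkout over the Internet looks like (the server address and module name here are made up; use whatever the project's documentation gives you):
    cvs -d :pserver:anonymous@cvs.example.org:/cvsroot login
    cvs -d :pserver:anonymous@cvs.example.org:/cvsroot checkout some_module

    The "login" step usually just wants an empty password for anonymous access, and the checkout then works exactly like the "cvs checkout My_Files" we did above, except that the files come from the remote server.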


    Mark works as a computer guy at The Computer Underground and also at ZING and also at GNUJobs.com (soon).


    Copyright © 2000, Mark Nielsen
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Mark's Web/Database Installation

    Zope, Apache, Mysql, PostgreSQL, Perl, Python, PHP

    By Mark Nielsen


    1. Introduction
    2. Mark's opinion of the useful free software
    3. The installation script
    4. Conclusion
    5. References

    Introduction

    I find it easier to install services without using rpms, because I might install different versions. I am a firm believer in never using rpms when you have services that use custom made binary programs. The only exception is using the Debian GNU/Linux distribution which has some neat features.

    Why does Mark go through the trouble of installing all these web and database services? It is a learning tool which has been useful at all his jobs. When the opportunity strikes to use free software, it is handy to have live demonstrations of the capabilities of the free software. All of the skills associated with the software below have been very useful at various jobs.

    Mark's opinion on the free software

    The software Mark uses is all you need to become a top-notch web/database programmer. Look at Dice.com and other webpages. Almost all of the skills you need for commercial software can be learned first through the free software listed below. The power of the free software is that you can use it anywhere with unlimited possibilities. Put it on your laptop -- you don't have to pay a license. Learn the skills you need with the free software -- you can always get it and you can always learn it. You don't have to wait to get a stupid license. These skills are transferable to other webservers, databases, and most likely you will use the free languages mentioned below at any company.

    Hopefully, you will be able to convince your bosses and/or companies to use the free software. Free software is critical to saving lots of money and to creating an environment where your limitations are just people and hardware. Usually, I find it takes fewer people to manage well designed large networks than all the goof-balls who install commercial software.

    Here is a list of reasons why Mark considers this setup very valuable:

    1. Python -- I was a die-hard Perl fanatic, until Python came along. It took me months of other people telling me to switch to Python before I decided to do it. Now, unless I need a specific Perl module, I use Python. I am going to see what I can do to help port Perl modules to Python with the mod_snake module for Apache. Python is the language to use for rapid application development and is a key language for the Zope webserver. Learn Python now, and you will be a valuable employee or consultant later. When the market booms for Python programmers, it will be easier to get a job in Python rather than Perl. Where Perl was 5 years ago, Python is today. It is gaining ground in the corporate world slowly. You can create Python binaries so that you don't need Python installed in order to run a Python program. Python is an object-oriented language. The nice thing about Python is that it will make JAVA binaries, and hence, I can bypass writing JAVA code. I really wish another free and open language would replace JAVA in browsers. To me, JAVA needs to be replaced, or fully open-sourced, before I would consider it ethically-wise and business-wise. As long as it is not GNUed, or some similar deal, I consider putting your life in the hands of corporations (when you don't have to) something to be skeptical of.
    2. Perl -- The standard of web development. It is the most powerful language out there for web/database development. It is not as user-friendly as PHP, but I like it better because Perl can do more than just Web/Database stuff. Perl and Python are the two languages I use for scripting.
    3. PHP -- A nice language for people who just want to do web/database development. I don't use it much because I want to be really good at Perl and Python, and I don't want to spread my skills out too much.
    4. Apache -- The best webserver in the world. I don't need to explain it. You can use Apache and SSL as a front end and put Zope behind Apache and SSL. Thus, Zope can also be encrypted indirectly.
    5. Zope -- A very powerful webserver that offers security, database, and html scripting all in one. It is truly the webserver to learn for rapid application development. Zope and Apache are separate beasts that can work together nicely with the Apache rewrite module. Apache and Zope complement each other and serve different purposes. If you learn both Zope and Apache, you don't ever need to learn another webserver.
    6. PostgreSQL -- The best free database server in the world. I have noticed that PostgreSQL is based on elegance and is somewhat similar to Oracle for an SQL programmer. PostgreSQL has PL/pgSQL (like PL/SQL in Oracle), it has stored procedures, its counters are similar to Oracle's, and you can embed Perl commands inside SQL statements if you compile Perl into PostgreSQL. The only thing I don't like about PostgreSQL is its bad handling of large objects. If PostgreSQL had an easy method to use large objects like MySQL has, I would never use MySQL for private work.
    7. MySQL -- A great fast database server that a lot of companies use in combination with PHP. I like it, but I prefer PostgreSQL because MySQL lacks a lot of things. I don't want to spread my skills out over too many database servers, and thus, PostgreSQL and Oracle are my choices because they have many features. MySQL is good for fast, simplistic databases. Its easy handling of large objects is really nice, which is why I still use it. I think MySQL and PostgreSQL serve different purposes and both are good. I like goofing around and experimenting and trying new things, so PostgreSQL will always be it for me.

    Installation script

    #### Before anything is done, go to the apache/src
    #### directory and edit Configuration.tmpl; add and uncomment entries as needed.
    #### Answer y to just configure apache with mod_perl,
    ###      but don't let mod_perl build apache. 
    ### Add
    #       AddModule modules/mod_snake/libmod_snake.a
    # to the end of src/Configuration.tmpl in apache. 
    
    ### Then do this stuff. 
    
    cd mysql-3.22.32
    mkdir /usr/local/mysql
    ./configure --prefix=/usr/local/mysql
    make
    make install
    
    cd ../postgresql-7.0.2/src
    mkdir /usr/local/postgresql
    ./configure --prefix=/usr/local/postgresql --with-perl --with-odbc
    make
    make install
    chown -R postgres /usr/local/postgresql
    
    ### Just configure apache, but don't build it
    cd ../apache_1.3.12
    make clean
    ./configure --prefix=/usr/local/apache
    
    cd ../mod_snake-0.2.0
    ./configure  --prefix=/usr/local/mod_snake --with-apache=/usr/local/src/apache_1.3.12
    make
    make install
    
    ### Do not have perl build apache, just let it configure apache 
    cd ../mod_perl-1.24
    make clean
    perl Makefile.PL EVERYTHING=1 APACHE_PREFIX=/usr/local/apache
    make test
    make install
    
    ### Build apache
    cd ../apache_1.3.12
    make
    make install
    
    ### For those of you who might try to get php3 working with php4, 
    ### you can try to get the stuff installed, but I got the error
    #Syntax error on line 208 of /usr/local/apache/conf/httpd.conf:
    #Cannot load /usr/local/apache/libexec/libphp3.so into server: /usr/local/apache/libexec/libphp3.so: undefined symbol: dlst_first
    #/usr/local/apache/bin/apachectl start: httpd could not be started
    ### Thanks to Chad Cunningham for mentioning you can get both php3 and php4
    ### working in Apache at the same time. I still had some errors, 
    ### so I just abandoned php3. 
    
    #cd ../php-3.0.16
    #./configure  \
    #--enable-versioning \
    #--with-pgsql=/usr/local/postgresql \
    #--with-mysql=/usr/local/mysql \
    #--with-config-file-path=/usr/local/apache/ --enable-track-vars \
    #--with-apxs=/usr/local/apache/bin/apxs --with-xml
    #make
    #make install
    
    cd ../php-4.0.1pl2
    ./configure  \
    --enable-versioning \
    --with-pgsql=/usr/local/postgresql \
    --with-mysql=/usr/local/mysql \
    --with-config-file-path=/usr/local/apache/ --enable-track-vars \
    --with-apxs=/usr/local/apache/bin/apxs --with-xml
    make
    make install
    
    cd ..
    mv Zope-2.2.0-src /usr/local/Zope
    chown -R nobody /usr/local/Zope
    cd /usr/local/Zope
    ### Setup the password and remember this password
    su - nobody -c 'python wo_pcgi.py'
    ### My hack to get a password I can remember, very bad security risk
    su - nobody -c 'python zpasswd.py -u mark -p Something'
    
    ### Start the web servers
    su - nobody -c '/usr/local/Zope/start' & 
    chown -R nobody /usr/local/apache
    /usr/local/apache/bin/apachectl start
    
    #### Execute this command so that php can find the mysql libraries
    
    ln -s /usr/local/mysql/lib/mysql/libmysqlclient.so.6.0.0 /usr/lib/libmysqlclient.so.6
    
    ### Put this in your startup script for apache.
    ### this gets PHP working. 
    
    LD_LIBRARY_PATH=/usr/local/postgresql/lib
    export LD_LIBRARY_PATH
    PATH=$PATH:/usr/local/postgresql/bin
    export PATH
    export LIBDIR=/usr/local/postgresql/lib
    
    /usr/local/apache/bin/apachectl start
    
    
    ### REMEMBER to initialize the database for postgresql and mysql.
    ### Execute the shell commands above
    ### For postgresql,
    
    mkdir /usr/local/postgresql/data
    chown -R postgres /usr/local/postgresql
    cd /usr/local/postgresql
    su postgres -c '/usr/local/postgresql/bin/initdb -D /usr/local/postgresql/data'
    /usr/local/postgresql/bin/pg_ctl -D /usr/local/postgresql/data start
    
    ### To initialize MySQL 
    cd /usr/local/src/mysql-3.22.32
    chown -R postgres  /usr/local/mysql
    su postgres -c 'scripts/mysql_install_db'
    su postgres -c '/usr/local/mysql/bin/safe_mysqld' &
    ### Remember to change the password for the server and to setup
    ### permissions for other users. 
    
    #### Remember to setup permissions in MySQL and PostgreSQL for
    #### the username that Zope and Apache run under. 
    
    #### Here are some httpd.conf file options I put at the bottom
    
     
     SetHandler perl-script
     PerlHandler Apache::OutputChain Apache::SSIChain Apache::Registry 
     PerlSendHeader On
     Options ExecCGI
     
    
    AddType application/x-httpd-php4 .php4 
    
    ### I haven't yet done anything with the mod_snake module. 
    
    #### These are my files and directories in /usr/local/src
    Apache-OutputChain-0.07           Zope-2.2.0-src
    Apache-OutputChain-0.07.tar.gz    Zope-2.2.0-src.tgz
    Apache-SSI-2.13                   apache_1.3.12
    Apache-SSI-2.13.tar.gz            apache_1.3.12.tar.gz
    ApacheDBI-0.87                    mod_perl-1.24
    ApacheDBI-0.87.tar.gz             mod_perl-1.24.tar.gz
    DBD-CSV-0.1023.tar.gz             mod_snake-0.2.0
    DBD-ODBC-0.28.tar.gz              mod_snake-0.2.0.tar.gz
    DBD-Oracle-1.06.tar.gz            mysql-3.22.32
    DBD-Pg-0.95.tar.gz                mysql-3.22.32.tar.gz
    DBD-XBase-0.161.tar.gz            php-3.0.16
    DBI-1.14                          php-3.0.16.tar.gz
    DBI-1.14.tar.gz                   php-4.0.1pl2
    Install                           php-4.0.1pl2.tar.gz
    Install~                          postgresql-7.0.2
    Msql-Mysql-modules-1.2214         postgresql-7.0.2.tar.gz
    Msql-Mysql-modules-1.2214.tar.gz
    
    

    Conclusion

    Well, there is no conclusion, just comments.
    Combining these programs with CVS, and storing any changes you make to your database with CVS (like writing down stored procedures, formatting of tables, webpages, etc.), can be a very powerful combination for newbies and experienced dudes alike.
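    For instance, building on the CVS walk-through elsewhere in this issue, you might keep schema dumps and notes under source control like this (a rough sketch; the directory, database name and module name are made up, and the pg_dump path assumes the install prefix used above):

    cd ~/webdb-notes
    /usr/local/postgresql/bin/pg_dump -s mydb > schema.sql    # schema-only dump of the "mydb" database
    cvs import -m "Initial import" webdb-notes Revision1 start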

    I have been telling people for years to get into the technologies above. I have noticed an explosion of jobs requiring these skills in the past few years. Even if you won't be using this stuff in the future, these programs have all the tools to get you familiar with the concepts, which are transferable to any related commercial software package. The biggest obstacle I had to learning these skills, and to getting relatively good jobs, was the lack of practice and the lack of software to goof around on. Years ago, when I wanted to get into web/database design, an MS system would have cost me well over $10,000 even as a student. Linux was around. It was all free. It had all the stuff: all the programming languages, web servers, database stuff, and networking capabilities. So here I am.

    The key to being a web/database developer is getting good at one scripting language, Javascript, HTML, generic SQL, Apache, and managing one free database server, and also CVS. From there, learn the other scripting languages, Zope, and other database servers. If you learn all of these skills on Linux, a lot of the software, like Apache, PostgreSQL, Perl, Python, and Zope, is also ported to NT and many Unices. Thus, you can use it easily on the other platforms.

    Hope this helps beginners, and don't hurt me for my opinions!

    References

    1. apache.org
    2. zope.org
    3. php.net
    4. python.org
    5. perl.com
    6. postgresql.org
    7. mysql.com

    Mark works as a computer guy at The Computer Underground and also at ZING and also at GNUJobs.com (soon).


    Copyright © 2000, Mark Nielsen
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Introduction to Shell Scripting

    By Ben Okopnik


    "When the only hammer you have is C++, the whole world looks like a thumb."
    -- Keith Hodges

    The Myth of Design; the Mystery of Color

    At this point in the series, we're getting pretty close to what I consider the upper limit of basic shell scripting; there are still a few areas I'd like to cover, but most of the issues involved are getting rather, umm, involved. A good example is the `tput' command that I'll be covering this month: in order to really understand what's going on, as opposed to just using it, you'd need to learn all about the "termcap/terminfo" controversy (A.K.A. one of the main arguments in the "Why UNIX Sucks" debate) - a deep, involved, ugly issue (for a fairly decent and simple explanation, see Hans de Goede's fixkeys.tgz, which contains a neat little "HOWTO". For a more in-depth study, the Keyboard-and-Console-HOWTO is an awesome reference on the subject). I'll try to make sense despite the confusion, but be warned...

    Affunctionately Yours

    The concept of functions is not a difficult one, but is certainly very useful: they are simply blocks of code that you can execute under a single label. Unlike a script, they do not spawn a new subshell but execute within the current one. They can be used within a script, or stand-alone.

    Let's see how a function works in a shell script: (text version)

     
    #!/bin/bash
    #
    # "venice_beach" - translates English to beach-bunny
    
    function kewl ()        # Makes everything, like, totally rad, dude!
    {
         [ -z "$1" ] && {
            echo "That was bogus, dude."
            return
         }
    
         echo "Like, I'm thinkin', dude, gimme a minute..."
         sleep 3
         echo $@', dude!'
         # While the function runs, positional parameters ($1, etc.)
         # refer to those given the function - not the shell script.
    }
    
    clear
    
    kewl $(echo "$@"|tr -d "[:punct:]")    # Strip off all punctuation
    

    This, umm, incredibly important script should print the "I'm thinkin'..." line followed by a thoroughly mangled list of parameters:

    Odin:~$ venice_beach Right on
    Like, I'm thinkin', dude, gimme a minute...
    Right on, dude!
    
    Odin:~$ venice_beach Rad.
    Like, I'm thinkin', dude, gimme a minute...
    Rad, dude!
    
    Odin:~$ venice_beach Dude!
    Like, I'm thinkin', dude, gimme a minute...
    Dude, dude!
    

    Functions may also be loaded into the environment, and invoked just like shell scripts; we'll talk about sourcing functions later on. For those of you who use Midnight Commander, check out the "mc ()" function described in their man page - it's a very useful one, and is loaded from ".bashrc".

    Important item: functions are created as "function pour_the_beer () { ... }" or "pour_the_beer () { ... }" (the keyword is optional); they are invoked as "pour_the_beer" (no parentheses). Also, be very careful (as in, _do not_ unless you really mean it) about using an "exit" statement in a function: since you're running the code in the current shell, this will cause you to exit your current (i.e. the "login") shell! Exiting a shell script this way can produce some very ugly results, like a `hung' shell that has to be killed from another VT (yep, I've experimented). The statement that will terminate a function without killing the shell is "return".
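    As a tiny illustration, here's a function you might drop into ".bashrc" (a made-up example; the function name and messages are arbitrary). Note that it uses "return", not "exit", to bail out early:

    pour_the_beer ()
    {
        [ -z "$1" ] && { echo "Usage: pour_the_beer <number_of_pints>"; return 1; }
        echo "Pouring $1 pint(s). Cheers!"
    }

    Once it's sourced (via "source .bashrc", or just by logging in again), "pour_the_beer 2" can be run exactly like a command, and the early "return" leaves your login shell alone.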

    Single, Free, and Easy

    Everything we've discussed in this series so far has a common underlying assumption: that the script you're writing is going to be saved and re-used. For most scripts, that's what you'd want - but what if you have a situation where you need the structure of a script, but you're only going to use it once (i.e., don't need or want to create a file)? The answer is - Just Do It:

    Odin:~$ (
    > echo
    > [ $blood_caffeine_concentration -lt 5ppm ] && {
    > echo $LOW_CAFFEINE_ERROR
    > while [ $coffee_cup != "full" ]
    > do
    > brew ttyS2 # Enable coffeepot via /dev/ttyS2
    > echo "Drip..."
    > sleep 1m
    > done
    > echo
    >
    > while [ $coffee_cup != "empty" ]
    > do
    > sip_slowly # Coffee ingestion binary, from coffee_1.6.12-4.tgz
    > done
    > }
    >
    > echo "Aaaahhh!"
    > echo
    > )
    Coffee Not Found: Operator Halted!
    Drip...
    Drip...
    Drip...
    
    Aaaahhh!
    
    Odin:~$
    

    Typing a `(' character tells "bash" that you'd like to spawn a subshell and execute, within that subshell, the code that follows - and this is what a shell script does. The ending character, `)', obviously tells the subshell to 'close and execute'. For an equivalent of a function (i.e., code executed within the current shell), the delimiters are `{' and `}'.
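    For instance, a quick throwaway that runs in the current shell might look like this (a made-up example):

    Odin:~$ { date; uptime; } > ~/status.txt

    Since the braces execute the code in the current shell rather than in a subshell, any variables set inside them are still there afterwards; that's exactly the difference from the `(...)' form.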

    Of course, something like a simple loop or a single 'if' statement doesn't even require that:

    Odin:~$ for fname in *.c
    > do
    > echo $fname
    > cc $fname -o $(basename $fname .c)
    > done
    

    "bash" is smart enough to recognize a multi-part command of this type - a handy sort of thing when you have more than a line's worth of syntax to type (not an uncommon situation in a 'for' or a 'while' statement). By the way, a cute thing happens when you hit the up-arrow to repeat the last command: "bash" will reproduce everything as a single line - with the appropriate semi-colons added. Clever, those GNU people...

    No "hash-bang" ("#!/bin/bash") is necessary for a one-time script, as it would be at the start of a script file. You know that you're executing this as a "bash" subshell (at least I _hope_ you're running "bash" while writing and testing a "bash" script...), whereas with a script file you can never be sure: the user's choice of shell is a variable, so the "hash-bang" is necessary to make sure that the script uses the correct interpreter.

    The Best Laid Plans of Mice and Men

    In order to write good shell scripts, you have to learn good programming. Simply knowing the ins and outs of the commands that "bash" will accept is far from all there is - the first step of problem resolution is problem definition, and defining exactly what needs to be done can be far more challenging than writing the actual script.

    One of the first scripts I ever wrote, "bkgr" (a random background selector for X), had a problem - I'd call it a "race condition", but that means something different in Unix terminology - that took a long time and a large number of rewrites to resolve. "bkgr" is executed as part of my ".xinitrc":

    ...
    # start some nice programs
    bkgr &
    rxvt-xterm -geometry 78x28+0+26 -tn xterm -fn 10x20 -iconic &
    coolicon &
    icewm
    

    OK, by the book - I background all the processes except the last one, "icewm" (this way, the window manager keeps X "up", and exiting it kills the server). Here was the problem: "bkgr" runs, and "paints" my background image on the root window; fine, so far. Then, "icewm" runs - and paints a greenish-gray background over it (as far as I've been able to discover, there's no way to disable that other than hacking the code).

    What to do? I can't put "bkgr" after "icewm" - the WM has to be last. How about a delay after "bkgr", say 3 seconds... oh, that won't work: it would simply delay the "icewm" start by 3 seconds. OK, how about this (in "bkgr"):

    ...
    while [ -z "$(ps ax|grep icewm)" ] # Check via 'ps' if "icewm" is up
    do
        sleep 1                        # If not, wait, then loop
    done
    ...
    

    That should work, since it'll delay the actual "root window painting" until after "icewm" is up!


    It didn't work, for three major reasons.

    Reason #1: try the above "ps ax|grep" line, from your command line, for any process that you have running; e.g., type

    ps ax|grep init
    

    Try it several times. What you will get, randomly, is either one or two lines: just "init", or "init" and the "grep init" as well, where "ps" manages to catch the line that you're currently executing!

    Reason #2: "icewm" starts, takes a second or so to load, and then paints the root window. Worse yet, that initial delay varies - when you start X for the first time after booting, it takes significantly longer than subsequent restarts. "So," you'd say, "make the delay in the loop a bit longer!" That doesn't work either - I've got two machines, an old laptop and a desktop, and the laptop is horribly slow by comparison; you can't "size" a delay to one machine and have it work on both... and in my not-so-humble opinion, a script should be universal - you shouldn't have to "adjust" it for a given machine. At the very least, that kind of tuning should be minimized, and preferably eliminated completely.

    One of the things that also caused trouble at this point is that some of my pics are pretty large - e.g., my photos from the Kennedy Space Center - and take several seconds to load. The overall effect was to allow large pics to work with "bkgr", whereas the smaller ones got overpainted - and trying to stretch the delay resulted in a significant built-in slowdown in the X startup process, an untenable situation.

    Reason #3: "bkgr" was supposed to be a random background selector as well as a startup background selector - meaning that if I didn't like the original background, I'd just run it again to get another one. A built-in delay any longer than a second or so, given that a pic takes time to paint anyway, was not acceptable.

    What a mess. What was needed was a conditional delay that would keep running as long as "icewm" wasn't up, then a fixed delay that would cover the interval between the "icewm" startup and the "root window painting". The first thing I tried was creating a reliable `detector' for "icewm":

    ...
    delay=0
    X="$(ps ax)"
    
    while [ $(echo $X|grep -c icewm) -lt 1 ]
    do
       [ $delay -eq 0 ] && (delay=1; sleep 3)
       [ $delay -eq 1 ] && sleep 1
       X="$(ps ax)"
    done
    ...
    

    '$X' gets set to the value of "$(ps ax)", a long string listing all the running processes which we check for the presence of "icewm" as the loop condition. The thing that makes all the difference here is that "ps ax" and "grep" are not running at the same time: one runs inside (and just before) the loop, the other is done as part of the loop test (a nifty little hack, and well worth remembering). This registers a count of only one "icewm" if it is running, and none if it is not. Unfortunately, due to the finicky timing - specifically the difference in the delays between an initial X startup and repeated ones - this wasn't quite good enough. Lots of experimentation later, here's a version that works:

    ...
    delay=0
    until [ ! $(xv -root -quit /usr/share/Eterm/tiny.gif) ]
    do
        delay=1
        sleep 1
    done
    [ $delay -eq 1 ] && sleep 3
    ...
    

    What I'm doing here is loading a 1x1-pixel image and checking to see if "xv" has managed to do so successfully; if it has not, I continue looping. Once it has - and this only means that X has reached a point where it will accept those directives from a program - I stick in a 3 second delay (but only if we've done the loop; if "icewm" is already up, no delay is necessary or wanted). This seems to work very well no matter what the "startup count" is. Running it this way, I have not had a single image "overpainted", or a delay of longer than a second or so. I was a bit concerned about the effect of all those "xv"s running one after another, but timing the X startup with and without "bkgr" put that to rest: I found no measurable difference (as a guess, when "xv" exits with an error code it probably doesn't take much in the way of resources).

    Note that the resulting script is only slightly longer than the original - what took all this time was not writing some huge, complex magical fix but understanding the problem and defining the solution... even though it was a strange one.

    There are a number of programming errors to watch out for: "race conditions" (a security concern, not just a time conflict), the `banana problem', the `fencepost/Obi-Wan error'... (Yes, they do have interesting names; a story behind each one.) Reading up on a bit of programming theory would benefit anyone who's learning to write shell scripts; if nothing else, you won't be repeating someone else's mistakes. My favorite reference is an ancient "C" manual, long out of print, but there are many fine reference texts available on the net; take a peek. "Canned" solutions for standard programming errors do exist, tend to be language-independent, and are very good things to have in your mental toolbox.
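
    (If you've never met the fencepost error, here's a two-line illustration of my own - not from any of the scripts above. Ask yourself how many numbers each loop prints before you run it:

    for n in $(seq 0 10); do echo $n; done   # prints 11 numbers - 0 through 10
    for n in $(seq 1 10); do echo $n; done   # prints 10 numbers - 1 through 10

    Counting the posts instead of the gaps between them is exactly the mistake that bites loops, array indices and date arithmetic alike.)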

    Coloring Fun with Dick and Jane

    One of the things that I used to do, way back when in the days of BBSs and the ASCII art that went with them, is create flashy opening screens that moved and bleeped and blinked and did all sorts of things - without any graphics programming or anything more complicated than those ASCII codes and ANSI escape sequences (they could get complicated enough, thank you very much), since all of this ran on pure text terminals. Linux, thanks to the absolutely stunning results of work done by Jan Hubicka and his friends (if you have not seen the "bb" demo of "aalib", you're missing out on a serious acid trip. As far as I know, the authorities have not yet caught on, and it's still legal), has far outstripped everything even the fanciest ASCII artist could come up with back then ("Quake" and fractal generators on text-only terminals, as two examples).

    What does this have to do with us, since we're not doing any "aalib"-based programming? Well, there are times when you want to create a nice-looking menu, say one you'll be using every day - and if you're working with text, you'll need some specialized tools:

    1) Cursor manipulation. The ability to position it is a must; being able to turn it on and off, and saving and restoring the position are nice to have.

    2) Text attribute control. Bold, underline, blinking, reverse - these are all useful in menu creation.

    3) Color. Let's face it: plain old B&W gets boring after a bit, and even something as simple as a text menu can benefit from a touch of spiffing up.

    So, let's start with a simple menu: (text version)

    #!/bin/bash
    #
    # "ho-hum" - a text-mode menu
    
    clear
    
    while [ 1 ]         # Loop `forever'
    do
    # We're going to do some `display formatting' to lay out the text;
    # a `here-document', using "cat", will do the job for us.
    
    cat << !
    
                M A I N   M E N U
    
            1. Business auto policies
            2. Private auto policies
            3. Claims
            4. Accounting
            5. Quit
    
    !
    echo -n " Enter your choice: "
    
    # Why have I not followed standard indenting practice here? Because
    # the `here-doc' format requires the delimiter ('!') to be on a line
    # by itself - and spaces or tabs count. Just consider everything as
    # being set back one indent level, and it'll all make sense.
    
    read choice
    
    case $choice
    in
        1|B|b) bizpol ;;
        2|P|p) perspol ;;
        3|C|c) claims ;;
        4|A|a) acct ;;
        5|Q|q) clear; exit ;;
        *) echo; echo "\"$choice\" is not a valid option."; sleep 2 ;;
    esac
    
    clear
    done
    

    If you copy and paste the above into a file and run it, you'll realize why the insurance business is considered deadly dull. Erm, well, one of the reasons, I guess. Bo-o-ring ( apologies to one of my former employers, but it's true...) Not that there's a tremendous amount of excitement to be had out of a text menu - but surely there's something we can do to make things just a tad brighter! (text version)

    #!/bin/bash
    #
    # "jazz_it_up" - an improved text-mode menu
    
    tput civis        # Turn off the cursor
    
    while [ 1 ]
    do
        echo -e '\E[44;38m'    # Set colors: bg=blue, fg=white
        clear                  # Note: colors may be different in xterms
        echo -e '\E[41;38m'    # bg=red
    
        for n in `seq 6 20`
        do
            tput cup $n 15
            echo " "
        done
    
        echo -ne '\E[45;38m'    # bg=magenta
        tput cup 8 25 ; echo -n " M A I N   M E N U "
        echo -e '\E[41;38m'     # bg=red
    
        tput cup 10 25 ; echo -n " 1. Business auto policies "
        tput cup 11 25 ; echo -n " 2. Private auto policies "
        tput cup 12 25 ; echo -n " 3. Claims "
        tput cup 13 25 ; echo -n " 4. Accounting "
        tput cup 14 25 ; echo -n " 5. Quit "
    
        # I would have really liked to make the cursor invisible here -
        # but my "xterm" does not implement the `civis' option for "tput"
        # which is what does that job. I could experiment and hack it
        # into "terminfo"... but I'm not *that* ambitious.
    
        echo -ne '\E[44;38m'     # bg=blue
        tput cup 16 28 ; echo -n " Enter your choice: "
        tput cup 16 48
    
        read choice
        tput cup 18 30
    
        case $choice
        in
            1|B|b) bizpol ;;
            2|P|p) perspol ;;
            3|C|c) claims ;;
            4|A|a) acct ;;
            5|Q|q) tput sgr0; clear; exit ;;
            *) tput cup 18 26; echo "\"$choice\" is not a valid option.";
                sleep 2 ;;
        esac
    done
    

    This is NOT, by any means, The Greatest Menu Ever Written - but it gives you an idea of basic layout and color capabilities. Note that the colors may not work exactly right in your xterm, depending on your hardware and your "terminfo" version - I did this as a quick slapdash job to illustrate the capabilities of "tput" and "echo -e". These things can be made portable - "tput" variables are common to almost everything, and color values can be set based on the value of `$TERM' - but this script falls short of that. These codes, by the way, are basically the same for DOS, Linux, etc., text terminals - they're dependent on hardware/firmware rather than the software we're running. Xterms, as always, are a different breed...

    So, what's this "tput" and "echo -e" stuff? Well, in order to `speak' directly to our terminal, i.e., give it commands that will be used to modify the terminal characteristics, we need a method of sending control codes. The difference between these two methods is that while "echo -e" accepts "raw" escape codes (like '\E[H\E[2J', which homes the cursor and clears the screen), "tput" calls them as `capabilities' ("tput clear" does the same thing as "echo -e" with the above code) and is (theoretically) term-independent (it uses the codes in the terminfo database for the current term-type). The problem with "tput" is that most of the codes for it are as impenetrable as the escape codes that they replace: things like `civis' ("make cursor invisible"), `cup' ("move cursor to x y"), and `smso' ("start standout mode") are just as bad as memorizing the codes themselves! Worse yet, I've never found a reference that lists them all... well, just remember that the two methods are basically interchangeable, and you'll be able to use whatever is available. The "infocmp" command will list the capabilities and their equivalent codes for a given terminal type; when run without any parameters, it returns the set for the current term-type.
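
    A couple of concrete equivalences may make that clearer; these assume a typical ANSI/Linux console (xterms, as noted, may differ), and the second pair relies on the fact that `cup' counts rows and columns from 0 while the raw code counts from 1:

    tput clear              # same effect as:  echo -en '\E[H\E[2J'
    tput cup 5 10           # same effect as:  echo -en '\E[6;11H'

    infocmp -1 | grep cup   # show the escape sequence terminfo uses for `cup' on this terminal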

    Colors and attributes for an ISO6429 (ANSI-compliant) terminal, i.e., a typical text terminal, can be found in the "ls" man page, in the "DISPLAY COLORIZATION" section; xterms, on the other hand, vary so much in their interpretation of exactly what a color code means, that you basically have to "try it and see": (text version)

    #!/bin/bash
    #
    # "colsel" - a term color selector
    
    for n in `seq 40 47`
    do
        for m in `seq 30 37`
        do
            echo -en "\E[$m;${n}m"
            clear
            echo $n $m
            read
        done
    done
    

    This little script will run through the gamut of colors of which your termtype is capable. Just remember the number combos that appeal to you, and use them in your "echo -e '\E[<bg>;<fg>m'" statements.

    Note that the positions of the numbers within the statement don't matter; also note that some combinations will make your text into unreadable gibberish ("12" seems to do that on most xterms). Don't let it bother you; just type "reset" or "tput sgr0" and hit "Enter".

    Wrapping It Up

    Hmm, I seem to have made it through all of the above without too much pain or suffering; amazing. :) Yes, some of the areas of Linux still have a ways to go... but that's one of the really exciting things about it: they are changing and going places. Given the amazing diversity of projects people are working on, I wouldn't be surprised to see someone come up with an elegant solution to the color code/attribute mess.

    Next month, we'll cover things like sourcing functions (pretty exciting stuff - reusable code!), and some really nifty goodies like "eval" and "trap". Until then -

    Happy Linuxing to all!

    Linux Quote of the Month

    "The words `community' and `communication' have the same root. Wherever you put a communications network, you put a community as well. And whenever you take away that network - confiscate it, outlaw it, crash it, raise its price beyond affordability - then you hurt that community.

    Communities will fight to defend themselves. People will fight harder and more bitterly to defend their communities, than they will fight to defend their own individual selves."
    -- Bruce Sterling, "Hacker Crackdown"

    References

    The "man" pages for 'bash', 'builtins', 'tput', 'infocmp', 'startx'
    "Introduction to Shell Scripting - The Basics", LG #53
    "Introduction to Shell Scripting", LG #54
    "Introduction to Shell Scripting", LG #55


    Copyright © 2000, Ben Okopnik
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Security Scanners

    By Kapil Sharma


    "A scanner is a program that automatically detects security weaknesses in a remote or localhost.". Scanners are important to Internet security because they reveal weaknesses in the network. System administrators can strengthen the security of networks by scanning their own networks. The primary attributes of a scanner should be:

    • The capability to find a machine or network.
    • The capability to find out what services are being run on the host (once having found the machine).
    • The capability to test those services for known holes.

    There are various tools available for Linux system scanning and intrusion detection. I will describe some of the best-known ones, divided into three categories:

    1. Host Scanners
    2. Network Scanners
    3. Intrusion Scanners

    Host scanners
    Host scanners are software you run locally on the system to probe for problems.

    Cops
    COPS is a collection of security tools that are designed specifically to aid the typical UNIX systems administrator, programmer, operator, or consultant in the oft neglected area of computer security. COPS is available at: http://www.fish.com/cops

    Tiger
    Tiger is a UNIX security checker: a package of Bourne shell scripts, C code and data files used to check for security problems on a UNIX system. It scans system configuration files, file systems, and user configuration files for possible security problems and reports them. You can get it from: http://www.giga.or.at/pub/hacker/unix

    check.pl
    Check.pl is a Perl script that looks through your entire filesystem (or just the directory you tell it to) for suid, sgid, sticky, and writable files. You should run it as a regular user, perhaps once a week, to check for permission problems. It will output a list of questionable files to stdout, which you can redirect wherever you like. It's available at: http://opop.nols.com/proggie.html.
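
    If you just want a quick manual check along the same lines, a rough (and much cruder) equivalent using nothing but standard "find" might look like this - the output file names are arbitrary:

    # list setuid/setgid files, then world-writable regular files
    find / -type f \( -perm -4000 -o -perm -2000 \) -ls > suid-sgid.list
    find / -type f -perm -0002 -ls > world-writable.list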

    Network scanners
         Network scanners are run from a host and pound away on other machines, looking for open services. If you can find them, chances are an attacker can too. These are generally very useful for ensuring your firewall works.

    NSS (Network Security Scanner):
         NSS is a perl script that scans either individual remote hosts or entire subnets of hosts for various simple network security problems. It is extremely fast. Routine checks that it can perform include the following:

    • sendmail
    • Anonymous FTP
    • NFS exports
    • TFTP
    • hosts.equiv
    • Xhost

        NSS can be found at: http://www.giga.or.at/pub/hacker/UNIX

    SATAN (Security Administrator's Tool for Analyzing Networks):
         SATAN is an automated network vulnerability search and report tool that provides an excellent framework for expansion. SATAN scans remote hosts for most known holes:

    • FTPD vulnerabilities and writable FTP directories
    • NFS vulnerabilities
    • NIS vulnerabilities
    • RSH vulnerability
    • sendmail
    • X server vulnerabilities

         SATAN performs these probes automatically and presents the information in an extremely easy-to-use package. You can obtain SATAN from: http://www.fish.com/satan/

    Strobe:
         Strobe is a "Super optimised TCP port surveyor": a network/security tool that locates and describes all listening TCP ports on a (remote) host, or on many hosts, in a manner that maximises bandwidth utilisation and minimises process resources. It is simple to use and very fast, but doesn't have any of the features newer port scanners have.
    Strobe is available at: ftp://suburbia.net/pub/.

    Nmap:
        Nmap is a newer and much more fully-featured host scanning tool.
    Specifically, nmap supports:

    • Vanilla TCP connect() scanning
    • TCP SYN (half open) scanning
    • TCP FIN, Xmas, or NULL (stealth) scanning
    • TCP FTP proxy (bounce attack) scanning
    • SYN/FIN scanning using IP fragments (bypasses some packet filters)
    • TCP ACK and Window scanning
    • UDP raw ICMP port unreachable scanning
    • ICMP scanning (ping-sweep)
    • TCP Ping scanning
    • Direct (non-portmapper) RPC scanning
    • Remote OS identification by TCP/IP fingerprinting
    • Reverse-ident scanning

    Nmap is available at: http://www.insecure.org/nmap/index.html.
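
    As a taste of typical usage (the options below are standard nmap switches - check "nmap -h" on your version), a SYN scan of a private subnet with OS fingerprinting, and a plain connect() scan of a few ports on one host, would look something like:

    nmap -sS -O 10.0.0.0/24               # SYN scan plus OS detection; needs root
    nmap -sT -p 21,23,25,80,110 10.0.0.2  # connect() scan of selected ports; works as any user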

    Network Superscanner:
        http://members.tripod.de/linux_progz/

    Portscanner:
         PortScanner is a network utility especially designed to "scan" for listening TCP ports. It uses a simple method to achieve its goal, and it is extremely compact, taking into account all of the options available. It's open source and free to use; you can get it at: http://www.ameth.org/~veilleux/portscan.html.

    Queso:
         Queso is a tool to detect what OS a remote host is running, with a pretty good degree of accuracy. Using a variety of valid and invalid TCP packets to probe the remote host, it checks the responses against a list of known responses for various operating systems and will tell you which OS the remote end is running. You can get Queso from: http://www.apostols.org/projectz/queso/.

    Intrusion Scanners
         Intrusion scanners are software packages that will actually identify vulnerabilities, and in some cases allow you to actively try and exploit them.

    Nessus:
         Nessus is very fast and reliable, and has a modular architecture that allows you to fit it to your needs. Nessus is one of the best intrusion scanning tools. It has a client/server architecture: the server currently runs on Linux, FreeBSD, NetBSD and Solaris, and clients are available for Linux and Windows, plus a Java client. Nessus supports port scanning and attacking based on IP addresses or host name(s). It can also search through network DNS information and attack related hosts at your request. Nessus is available from http://www.nessus.org/.

    Saint:
         SAINT is the Security Administrator's Integrated Network Tool. Saint also uses a client/server architecture, but uses a www interface instead of a client program. In its simplest mode, it gathers as much information about remote hosts and networks as possible by examining such network services as finger, NFS, NIS, ftp, tftp, rexd and statd. Saint produces very easy to read and understand output, with security problems graded by priority (although not always correctly), and it also supports add-in scanning modules, making it very flexible. Saint is available from: http://www.wwdsi.com/saint/.

    Cheops:
         Cheops is useful for detecting a host's OS and for dealing with a large number of hosts quickly. Cheops is a "network neighborhood" on steroids: it builds a picture of a domain or IP block - what hosts are running and so on. It is extremely useful for preparing an initial scan, as you can locate interesting items (HP printers, Ascend routers, etc.) quickly. Cheops is available at: http://www.marko.net/cheops/.

    Ftpcheck / Relaycheck:
         Ftpcheck and Relaycheck are two simple utilities that scan for ftp servers and mail servers that allow relaying. These are available from: http://david.weekly.org/code/.

    BASS:
        BASS is the "Bulk Auditing Security Scanner" allows you to scan the Internet for a variety of well known exploits. You can get it  from: http://www.securityfocus.com/data/tools/network/bass-1.0.7.tar.gz

    Firewall scanners:
         There are also a number of programs now that scan firewalls and execute other penetration tests in order to find out how a firewall is configured.

    Firewalk:
         Firewalking is a technique that uses traceroute-like probes to analyze IP packet responses in order to determine gateway ACL filters and map networks. Firewalk, the tool, employs this technique to determine the filter rules in place on a packet-forwarding device. System administrators should use it against their own systems to tighten up security. Firewalk is available from: http://www.packetfactory.net/Projects/Firewalk/.

    Conclusion:

    "Security is not a solution, it's a way of life". System Administrators must continuously scan their systems for security holes and fix the hole on detection. This will tighten the security of system and reduce the chance of security breaches. This process is a continuous process. The security vulnerabilities will keep on arising and process of fixing the security holes will never end! After all, "Precaution is better than cure".


    Copyright © 2000, Kapil Sharma
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Making a Simple Linux Network Including Windows 9x

    By Juraj Sipos


    I decided to write this article because often, when I read HOWTOs or general help texts, I find it difficult to navigate the sea of information, because I usually need to know only a little. Instead, I'm compelled to read hundreds of pages of text to find an answer.

    I will give you an example. This article will help you make a simple network between two computers, but I haven't been able to get mail working between the two machines. The sendmail configuration is very complicated, and before I find a solution I would have to read many pages of text. But I actually need only a few words, something like: "Put this in the sendmail configuration file and you're done". I do not want to say that I will not be able to solve this, but time is money and I often have to do other things for a living. People like me like examples that can be applied without problems, so I also give such examples in this article. I will not tell you to put 20.0.0.0 in your /etc/hosts, as some authors writing about configuring a home network do, because that is not a private network IP number. I will give you the numbers and expressions I actually use, with the assurance that they work for me. Please do not email me with questions like "my Linux doesn't see a parallel port", "I cannot connect to another machine from the second parallel port..." or questions that are answered in this article - but do mail me if you know how to configure sendmail!

    This article also expects some work and study on your part, and assumes you are not using some prehistoric kernel or hardware.

    I would like to comment on the following "Help Wanted" question from Linux Gazette, August 2000.

    Hi there, My name Sergey, I use Slackware 7.0. I have read a document about serial connection between Win95 and Linux, but never have seen about parallel. Help me, tell me how can I use my parallel NULL modem. Is lp0 the LPT1 port? Thanks, Sergey

    I successfully connected my two computers following the PLIP Install HOWTO (which is very well written), but I ran into some problems that I had to solve using my own creativity. Furthermore, the "kernel reconfiguration" part of the PLIP HOWTO may be misleading for newbies, because reconfiguration is not necessary. So here I will give you some more detailed information on connecting two Linux machines with PLIP or network-card interfaces, and I will also mention connecting a Windows machine to a Linux box.

    I use RedHat 6.0 because it seems to be the most stable Linux on my computer configuration. I have quite a complicated setup: I use FreeBSD, Linux, DOS/Windows, a Zip drive, and other things that are quite complicated (for example, my C drive is totally encrypted, so I boot with a password and then a boot manager is started). I have 8 OS's on two hard disks, including OS/2 and BeOS, plus loop boot disk files (Linux booting from a large file). I tested the network connection (PLIP, and NE2000 network cards and compatibles) on RedHat 6.0, 6.1, 6.2 and SuSE 6.4.

    PLIP

    If you have one of the above-mentioned systems, you don't have to compile a kernel to use PLIP. This may also work for Mandrake, but I haven't tested it. The generic Linux kernel (the one you have right after Linux is installed) is sufficient. To make a PLIP connection, do the following:

    1. get a laplink parallel cable
    2. install Linux on both machines (I hope you have two machines:) with appropriate network services (inet, etc.)
    3. open /etc/hosts with your favorite text editor on both computers and put the following lines there:
      127.0.0.1               localhost localhost.localdomain
      10.0.0.1                one
      10.0.0.2                two
      
      10.0.0.0 is a private network IP address that will not interfere with your Internet connection.
    4. Go to BIOS setup of the second computer and change "Halt on all Errors" to "halt on all Errors except Keyboard".
    5. edit /etc/rc.d/rc.local on the second computer and put there this somewhere at the end:
      modprobe plip
      ifconfig plip0 two pointopoint one up
      

    (Linux will automatically assign the plip0 interface to lp0/LPT1, so you never have to mention lp0 in the PLIP configuration. But if you want to use a printer on that port, issue "ifconfig plip0 down", then remove the plip module from the kernel with the rmmod command. Don't mail me with questions about printer problems.)
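
    For completeness, switching the port back to printing amounts to the two commands just mentioned, assuming nothing else is using the plip interface at that moment:

    ifconfig plip0 down    # take the PLIP interface down
    rmmod plip             # unload the module and free the parallel port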

    Thus, the second computer will start automatically, without a keyboard, with the plip interface up. Restart it to see if it boots. You don't need to worry about looking at it - you will be able to telnet, ftp or ssh to it from the other computer (though you can attach a keyboard to it if you like; the important thing is that it is able to boot Linux). Here is a problem where the PLIP HOWTO did not help me: the command "modprobe plip" did not work, and the system gave me something like "unresolved symbols ... device busy". I realized that the problem was Linux's inability to assign an IRQ number to my PLIP interface. I solved the issue with the following command:

    echo 7 > /proc/parport/0/irq
    

    This command writes the IRQ number into the kernel's /proc interface, and plip will then work. But you must run the "modprobe plip" command first anyway, because that is what creates the parport directory under /proc. So: run "modprobe plip" once, change the IRQ number to 7 in /proc/parport/0/irq, then run "modprobe plip" again - the second run will install plip properly. This also worked on my SuSE system. So if you receive an error like "unresolved symbols..." or "device busy" on the second computer, put these commands in your /etc/rc.d/rc.local:

    modprobe plip
    echo 7 > /proc/parport/0/irq
    ifconfig plip0 two pointopoint one up
    
    Obviously, the same commands must also be issued on the first computer, except that "ifconfig plip0 two pointopoint one up" changes to "ifconfig plip0 one pointopoint two up"; so on the first computer you will issue:
    modprobe plip
    echo 7 > /proc/parport/0/irq
    ifconfig plip0 one pointopoint two up
    
    You can run this manually, put it in rc.local, or make a script out of these commands (for use on your computer ONE). Now you should be able to telnet, ssh, ping or ftp to the second machine (or browse it with lynx, if an Apache server is running there), and telnet back from the second machine to the first. But first check that you have working network daemons running. Then try "telnet two" or "telnet 10.0.0.2".

    NOTE: Because telnet does not allow you to log in as root, create another user without root privileges on both systems.
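
    For example - the user name "netuser" is only an illustration, and useradd options vary slightly between distributions - on each machine you might do:

    useradd -m netuser    # create an unprivileged user with a home directory
    passwd netuser        # give it a password to use for telnet/ftp logins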

    Connecting Windows machine and Linux box

    PLIP

    There is a PLIP driver for DOS/Windows: ftp://ftp.crynwr.com/drivers/plip.zip. The problem with this driver is that it does not work with Windows 9x (it is supposed to work with DOS), but you may give it a try. This issue remains open for software development; I was not successful in making a link between Linux and Windows machines via a parallel cable.

    If you have a computer with Windows 9x that you do not want to tamper with, try a Linux emulator like Bochs to install Linux under Windows (ftp.bochs.com, www.bochs.com), or make a loop boot file. A HOWTO for making a file with booting abilities is included in the Linux HOWTO documentation under the title "Loopback-Root-FS"; it is an excellent article. Then you do just the same as I describe here with your loop file.

    Network Cards

    I would suggest using thinnet, because it is the cheapest network - about the price of a good, long parallel cable. This network is officially called 10Base 2. You need a coaxial cable with BNC connectors, tees and terminators. A tee looks something like a T: the bottom of the tee plugs into the NIC (Network Interface Card), and the two ends opposite each other are used for connecting to computers and terminators.

     here you will              > ------|------  <        here you will
     put a coaxial cable                |              put a terminator
     going to another computer          |
                                        |
    The bottom of the TEE will be connected to NIC in your computer.
    

    If you have only two computers, each of them gets a terminator in its tee: the bottom of the tee is already in the NIC, one end of the crossbar takes the coaxial cable going to the other computer, and the remaining end takes a terminator. If you have 5 computers in the order "1comp 2comp 3comp 4comp 5comp", then 1comp and 5comp must have terminators in their tees, while 2comp, 3comp and 4comp have coaxial cables on both sides of their tees. Your network will not work without terminators.

    Generally, there are three types of widely used networks:

    10Base 5 - a little bit old.
    10Base 2 - the one I write about here; the coaxial cable may be up to
               185 meters long, and you may have up to 30 computers per
               segment.
    10BaseT  - twisted pair; requires a hub and may be expensive for you.
    

    I would suggest buying NE2000 cards (10Mbit) or compatibles. I have had no problems with them so far, and you can buy each for less than 10 dollars. Do the same as I already said - put your network information in the /etc/hosts file:

    127.0.0.1               localhost localhost.localdomain
    10.0.0.1                one
    10.0.0.2                two
    
    Make sure the NICs are well seated in their slots. Run a diagnostic program (rset8029.exe in my case). Change the configuration from 10BaseT (which is the default) to 10Base 2 on both network cards. Restart. Then run these commands on computer one:
    modprobe ne2k-pci
    ifconfig eth0 one
    
    Run these commands on computer two:
    modprobe ne2k-pci
    ifconfig eth0 two
    
    Now, running "ifconfig", you should see something like this on both computers:
    eth0    Link encap:Ethernet  HWaddr 52:54:AB:1F:7A:51
              inet addr:10.0.0.1  Bcast:10.255.255.255  Mask:255.0.0.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              Interrupt:11 Base address:0xe800
    
    I read some network HOWTOs that recommended using the "route add" command, but it appeared to be unnecessary here.
    [2.2 kernels don't need "route add". 2.0 kernels do. 2.2 kernels automatically add the standard route; i.e., the route necessary for this IP to connect to its local network. -Ed.]

    This network with two computers works just fine. Now you can telnet, ssh, ping or ftp to the second machine and vice versa. If you want to use NFS, read the NFS HOWTO - or simply follow my instructions: install the NFS daemon, then edit the /etc/exports file (or create it if it is not there) and put the following line in it:

    / two(rw)
    
    Run rpcinfo -p to check if it works. You should see something like:
    program vers proto   port
      100000    2   tcp   111   rpcbind
      100000    2   udp   111   rpcbind
      100024    1   udp   903   status
      100024    1   tcp   905   status
      100011    1   udp   914   rquotad
      100011    2   udp   914   rquotad
      100005    1   udp   924   mountd
      100005    1   tcp   926   mountd
      100005    2   udp   929   mountd
      100005    2   tcp   931   mountd
      100005    3   udp   934   mountd
      100005    3   tcp   936   mountd
      100003    2   udp   2049  nfs
    
    If you do not see mountd or nfs there, the NFS packages were not installed properly (install knfsd* and netkit-base-*, which contains inetd). Every change to the /etc/exports file must be followed by the "exportfs -av" command. Run it, then simply run "mount -t nfs two:/ /mnt -o rsize=8192,wsize=8192,nolock" and you should have a working NFS. Just peek in the /mnt directory - you should see your second machine's filesystem there. If you have problems with write permissions, open up the permissions for everybody on the second computer (for one directory only; you can do this over telnet).
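
    If you would rather have that NFS mount come back after every boot instead of typing the mount command by hand, an /etc/fstab line along these lines (same options as above; this is my addition, not part of the HOWTO steps) should do it:

    two:/    /mnt    nfs    rsize=8192,wsize=8192,nolock    0 0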

    Communication between Linux and Windows

    Linux and Windows (and other systems too) can communicate with each other via network cards. Do the following: Windows should install your network card automatically; if not, open Control Panel, Add Hardware, and add it from there. I had problems with PCI steering, where Windows assigned a different IRQ number to my NIC than the BIOS did, so my card didn't work - and here Windows showed itself to be total zero-dollar crap, because a simple user would be completely lost. Check the IRQ number under the System icon in Control Panel; if it is different from the one the BIOS assigned to your card, remove the PCI steering entry. Here Linux is much more flexible. If Windows sees your network card, the appropriate network drivers should be installed automatically. Once they are installed, open Control Panel, Network, and edit TCP/IP > Realtek RTL8029 Ethernet Adapter and Compatibles (or another NIC, if you have one) under Properties. Click "Specify an IP Address" and set the address on the IP Address tab to 10.0.0.1; the Subnet Mask should be 255.255.255.0. That's all - restart the computer. Windows ships with C:\windows\hosts.sam and C:\windows\lmhosts.sam; copy them, or create new files, in C:\windows as hosts and lmhosts, and put the following lines in both of them:

    c:\windows\hosts
    c:\windows\lmhosts
    127.0.0.1               localhost localhost.localdomain
    10.0.0.1                one
    10.0.0.2                two
    
    Now, if you telnet to the Linux machine from the Windows machine, you can run Linux commands and programs - you will have Linux running in Windows. I had no time to test an X server, but most console commands work. If you have an Apache server running on Linux, you can reach the Linux box from Netscape with this URL:
    http://two
    
    If you have a user account, but not anonymous ftp access, on your Linux box, you can ftp to Linux from Netscape by using this URL: ftp://user:password@two (where "user" is a working user you added to your Linux box; it cannot be root).

    A parallel cable, if long, may be quite expensive. The cheap network cards I mentioned here may be only a little more expensive - and bought second-hand, they may be even cheaper than a long parallel laplink cable. So I would suggest using network cards similar to mine to get a simple, working home network. Such a connection is also considerably faster than a parallel-port cable.

    If you're in trouble, you can always look in the Linux HOWTOs (PLIP, NET, etc.). If something doesn't work, consider testing your hardware under a different OS.

    BTW, does anybody out there know how to configure sendmail to handle mail on a home network like this?


    Copyright © 2000, Juraj Sipos
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    More Web Tools

    By Martin Skjoldebrand


    Mail and notes

    I got some mail after my latest article on HTML editors, some of it from people talking about an editor called SCREEM. As a matter of fact, just after sending off the August article I recalled a listing on Freshmeat about an HTML editor called Screem, so I had already had a look - and it seems very promising. I've included a look at it here.

    Dmitry Poplavsky from the Quanta team wrote to point out a couple of things: the extended ASCII problem is real, although not always reproducible. He mentions one workaround for this bug:

    Sometimes setting export LC_ALL = <your language> solve problem, but not every time.

    Other points are that you can update the preview by hitting F5, which is pretty obvious from looking at the menus but something I missed. What is not as clear - and certainly not as intuitive - is the tag completion feature:

    Maybe you'll find ueful feature of quick list of tag attributes (alt+down key, like tag completion from HomeSite, but still not so smart ;)
    That is also quite true, but hitting ALT+DOWN instead of SPACE is not very well thought out, IMHO. Also, the function in Quanta is a bit less potent than the one found in SCREEM. (Check out the review of the latter here.)

    Someone wrote me saying I might enjoy Bluefish in its new incarnation. I just downloaded version 0.5, and the general impression is: more. Of everything, really. But I still miss the file browsers and tag completion of Quanta and SCREEM. There are more wizards, more menus and a new toolbar which is user-customizable. At least the form wizard bug has gone away, and the other minor things I noticed last time seem to have been taken care of too. There is new support for WML, and the DTD section now includes XHTML. And there are new bugs. Try (for example) this handy trick to shut down your HTML editor: write an HTML document, go to the Frames tab, and click on the Frameset button. Hope you saved first, though.

    Tool tip of the month: wget and Kwebget

    One problem facing me recently was getting one of my sites onto my new home computer (my old one having gone into retirement as a Windows game box for my 4-year-old). The prospect of having to use FTP to transfer all the files from the web server was not appetizing: I've done that once and found it slow, and for us unfortunate people with dial-up accounts payable by the minute, costly. The obvious alternative is to find a tool to automate the process. Fortunately there are quite a few such tools around - do a search for "web mirror" or "web site copy" on Freshmeat to get an idea of the number of applications.

    In a Mandrake 7.1 installation you may already have two of these installed. The command-line alternative is wget (1.5.3), a tool for grabbing whole web sites, which gets a lot of mentions on newsgroups and similar fora. I haven't tried it, as I do most of my computing in an X environment. There are a gazillion options you can set to tweak its performance to suit your needs, however, as a screenshot of the --help output shows.
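
    For the record, a typical "grab my site" run looks something like this (the long options below are standard wget switches, but an older release may spell some of them differently - "wget --help" will tell you; the URL is obviously just a placeholder):

    # mirror a site, rewrite the links so the local copy is browsable offline,
    # and pause one second between requests to be polite to the server
    wget --mirror --convert-links --wait=1 http://www.example.com/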

    What I have tried, however, is Kwebget (0.5), which is a frontend to wget. What is great about Kwebget is summarized in its help file: "It has nearly the same functionallity, but you still don't have to type such long commandline-arguments. It's main use is to download whole sites to your local harddisk, and to browse them, while you're offline." This tool would apparently fit the bill nicely. It is designed for use in KDE, although running KDE isn't necessary (KDE must be installed, however).

    Kwebget comes fitted with a wizard to help you grab a site as painlessly as possible. It is, however, also possible to run it in "advanced" mode, the most advanced form of which lets you select exactly which options you wish to pass to the underlying wget. (Why not call it manual mode rather than use Microsoft-speak?) Anyway, Kwebget is a nice tool for those of us who have not grown up with the Unix command line and are still learning to use and script those tools to do approximately what we want them to do. (Recently I finally got a grip on grep...)

    Did it actually prove to be faster than FTP, and less costly? Well, at least I think so. It definitely is great to just say "go get my site" to Kwebget, and it does so quite rapidly. The speed increase (at least when dealing with my server) comes from the fact that there are no pauses while the FTP server moves into each and every directory to list and get files.

    Finally

    This completes the phase in which I've reviewed some HTML editors; I've also looked at tools to automatically download an entire site onto your own machine. If you feel these articles are useful, and/or have some tip for a future article - drop me a line!

    BTW, here is the review of SCREEM again, so you don't have to scroll all the way up to the link.


    Copyright © 2000, Martin Skjoldebrand
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    CD-Writing with an ATAPI CDR Mini-HOWTO

    By Chris Stoddard


    This document provides the least amount of information needed to get an ATAPI CDR running under Linux.

    Table of Contents

    • 1. Introduction
    • 2. Installing the Hardware
    • 3. Loading the Drivers
    • 4. Quick and Dirty Burn
    • 5. Final Note

    1. Introduction

    The documentation for getting a CDR up and running under Linux is an excellent piece of work; however, if all you want to do is burn a few MP3s, it can be overwhelming. I'm going to attempt to boil installing and configuring a CDR down to as few steps as necessary to get the job done. I will be focusing only on ATAPI drives, as SCSI drives work well without too many problems. The information here was culled from 4 hours of trial and error, as well as from the CD-Writing-HOWTO.

    2. Installing the Hardware

    The first thing to do, even before you buy the drive, is to check the hardware compatibility list. Things will go easier for you if your drive is listed. If it is not, don't despair: my Iomega ZIPCD is not on the list, but still works fine. Any ATAPI CDR "should" work - should being the key word here.

    I'm not going into much detail about installing the hardware; if you don't know how to install your drive or you don't understand what I'm talking about, please get someone to help you. I have found I get the best results by jumpering the drive to slave and connecting it to the same IDE cable as the CDROM. Make sure your BIOS recognizes the new drive, and when the system boots make sure it is recognized as an ATAPI drive - if it is not, this will never work. After the system boots up, check your kernel messages to see if the drive is properly recognized by typing "dmesg | grep ATAPI" at the command line. You should see something like this:

      hdc: FX162N, ATAPI CDROM drive
      hdd: ZIPCD 4x650, ATAPI CDROM drive
      hdc: ATAPI 16X CD-ROM drive, 128kB Cache
      scsi0 : SCSI host adapter emulation for IDE ATAPI devices
    

    Make note of the device name Linux gives your drive, my ZIPCD is recognized as hdd by the kernel, this becomes important later.

    3. Loading the drivers

    Before getting started, you must be able to log into the system as root to perform these steps. Make sure cdrecord and mkisofs are installed on the system: type "rpm -q cdrecord mkisofs", which will tell you whether the packages are installed; if they are not, you will need to install them. Also make sure the ide-scsi module is present - to verify this, type "ls -lR /lib | grep ide-scsi". If the module is not present you will need to recompile the kernel, which is beyond the scope of this document. We now need to get the proper drivers installed and loading at boot time. Open /etc/rc.d/rc.local, add the following line to the end of the file, then save and exit:

      /sbin/insmod ide-scsi
    

    Next we need to configure the drivers so they work properly. Open /etc/conf.modules and add the following lines at the bottom:

      alias scd0 sr_mod
      alias scsi_hostadapter ide-scsi
      options ide-cd ignore=hdd
    

    On the final line, notice I have used the device name of my ZIPCD; replace hdd with the device name of your CDR. Save the file and exit. To associate the driver with the proper drive, open /etc/lilo.conf and add the following line, right before or right after the "root=" line:

      append="hdd=ide-scsi" 
    

    Save the file and exit, rerun lilo by typing "/sbin/lilo" at the command line. Now reboot the system. Once it has come back up type "dmesg" at the command line, if all went well the last few lines should look similar to this;

      scsi0 : SCSI host adapter emulation for IDE ATAPI devices
      scsi : 1 host.
        Vendor: IOMEGA    Model: ZIPCD 4x650       Rev: 1.04
        Type:   CD-ROM                             ANSI SCSI revision: 02
      Detected scsi CD-ROM sr0 at scsi0, channel 0, id 0, lun 0
      sr0: scsi3-mmc drive: 24x/24x writer cd/rw xa/form2 cdda tray
      VFS: Disk change detected on device ide1(22,0)
    

    You should now be able to use cdrecord, to test this, type "cdrecord -scanbus" at the command line, output should look something like this;

    Cdrecord release 1.8a29 Copyright (C) 1995-1999 Jorg Schilling
    scsibus0:
       0,0,0       0) 'IOMEGA  ' 'ZIPCD 4x650     ' '1.04' Removable CD-ROM
       0,1,0       1) *
       0,2,0       2) *
       0,3,0       3) *
       0,4,0       4) *
       0,5,0       5) *
       0,6,0       6) *
       0,7,0       7) *
    

    Please note the three numbers separated by commas to the left of where your drive is listed. These numbers will be used in cdrecord's command line. If you get an error message, try going over the steps again and make sure you have the correct device name for the CDR. Read the CD-Writing-HOWTO, there are a few tricks to try listed in this file. If it still does not work it is possible your drive is incompatible.

    4. Quick and dirty Burn

    To burn a CD you will need to log in as root; if you want any user to be able to burn CDs, type the following command: "chmod +s /usr/bin/cdrecord". Burning a CD in Linux is a two-step process: first you must make the image, which is done with mkisofs. The syntax for mkisofs is:

      mkisofs -r -o image.img /folder/to/burn/
    

    Make a new directory and copy all the files you want to burn into it. As an example, I made a directory in my home directory called mp3 and copied about 600MB worth of MP3s into it. To make my image I typed the following:

      mkisofs -r -o mp3_cd.img /home/chris/mp3/
    

    After a few moments I had a 600MB CD image named mp3_cd.img. The second step is burning the image to the CD, which is done with cdrecord. To burn my image I typed the following:

      cdrecord -v speed=4 dev=0,0,0 -data mp3_cd.img
    

    The speed option should be set to the highest speed your drive will take; mine is a 4x burner, older drives may only be 1x or 2x, and newer drives can be up to 8x or even 12x. The dev option comes from "cdrecord -scanbus", which we ran earlier: my drive appeared next to 0,0,0, so you should use whatever numbers your drive appears next to. Again, several minutes later I had a newly burned CD. For further information on mkisofs and cdrecord and their many options, please read the documentation.
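
    Once you are comfortable with the two-step process, you can also pipe the two programs together and skip the temporary image file; the CD-Writing-HOWTO describes this "on the fly" method. Whether your machine can feed the burner fast enough is another question, so treat this as an optional experiment rather than the recommended route:

      mkisofs -r /home/chris/mp3/ | cdrecord -v speed=4 dev=0,0,0 -

    The lone "-" at the end tells cdrecord to read the track data from standard input.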

    5. Final Note

    To use the drive as a normal CDROM, you must keep in mind that the system now thinks your drive is a SCSI device: the device name is no longer hdd, it is now scd0. Go into the /dev directory and make a link by typing "ln -s scd0 cdr", then go to /mnt and type "mkdir cdr". Move to the /etc directory, open fstab, and add the following line right under the entry for the cdrom:

      /dev/cdr     /mnt/cdr     iso9660 noauto,owner,ro 0 0
    

    Now you can mount the drive the same way you mount the cdrom, something like "mount -t iso9660 /dev/cdr /mnt/cdr".

    [There is another article about CD recording in this issue. -Mike.]


    Copyright © 2000, Chris Stoddard
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    Hash Tables in Java

    By Ben Tindale


    Hash tables in Java

    A hash table is conceptually a contiguous section of memory with a number of addressable elements, commonly called bins, in which data can be quickly inserted, deleted and found. Hash tables represent a sacrifice of memory for the sake of speed - they are certainly not the most memory efficient means of storing data, but they provide very fast lookup times. Hash tables are a common means of organising data, so the designers of the Java programming language have provided a number of classes for easily creating and manipulating instances of hash tables.

    Hashtable is the class which provides hash tables in Java. Hashtable inherits directly from Dictionary and implements the Map, Cloneable and Serializable interfaces. The implementation of the Map interface is new with Java 1.2. You can view the documentation on Hashtable here.

    A key is a value that can be mapped to one of the addressable elements of the hash table. The Java programming language provides an interface for generating keys for a number of the core classes: as an example, the snippet below prints out the key representation of a string for later use in a hash table.

         String abc = new String("abc");
         System.out.println("Key for \"abc\" is "+ abc.hashCode());
    

    A hashing function is a function that performs some operation on a set of data such that a key is generated from that data with the key being used as the means of identifying which memory element of the hash table to place the data in. There are a number of properties that it is desirable for a hashing function to possess in order that the hash table be effectively used:

    • Data should be dispersed as randomly as possible across the hash table to minimise the chances of a collision. For example, a good hashing function would place the letter ``b'' fairly far from the letter ``a''.
    • The hashing function should execute in a reasonable period.
    Unfortunately, as we shall see below, the hashing functions provided by Java do not satisfy the first criterion.

    The load factor of a hash table is defined as the ratio of the number of filled bins to the total number of bins available. A bin is full when it points to or contains a data element. The load factor is a useful parameter to use in estimating the likelihood of a collision occurring. The Java programming language will allocate more memory to an existing hash table if the load factor exceeds 75%. The user can also choose to set the initial capacity of the hash table, with the aim of reducing the number of rehashing operations required. The code snippet below demonstrates how this can be achieved.

           int initialCapacity = 1000;
       float loadFactor = 0.80f;
           Hashtable ht = new Hashtable(initialCapacity, loadFactor);
    
    If you want to allocate more space for your hash table before the load factor reaches the specified value, you can force a rehash - but note that rehash() is declared protected in java.util.Hashtable, so it can only be called from code within a Hashtable subclass, not as "ht.rehash()" from outside. In practice, the simplest way to avoid frequent rehashing is to choose a generous initial capacity, as above.
    

    A collision occurs when two pieces of data are assigned the same key by the hashing function. Since the point of using a hash table is to maximise the efficiency with which data is inserted, deleted or found, collisions are to be avoided as much as possible. If you know the hashing function used to create a key, it can be very easy to create collisions. For example, the Java code below illustrates how two different strings can have the same hashcode. (text version)

    import java.util.*;
    import java.lang.*;
    
    // 31*x + x  ==  31*(x-1) + (x+31), so "BB" and "Aa" ('A' = 'B'-1, 'a' = 'B'+31) hash identically
    public class Identical
    {
        public static void main(String[] args)
        {
    	String s1 = new String("BB");
    	String s2 = new String("Aa");
    	System.out.println(s1.hashCode());
    	System.out.println(s2.hashCode());
        }
    }
    
    This code generates the following output on my RedHat 6.2 box using the kaffe compiler.
    [bash]$ javac Identical.java
    [bash]$ java Identical
    2112
    2112
    [bash]$
    

    Chaining is a method of dealing with collisions in a hash table by imagining that each bin is a pointer to a linked list of data elements. When a collision occurs, the new data element is simply inserted into this linked list in some manner. Similarly, when attempting to remove an element from a bin with more than one entry, the list is followed until the element matching the one to be deleted is found. Actually, there is no need for collisions to be dealt with solely by using a linked list - a data structure such as a binary tree could also be used. The Hashtable class in the Java foundation classes uses chaining to insert elements into a hash table. Interestingly, using chaining means that a hash table can have a load factor that exceeds 100%.

    Open addressing occurs when all of the elements in a hash table are stored in the table itself - no pointers to linked lists: each bin holds one piece of data. This is the simplest means of implementing a hash table, since the table reduces to being an array where the elements can be inserted or deleted from any index position at any time.

    Linear probing is a means of implementing open addressing by choosing the next free bin to insert a data element into if a collision occurs while trying to insert data. Each of the subsequent bins is checked for a collision before the data is inserted.

    The String class contains a method hashCode() which is used to generate a key which can used as the key for a hash table. The hashcode for a String object is computed as
    s[0]*31^(n-1)+s[1]*31^(n-2)+...+s[n-1]
    using integer arithmetic and where s[i] is the ith character of a string of length n. The hash value of an empty string is defined as zero.

    I've included a small sample program called CloseWords which finds words in the system dictionary which are ``close'' to the command line argument. To do this, the program explicitly exploits one of the traits of the String class's hashing function, which is that the hashcode generated tends to cluster together words of similar alphanumeric composition. This is actually an undesirable trait, since if the input data is composed of a limited set of characters then there will tend to be a large number of collisions. The ideal hashing function would distribute the data randomly over the hash table, without trends in the input data leading to an overall tendency to cluster.

    Another limitation of the hashCode method is that by making the key an int, the designers of Java limited the number of possible key values to 2^32, meaning that the probability of a collision occurring is much larger than if the key were represented by a 64-bit data type.

    The Hashtable class and methods supplied in the Java Foundation Classes are a powerful tool for data manipulation - particularly when rapid data retrieval, searching or deleting are required. For large data sets, however, the implementation of the hashing functions in Java will cause a tendency for clustering - which will unnecessarily slow down execution. A better implementation of a hash table would involve a hashing function which distributed data more randomly and a longer data type used for the key.

    References and links

    For a more complete discussion of the limitations of hash tables in Java and a better implementation see.

    Java is a superbly documented language - check it out at SUN.

    For information on the open source Kaffe compiler visit the website.

    CloseWords

    Note: you can grab the source code here
    import java.lang.*;
    import java.util.*;
    import java.io.*;
    
    /** CloseWords: Exploit the clustering tendency of the native hashCode() method
     * in the String class to find words that are "close" to the argument.
     */
    public class CloseWords
    {
        static Hashtable ht;
        static String currString;
    
        /** In the code below we create an instance of the Hashtable class in which to store
         * our hash of all of the words in the system dictionary (yes, this is a very memory
         * inefficient way of indexing the words).
         * 
         * @param args 
         */
        public static void main(String[] args)
        {
    	ht = new Hashtable();
    	try
    	    {
    		DataInputStream in = new DataInputStream(
    							 new BufferedInputStream(
    										 new FileInputStream("/usr/dict/words")));
    		while((currString = in.readLine())!=null)
    		    ht.put(new Integer(currString.hashCode()), currString);
    
    		int i = args[0].hashCode();
    		int found=0;
    
    		while(found < 5)
    		    {
    			i--;
    			if(ht.get(new Integer(i))!=null)
    			    {
    				System.out.println(ht.get(new Integer(i)));
    				found++;
    			    }
    		    }
    		i = args[0].hashCode();
    		found=0;
    
    		while(found < 5)
    		    {
    			i++;
    			if(ht.get(new Integer(i))!=null)
    			    {
    				System.out.println(ht.get(new Integer(i)));
    				found++;
    			    }
    		    }
    	    }
    	catch(IOException ioe)
    	    {
    		    System.out.println("IO Exception");
    	    }
        }
    }
    


    Copyright © 2000, Ben Tindale
    Published in Issue 57 of Linux Gazette, September 2000

    "Linux Gazette...making Linux just a little more fun!"


    The Back Page


    About This Month's Authors


    Matthias Arndt

    I'm Matthias Arndt, a 20-year-old Linux enthusiast from Germany, and I will start studying computer science in conjunction with economics in October. I like good old Rock'n'Roll and, of course, Linux. I love writing stories and have always wanted to write for a computer magazine.

    Shane Collinge

    Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.

    Fernando Correa

    Fernando is a computer analyst who is about to graduate from the Federal University of Rio de Janeiro. He and his staff have built the best Linux portal in Brazil, and they have further plans to improve services and content for their Internet users.

    Pat Eyler

    "pate" is a linux/unix/networking geek who enjoys playing on the command line. When he's not puttering with or writing about computers and networks he likes to play with his kids, cook, and read. Talk to him at pate@gnu.org.

    Daniel Feenberg

    Daniel Feenberg is an economist and IT director at the National Bureau of Economic Research, a non-profit foundation in Cambridge, where his main work is the analysis of the behavioral effects of income taxation. He first used Unix (Bell Labs Version 6) in the mid 1970s on a PDP-11 with 3 glass teletypes and 10 megabytes of disk storage. But it ran 'ed' fine.

    Ray Ferrari

    I am a newbie to the Linux community, but have been following its rise in popularity for almost two years now. I continue to learn as much as possible about the Linux operating system in an attempt to become proficient and knowledgeable in its use as an Internet platform. I have volunteered on the Debian mailing list, and I continue to assist the Linux Professional Institute (LPI) in promoting its agenda. I help staff the LPI booths at events and write articles covering their achievements.

    Eric Kasten

    I'm a software developer by day and an artist, web developer, big dog, gardener and wine maker by night. This all leaves very little time for sleep, but always enough time for a nice glass of Michigan Pinot Gris. I have a BS double major in Computer Science and Mathematics and an MS in Computer Science. I've been using and modifying Linux since the 0.9x days. I can be reached via email at kasten@sunpuppy.com or through my website at http://www.sunpuppy.com.

    Mark Nielsen

    Mark founded The Computer Underground, Inc. in June of 1998. Since then, he has been working on Linux solutions for his customers ranging from custom computer hardware sales to programming and networking. Mark specializes in Perl, SQL, and HTML programming along with Beowulf clusters. Mark believes in the concept of contributing back to the Linux community which helped to start his company. Mark and his employees are always looking for exciting projects to do.

    Ben Okopnik

    A cyberjack-of-all-trades, Ben wanders the world in his 38' sailboat, building networks and hacking on hardware and software whenever he runs out of cruising money. He's been playing and working with computers since the Elder Days (anybody remember the Elf II?), and isn't about to stop any time soon.

    Kapil Sharma

    Kapil Sharma is a Linux/Unix and Internet security consultant. He has been working on various Linux and Unix systems for more than two years. He provides commercial support for Linux/Unix systems and writes technical articles. His professional web site is linux4biz.net.

    Juraj Sipos

    I live and work in Bratislava, Slovakia as a library information worker, translator and research reader at the Institute for Child Psychology. I have published some of my poetry here and in the USA, and I have translated some books from English (e.g., Zen Flesh, Zen Bones by Paul Reps). You can see some of my stories and poetry at http://www.crosswinds.net/~aproximetri/index.htm. Computers are my hobby.

    Martin Skjöldebrand

    Martin is a former archaeologist who now does system administration for a 3rd world aid organisation. He also does web design and has been playing with computers since 1982 and Linux since 1997.

    Chris Stoddard

    I work for Dell Computer Corporation doing "not Linux" stuff. I have been using computers since 1979 and I started using Linux sometime in 1994, exclusively since 1997. My main interests are networking implementations, servers, security, Beowulf clusters, etc. I hope someday to quit my day job and become the Shepherd of a Linux Farm.

    Ben Tindale

    I'm working full time for Alcatel Australia on various xDSL technologies and writing Java-based web apps. I've currently taken a year off from studying to work, and have just sold my share in an internet cafe I helped to found. So much to learn, so little time :)


    Not Linux


    Here are some of the more amusing spams LG received this month:

    • "I am searching for only 10 elite individuals with the work ethic necessary to generate a cash-flow for themselves of $2,000 - $5,000per week."

    • "Somos XXXXX - COLOMBIA, ONG que trabaja en favor de los Niños de las Calles Colombianas. Trabajamos por la infancia que ha recibido como herencia un país destruido y roto por la violencia y la corrupción.... En nuestra TIENDA SOLIDARIA encontrara la satisfaccion solidaria a su gusto. En ella entre otras cosas puedes afiliarte a nuestro "Club Niños de Papel" el cual sera nuestra gran base humana, para conseguir la recuperación de los niños de la calle de Colombia."
      [I'm not sure of this translation since I'm not a native Spanish speaker, but here goes: "We are XXXXX - Colombia, (NGO, Non-Governmental Organization?), who labor for Colombia's street kids. We work for the child who has inherited a country destroyed by violence and corruption.... In our SOLIDARITY SHOP you'll find the means to satisfy your yearning for solidarity with them. In it among other things you can join our "Paper Children Club", which will be our great human base to help the street kids of Colombia to recover." Hmm, save the children by buying a trinket. And how much of the profits go to the children?]

    • "Dear Friend! How are you today? We hope you're doing great. We are happy to announce that our famous Secret Wealth Money-Making System is now being released again to the general public."

    • "This is a legitimate business announcement, being sent in compliance with all rules & regulations that govern internet commerce. We respect your privacy. You received this email because we are on the same opt in list or we've had contact in the past."

    • "^[$BFMA3$N%a!<%k$K$F!"BgJQ<:Ni$$$?$7$^$9!#^[(B"

    • "CONFIDENTIAL! The SOFTWARE They Want BANNED In all 50 STATES. Why? Because these secrets were never intended to reach your eyes... Get the facts on anyone! Locate Missing Persons, find Lost Relatives, obtain Addresses and Phone Numbers of old school friends, even Skip Trace Dead Beat Spouses. This is not a Private Investigator, but a sophisticated SOFTWARE program DESIGNED to automatically CRACK YOUR CASE with links to thousands of Public Record databases."

    • "We need a bank account number to which we will wire US$31 MILLION, of which you will keep 30% as your share...."

    Michael Orr
    Editor, Linux Gazette, gazette@ssc.com


    This page written and maintained by the Editor of the Linux Gazette.
    Copyright © 2000, gazette@ssc.com
    Published in Issue 57 of Linux Gazette, September 2000