...making Linux just a little more fun!

October 2007 (#143):


This month's answers created by:

[ Amit Kumar Saha, Ben Okopnik, Faber Fedor, Kapil Hari Paranjape, René Pfeiffer, MNZ, Neil Youngman, Rick Moen, Suramya Tomar, Mike Orr (Sluggo), Steve Brown, Thomas Adam ]
...and you, our readers!

Editor's Note

You may have noticed the absence of Mailbag, Talkback, and other mail-related features of LG in the previous edition (LG#142) - your Mailbag Editor was engaged in an urgent project launch (the birth of our son, Michael Kaishu Okopnik) just as the LG editorial cycle started. I'm compiling this edition with the "help" of our newest staffer, who waves his first "Hello, World!" to the LG readership.
-- Kat

Our Mailbag

Bash Ref. Manual 8.4.3 Commands for Changing Text

tuxsun1 [tuxsun1 at gmail.com]

Tue, 04 Sep 2007 12:01:58 -0500

I do not understand how to implement negative arguments as documented in the following excerpts from the Bash Reference Manual, Chapter 8. Command Line Editing, Section 8.4 Bindable Readline Commands, sub-section 8.4.3 Commands for Changing Text:

|upcase-word (M-u)|
    Uppercase the current (or following) word. With a negative argument,
    uppercase the previous word, but do not move the cursor.
|downcase-word (M-l)|
    Lowercase the current (or following) word. With a negative argument,
    lowercase the previous word, but do not move the cursor.
|capitalize-word (M-c)|
    Capitalize the current (or following) word. With a negative
    argument, capitalize the previous word, but do not move the cursor.
Pressing ALT-u, ALT-l, or ALT-c to uppercase, lowercase, or capitalize the current (or following) word is very straightforward.

But for the life of me, I'm baffled as to what they mean by "With a negative argument" when it comes to uppercasing, lowercasing, or capitalizing the previous word.

What negative argument? From where? What keystroke(s)? I'm lost as to how to uppercase, lowercase, or capitalize the previous word.

What am I missing in their explanation?

Thank you in advance!
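A pointer that may help here: in Readline, numeric arguments are entered with the Meta (Alt) key and the digit keys, and a negative argument is begun with M-- (that is, Alt+minus). The prefix is then combined with the usual binding:

    M-- M-u    uppercase the previous word, leaving the cursor in place
    M-- M-l    lowercase the previous word
    M-- M-c    capitalize the previous word

In other words, press Alt+minus first, then Alt+u (or l, or c); see the "Specifying Numeric Arguments" section of the same manual.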


[ Thread continues here (2 messages/2.74kB) ]

Sound problems after Kubuntu upgrade

Mike Orr [sluggoster at gmail.com]

Sat, 11 Aug 2007 11:48:07 -0700

My sound isn't working right. How do I fix it?

Seriously, I upgraded from Kubuntu 6.10 to 7.04 in a different root partition, though they're sharing the same home partition. Now where the old system makes a big sound, the new system makes a teensy-weensy little sound. I go back and startle the neighbors; I sally ho (or ho Sally), and I have to wait till the refrigerator shuts off to hear the sound.

The configuration in both appears to be identical. KInfoCenter says it's a VIA 8237 device with ALC655, Type 10 ALSA emulation. KMix has "Master" and "PCM" unmuted and their volumes near maximum. In the Switches tab, all the lights are off except "External Amplifier", which doesn't do anything either way.

The KDE Peripherals dialog has "Enable the sound system", "Run with the highest possible priority", "sound buffer as large as possible", "auto-suspend if idle after 60 seconds", "autodetect audio device", "default quality", "Midi through midi through port-0 - ALSA device". The test sound is quiet and distorted on the new system, but loud and clear on the old one.

Mike Orr <sluggoster@gmail.com>

[ Thread continues here (9 messages/11.29kB) ]

Clock problem

René Pfeiffer [lynx at luchs.at]

Sun, 5 Aug 2007 20:28:53 +0200

[[[ This is a followup to a question published previously. http://linuxgazette.net/141/lg_mail.html - Kat ]]]

On Jul 30, 2007 at 1227 +1200, Jimmy Kaifiti appeared and said:

>Hi , my name is Jimmy,can anyone help me fix the time on my PC.  I
>change the Battery so many time ,I mean the new CMOS Battery ,but my
>time is still not read correct

What hardware are you using? Which operating system do you use on top of this hardware? What did you try to keep the system clock in sync?

Modern PC hardware has several time sources, and not all mainboards have reliable counters for keeping the time in synchronisation. The CMOS battery may be one source of problems, but if you use an operating system with a Linux kernel it will read the time from the nonvolatile RAM on your board only once when booting. If you correct your time after that you may have to write it to the nonvolatile RAM on your board. This can be done with commands such as "hwclock --systohc" or similar.
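For the archives, a typical sequence (assuming a system with the usual ntpdate and hwclock utilities installed, and run as root) would be:

    ntpdate pool.ntp.org     # set the system clock from a public time server
    hwclock --systohc        # copy the corrected system time into the CMOS/NVRAM clock

If the clock still drifts badly after every power-off despite a fresh battery, the RTC circuit itself may be at fault.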

Here is some information on the difficult task of keeping the system time sufficiently accurate:






Best wishes, René.

Things Debian Project Leaders Do

René Pfeiffer [lynx at luchs.at]

Sun, 5 Aug 2007 12:46:49 +0200

[[[ This is a followup to http://linuxgazette.net/141/misc/lg/things_debian_project_leaders_do.html - Kat ]]]

On Jul 27, 2007 at 1004 -0400, Ben Okopnik appeared and said:

> [...]
> I seem to recall a couple of programs in the Debian kit that do
> something like this by feeding random data to apps, but this one seems
> to be a real star. E.g., http://sam.zoy.org/zzuf/lol-firefox.gif crashes
> my Firefox ( very nicely - one-click functionality. :)

It even crashes the latest Iceweasel with the same one-click functionality. Maybe one of the libraries doesn't like it.

> [...] I haven't taught a security class in a
> while, but it seems like 'zzuf' would make a very nice teaching aid
> for when I'm explaining vulnerability discovery. [...]

I have some workshops on this topic in October and November, and thanks to Kapil the students will have a new toy. :)

Best wishes, René, just returned from the Adriatic coast.

[ Thread continues here (3 messages/2.49kB) ]

Extending GIMP Plugins

Amit Kumar Saha [amitsaha.in at gmail.com]

Tue, 21 Aug 2007 15:34:53 +0530

Hello all,

GIMP plugins extend GIMP, we know that.

Is there any way to make GIMP plugins themselves extensible? My point is: can we make a GIMP plugin that an end-user can modify to include new functionality, without much hassle on the part of the end-user? Does anyone know of a method to do this, or of any GIMP plugin which already does this?

I would appreciate even the slightest of insights.

Thanks a lot!


Amit Kumar Saha

[ Thread continues here (2 messages/2.01kB) ]

Latex Editors

Amit Kumar Saha [amitsaha.in at gmail.com]

Sun, 26 Aug 2007 00:36:35 +0530

[[[ This is a followup to http://linuxgazette.net/141/misc/lg/latex_editors.html - Kat ]]]

Hi all,

It's been a long time since this thread was alive. In the meantime, I completed a project report, my first, prepared entirely in LaTeX. I used Kile (kile.sourceforge.net) for the purpose. I was probably too dumb to even try the Vim or Emacs extensions that were suggested.

Oh yes, the report has such a neat, professional look to it. Thanks to LaTeX!

Thank you all for all the suggestions.


Amit Kumar Saha

I have a silly question

dinesh sikhwal [disikh at yahoo.com]

Sun, 19 Aug 2007 03:22:11 -0700 (PDT)

[[[ The original subject for this thread was "Talkback:128/lg_foolish.html". The only truth in that was the foolish part. No connection at all to the alleged inspiration, so I've taken the liberty of renaming it. And who's surprised that this came in as html mail? - Kat ]]]


[ Thread continues here (3 messages/2.41kB) ]

Random Numbers

MNZ [mnzaki at gmail.com]

Tue, 7 Aug 2007 17:48:20 +0300

Hey Gang! A question about computer-"generated" random numbers: in short, how random are they, really?

I was trying to write a little Perl script to play random songs, and I noticed that it tends to repeat a certain collection of songs (which changes every day, or even every couple of hours). So what's really happening?



[ Thread continues here (19 messages/25.62kB) ]


Shuttle-SD39P2

Thomas Adam [thomas.adam22 at gmail.com]

Tue, 14 Aug 2007 22:21:40 +0100

[[[ This is a followup to http://linuxgazette.net/141/misc/lg/shuttle_sd39p2__should_i_buy_one.html - Kat ]]]

---------- Forwarded message ----------

From: Paul <sayhellotopaul@gmail.com>
Date: 14-Aug-2007 22:20
Subject: Shuttle-SD39P2
To: thomas.adam22@gmail.com


I read that you are using the above Shuttle, and I am thinking about getting one myself. I was wondering what graphics card you used, and whether it is so silent that you do not notice it, or whether you can hear it but it is not bothersome at all. I am thinking about getting an 8800GTS, but am not sure whether it will be too loud for the system. I would appreciate your info!


[ Thread continues here (2 messages/3.44kB) ]

C Programming [Debian Issues]

René Pfeiffer [lynx at luchs.at]

Fri, 17 Aug 2007 17:32:13 +0200

Hello, Steve!

On Aug 17, 2007 at 1607 +0100, Steve Brown appeared and said:

> [...]
> Do you know, sometimes getting an answer just allows you to think of
> more questions. I'm off to look into C programming again all because
> of one little email and make as well.

I know what you mean. :) While you are at it, I can recommend the book "Deep C Secrets" written by Peter van der Linden. http://www.taclug.org/booklist/development/C/Deep_C_Secrets.html

The book holds treasures of tips, hints and plain shocks for all who know C and like to revisit their skills. The author packs everything into a kind of technical novel with lots of entertaining examples.

Best, René, who also did a lot of C stuff on the Amiga.

[ Thread continues here (8 messages/7.71kB) ]


Suramya Tomar [security at suramya.com]

Tue, 14 Aug 2007 20:03:17 +0530

Hey Mohsen, sorry, I have no idea what that particular error means, as I have never seen it before. I am cc'ing this email to TAG (The Answer Gang) at Linux Gazette; maybe one of the smart people who frequent the list will be able to help you with this.

TAG: I got this email a few days ago. Any idea what's causing this error and how to fix it?

> Dear Suramya,
> I have read Steve's article (thanks, Steve).
> I want DFS, so I wrote a dfs.cfg, which I have attached.
> I ran the following command:
> dfsbuild -V -c /etc/dfsbuild/dfs.cfg -w /myworkdir
> It works well and retrieves packages, but then I receive the following message:
> **************************************************************
> D: Return code: 0
> D: call action: helper-remove
> P: Deconfiguring helper cdebootstrap-helper-makedev
> D: Execute "dpkg -P cdebootstrap-helper-makedev" in chroot
> O: (Reading database ...
> O: 7141 files and directories currently installed)
> D: Return code: 0
> D: call action: apt-cleanup
> D: Execute "apt-get update" in chroot
> O: Reading packages lists...
> O:
> D: Return code:0
> P: Writing apt sources.list
> [dfs/DEBUG] Saving sources.list
> [dfs/CRITICAL] Exception: (NoOption "mirror","get (i386/mirror)")
> dfsbuild: (NoOption "mirror","get (i386/mirror)")
> *********************************************************************
> I know that return code 0 means success, but the last two or three lines confused me...
> Please help me...

Thanks, Suramya

Name : Suramya Tomar
Homepage URL: http://www.suramya.com

[ Thread continues here (2 messages/2.78kB) ]

Problem with openvpn certificate Revoke

Smile Maker [britto_can at yahoo.com]

Tue, 21 Aug 2007 04:49:24 -0700 (PDT)


I am running openvpn-2.0.9-1 on Fedora Core 4; I installed it as an RPM.

When I try to revoke a certificate from the database, I get the following error:

"Using configuration from /etc/openvpn/easy-rsa/openssl.cnf
error on line 282 of config file '/etc/openvpn/easy-rsa/openssl.cnf'
6819:error:0E065068:configuration file routines:STR_COPY:variable has no value:conf_def.c:629:line 282
Using configuration from /etc/openvpn/easy-rsa/openssl.cnf
error on line 282 of config file '/etc/openvpn/easy-rsa/openssl.cnf'
6820:error:0E065068:configuration file routines:STR_COPY:variable has no value:conf_def.c:629:line 282
cat: crl.pem: No such file or directory "
Can you guys help me out?


[ Thread continues here (2 messages/2.54kB) ]

Debian Issues

Amit Kumar Saha [amitsaha.in at gmail.com]

Fri, 17 Aug 2007 14:45:25 +0530


I have been facing a couple of "weird" issues on Debian 4.0:

1. On every boot, before I can hear the sound of a video song, I have to run "alsaconf". I can see the video and listen to plain audio files, but I get no sound from video songs until I run "alsaconf".

2. GCC reports a linker error when I use "sin", "cos", etc. It is solved when I append "-lm" during compilation.

Waiting for some insights!


Amit Kumar Saha

[ Thread continues here (20 messages/24.13kB) ]

XMMS2 (Was: Random Numbers)

Thomas Adam [thomas at edulinux.homeunix.org]

Wed, 15 Aug 2007 00:19:07 +0100

On Tue, Aug 07, 2007 at 06:03:18PM +0200, René Pfeiffer wrote:

> On Aug 07, 2007 at 1637 +0100, Thomas Adam appeared and said:
> > On 07/08/07, René Pfeiffer <lynx@luchs.at> wrote:
> > > You could probably add a counter per song and check if the selected song
> > > was played more than $n times and prefer a song that was played less
> > > times.
> > 
> > "Collections".  XMMS2 allows for this:
> > 
> > http://wiki.xmms2.xmms.se/index.php/Collections
> Oh, I missed that feature apparently (but I am still "stuck" with
> xmms1, too lazy to upgrade).

It's not a straight upgrade -- the design of XMMS2 is completely different from that of XMMS1. You can't compare the two. In many respects, XMMS2 mimics how MPD works, in that it uses a client/server design.

The GUI tools for it are all still in development. It lacks anything like ncmpc for the console, though, alas. That is something I am considering porting, actually.

-- Thomas Adam

"He wants you back, he screams into the night air, like a fireman going
through a window that has no fire." -- Mike Myers, "This Poem Sucks".

PPM Image Reader in C

Amit Kumar Saha [amitsaha.in at gmail.com]

Fri, 17 Aug 2007 11:47:38 +0530

Hi all,

Has anyone got a "ready-made" PPM image reading program in C? I have to meet a project deadline, so it would be really useful if anyone could provide me with such a program.
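Not a ready-made program, but a minimal sketch of a reader for the plain-ASCII "P3" variant (the Image struct and ppm_read() are names made up for this reply); a production version would also need to handle '#' comment lines, the binary "P6" format, and maxval values above 255:

```c
#include <stdio.h>
#include <stdlib.h>

/* A deliberately small PPM reader: plain "P3" files only,
 * no '#' comments, maxval assumed <= 255. */
typedef struct { int w, h; unsigned char *rgb; } Image;

Image *ppm_read(FILE *fp)
{
    char magic[3] = {0};
    int w, h, maxval;

    /* Header: magic number, width, height, maximum sample value. */
    if (fscanf(fp, "%2s %d %d %d", magic, &w, &h, &maxval) != 4 ||
        magic[0] != 'P' || magic[1] != '3' || w <= 0 || h <= 0)
        return NULL;

    Image *img = malloc(sizeof *img);
    img->w = w;
    img->h = h;
    img->rgb = malloc((size_t)w * h * 3);

    /* Pixel data: w*h triplets of ASCII red, green, blue samples. */
    for (int i = 0; i < w * h * 3; i++) {
        int v;
        if (fscanf(fp, "%d", &v) != 1) {
            free(img->rgb);
            free(img);
            return NULL;
        }
        img->rgb[i] = (unsigned char)v;
    }
    return img;
}
```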


Amit Kumar Saha

[ Thread continues here (6 messages/4.40kB) ]

Talkback: Discuss this article with The Answer Gang

Published in Issue 143 of Linux Gazette, October 2007


This month's answers created by:

[ Ben Okopnik, Rick Moen ]
...and you, our readers!

Editor's Note

These are the e-mail exchanges that led to Rick Moen's Domain Expiration article in LG142 (http://linuxgazette.net/142/moen.html).

Our Mailbag

V. 3.2 results on all possible domains

Rick Moen [rick at linuxmafia.com]

Tue, 14 Aug 2007 14:55:33 -0700

I wrote to Ben, early Monday:

> OK, I've finished testing all 270 TLDs.

271. As an extreme oddity, it turns out that .arpa is both a valid TLD and serves up meaningful port-43 WHOIS data. See: http://en.wikipedia.org/wiki/.arpa (DNS admins know .arpa from their reverse-DNS zonefiles, but there are other portions of the namespace.)

However, .arpa's WHOIS data lacks expiration dates, FWIW.

http://en.wikipedia.org/wiki/Category:Generic_top-level_domains points out a variety of other wacky valid TLDs and proposed TLDs, but none other than .arpa is both valid and serves WHOIS data.

My favourite of the wacky TLDs is .root, described here: http://en.wikipedia.org/wiki/Vrsn-end-of-zone-marker-dummy-record.root

> As noted, .my, .tk, and .tw aren't yet handled, but can be.

Ben, no hurry, but any time you can tweak the regexes for those three final additions, it would be good.

Other readers: http://linuxmafia.com/pub/linux/network/domain-check-testdata has notes, and the latest release of domain-check is in that directory.

[ Thread continues here (6 messages/8.33kB) ]

Domain auto-renew mechanisms

Rick Moen [rick at linuxmafia.com]

Mon, 6 Aug 2007 15:32:50 -0700

I figured this will be of general interest. Ben and I (with Ben doing the real work) have been working, off and on, on a scriptable mechanism for checking a list of Internet domains for impending expirations, and then do some sort of appropriate notification. Ben's Perl script, "domain-check", for this purpose relies on the commodity open-source client for the TCP port 43 "WHOIS" information service, offered by most domain registries.

We've run into a fascinating tangle of complications, most of them (primarily) non-technical. It's been a lovely case study in all of the manifold ways that the interface between people and automated systems can go wrong.

Domain registrars (typically) don't want their customers' domains to expire: That leads to loss of business, after all -- though most are pleased to put expired domains in a "held" state for a while, in which one can ransom them back for significant money. Registrars have a few ways to help customers avoid accidental expiration: One of them is called "auto-renew". With this feature enabled for one's domain (now a default setting at many registrars), the registrar will attempt to charge a renewal fee to the customer's credit card of record, at some time prior to -- or exactly at -- the time of expiration.

Most registrars' implementation of that idea involves, say, three successive attempts to charge the customer's credit card, at, say, 30 days from expiration, 20 days, and then 10 days -- or something like that. At least, I recall seeing such a policy once or twice: I think Joker.com does that, for example.

But then there's Gandi.net, of France. Look at the first line of the domain-check report below:

----- Forwarded message from root <root@linuxmafia.com> -----

Date: Sun, 5 Aug 2007 08:46:12 -0700
To: rick@linuxmafia.com
Subject: domain-check: Expired domains
From: root <root@linuxmafia.com>
According to 'whois', the following domains have expired:

laconiv.org (0 days ago)
nic.edu (5 days ago)
nic.cx (7 days ago)
nic.hm (280 days ago)

----- End forwarded message -----

I've corresponded with Chaz Boston-Baden (a friend), owner of the laconiv.org domain (which was the site of the recent World Science Fiction Convention in Los Angeles, L.A.con IV): He thanked me for keeping laconiv.org on my monitoring list, and assured me that he's arranged with Gandi.net, his registrar, to auto-renew.

Here's the relevant snippet from WHOIS:

$ whois laconiv.org | more
[Querying whois.publicinterestregistry.net]
[whois.publicinterestregistry.net]
[...]
Domain ID:D99628485-LROR
Domain Name:LACONIV.ORG
Created On:06-Aug-2003 22:18:41 UTC
Last Updated On:10-Sep-2005 01:00:04 UTC
Expiration Date:06-Aug-2007 22:18:41 UTC
Sponsoring Registrar:Gandi SAS (R42-LROR)
Status:OK
Registrant ID:0-622473-Gandi
Registrant Name:SCIFI INC. c/o BADEN
Registrant Organization:SCIFI INC. c/o BADEN

So, holy mama! Expiration is set for 10:18 PM UTC today, which is 3:18 PM Pacific Daylight Time -- and it's 3:26 PM PDT right this moment. (Probably some timing glitch.)

[ ... ]

[ Thread continues here (4 messages/11.10kB) ]


An Ongoing Discussion of Open Source Licensing Issues

Our Mailbag

More on the Eben Moglen / Tim O'Reilly argument

Rick Moen [rick at linuxmafia.com]

Tue, 21 Aug 2007 14:01:00 -0700

Tim O'Reilly has now written about the ~1/2 hour interview he recently did with Prof. Eben Moglen on "Software Licensing in the Web 2.0 Era" at OSCon 2007, in a post entitled "My Tongue-Lashing from Eben Moglen": http://radar.oreilly.com/archives/2007/08/my_tonguelashin.html

It's interesting and thoughtful -- and also includes an .ogg-format clip of the interview that will let me finally see it on my Xubuntu G3 iBook. ;->

Pending my doing so, here's an excerpt from Moglen's remarks, concerning the much-discussed ASP loophole aka SaaS (Software as a Service) loophole:

"We've got to conclude that what Google does, they have a right to do in freedom. They shouldn't need anyone's permission to run programs. Stallman was right about that at the very beginning. If you have to ask other people's permission to run a program, you don't have adequate freedom. And that means you've got to have the right to run programs for someone else. And what's more, you have to have the right to make private modifications. Because if you don't have a right to make private modifications, and keep them to yourself whenever you want to, then the principle of freedom of thought is being rudely disrupted by a required responsibility to disclose what you are thinking to someone else... So if we take the philosophical responsibility to provide freedom seriously, we're going to have to say ... their rights, properly protected, may conflict with other people's rights, properly protected. The solution isn't to reduce anyone's rights."

As I feared, Prof. Moglen (and impliedly the FSF) is saying that ASP deployments should be seen as private modification and usage, and that private usage must not be encumbered by, say, copyleft obligations tied to the act of offering codebases up for public usage via network access.

This stance, ironically, ends up amounting to a BSD-licensing position. Advocates of that position, sooner or later, tend to voice the standard BSD-advocacy mantra: "So what if it permits creation of proprietary forks? If you're any good at what you do, you'll just outcompete them."

Which is fine if you are OK with working hard to produce free / open source software, only to see a competitor take a branch of your work proprietary. FSF has always, until now, been the prime exemplar of a group that is not OK with that -- that, in fact, invented the concept of copyleft as a means by which software authors can, if they wish, prevent that dilution of (their portion of) the commons.

Matt Asay of former badgeware firm Alfresco has just said pretty much the same thing, and I think he's right (http://news.com.com/8301-10784_3-9763068-7.html):

GPL is the new BSD in Web 2.0, and why this matters

The Internet turns open-source licensing on its head. Copyleft is neither copyright nor copyleft anymore in the Web world. It's just copy, because distribution of a service over the Internet doesn't count as distribution in the archaic licensing language that plagues most open-source licenses.

[ ... ]

[ Thread continues here (1 message/5.14kB) ]

When will CPAL actually be _used_?

Rick Moen [rick at linuxmafia.com]

Tue, 21 Aug 2007 13:16:21 -0700

More to come, I'm sure.

----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----

Date: Tue, 21 Aug 2007 13:04:03 -0700
To: Ross Mayfield <ross.mayfield@socialtext.com>
Cc: license-discuss@opensource.org
From: Rick Moen <rick@linuxmafia.com>
Subject: When will CPAL actually be _used_?
Hello, Ross. I noted with great interest Socialtext's submission of Common Public Attribution License to OSI, at the beginning of July, and in fact posted favourable comments on it to the license-discuss mailing list, at that time. The OSI Board then, of course, approved it on July 25.

Since then, your firm's press releases and numerous bits of news coverage (The Register, CMS Wire, eWeek, and quite a few others) have proclaimed your firm's conversion of Socialtext Open to this new OSI-certified licence.

In addition, your firm's Web pages began prominently featuring OSI's "OSI Certified" regulated certification mark logo, which may be used only for codebases released under OSI certified open source licences.

So, I am obliged to ask: When will your product actually use CPAL? To date, it does not. To wit:

o The SourceForge.net project at http://sourceforge.net/projects/socialtext/ has, as the latest downloadable tarball, a Socialtext Open release. It's perhaps understandable that that form of access to source code still gets the user only code under the "Socialtext Public Licence 1.0" MPL 1.1 + Exhibit B badgeware licence -- because that tarball, your latest full release, was dated May 22, 2007, prior to your CPAL announcements.

o However, what's a bit more difficult to understand is that following hyperlinks for source code access on your corporate Web site takes you to http://www.socialtext.net/open/index.cgi?socialtext_open_source_code, which cites an svn command to check the "head" development codebase out of repo.socialtext.net -- and that code, your very latest developer code, is likewise under Socialtext Public Licence 1.0.

So, when is Socialtext going to actually use the OSI-certified licence that it's been claiming in public to be using?

Also, would you mind please removing the "OSI Certified" logo from your pages until such time as you are legally entitled to use it? Thank you.

As a reminder, I called your attention here on December 29, 2006 to your then-advertised wiki page http://www.socialtext.net/stoss/ claiming in error that Socialtext had submitted SPL 1.0 to OSI's certification process, when it had not done so. You acknowledged the critique, but Socialtext did not fix the misstatement of fact until I reminded you of it a second time, here, on January 22, 2007. I hope that your firm's correction of its erroneous public licensing information, this time, will be significantly faster.

Best Regards, Rick Moen rick@linuxmafia.com (speaking only for himself)

----- End forwarded message -----

[ Thread continues here (3 messages/8.86kB) ]

SCO Group follies

Rick Moen [rick at linuxmafia.com]

Sun, 12 Aug 2007 22:47:19 -0700

----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----

Date: Fri, 10 Aug 2007 19:44:37 -0700
To: conspire@linuxmafia.com
From: Rick Moen <rick@linuxmafia.com>
Subject: [conspire] Novell beats SCO;  also, CABAL meeting tomorrow!  ;->
I thought I'd pass this along. Also, mail from Darlene reminded me to mention that tomorrow, 4 PM to midnight, is the CABAL meeting here in Menlo Park.

I made the same sort of plum-jam marinade as last time, except this time I'll have had a chance to let the meat (mostly beef, this time) soak it up for 3+ days instead of a couple of hours.

So, don't be strangers.

Date: Fri, 10 Aug 2007 17:59:12 -0700
To: svlug@lists.svlug.org
From: Rick Moen <rick@linuxmafia.com>
Subject: [svlug] (forw) SCO v. Novell Decided
Groklaw coverage is at http://www.groklaw.net/article.php?story=20070810165237718 .

Note also: (1) Judge Kimball left Novell's claim against SCO Group for slander of title still undecided. (2) SCO Group now owes Novell the copyright royalty money it collected from Microsoft and Sun, which is considerably more money than it now has. (3) The SCO v. IBM and Red Hat v. SCO cases can now proceed.

----- Forwarded message from David Chait <davidc@bonair.stanford.edu> -----

Date: Fri, 10 Aug 2007 16:12:07 -0700
From: David Chait <davidc@bonair.stanford.edu>
To: "sulug-discuss@lists.Stanford.EDU" <sulug-discuss@mailman.Stanford.EDU>
Subject: SCO v. Novell Decided
For all of those who haven't heard yet, there has been a ruling on SCO v. Novell this afternoon, and SCO lost massively.

(reposted from groklaw.net)


For the reasons stated above, the court concludes that Novell is the owner of the UNIX and UnixWare copyrights. Therefore, SCO's First Claim for Relief for slander of title and Third Claim for specific performance are dismissed, as are the copyright ownership portions of SCO's Fifth Claim for Relief for unfair competition and Second Claim for Relief for breach of implied covenant of good faith and fair dealing. The court denies SCO's cross-motion for summary judgment on its own slander of title, breach of contract, and unfair competition claims, and on Novell's slander of title claim. Accordingly, Novell's slander of title claim is still at issue.

The court also concludes that, to the extent that SCO has a copyright to enforce, SCO can simultaneously pursue both a copyright infringement claim and a breach of contract claim based on the non-compete restrictions in the license back of the Licensed Technology under APA and the TLA. The court further concludes that there has not been a change of control that released the non-compete restrictions of the license, and the non-compete restrictions of the license are not void under California law. Accordingly, Novell's motion for summary judgment on SCO's non-compete claim in its Second Claim for breach of contract and Fifth Claim for unfair competition is granted to the extent that SCO's claims require ownership of the UNIX and UnixWare copyrights, and denied in all other regards.

[ ... ]

[ Thread continues here (3 messages/24.77kB) ]

[ILUG] Solution needed for terminale server licensing problem.

Rick Moen [rick at linuxmafia.com]

Tue, 14 Aug 2007 09:38:27 -0700

Another one for the list.

----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----

Date: Tue, 14 Aug 2007 09:31:52 -0700
To: ilug@linux.ie
From: Rick Moen <rick@linuxmafia.com>
Subject: Re: [ILUG] Solution needed for terminale server licensing problem.
Quoting Michael Armbrecht (michael.armbrecht@gmail.com):

> There are alternatives to MS Project, most of them done in Java.
> This one (http://sourceforge.net/projects/openproj/) looks very
> MS-Projectish. It's Open Source....

Sadly, no, it's not. Quoting my summary at "Project Management" on http://linuxmafia.com/kb/Apps/:

Licence is deceptively claimed to be open source, but in fact is MPL 1.1 plus a proprietary "badgeware" addendum that impairs third-party commercial usage by requiring that derivative works include mandatory advertising of OpenProj publisher Projity's name and trademarked logo on "each user interface screen" while specifically denying users a trademark licence.

Rick Moen                                     Age, baro, fac ut gaudeam.
Irish Linux Users' Group mailing list
About this list : http://mail.linux.ie/mailman/listinfo/ilug
Who we are : http://www.linux.ie/
Where we are : http://www.linux.ie/map/

[ Thread continues here (4 messages/9.93kB) ]

Suggestion for domain-check licensing

Rick Moen [rick at linuxmafia.com]

Mon, 3 Sep 2007 10:54:25 -0700

----- Forwarded message from Matty <matty91@gmail.com> -----

Date: Mon, 3 Sep 2007 10:37:56 -0400
From: Matty <matty91@gmail.com>
To: Rick Moen <rick@linuxmafia.com>
Subject: Re: Suggestion for domain-check licensing
On 7/16/07, Rick Moen <rick@linuxmafia.com> wrote:

> Hi, Ryan!  Thanks for the version 1.4 update.  You still might want to
> consider the following suggested small tweak, if you want the script to
> be open source:
> # License:
> #  Permission is hereby granted to any person obtaining a copy
> #  of this software to deal in the software without restriction
> #  for any purpose, subject to the condition that it shall be
> #  WITHOUT ANY WARRANTY; without even the implied warranty of

Hi Rick,

I came across your Linux Gazette article this weekend. Great write-up! I thought I had updated the licensing for domain-check last month, but it looks like I saved the file with the wrong name. Ooops. :( I just moved the updated file into place, and it's now licensed under the GPL (the same as Ben's Perl script). This should allow you (and anyone else) to use it for whatever you want.

Thanks, - Ryan

UNIX Administrator

Groklaw's OSI item

Rick Moen [rick at linuxmafia.com]

Wed, 22 Aug 2007 13:18:26 -0700

----- Forwarded message from Matthew Flaschen <matthew.flaschen@gatech.edu> -----

Date: Wed, 22 Aug 2007 16:12:51 -0400
From: Matthew Flaschen <matthew.flaschen@gatech.edu>
To: License Discuss <license-discuss@opensource.org>
Subject: Re: Groklaw's OSI item
Rick Moen wrote:

> (I call it bad because it trivially impairs usage, while
> simultaneously utterly failing in its probable goal of being a copyleft
> licence within the ASP industry.

Actually, CPAL does have a network clause (unrelated to the advertising clause, which does nothing for the SaaS problem):

"The term "External Deployment" means the use, distribution, or communication of the Original Code or Modifications in any way such that the Original Code or Modifications may be used by anyone other than You, whether those works are distributed or communicated to those persons or made available as an application intended for use over a network. As an express condition for the grants of license hereunder, You must treat any External Deployment by You of the Original Code or Modifications as a distribution under section 3.1 and make Source Code available under Section 3.2."

This is taken from the Open Software License.

This would seem to require SaaS deployments provide source code to end users.

Matt Flaschen

----- End forwarded message -----

----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----

Date: Wed, 22 Aug 2007 13:17:06 -0700
To: license-discuss@opensource.org
From: Rick Moen <rick@linuxmafia.com>
Subject: Re: Groklaw's OSI item
Quoting Matthew Flaschen (matthew.flaschen@gatech.edu):

> Actually, CPAL does have a network clause (unrelated to the advertising
> clause, which does nothing for the SaaS problem):
> "The term “External Deployment” means the use, distribution, or
> communication of the Original Code or Modifications in any way such that
> the Original Code or Modifications may be used by anyone other than You,
> whether those works are distributed or communicated to those persons or
> made available as an application intended for use over a network. As an
> express condition for the grants of license hereunder, You must treat
> any External Deployment by You of the Original Code or Modifications as
> a distribution under section 3.1 and make Source Code available under
> Section 3.2."
> This is taken from the Open Software License.
> This would seem to require SaaS deployments provide source code to end
> users.

I stand corrected! Thank you. That's the problem with writing from unaided memory, before getting around to checking the original documents.

CPAL thus does indeed join GPL w/Affero, OSL 3.0, ASPL 2.0, and Honest Public License as licences with SaaS-oriented copyleft clauses.

----- End forwarded message -----

when is an open source license open source?

Rick Moen [rick at linuxmafia.com]

Fri, 17 Aug 2007 16:51:48 -0700

----- Forwarded message from 'Rick Moen' <rick@linuxmafia.com> -----

Date: Mon, 2 Jul 2007 19:34:19 -0700
From: 'Rick Moen' <rick@linuxmafia.com>
To: Aaron Fulkerson <aaronf@mindtouch.com>
Subject: Re: when is an open source license open source?
Quoting Aaron Fulkerson (aaronf@mindtouch.com):

> Dreamhost must have been having probs:
> http://www.oblogn.com/2007/06/26/open-letter-to-osi/ try again.
> Alternatively: http://www.mindtouch.com/blog/2007/06/26/open-letter-to-osi/
> (same post just cross-posted)

Hi, Aaron. Thank you for the URL. Being tied down in a software upgrade at the moment, I don't have much time to comment, but here are a few thoughts. I speak, here, largely as a student of rhetoric.

1. Just so you know, "open letters" have always been non-starters: they pretty much scream "ignore me" to most people. This has nothing to do with the merit of what's said; it's just a reflexive reaction people tend to develop towards anything so described.

IMO, if you want to be taken seriously, refactor to avoid that unfortunate language. E.g., you could send an actual letter to the OSI Board and then reproduce a copy of it on your Web site.

2. The paragraph starting with "If you read my personal blog..." will tend to give to casual observers the impression of your dwelling on and boring them with your past complaints. To be really blunt, it has the surface flavour of whining (e.g., "been called infantile names"), and nobody likes a whiner. Again, this has nothing to do with the substance of what you're saying. It's a presentation issue, and presentation is important, because people filter what they read mercilessly, and you want to avoid all the obvious reasons (common heuristics) why people stop reading something and move on.

3. The phrase "I've left lengthy comments at OSI" in this paragraph is unfortunate for the same reason: the reader of your current point has no use for this information, and it just gives such readers another candidate reason for ignoring you. They may dismiss you as one of those tireless cranks who deluge online forums with barrages of comments and then complain that nobody takes them seriously.


> I've blogged about SugarCRM and its CEO John Roberts previously (the
> last bit of this post). 

And your point? None is evident. Again, since there's no obvious reason cited why you mention this, the casual impression this gives is of someone who simply blogs a great deal, and wants to send readers on a long chain of links because what you say is so very vital in its full detail that you couldn't possibly summarise, i.e., like a crank.


[ ... ]

[ Thread continues here (1 message/16.87kB) ]

Junk CNet story from Matt Asay (MySQL, badgeware)

Rick Moen [rick at linuxmafia.com]

Tue, 14 Aug 2007 16:37:15 -0700

Lawyer, ex-badgeware firm Alfresco executive, and OSI Board member Matt Asay writes a "NewsBlog" at CNet news.com, in which he comments on open source. A couple of days ago, that column published a compendium of whoppers[1], attempting to accuse open source users of hypocrisy.

Quoting key parts:

The open-source community's double standard on MySQL posted by Matt Asay [...]

Remember 2002? That's when Red Hat decided to split its code into Red Hat Advanced Server (now Red Hat Enterprise Linux) and Fedora. Howls of protest and endless hand-wringing [http://news.zdnet.com/2100-3513-5102282.html] ensued:

Enter 2007. MySQL decides to comply with the GNU General Public License and only give its tested, certified Enterprise code to those who pay for the service underlying that code (gasp!). Immediately, cries of protest are raised [http://www.linux.com/feature/118489]: How dare MySQL not give everything away for free?

Ironically, in this same year of 2007, SugarCRM received universal plaudits (from me, as well) for opening up _part of its code base_ under GPLv3. Groklaw crowed [http://www.groklaw.net/articlebasic.php?story=20070725161131598], "SugarCRM Goes GPLv3!" People everywhere flooded the streets to wax fecund and celebrate by multiplying and replenishing the earth.

[...] I'm criticizing the open-source community for applying a hypocritical double-standard.

No, Matt. Sorry, you lose.

First, the ZDNet link you cited simply did not feature even one member of the Linux community being quoted as criticising Red Hat in any way, let alone for failing to comply with GPLv2 or any other open source licence -- for the simple reason that Red Hat didn't violate any licence, and in fact has published full source code RPMs for the software portions of RHEL, downloadable free of charge and fully accessible to the public (which is far more than the licences require).

(RHEL includes two non-software SRPMS that contain trademark-encumbered image files. People who wish to have non-trademark-encumbered RHEL can create same by using different image files in their place, or can rely on CentOS et alii's ongoing work in doing exactly that. I've previously pointed this set of facts out to Matt directly, on an occasion when he attempted to defend his own firm's then-usage of badgeware licensing through the tu-quoque fallacy of criticising Red Hat.)

Likewise, the linux.com story you cited concerning MySQL AB's cessation of offering source tarballs to the general public (though public access to the SCM repository will remain) does not feature even one cry of protest -- for the simple reason that MySQL AB is still offering a fully, genuinely open source product.

(Accordingly, Matt's assertion about whom MySQL AB will "give its tested, certified Enterprise code to" is simply false.)

[ ... ]

[ Thread continues here (1 message/4.50kB) ]

Moglen And O'Reilly exchange at OSCon

Rick Moen [rick at linuxmafia.com]

Fri, 17 Aug 2007 17:55:57 -0700

----- Forwarded message from Steve Bibayoff <bibayoff@gmail.com> -----

Date: Fri, 17 Aug 2007 15:04:50 -0700
From: Steve Bibayoff <bibayoff@gmail.com>
To: Rick Moen <rick@linuxmafia.com>
Subject: Moglen And O'Reilly exchange at OSCon
Hi Rick,

Found a video of the Eben Moglen/Tim O'Reilly exchange at OSCon. It was titled "Licensing in the Web 2.0 Era". Unfortunately, the video appears to be in a .mov QuickTime format, but seems playable on Free software players (w/ unFree codecs). http://www.mefeedia.com/entry/3282956/ http://blip.tv/file/get/Radar-EbenMoglenLicensingInTheWeb20Era126.mov

Funny comment about trying to get the O'Reilly OSCon copy of the video/audio: "Regrettably, we missed the assault. Stories needed to go out, and we assumed the chat would follow familiar, boring lines. After about ten people later asked if we caught the spectacular show, The Register contacted the OSCON audio staff to obtain a recording of the session. 'No problem,' they said, 'It will just take a couple of minutes, but you need to get O'Reilly's permission first.' O'Reilly corporate refused to release the audio, saying it would cause a slippery slope. (We're still trying to understand that one.) They, however, did add that Moglen appeared to be 'off his meds.'" http://jeremy.linuxquestions.org/2007/07/29/open-source-and-the-future-of-network-applications/



P.S. What was the name of that book you were discussing at Linux Picnix about the Renaissance? It was a case study of Florence Nightingale, the father of the rugby school, and someone else. TIA.

----- End forwarded message -----

----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----

Date: Fri, 17 Aug 2007 17:54:43 -0700
From: Rick Moen <rick@linuxmafia.com>
To: Steve Bibayoff <bibayoff@gmail.com>
Subject: Re: Moglen And O'Reilly exchange at OSCon
Quoting Steve Bibayoff (bibayoff@gmail.com):

> Found a video of the Eben Moglen/ Tim O'Reilly exchange at OSCon. It
> was titled "Licensing in the Web 2.0 Era". Unfornatley, the video
> appears to be in a .mov quicktime format, but seems playable on Free
> software players(w/ unFree codecs).
> http://www.mefeedia.com/entry/3282956/
> http://blip.tv/file/get/Radar-EbenMoglenLicensingInTheWeb20Era126.mov

I'll be checking this out when I'm next at a computer with the necessary software. (My Xubuntu iBook will play many Quicktime .mov files, but, since it's a G3 PPC, the unfree codecs are out, even if I wanted them present, which I really don't.)

Meanwhile, there have been a number of text excerpts from Moglen's commentary. Here's a typical one, from http://www.linux.com/feature/118201 :

[ ... ]

[ Thread continues here (1 message/6.17kB) ]

Talkback: Discuss this article with The Answer Gang

Published in Issue 143 of Linux Gazette, October 2007



[ In reference to "GRUB, PATA and SATA" in LG#141 ]

Matthias Urlichs [smurf at smurf.noris.de]

Mon, 06 Aug 2007 23:04:32 +0200

The reason UUIDs do not help has nothing to do with Linus' article: this is not a kernel vs. userland issue.

This is a kernel vs. BIOS device numbering issue. UUIDs don't help here because GRUB doesn't interpret them. In fact, it cannot even read them, because they're in the menu.lst file, which grub cannot find in the first place. Catch-22, anyone?

In fact, you need to use grub's command line on the bare metal (i.e., after installing it on a floppy), not from Linux; otherwise there's no point. Your article should mention that.
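For readers who haven't used it, the kind of bare-metal session Matthias means looks roughly like this in grub legacy's shell (a sketch only; the device names here are illustrative, not taken from the article):

```
grub> find /boot/grub/menu.lst
 (hd0,0)
grub> root (hd0,0)
grub> setup (hd0)
```

Grub's own find reports which BIOS-enumerated device actually holds menu.lst -- precisely the kernel-vs-BIOS mapping that a UUID inside that file cannot supply.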

Matthias Urlichs <smurf@smurf.noris.de>
{M:U} IT Design


[ In reference to "Serving Your Home Network on a Silver Platter with Ubuntu" in LG#141 ]

Raj Shekhar [rajlist at rajshekhar.net]

Mon, 06 Aug 2007 10:41:39 +0530

Some comments -

Section 8 - Configure your LAN machines

Use dhcp instead of trying to configure each machine individually.

Section 11 BIND DNS Server

another option is to use pdnsd as a caching nameserver.

Section 13 Samba file sharing

If you are only using Linux boxes, you can also share using NFS. Caveat: it takes a long time to mount large directories.

Section 14 TorrentFlux

Nice tip!


raj shekhar
facts: http://rajshekhar.net | opinions: http://rajshekhar.net/blog
I dare do all that may become a man; Who dares do more is none.

[ Thread continues here (4 messages/5.20kB) ]


[ In reference to "Who is using your Network?" in LG#141 ]

Ramanathan Muthaiah [rus.cahimb at gmail.com]

Tue, 7 Aug 2007 06:37:56 +0530


How would this be possible on computers running with IP addresses leased via DHCP?

Section 3 Secure Shell

. . . . . . . . . . . .

The simplest way to do this is to go to each computer and copy these files to a USB stick:

   cp /etc/ssh/ssh_host_rsa_key.pub /media/usb/<ip_addr>.rsa.pub
   cp /etc/ssh/ssh_host_dsa_key.pub /media/usb/<ip_addr>.dsa.pub
. . . . . .


[ Thread continues here (2 messages/2.46kB) ]


[ In reference to "Build a Six-headed, Six-user Linux System" in LG#124 ]

Ben Okopnik [ben at linuxgazette.net]

Wed, 15 Aug 2007 00:29:07 -0400

----- Forwarded message from Peter Sanders <plsander@us.ibm.com> -----

Subject: Re: [TAG] tkb: Talkback:124/smith.html
To: Ben Okopnik <ben@linuxgazette.net>
From: Peter Sanders <plsander@us.ibm.com>
Date: Fri, 10 Aug 2007 09:49:14 -0500
I just need to figure out which is easier:

- modifying a Live CD/DVD installation -- probably not the easiest to change on the fly, or
- getting a big enough flash drive to hold a reasonably functional installation

Peter Sanders
t/l 553-6186  (507)253-6186

[ Thread continues here (7 messages/7.94kB) ]


[ In reference to "An NSLU2 (Slug) Reminder Server" in LG#141 ]

T Ziomek [t_ziomek at yahoo.com]

Thu, 2 Aug 2007 09:29:54 -0700 (PDT)

To pick a nit..."Mozilla Firefox use up, 3.7% in 4 months in Europe" is incorrect. FF use was up 3.7 [percentage] points, not 3.7 percent. The text under the headline gets this right when referring to a ~7 point gain in the past year.

Regards, Tom Ziomek

[ Thread continues here (2 messages/1.27kB) ]


[ In reference to "Preventing Domain Expiration" in LG#142 ]

s. keeling [keeling at nucleus.com]

Sun, 2 Sep 2007 17:17:42 -0600

Entertaining and informative article, Rick, as always. I hope I can consider some of my recent floundering with whois and linuxmafia.{com,net} as partial inspiration.

However, getting down to the picture at the .sig block, shouldn't you be doing more long distance bicycling? I always pictured you as one of those ca. 150 lb., wiry joggers gorging yourself on tofu. Instead, the picture shows you're either a sumo wrestler, a linebacker (a geek linebacker?!?), or you need to avoid burger joints more often. Oh, and that perl book behind you is pink. Doesn't that need an upgrade?

My Perl books are pink too, and they suit me just fine (damnit). Sorry for the crack about the picture. I'd just prefer that folks like you outlive me (I'm selfish that way).

P.S. Ben, looks like we need to fiddle with pinehelper.pl again. It's ignoring the bit in parens in the subject line, making "Subject: Talkback". Crap.

My version of your pinehelper.pl attached. This is pinehelper.pl called by FF (Debian stable/Etch Iceweasel), fwiw.

I've also just installed flashplayer-mozilla, which appears to handle swf better than swf-player, so maybe I can actually read the cartoons this time. :-)


Any technology distinguishable from magic is insufficiently advanced.
- -

[ Thread continues here (12 messages/28.80kB) ]


[ In reference to "Booting Knoppix from a USB Pendrive via Floppy" in LG#116 ]

Ben Okopnik [ben at linuxgazette.net]

Sun, 9 Sep 2007 17:46:48 -0400

[ cc'd to TAG ]

On Sun, Sep 09, 2007 at 05:00:03PM +0100, Andrew JRV Hunt wrote:

> I'm emailing about the following article:
> http://linuxgazette.net/116/okopnik1.html
> I'd like to know, for what version of knoppix is this (i tried with
> 5.1, it didn't work), and then also where you could get an old version
> (I've searched for Knoppix 3.8 which was the latest version of knoppix
> at the time but I can't find a download anywhere).

Well, that's exactly the problem. Shortly after I wrote that article, the Knoppix folks stopped using the "boot.img" method and switched to using "isolinux". If you can get hold of 3.3 - the version that I used - it should work fine. Otherwise, well, the kernel is just too big - period.

> If this fails, would it be possible that the script could be rewritten
> for knoppix 5.1 or that I could be sent the image to be written onto
> floppy?

It's not a question of the script; the problem is that the kernel itself - even before you add any modules, libs, etc. - is just too big to fit into a boot.img image (1.44MB). That pretty well shuts off the possibility.

You could approach it the other way, though: try downloading Puppy Linux, or Damn Small Linux. Either of those should fit on a flash drive just fine - and Puppy Linux definitely had a "boot from a flash drive" description on their site the last time I looked.

> PS. I tried emailing the tag email address which didn't work.

I assume you got it from some old resource. Try tag@lists.linuxgazette.net - we've been using that one since May 2005 or so.

* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *


[ In reference to "An NSLU2 (Slug) Reminder Server" in LG#141 ]

Silas S. Brown [ssb22 at cam.ac.uk]

Thu, 2 Aug 2007 17:24:28 +0100

Hi, I wonder if you'd like to put this in Talkback or 2-Cent Tips in the next issue:

Since writing the article, I found a couple of minor problems.

One was that the sound sometimes fails in such a way that it is not detectable by the NSLU2 (for example, the speaker is not connected properly). To mitigate the possible consequences of this, I set my NSLU2 to start playing a recognisable tune on its internal speaker if an important alarm is not acknowledged. I wrote a script that can convert any MIDI file into commands for Johnathan Nightingale's "beep" utility (which is available in the "beep" package on the NSLU2 Debian distribution), and my "MIDI to beep" script is available at


The other is that the NSLU2 may sometimes hang if the USB connection to its primary disk is somehow jolted. Use of its built-in hardware watchdog helps, but it's best if the watchdog is used in such a way that a disk hang will be noticed by it (which might not happen if the process that's writing to /dev/watchdog is constantly in RAM), so rather than running a watchdog program I decided to do it in root's crontab:

   * * * * * echo >/dev/watchdog;sleep 20;echo >/dev/watchdog;sleep 20;echo >/dev/watchdog

i.e., write to /dev/watchdog 3 times a minute (the system will reboot if it's not written to for a whole minute). Doing it this way, rather than in a self-contained process, should result in a watchdog failure if the primary disk gets pulled out (and thereby unmounted), because /bin/sleep will become unavailable. This is so even though it will normally be cached in RAM and incur little or no disk activity. The cron job is quite lightweight, since the code for "bash" can be shared with any other bash scripts running at the same time and therefore should not take up much memory of its own.

[ Thread continues here (5 messages/6.03kB) ]


[ In reference to "A Quick Introduction to R" in LG#138 ]

Urmi Trivedi [urmi208 at gmail.com]

Tue, 31 Jul 2007 09:20:51 +0100

Dear Answer-gang,

I have gone through this article regarding plotting data-labels in R. I found quite a few helpful functions, but didn't find what I required: I have already plotted two matrices against one another, and I want to label the data-points plotted on the scatter-plot. Can you tell me if there is any way to do that?

Thanking you,

Yours truly,

Urmi Trivedi.

[ Thread continues here (2 messages/1.99kB) ]


[ In reference to "/okopnik.html" in LG#52 ]

Ben Okopnik [ben at linuxgazette.net]

Thu, 2 Aug 2007 11:34:49 -0400

Hi, Garrett -

[ cc'd to The Answer Gang ]

On Tue, Jul 31, 2007 at 08:37:39PM -0700, garrett.ellis@cox.net wrote:

> (Also sent to ben-fuzzybear@yahoo, but I wasn't sure that address remained valid.)
> Mr. Okopnik,
> An article of yours from the April 2000 Linux Gazette may have just
> saved me from hours of beating my head against my desk. Granted, I'd
> already spent about 12 hours beating my head against my desk, so
> further damage (if possible) has been averted.

Heh. I've left the imprint of my forehead on at least a few brick walls, so - yes, that sounds like a positive effect.

> See, I was doing some basic security changes on several hundred
> systems. My Expect powers are not yet strong enough to automate this
> task, but that's an aside. I've spent the last several days running
> find commands with various -exec, and as it turns out, I managed to
> annihilate a RHEL4 SELinux system with the following:
> find / -user root -perm -o+w -exec chmod 0600 '{}' \;
> # I forgot "-type f" eek!

('-type f' wouldn't have helped much, I'm afraid. It would have stopped you from messing up, say, the entries in '/dev' - but that's about it.)

Ouch! You've "discovered" (and I use that in the most sympathetic way possible) the power of the 'find' command. It's very similar to a Milwaukee "Hole Hawg" drill: it will drill any hole you want it to - whether it's through concrete, steel, or your leg...

(See this love paean to a Super Hawg: http://www99.epinions.com/content_246549155460)

Unix tools are kinda like that. Huge amounts of power - and the safety is assumed to come from experience and forethought. Experience, however, is what you get when you didn't have enough experience and forethought!
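In that spirit, here's a gentler pattern for the original task (entirely a sketch of mine, not something from this thread): strip only the offending world-write bit instead of forcing 0600, so the execute bits on binaries and shared libraries survive.

```shell
# Demo in a scratch directory: a world-writable, executable "library".
mkdir -p /tmp/findsafe-demo
touch /tmp/findsafe-demo/libdemo.so
chmod 0757 /tmp/findsafe-demo/libdemo.so

# 'chmod 0600' would clobber the execute bits along with everything else;
# 'o-w' removes only the world-write bit and leaves the rest intact.
find /tmp/findsafe-demo -perm -o+w -type f -exec chmod o-w '{}' \;

ls -l /tmp/findsafe-demo/libdemo.so    # mode is now rwxr-xr-x (0755)
```

The same substitution in Garrett's command (`chmod o-w` in place of `chmod 0600`) would have fixed the world-writable files without wrecking the system.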

> For hours afterwards, nothing would execute even when I restored
> execute permissions (on binaries only) from a Rescue CD. Your article
> pointed out the need for +x on some (if not all) shared libs, and that
> allowed me to rescue the machine.
> Thank you. Thank you. I owe you one, two, or ten beers.

[smile] Thanks, Garrett. I probably shouldn't say this - it's likely to lose me a beer or two - but if I was in your place, I'd actually reinstall the system from scratch; at the very least, I'd run a comparison of everything in '{,/usr}{/bin,/sbin,/lib}/' against a "normal" system. Given that you now have an uncertain set of permissions, all sorts of security vulnerabilities and possible future problems seem likely. The fact that your system works now gives you some breathing room time-wise - but I wouldn't call it a closed case.
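One concrete way to run that comparison (my sketch, assuming GNU find; the paths and file names below are illustrative) is to snapshot file modes and diff them against the same listing taken on a healthy machine:

```shell
# Record "mode path" for every file under a tree. A demo tree is used
# here; on a damaged box you would walk /bin /sbin /lib /usr/bin
# /usr/sbin /usr/lib instead.
mkdir -p /tmp/demo-bin
touch /tmp/demo-bin/ls
chmod 0755 /tmp/demo-bin/ls

find /tmp/demo-bin -type f -printf '%m %p\n' | sort > /tmp/suspect.lst
cat /tmp/suspect.lst    # lines like: 755 /tmp/demo-bin/ls

# Produce the same listing on a known-good machine, copy it over, then:
# diff /tmp/good.lst /tmp/suspect.lst
```

Any line that appears only in the suspect listing is a file whose permissions were changed by the runaway chmod.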

* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *

[ Thread continues here (2 messages/5.43kB) ]

Talkback: Discuss this article with The Answer Gang

Published in Issue 143 of Linux Gazette, October 2007

2-Cent Tips

2 cent tip: Determining the dynamic shared libraries loaded at run-time by a program

Mulyadi Santosa [mulyadi.santosa at gmail.com]

Wed, 15 Aug 2007 20:34:22 +0700

Good day LG readers!

How do you find out your program's library dependencies? Sure, ldd helps, but it doesn't help in every circumstance: some programs load the dynamic libraries they need via the dlopen() function, making ldd unaware of them. Yet you may need to track them all, say, in order to set up a chroot jail. So, how do you detect them?

To illustrate, take a look at the code below (adapted from man dlopen); assume you save it as trace.c:

    #include <stdio.h>
    #include <stdlib.h>
    #include <dlfcn.h>

    int main(int argc, char **argv)
    {
        void *handle;
        double (*cosine)(double);
        char *error;

        handle = dlopen("libm.so", RTLD_LAZY);
        if (!handle) {
            fprintf(stderr, "%s\n", dlerror());
            exit(1);
        }

        cosine = dlsym(handle, "cos");
        if ((error = dlerror()) != NULL) {
            fprintf(stderr, "%s\n", error);
            exit(1);
        }

        printf("%f\n", (*cosine)(1.0));
        dlclose(handle);
        return 0;
    }

Compile it with:

    $ gcc -o trace trace.c -ldl

ldd shows you this:

    $ ldd ./trace
            libdl.so.2 => /lib/libdl.so.2 (0x40029000)
            libc.so.6 => /lib/tls/libc.so.6 (0x42000000)
            /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

OK, where's that libm.so? It doesn't show up. That wouldn't be a serious problem if you had the source code, but what if you don't? ltrace comes to the rescue: you just need to tell it to watch for occurrences of dlopen():

    $ ltrace -f -e dlopen ./trace
    dlopen("libm.so", 1)
    ....

-f tells ltrace to trace forked children too, not just the parent process. The rest is just a matter of finding those libraries in the library search path, i.e., any path mentioned in /etc/ld.so.conf or in the LD_LIBRARY_PATH environment variable.
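To then locate the actual file for, say, a chroot jail, the dynamic linker's cache can be queried directly (a small sketch; the path printed varies by distribution):

```shell
# List every library the runtime linker knows about and filter for the
# name that ltrace reported being dlopen()ed.
ldconfig -p | grep 'libm\.so'
```

Copying the file (and any symlink chain) that this reports into the jail's corresponding lib directory completes the dependency set that ldd alone would have missed.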

reference: man dlopen



[ Thread continues here (4 messages/5.96kB) ]

2 cent tip: Preventing Vim from accidentally writing to an opened file

Mulyadi Santosa [mulyadi.santosa at gmail.com]

Wed, 15 Aug 2007 16:46:51 +0700

Hello everybody

Most of us use "view" or "vim -R" to run Vim in read-only mode. However, sometimes we accidentally force a write anyway by issuing the ":w!" command.

To prevent this, you can try:

    $ vim -m <file>

This really does stop you from writing the modified buffer back to the file. The only exception is if you manually turn writing back on with ":set write".

have fun!

NB: Mr. Okopnik, please include this tip in the next LG release if you think this tip is useful for the LG audience. Thanks in advance.



[ Thread continues here (2 messages/2.12kB) ]

Talkback: Discuss this article with The Answer Gang

Published in Issue 143 of Linux Gazette, October 2007


By Howard Dyckoff

News in General

SCO files Chapter 11

Just before their September 17th day in court with Novell, SCO filed for protection with the US Bankruptcy Court in Delaware. The case numbers are 07-11337 and 07-11338, for The SCO Group and SCO Operations, Inc.

A few days later, Novell filed its own motions to limit the bankruptcy law protections and continue with the damages part of the landmark case. The SCO actions occurred after the judge had ruled against most of SCO's pretrial motions and granted Novell's motion for a bench trial rather than a jury trial. Novell had dropped any claim for punitive damages, paving the way to drop a jury trial. According to Groklaw [http://www.groklaw.net/article.php?story=20070907215715563], this last trial was to determine what SCO owed because Novell retained an equity interest in royalties from the System V Unix licenses.

SCO filed a voluntary petition for reorganization under Chapter 11 of the United States Bankruptcy Code. This effectively stops the on-going legal proceedings and allows SCO to reorganize and restructure its debt as it continues its operations.

SCO also received a notice from the NASDAQ Stock Market indicating that its securities will be delisted from NASDAQ on September 27, 2007.

Intel Launches 'LessWatts.org' Open Source Project for Linux Systems Power Savings

At its Developer Forum (IDF) in San Francisco, Intel launched an open source community project designed to increase energy efficiency, spanning servers in data centers to personal mobile devices. The new LessWatts.org initiative brings together the community of Linux developers, OSVs, and end users to facilitate technology development, deployment, tuning, and the sharing of information around Linux power management.

For large data centers, server power consumption imposes limits on a center's growth and has significant financial and environmental costs. In addition to the large data center customers, mobile users are also constrained by power consumption limits as battery space is continually squeezed with the overall reduction in size of mobile devices. In both the server and the mobile markets, Linux operating systems continue to grow in relevance and market segment share.

"We created LessWatts.org to accelerate technology development and simplify information sharing for effective power management across a broad spectrum of devices and industry segments that are utilizing Linux," said Intel's Renee James. "A focused initiative that aggregates the disparate efforts into a holistic system and builds on our existing efforts with the industry in the Climate Savers Computing Initiative will serve as a strong catalyst to get energy-efficient solutions into the market segment faster, thereby benefiting the customers who purchase Intel-based products."

The LessWatts.org initiative encompasses several key projects including Linux kernel enhancements (such as the "tickless idle" feature that takes better advantage of power saving hardware technologies), the PowerTOP tool that helps tune Linux applications to be power aware and the Linux Battery Life Toolkit to measure and instrument the impact of Linux code changes on power savings. Additionally, LessWatts.org provides Linux support for hardware power saving features being implemented in current and upcoming Intel platforms.

Tips on the site include:
-- Enable the power aware SMP scheduler [ http://www.lesswatts.org/tips/cpu.php#smpsched ]
-- Use SATA link power management [ http://www.lesswatts.org/tips/disks.php#alpm ]

At the launch, representatives from Oracle, Novell, and Red Hat expressed support for the project and for efforts to reduce the environmental impacts of computing.

"Novell is working hard to be eco-friendly and customer-friendly at the same time by providing better power management technologies as part of SuSE* Linux Enterprise," said Jeff Jaffe, Novell executive vice president and chief technology officer. "We are committed to helping drive the technology forward as part of LessWatts.org and providing value to our customers by incorporating that technology into upcoming SuSE Linux Enterprise releases."

With the tools and tips on the LessWatts.org Web site, an enterprise could save 10 watts of power per dual-processor server. More information is available at http://www.lesswatts.org.

LG spoke with Arjan Van de Ven, who was demoing the LessWatts.org Web site at the IDF showcase. Arjan works in Portland, Oregon, at Intel's Open Source Technology Center, which is part of their Software Solutions Group. "In the community, individual and small project efforts have been under way for some time, and some developments, like PowerTOP, resulted from these. But we wanted to have a system-wide perspective on power consumption. It was the next logical step for us." Arjan said that he and his coworkers wanted to draw on both the developer community and users at large to focus on power and utilization issues in an end-to-end way, and that the wiki style of the project should accomplish that. He noted that user input had improved PowerTOP and other tools. "The next thing we want to do is perform a gap analysis to see where we need to address new effort. This is about really impacting power use, not just tech for tech's sake."

VMWorld shows off 32 MB Hypervisor

VMWorld, the user conference for VMware, took place in mid-September, and attendees were surprised and delighted by a preview of VMware ESX 3i. This announcement followed a long period of rumors about a "VMware lite" product. ESX 3i is a tiny but fully functional version of ESX 3 that strips out the 2 GB service console -- based on a hardened version of RHEL 3 -- leaving only the petite 32 MB core ESX kernel. That's small enough to fit on an ancient digital camera's SD memory card, and Dell was demonstrating a concept server that used just such an SD card to boot ESX on bare metal.

After the keynote announcement, IBM and HP also showed ESX 3i server implementations. All are planned for release by year-end or early 2008.

VMware has provided most of the conference keynotes and presentations - slides and audio - on links from here:

BEA announces "Genesis" - Web 2.0 tools for the Enterprise - and Virtual WebLogic

Playing a new card in its SOA strategy at its annual BEAWorld conference, BEA Systems CEO Alfred Chuang introduced Project Genesis, the company's next-generation business application platform, which can "mash up" a Web 2.0 gumbo of open-source technologies, service-oriented architecture (SOA), and social-computing features, and add a dash of business process management.

CEO Chuang told BEAWorld attendees that the software paradigm is changing and that the "era of innovation with packaged enterprise applications is over." Application customization still takes too long, he added, because customers today need to adjust their IT systems in real time to meet changing business conditions.

BEA's Project Genesis will try to radically simplify how businesses deliver new or revised applications. With Genesis, business users as well as IT staff will be able to quickly assemble and deploy dynamic business applications for competitive advantage. Genesis extends the BEA AquaLogic product family with tools for a simplified approach to assembling and modifying business applications.

CEO Chuang told analysts that BEA is moving to support SAAS (software as a service) both within and outside the enterprise. "User-driven applications will be everywhere. Genesis realigns the entire application landscape. It will include user-based pricing and monitoring built into the architecture to make it truly deployable. It's architected to last."

Chuang told LG that many of the components in Genesis will be derived from Open Source projects and that BEA will also contribute some of its Genesis IP to the software community as the project develops. This is a significant strategic shift for BEA and represents an attempt to support businesses of all sizes.

Also at BEAWorld, BEA announced WebLogic Server 10.3, the next release of their leading Java application server. New features include support for Java SE 6, SAML 2, and Spring 2.1, which will provide better performance and control, improved security, and OSS enhancements. BEA WebLogic Server 10.3 should be available in Q1 2008.

Attendees previewed early lab test results by BEA and Intel on the WebLogic discrete virtual appliance, called LiquidVM. The technology behind it runs Java about two times more efficiently than using the normal OS-based software stack on VMware Infrastructure. Because LiquidVM is a JVM that runs directly on top of the VMware ESX Server hypervisor platform, without a traditional OS, it can manage memory resources for Java applications much more efficiently. This can increase hardware utilization and reduce TCO. This approach to Java virtualization is unique in the industry.

Icahn boosts BEA stake, calls for sale of company

Right after BEAWorld SF and its announcement of new directions for BEA, billionaire investor Carl Icahn increased his stake in BEA to 8.5 percent and then urged the company's board to put BEA up for sale. "It is becoming increasingly difficult for a stand-alone technology company to prosper, especially in light of the very strong competitors in the area", Icahn said in his filing. Icahn may also seek to have BEA hold a shareholder meeting, since the last one was in 2006.

Icahn said a larger software maker would be able to give BEA's WebLogic products to its sales force to sell, increasing the product's exposure while reducing its overhead.

BEA shares rose to $13.25 in Nasdaq trading on the news. The stock has fallen about 12 percent over the past year, while IBM and Oracle shares had double-digit gains.

Motley Fool postings on BEA note its less-than-stellar financial performance in the past year. BEA has been transitioning to Service-Oriented Architecture (SOA) software with mixed results. But at BEAWorld several sessions highlighted a deepening primary partnership with Accenture on SOA for government agencies and Fortune 100 companies, resulting in an SOA best-practices web site for Accenture and other primary partners. Perhaps BEA is finally ready to reap some rewards from its investments in technology and in consulting services.

On Sept. 20th, Icahn raised his stake in BEA to 9.9 percent, or a total of 38.7 million shares.


Intel adds to its Roadmap with 45 nm, 32 nm chips and USB 3.0

Intel added details to its chip road map with plans for new CPUs after its first 45 nm 'Penryn' chip. The details were in an avalanche of announcements during its Developer Forum (IDF) in September.

First up was 'Nehalem', demonstrated on-stage by Intel's Paul Otellini. Nehalem is an entirely new scalable processor with 731 million transistors, simultaneous multithreading, and a multi-level cache architecture. Nehalem is anticipated to provide three times the peak memory bandwidth of current competing processors. It features the new 'QuickPath' memory interconnect for a high speed system data path to Nehalem's processor cores. This is similar to features currently in AMD's Opteron family of multicore processors.

Nehalem is on track for production in the second half of 2008.

Otellini said that Intel plans to offer a greener line of 25-watt mobile dual-core Penryn processors for thinner and lighter laptop PCs. With low-power chipsets, the 25 W Penryn processor will anchor the next generation of Centrino technology, codenamed 'Montevina', due in mid-2008.

Otellini wowed the IDF attendees by showing off the first test wafers for 32 nm chips. Intel has reached a milestone with next-generation 32 nm static random access memory (SRAM) chips carrying more than 1.9 billion transistors each. Intel is on track to ramp its 32 nm technology in 2009.

Intel and key partners formed the USB 3.0 Promoter Group to develop a super-speed personal USB interconnect that is expected to be over 10 times the speed of today's connection. USB 3.0 is backward compatible with USB 2.0, and the specification is expected to be complete in the first half of 2008. As an example, a 27 GB HD-DVD could be downloaded in 70 seconds using USB 3.0.


Conferences and Events

Main conference: http://www.socallinuxexpo.org/scale6x/documents/scale6x-cfp.pdf
Women in Open Source mini-conference: http://www.socallinuxexpo.org/scale6x/documents/scale6x-wios-cfp.pdf
Open Source in Education mini-conference: http://www.socallinuxexpo.org/scale6x/documents/scale6x-education-cfp.pdf


October 1 - 5, Las Vegas, NV

BEAWorld 2007 Barcelona
October 2 - 4, Palau de Congressos de Catalunya

BI, Warehousing, and Analytics (BIWA) Summit
October 2-3, 2007 Oracle Corp, Reston, VA [ $200 for both days]

Zend/PHP Conference & Expo 2007
October 8 - 11, San Francisco, California

Designing and Building Business Ontologies
October 9 - 12; San Francisco, California

Ethernet Expo 2007
October 15 - 17, Hilton New York, New York

October 16 - 18, San Jose, CA

Interop New York
October 22 - 26

LinuxWorld Conference & Expo
October 24 - 25, London, UK


CSI 2007
November 3 - 9, Hyatt Regency Crystal City, Washington, D.C.

Interop Berlin
November 6 - 8, Berlin, Germany

Oracle OpenWorld San Francisco
November 11 - 15, San Francisco, CA

Supercomputing 2007
November 9 - 16, Tampa, FL

Certicom ECC Conference 2007
November 13-15, Four Seasons Hotel, Toronto
[focused on the use of Suite B crypto algorithms in the enterprise]

Gartner Identity & Access Management Summit
14-16 November 2007, Los Angeles, CA

Gartner - 26th Annual Data Center Conference
November 27-30, Las Vegas, NV


Agile Development Practices Conference
December 3-6, Shingle Creek Resort, Orlando, FL

LPI Offers Discounted Certification Exams at Ohio LinuxFest 2007

The Linux Professional Institute (LPI), the world's premier Linux certification organization (http://www.lpi.org), will offer discounted LPIC-1, LPIC-2 and the Ubuntu Certified Professional exams to attendees of the Ohio LinuxFest 2007 (Columbus, Ohio) on Sunday, September 30, 2007.

Exam labs will be at 1 p.m. and 3:30 p.m. on September 30th at the Drury Inn & Suites Columbus Convention Center (88 East Nationwide Boulevard, Columbus, Ohio) in the lobby ballroom. "All Conference Pass" attendees of the Ohio Linux Fest will be able to take any exam for $85 each. Other discounts are available for free registered attendees and those not participating in the Ohio LinuxFest--please see details on discounts and other information on the exam labs at http://www.ohiolinux.org/lpi.html. All exam candidates are asked to obtain an LPI ID at http://www.lpi.org/en/lpi/english/certification/register_now and bring valid photo identification and their LPI-ID number to the exam labs. Payment for exams can be made by cash or cheque.

During the Ohio LinuxFest, Don Corbet (long-time LPI volunteer and LPI Advisory Council member) will provide LPIC-1 exam cram sessions to prepare for this exam lab. Exam cram sessions will run from 9 a.m. to 12 noon (LPI Exam 101) and again from 1 to 5 p.m. (LPI Exam 102), at the Greater Columbus Convention Center C-Pods. These exam cram sessions are part of the Ohio LinuxFest University program. For more information on the Ohio LinuxFest University program, please see: http://www.ohiolinux.org/olfu.html. Linux professionals, exam candidates, and IT trainers are also invited to visit LPI representatives at booth #27 during the Ohio LinuxFest.

For those interested in attending Ohio LinuxFest, please note that the "All Conference Pass" registration has been extended to September 23rd. Free conference passes are also available for a limited number of events. For more information please see http://www.ohiolinux.org/register.html


Release Candidate for openSUSE 10.3 available

The openSUSE development team has announced the availability of a release candidate for openSUSE 10.3, the last development build before the stable release on October 4th. This release is stable, and suitable for testing by any user. Technical changes include libzypp 3.24; VirtualBox 1.5; OpenOffice.org 2.3RC3 and countless bug fixes. Note that this release does not include the final packages of GNOME 2.20, which will be added later.


SWsoft unveils the beta version of Virtuozzo 4.0

SWsoft Virtuozzo differs from traditional hardware virtualization solutions in dynamically partitioning a single Windows or Linux operating system instance into scalable virtual environments or "containers". Virtuozzo can enable dozens to hundreds of containers to immediately leverage real hardware, resulting in better datacenter efficiency.

Virtuozzo 4.0's improved OS virtualization technology addresses many of the common performance, density, usability and infrastructure compatibility issues that plague traditional hardware virtualization solutions via a number of improvements and new features, including:

Serguei Beloussov, CEO of SWsoft, said, "With new enterprise-class capabilities such as clustering and real-time backups along with a host of new management and automation features, Virtuozzo is now the most effective path to a streamlined, automated datacenter."

Virtuozzo 4.0 will be available for public beta testing by the end of September, with general availability expected later this year. To sign up for the beta program, users can go to www.swsoft.com/virtuozzo4/beta. A free, fully functional trial of Virtuozzo for Windows and Virtuozzo for Linux is available at http://www.swsoft.com/virtuozzo/trial.

Talkback: Discuss this article with The Answer Gang

Bio picture Howard Dyckoff is a long term IT professional with primary experience at Fortune 100 and 200 firms. Before his IT career, he worked for Aviation Week and Space Technology magazine and before that used to edit SkyCom, a newsletter for astronomers and rocketeers. He hails from the Republic of Brooklyn [and Polytechnic Institute] and now, after several trips to Himalayan mountain tops, resides in the SF Bay Area with a large book collection and several pet rocks.

Copyright © 2007, Howard Dyckoff. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 143 of Linux Gazette, October 2007

Linux Console Scrollback

By Anonymous

We refer here to the GNU/Linux text console - which is not what you have in a terminal window of an X window manager or desktop environment. There is so much confusion on this point that I'm going to go into trivially-nitpicky detail: a text console is what you get when you press 'Alt-Ctrl-F1' or 'F2' and so on while the focus is on the desktop.

0. What are we talking about?

You are in a text console and text flies by. There is a way to recall it and have a look by pressing 'Shift-PgUp' or 'Shift-PgDn' or possibly other keys if you have modified the default keymap (no, not the X keymap.) You are then using the scrollback buffer (scrollbuffer, for short.)

When trying to put the scrollbuffer to good use, a couple of things quickly become apparent:

At this point, the web search starts and preliminary results emerge:

Indeed, you have to be selective with what you read: if you find any advice prior to 2.6.x it is likely to confuse you and lead you astray. Believe me - I have put in the time so you don't have to.

The scrollback behaviour is determined in 'vc.c', a nasty little file in the kernel source, where 'vc' stands for virtual console.

First, note that when you switch consoles, the scrollbuffer is lost - so the entire size of the scrollbuffer is available for the current console. (Here especially, there is a pile of junk from pre-2.6.x times.)

The default scrollbuffer size is 32K. With that, you get 4 key presses - each scrolling half a screen, 25 lines per screen. Figure it out: that's about 50 lines. You will not get more than that, even if you have 30 or 50 lines per screen.
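That back-of-the-envelope figure can be reproduced with a bit of shell arithmetic (purely illustrative):

```shell
# 4 key presses, each scrolling half of a 25-line screen:
echo "$(( 4 * 25 / 2 )) lines"
```

which prints "50 lines" - the roughly two screens' worth mentioned above.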

However, 50 lines are just a small fraction of what flies by at boot time. So two questions need to be answered:

  1. How can the scrollbuffer be increased?
  2. Why are the log files not catching some of the messages you see at boot time?

1. How can the scrollback buffer be increased?

One solution to (1) would be to change the default size in the kernel source and recompile. Let me assume you are as inclined to do so as I am (NOT) and would rather look for something more flexible.

Well, there is a way - and it goes through a device called framebuffer console, 'fbcon' for short. This device has a documentation file by the name of 'fbcon.txt'; if you have installed the kernel documentation, you have it. Sniff it out somewhere in the '/usr/share' branch (I am not stating a path because it may differ from distro to distro.) Anyway, you can get it here as a single file:


At this point, sorry: detour! We have got to talk briefly about the framebuffer.

A framebuffer is a buffer between the display and the graphic adapter. The beauty of it is that this buffer can be manipulated: it allows for tricks that would be impossible with the adapter acting directly on the display.

One of the tricks concerns the scrollbuffer; for instance, you can tell the framebuffer to allocate more memory for it.

This is done via the kernel boot parameters. First, you request a framebuffer; then you request a bigger scrollbuffer.

The following example refers to GRUB but can be easily adapted to LILO. In GRUB's 'menu.lst', find the appropriate kernel line, and:

  1. delete option 'vga=xxx', if present
  2. append option 'video=vesafb' or whatever fits your hardware
  3. append option 'fbcon=scrollback:128'

The kernel line would then look something like the following:

    kernel /vmlinuz root=/dev/sdb5 video=radeonfb fbcon=scrollback:128

Why are we deleting the option 'vga=xxx'? Because it may conflict with the video option. On my ATI adapter, I cannot change the scrollbuffer if 'vga=xxx' is present. This may not apply in your case.
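After a reboot, you can confirm which options the kernel actually received by looking at /proc/cmdline. Here is a small sketch; the sample command line is hard-coded for illustration, and on a live system you would read /proc/cmdline instead:

```shell
# Sample kernel command line; on a real system: cmdline=$(cat /proc/cmdline)
cmdline='root=/dev/sdb5 video=radeonfb fbcon=scrollback:128'

# Pull out the scrollback size requested via fbcon=scrollback:
size=$(printf '%s\n' "$cmdline" | grep -o 'fbcon=scrollback:[0-9]*' | cut -d: -f2)
echo "requested scrollback buffer: ${size} KB"
```

If the fbcon option is missing from /proc/cmdline, the bootloader did not pass it and you are still running with the 32 KB default.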

The options just listed do work but what if you want more lines and a smaller font on the screen? You used to have that with the 'vga=xxx' option - and that option is gone.

Take heart - it can still be done with fbcon parameters as detailed in 'fbcon.txt' (but not detailed here.) Under Ubuntu and Debian there is a more comfortable way:

    dpkg-reconfigure console-setup

It makes sense to use this command because it also adjusts the initial ram disk (initrd) to your new settings.

2. Pushing up the cap

The option 'fbcon=scrollback:128' gives you 12-13 key presses to traverse the scrollbuffer. That is approx. 150 lines - quite a bit, but possibly still not enough to catch all the boot messages. Can we specify more than 128, e.g. 256?

The cap of 128 is set in 'vc.c'. If you want more, you can edit it and recompile.

For myself, I decided that it is not worth it. Before that, I almost believed I had found a higher cap - I quote from 'fbcon.txt':

        The 'k' suffix is optional, and will multiply
        the 'value' by 1024.

I rushed to try it out... and whatever the author was thinking, rest assured that 128 and 128k will give you exactly the same result. The default cap is 128KB of memory and that's the end of it.

Finally, note that using an extended scrollbuffer implies writes to both the graphic adapter hardware and the buffer in memory. If you use the default of 32KB, there is only a write to the hardware. In practice, I was unable to notice any slowdown.

3. What's missing in the logs?

In Ubuntu, Debian, and other distros, all system messages are logged to the file '/var/log/messages'. This applies even if the log service (daemon) is not the old syslog but syslog-ng ('ng': "new generation").

In both cases, you can view the messages by issuing 'dmesg' at the command prompt. It still doesn't help, though: you saw messages at boot time and they are certainly not in the log. How come?

It is a feature, not a bug! The messages logged to the file come from various subsystems of the operating system. The contributing subsystems are referred to as 'facilities', and syslog defines a fixed set of them. If, at boot time, scripts or applications are run that do not belong to any facility, you will see their messages rushing past your eyes - but nothing will be written to the log file!

As an example, you will not see the messages produced by 'loadkeys' (openSUSE) or 'consolechars' (Ubuntu and Debian) when loading keymaps at boot time. Another example: when running a text console editor, you can scroll back its display (including colour) using the scrollbuffer, but what the editor produced will never go to the system log file. The reason is, of course, that 'loadkeys', 'consolechars', and the editor do not belong to any facility.
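For background, classic BSD syslog encodes each message's priority from its facility and severity; the arithmetic is simple (a sketch - the numbers follow the convention in syslog.h):

```shell
# Classic BSD syslog encodes priority as facility * 8 + severity.
# Facility 'user' is 1 and severity 'info' is 6, so a message sent with
# 'logger -p user.info' carries priority 14. A program that never talks
# to syslog has no facility at all - its output never reaches the log.
facility=1
severity=6
echo "priority: $(( facility * 8 + severity ))"
```

which prints "priority: 14".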

Can this be changed? Yes - "just" customize and recompile those applications you want logged. And/or modify the boot scripts. And force logging of messages outside the system facilities.

My bet is you are not going to do it.

(And your most likely response is: "nothing is missing from my log files." Oh well...)

Talkback: Discuss this article with The Answer Gang

Bio picture A. N. Onymous has been writing for LG since the early days - generally by sneaking in at night and leaving a variety of articles on the Editor's desk. A man (woman?) of mystery, claiming no credit and hiding in darkness... probably something to do with large amounts of treasure in an ancient Mayan temple and a beautiful dark-eyed woman with a snake tattoo winding down from her left hip. Or maybe he just treasures his privacy. In any case, we're grateful for his contribution.
-- Editor, Linux Gazette

Copyright © 2007, Anonymous. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 143 of Linux Gazette, October 2007

Securing Apache Web Server with mod_security

By René Pfeiffer

The Internet has its share of packet filters and proxy servers intended to increase security for clients and servers alike. Filtering network traffic is never a bad idea, since it provides a basic level of protection. When it comes down to protecting web servers, your packet filter will most probably allow HTTP and HTTPS traffic to your server application. Unless you deploy an application proxy that inspects HTTP, you can't do more. But you can equip your Apache web server with mod_security, which in turn helps you to analyse any request thrown at it.

Application Layer Inspection

When you do any network traffic filtering or inspection, you have to keep in mind that usually nothing understands the things being inspected better than the application in question. This is one of the reasons why proxy filters are "better" suited for this job. They know the protocol and can normalise fancily encoded requests. mod_security is in a very similar position. It sits right inside the Apache httpd process and inspects the HTTP requests. This is an important advantage over proxies, since it can also see compressed or even encrypted content without difficulties.

So, what needs to be inspected? Apache's httpd surely does inspect HTTP requests. What more do I need? Well, there are some things mod_security can do for you.

Sounds pretty impressive if you ask me. Now we only need to know how to add it to an existing Apache deployment.


The current released version is 2.x. It works well with Apache 2.0.x and 2.2.x. Apache 1.3.x is not supported anymore (you should really upgrade your Apache servers, seriously). mod_security has some more requirements.

mod_security doesn't use autoconf. You have to check its Makefile and tell it where the ServerRoot directory of your Apache installation is. Then you can try make and see if everything compiles. If you get compilation errors, make sure your compiler environment and your development packages are complete. After the make command finishes, stop your Apache server and issue a make install. The Makefile will copy the module into the Apache server modules directory (usually /usr/local/apache2/modules/ for a compiled web server; your distribution may put the modules elsewhere). Now you only need to activate the module by adding the following lines to your Apache configuration.
LoadFile /usr/lib/libxml2.so # optional
LoadModule security2_module modules/mod_security2.so
There we go. The only thing we need is to configure the module and the rule sets.


One word of caution: every security measure must be applied with a specific purpose. You can't just add filters without thinking about the consequences for applications. You or your users may have web applications running that break when special security measures are activated. If you are not sure whether you might break something, you can use all rule sets and actions in "audit mode". Then mod_security will only log, but not block. It is a good idea to test everything this way until you are ready to switch to "live mode". It also keeps your users happy.

A very simple test is to add a single rule by using the following two lines:

SecRuleEngine On
SecRule REQUEST_URI attack
Now send your web server a request with the word attack in it. You should get the error code 403 Forbidden and the blocked request should generate an entry in Apache's error log file. If you set the first option to
SecRuleEngine DetectionOnly
then the module will only detect and block nothing. We will now take a look at the classes of different options available. Please make sure you take a look at mod_security's documentation and at the file in the sample core rules archive that can be downloaded.

General Options

mod_security has several groups of options. Here are some of the basic configuration directives.

SecRuleEngine On
SecRequestBodyAccess On
SecResponseBodyAccess On
The first line switches request inspection on or off. The other two lines control whether request and response body data will be inspected by the module. Body inspection is required for checking HTTP POST requests, but the data has to be buffered and thus needs memory. You can reduce the amount of memory used with this directive.
SecRequestBodyInMemoryLimit 131072
You can also limit the size of the HTTP request body data. This is very handy for disabling large data in HTTP POST requests.
SecRequestBodyLimit 10485760
Every request bigger than 10485760 byte will be answered by the error code 413 Request Entity Too Large. The default is 134217728 bytes (131072 KB).
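To put the sizes quoted above into more familiar units, a quick shell check:

```shell
# SecRequestBodyLimit 10485760 is exactly 10 MB:
echo "$(( 10485760 / 1024 / 1024 )) MB"
# The default of 134217728 bytes is 131072 KB, i.e. 128 MB:
echo "$(( 134217728 / 1024 )) KB"
```

This prints "10 MB" and "131072 KB".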

Web servers typically include the MIME type of data they put into responses. You can control the types you want to inspect. Usually you will not want to parse JPEG or PDF data.

SecResponseBodyMimeTypesClear
SecResponseBodyMimeType (null) text/plain text/html text/css text/xml
The first statement clears the list of types to be checked. The second line sets the types we are interested in. File uploads may be something you wish to control. You can redirect every file upload to a separate directory. In addition, you can keep all uploaded files on your server, provided you have the disk space. This may be useful for forensic activities after something has gone wrong.
SecUploadDir /var/spool/apache/private
SecUploadKeepFiles Off
It is good practice not to use the usual directories for temporary files for storing uploads. Create a directory for this purpose and set the permissions accordingly. Only Apache needs to access this space.


You can enable audit logging which can help you a lot during debugging or worse situations. You have the choice of logging everything or only events that triggered one of mod_security's internal checks. This includes the rule sets.

SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus "^[45]"
We want to log only relevant events and concentrate on responses that generate 4xx or 5xx status codes. That's what the regular expression is for. You can do a full logging if you need to. The log can be written into a single file or into a directory with one file per log event. The latter is usually only for high volume sites.
SecAuditLogType Serial
SecAuditLog /var/log/www/modsec_audit.log
# SecAuditLogStorageDir /var/log/www/modsec_audit
As mentioned before, the content that can be logged is extensive. You can log portions of the original HTTP request along with the response.
SecAuditLogParts "ABEIFHZ"
This means we want to log the audit header and trailer (options A and H), the original request (option B), an intermediate response body (option E), the modified request body (option I), the final response header (option F) and the final boundary of the log event (option Z, which is mandatory). The intermediate response body is either the original response or the modified response by mod_security. The modified request body is either the original request body or a shortened variant whenever multipart/form-data encoding was used. The default options are "ABIFHZ".


The security module uses five distinct phases for processing requests to and responses from the web server.

  1. Parse request headers
  2. Parse request body
  3. Parse response headers
  4. Parse response body
  5. Do logging
It is important to keep this in mind when designing rule sets, almost just as in designing packet filters. The directive SecRule describes a rule; its general form is
SecRule VARIABLES OPERATOR [ACTIONS]
When written in this form, mod_security will
  1. expand the variables,
  2. apply the operator if present,
  3. trigger once for a match in every variable,
  4. and execute
    1. the default action or
    2. the actions described in the rule.
Remember the test rule with the string "attack". We told the module to check the variable REQUEST_URI of the HTTP request and apply the regular expression operator consisting of the desired string to look for. We didn't give any action, so the default action applies. You can combine variables by using logical operators.
SecRule REQUEST_URI|REQUEST_BODY attack
This does the same but with two different variables. The action will be triggered if the string fragment is found in either variable. You can use well-known operators inside rules. Matching of regular expressions is done by the PCRE library, so you can use any constructs PCRE understands (which is basically everything you can do in Perl's pattern matching). Long lines can be split by using "\" just as in Bash shell scripts.

Keep in mind that the VARIABLES section contains variables. Their content changes. If a variable is empty or not present, the rule doesn't match. This is important and desired for parameter checking. Variables can be ARGS, FILES, FILES_TMPNAMES, ENV, REMOTE_PORT, QUERY_STRING, SCRIPT_BASENAME, AUTH_TYPE, REQUEST_COOKIES, SESSIONID, TIME, and many more. The reference has a complete list.


mod_security rules can contain operators. They are used to validate the request or look for special anomalies. The "@" indicates that an operator follows.

SecRule ARGS "@validateUtf8Encoding"
SecRule ARGS "@validateByteRange" 10,13,32-126
The first rule checks the request for valid UTF-8 encoding. The second example checks for a specific range of characters in the request. If the request contains characters other than linefeed, carriage return, or the printable US-ASCII characters, then the action is triggered. You can also invoke additional scripts.
SecRule FILES_TMPNAMES "@inspectFile /usr/local/bin/check_file.pl" 
This redirects any uploaded files to the Perl script for further checks. The exit code of the script tells mod_security whether to invoke the action or not. You can even use realtime blacklists for your rules.
SecRule REMOTE_ADDR "@rbl bad.guys.and.girls.example.net"
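As an aside, the effect of the byte-range check above can be emulated in the shell to see which requests it would flag. This is a rough sketch of the idea, not how mod_security implements it:

```shell
# Emulate "@validateByteRange 10,13,32-126": flag any byte outside
# linefeed, carriage return, and printable US-ASCII (space to tilde).
probe=$(printf 'GET /index.html\007')   # contains a BEL byte (7)
bad=$(printf '[^\r -~]')                # character class of disallowed bytes
if printf '%s' "$probe" | LC_ALL=C grep -q "$bad"; then
  echo "byte outside allowed range - rule would trigger"
else
  echo "request is clean"
fi
```

The BEL control character falls outside the allowed range, so the sketch reports that the rule would trigger; replace the probe with plain printable text and it reports the request as clean.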


There are five general types of actions.

  1. Disruptive actions - abort current transaction.
    • deny - stops transaction and generates an error.
    • drop - drops transaction without error.
    • redirect - responds with a redirection (such as 301 or 302).
    • proxy - forwards the request to another server.
    • pause - slows down the execution of the request.
  2. Non-disruptive actions - change the state of the current transaction.
  3. Flow actions - change the flow of the rules.
    • allow - stops processing of subsequent rules.
    • chain - combines the active rule with the next one.
    • pass - ignores a match in the current rule; useful for commenting rules out while still keeping them active.
    • skip - skip the next or more rules.
  4. Meta-data actions - contain meta data for rules as additional information; useful for logging.
  5. Data actions - are placeholders for other actions.
The default action can be set with the directive SecDefaultAction. It can be changed whenever you need it, so you can define blocks of rules with different default actions. The actions are very similar to the ones used by the intrusion detection/prevention software Snort. They give you a lot of flexibility and allow for quite complex coverage of tests. Here is one example from the core rules that looks for attempts to inject an email message into the request.
SecRule REQUEST_FILENAME|ARGS|ARGS_NAMES|REQUEST_HEADERS|XML:/* "[\n\r]\s*(?:to|bcc|cc)\s*:.*?\@" \
  "t:none,t:lowercase,t:urlDecode,capture,ctl:auditLogParts=+E,log,auditlog,msg:'Email Injection Attack.
  Matched signature <%{TX.0}>',id:'950019',severity:'2'"
Note that I inserted a line break for better readability. The action parameter is a single string and tells the module to log every match to the normal log and the audit log with the message text Email Injection Attack along with some parameters of the request. The core rules have more examples for other attacks such as cross-site scripting, SQL injection, HTTP anomalies, and the like.

Rule Management

Make sure that you keep your rule files in good order and document every change. This is very important. Any kind of filter can break whole applications and protocols. Therefore you need to know what changes caused which effects. You will also need this information when developing your own rules. Bear in mind that custom web applications need custom rules. Some attacks may be the same, but customised applications have their peculiarities that have to be considered. A good place to start is the core rule sets. You can disable rules without deleting them from the configuration. This is extremely useful in case you wish to distribute rules to multiple servers. You can do that by splitting your rules into multiple files and having a master configuration that enables or disables selected sets.

Performance and Deployment

Everything has a price, and so does filtering HTTP requests. mod_security needs to hold the request in a buffer or has to store it in a temporary file. You have to take this into account. The parsing adds a little overhead in terms of CPU cycles to the web server as well. If you install the module on a server that already has performance issues, things won't get better. That's what the reverse proxy method is for. Hard-hit sites probably won't go anywhere without additional proxies.

One last thing to keep in mind are your own web applications. Don't just set up the core rules and accept all defaults. Inspect the rule sets and decide for yourself if you need all the rules. Things can break if you are not careful enough. No one knows your web apps better than you. Use this knowledge to your advantage.

Useful resources

Talkback: Discuss this article with The Answer Gang


René was born in the year of Atari's founding and the release of the game Pong. From his early youth he took things apart to see how they worked. He couldn't even pass construction sites without looking for electrical wires that might seem interesting. His interest in computing began when his grandfather bought him a 4-bit microcontroller with 256 bytes of RAM and a 4096-byte operating system, forcing him to learn assembler before any other language.

After finishing school, he went to university to study physics. He then gathered experience with a C64, a C128, two Amigas, DEC's Ultrix, OpenVMS, and finally GNU/Linux on a PC in 1997. He has been using Linux ever since, and still likes to take things apart and put them together again. The freedom of tinkering brought him close to the Free Software movement, where he puts some effort into the right to understand how things work. He is also involved with civil liberty groups focusing on digital rights.

Since 1999 he has been offering his skills as a freelancer. His main activities include system/network administration, scripting, and consulting. In 2001 he started giving lectures on computer security at the Technikum Wien. Apart from staring into computer monitors, inspecting hardware, and talking to network equipment, he is fond of scuba diving, writing, and photographing with his digital camera. He would like to have a go at storytelling and roleplaying again as soon as he finds some more spare time on his backup devices.

Copyright © 2007, René Pfeiffer. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 143 of Linux Gazette, October 2007

Introducing Python Pickling

By Amit Kumar Saha

The motivation for writing this article came while I was working on my first major Python project: I wanted a way to write my class data to an on-disk file, just as I had done on numerous occasions in C by writing structure data to a file. So if you want to learn the Pythonic way of persistent storage of your class data, this is for you. Let us start!

A. Pickle, Unpickle

A pickle is a Python object represented as a string of bytes. Sounds utterly simple? Oh well, it is that simple! This process is called pickling. So we have successfully converted our object into some bytes; now how do we get it back? To unpickle means to reconstruct the Python object from the pickled string of bytes. Strictly speaking, it's not reconstruction in a physical sense - it only means that if we have pickled a list, L, then after unpickling we can get back the contents of the list simply by accessing L again.

The terms 'pickle' and 'unpickle' correspond to object serialization and de-serialization respectively - the language-neutral terms for the process that turns arbitrarily complex objects into textual or binary representations of those objects and back.

A.1 The 'pickle' Module

The pickle module implements the functions to dump the class instance's data to a file and load the pickled data to make it usable.

Consider the Demo class below:

import pickle

class Demo:
	def __init__(self):
		self.a = 6
		self.l = ('hello','world')
		print self.a,self.l

Now, we will create an instance of Demo and pickle it.

>>> f=Demo()
6 ('hello', 'world')
>>> pickle.dumps(f)

The dumps function pickles the object and dumps the pickled representation to the screen. I am sure that this is not really comprehensible and doesn't look very useful - but if we dump the pickled object to an on-disk file, its utility increases manyfold. This is what we'll do next. Let's modify our code slightly to include the pickling code:

import pickle

class Demo:
	def __init__(self):
		self.a = 6
		self.l = ('hello','world')
		print self.a,self.l

if __name__ == "__main__":
        f = Demo()
        pickle.dump(f, file('Demo.pickle','w'))

Now, let us unpickle:

>>> f3=pickle.load(file("Demo.pickle"))
>>> f3.a
6
>>> f3.l
('hello', 'world')

So far, so good.

A.2 The 'cPickle' Module

cPickle is an extension module written in C that provides the same pickling facilities and is up to 1000 times faster than the pickle module. The usage is the same as pickle, and pickles produced by each are compatible.

>>> import cPickle
>>> f3=cPickle.load(file("Demo.pickle"))
>>> f3.l
('hello', 'world')

B. A Glimpse Behind the Scenes

The data format used by pickle is Python-specific, which obviously rules out pickling as an option for persistent storage if you are looking for a language-neutral solution. Human-readable (and thus easily debuggable) ASCII is the default format used by Python for writing pickled objects. There are 3 different protocols which can be used for pickling:

  1. Protocol version 0 is the original ASCII protocol and is backward compatible with earlier versions of Python.
  2. Protocol version 1 is the old binary format which is also compatible with earlier versions of Python.
  3. Protocol version 2 was introduced in Python 2.3. It provides much more efficient pickling of new-style classes.
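
As a quick sketch of selecting a protocol (the sample data here is just illustrative), the protocol number is passed as the optional second argument to dumps (or third to dump):

```python
import pickle

data = {'a': 6, 'l': ('hello', 'world')}

# Protocol 0: the default - human-readable ASCII
ascii_form = pickle.dumps(data, 0)

# Protocol 2: binary - more efficient for new-style classes
binary_form = pickle.dumps(data, 2)

# Whichever protocol was used for writing, loads() detects it
# automatically and reconstructs an equal object.
restored = pickle.loads(binary_form)
```

Higher protocol numbers produce more compact pickles, at the cost of not being readable by older Python versions.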

C. Conclusion

The basic goal of this short tutorial was a hands-on introduction to pickling in Python as a method of writing class data to persistent storage, especially for new Python programmers. I have intentionally left out issues related to working with bigger and more complex classes, for which some good resources are listed below. More basic things, such as pickling simple lists and dictionaries, have also been omitted, but finding out how to do these will not require much looking around.

I hope that you are ready to use pickling in your projects. Happy coding!


  1. Python persistence management
  2. pickle module
  3. cPickle module

Talkback: Discuss this article with The Answer Gang


The author is a freelance technical writer. He mainly writes on the Linux kernel, Network Security and XML.

Copyright © 2007, Amit Kumar Saha. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 143 of Linux Gazette, October 2007

A Question Of Rounding

By Paul Sephton


Schools teach scholars, as part of their curriculum, how to round a number. We are taught that a number such as 0.1 may be approximated as 0, whilst the number 0.5 should be seen as approximately 1. Likewise, 1.4 is 1, whilst 1.5 is 2; 1.15 is 1.2 whilst 1.24 is also 1.2 when approximated to a single digit of precision.

Many industries use rounding; Chemistry, for one, recognises the accuracy of tests to be only within a finite range of precision. For example, whilst a Gas Chromatograph might produce the value 45.657784532, we know that it is actually only accurate to three decimal places, and it is senseless not to round the number to 45.658 before doing further calculations.

Financial and banking industries have similar rules for rounding. It is well known that there are very few absolutes in mathematics - for example the number Pi, the ratio of the circumference of any circle to its diameter, is an irrational number and cannot be accurately represented. A simpler example is 1/3, or decimal 0.333333... (add an infinite number of 3's).

In modern floating point processors, we have a finite number of decimals of precision to which we store numeric floating point values. This limit of precision is dependent on the size (in bytes) of the storage unit. We primarily find 4 byte, 8 byte and 10 byte floating point numbers stored according to the IEEE-754 specification for floating point representation.

In the process of specifying the standard, the IEEE had to decide what to do about the matter of loss of precision. They determined that the most stable mode for rounding at the point of precision loss was to round towards the nearest value, or in the case of equality, to round towards the nearest even value. What this means, assuming floating point values had a single decimal of precision, is that the value 0.5 would be rounded to 0, whilst 1.5 and 2.5 would both be rounded to 2. In doing this, the rounding error in calculations is averaged out, and you are left with the most consistent result.

Of course, the 8-byte double precision IEEE value has 15 decimals of precision, and not a single decimal as described above. The rounding error would only apply to the last decimal, so by rounding that last decimal of precision, we are left with a pretty accurate mathematical representation.

Probably due to a few misunderstandings, a problem has come about in the implementation of certain programming library functions, in particular the Gnu C library, GlibC and the functions used for converting IEEE floating point numbers into displayable text. This is the main interest of this document; to highlight a particular problem in the hope that this will lead to a change in approach.

The sign, mantissa and exponent

IEEE-754 double precision floating point values are stored as binary (base 2) values having a sign bit (s), an 11-bit exponent (e), and a 52-bit mantissa (f). Logically, this means that we can store positive or negative numbers in the range 0 to 2^52 - 1 (4503599627370495) with an exponent in the range zero to 2^11 - 1 (2047). This means that we have at most 15 (and a bit) decimals of precision for the mantissa.

Values Represented by Bit Patterns in IEEE Double Format

  Double-Format Bit Pattern                            Value
  0 < e < 2047                                         (-1)^s × 2^(e-1023) × 1.f  (normal numbers)
  e = 0; f != 0 (at least one bit in f is nonzero)     (-1)^s × 2^(-1022) × 0.f   (subnormal numbers)
  e = 0; f = 0  (all bits in f are zero)               (-1)^s × 0.0               (signed zero)
  s = 0; e = 2047; f = 0 (all bits in f are zero)      +INF (positive infinity)
  s = 1; e = 2047; f = 0 (all bits in f are zero)      -INF (negative infinity)
  s = u; e = 2047; f != 0 (at least one bit nonzero)   NaN  (Not-a-Number)

A more familiar and quite similar representation is scientific decimal notation, in the format [s]f × 10^[+/-]e - e.g. -1.23e+20 or +1.23e-20, where f is the mantissa (in this case 1.23) and e is the exponent (in these instances +20 and -20 respectively).

The problem; GlibC & sprintf()

The sprintf() function is ubiquitously used in the generation of output reports, being the prime candidate for enabling the conversion from numbers to text.

In the light of buffer overflows and other things that might go wrong, the use of snprintf() or other string functions is strongly suggested.
-- René Pfeiffer

In coding the C library, programmers were well aware of the above information about double precision numbers, including the default rounding mode for handling calculation errors. These people knew that the FPU (Floating Point Unit) automatically rounded the results of calculations to the nearest value, or if on the border between values, to the nearest even value.

The problem arose when this same logic was applied to precisions less than 15.

When converting a value 3.5 (stored at a precision of 15) to text for display with a precision of zero decimals, the GlibC sprintf() function correctly produces the value 4. However, when converting the value 2.5, sprintf() produces not the expected value 3, but the value 2!

It should be stressed here, that the IEEE-754 representation has a full 15 decimal precision (or 53 binary digits of precision) to play with, regardless of the number being represented. Different numbers might be represented exactly or inexactly within the FPU, but all IEEE values are represented with exactly the same precision of 15 decimals. Therefore, no assumption may be made about an implied precision of a stored number, or inferred from the display size.

Whilst a number might be stored inexactly at a precision of 15 decimals, that same number may be exact when viewed at 14 decimals of precision. For example, the value 2.49999999999992 is promoted (using IEEE rounding) to 2.4999999999999 at 14 decimals of precision, and then, by the same rounding rules, to 2.5.

Where the IEEE rounding mode makes an immense amount of sense for calculations and handling the error at decimal 15, it makes no sense at all when applying that rounding mode to the display of numbers. When displaying a bond, traded on a stock exchange at 3.145 to two decimal points, one would expect 3.15 and not 3.14.

When using the function printf("%.0f", 2.5), one may therefore reasonably expect the number 2.50 to be rounded upwards to 3, since there is no ambiguity or error implied in the storage of the value 2.50.


The default IEEE rounding mode, as applied to calculations using numbers stored to an identical precision of 15 for double precision values, is the most stable way to produce consistent and accurate results when the rounding is consistently applied at decimal 15.

However, when formatting numbers for display, it is more desirable to accurately represent these same numbers in a consistent way according to industry and mathematical standards. Rounding upwards for positive values, or downwards for negative values is the generally accepted norm, and it is senseless to apply the IEEE-754 rules when the known precision of the number, being fixed at 15, is greater than that of the displayed precision.

It is evident that the GlibC developers, in an attempt at compliance with the IEEE-754 standard, have confused these two issues, to the detriment of the industry as a whole. The damage caused by this misunderstanding is far-reaching, and not necessarily easily circumvented. Applications which work flawlessly on the Microsoft platform have to be specifically altered before being compiled against GlibC.

Unless the difference between GlibC and the Microsoft runtime is well understood, and the adjustments made, to cater for these differences before a product is released, it is inevitable that this seemingly innocuous discrepancy will lead to general and widespread mistrust in applications which use the GlibC runtime.

Late Addendum

Subsequent to writing this article, it has come to light that the Microsoft C runtime library, whilst more accurate in most cases than the GNU C library, also fails to correctly convert binary IEEE double precision numbers to decimal representation in some cases.

The following code demonstrates the principles discussed in this article, and properly converts binary IEEE values to decimal format inside a buffer for display according to generally accepted mathematical standards - at least for the Intel platform. Note that

  printf("%.2f", 5000.525); 

fails to produce the expected result in both Microsoft and GNU libraries.

/*
 * Compile with: gcc -std=c99 -lm -o filename filename.c
 * Definition of _GNU_SOURCE required for compilation with the
 * GNU C Compiler (disables warning about implicit definition of pow10())
 */
#define _GNU_SOURCE

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

// Utility function converts an IEEE double precision number to a
// fixed precision decimal format stored in a buffer.
void tobuf(size_t max, int *len, char *buf,
           double x, int precision, double max_prec, double carry)
{
  int    sign  = x < 0;                             // remember the sign
  double q     = pow10(-precision);                 // current mask
  double y     = x==0?0:fmod(fabs(x), q);           // modulus
  double l_div = round(y*max_prec)/max_prec+carry;  // significant digit
  int    l_dec = (int)round(l_div*10/q);            // round to decimal
  carry = l_dec>=10?l_div:0;                        // carry forward?
  l_dec = l_dec % 10;                               // this decimal
  x = x>0?x-y:x+y;                                  // subtract modulus
  if (fabs(x) > 0)                                  // recurse while |x| > 0
    tobuf(max, len, buf, x, precision-1, max_prec, carry);
  else {                                            // x == 0 - first digit
    if (*len >= max) return;
    if (sign) { buf[*len] = '-'; *len = *len + 1; }
    if (*len+1 <= max && precision >= 0) {
      buf[*len] = '0'; *len = *len + 1;
      buf[*len] = '.'; *len = *len + 1;
      while (precision-- > 0) {
        buf[*len] = '0'; *len = *len + 1;
        if (*len >= max) return;
      }
      precision = -1;  // don't place another period
    }
  }
  if (*len <= max && precision == 0) {
    buf[*len] = '.'; *len = *len + 1;
  }
  // for first and subsequent digits, add the digit to the buffer
  if (*len >= max) return;
  if (l_dec < 0) l_dec = 0;
  buf[*len] = '0' + l_dec;
  *len = *len + 1;
}

// Convert the value x to a decimal representation stored in a buffer
int dbl2buf(size_t max, char *buf, double x, int precision) {
  const int DECIMALS=15;
  int    max_dec = DECIMALS-(int)(trunc(log10(fabs(x)))+1); // max significant digits
  double max_prec = pow10(max_dec);                   // magnitude for precision loss
  int    len = 0;                                     // buffer length init
  double y       = x==0?0:fmod(fabs(x), 1/max_prec);  // determine error
  double l_carry = round(y*max_prec)/max_prec;        // error is carried forward

  if (x != x) { strncpy(buf, "NAN", max); return 0; }
  if ((x-x) != (x-x)) { strncpy(buf, "INF", max); return 0; }
  tobuf(max, &len, buf, x, precision-1, max_prec, l_carry); // fill in buffer
  buf[len] = 0;                                             // terminate buffer
  return len;                                      // return buffer length used
}

//  Usage of the dbl2buf function.
int main (void)
{
  char buf[64];
  double x = 5000.525;
  dbl2buf(sizeof(buf), buf, x, 2);
  printf("%.15f = %s\n", x, buf);
  return 0;
}


Talkback: Discuss this article with The Answer Gang


Paul works as a Software Architect and Technology Officer for a financial information vendor. After abandoning a career in nuclear chemistry, during which he developed an interest in hardware and software, he joined a firm of brokers as a developer. He first started using Linux in 1994 with version 1.18 of Slackware. His first passion will always be writing software. Other interests are composing music, playing computer games, Neural network simulations and reading.

Copyright © 2007, Paul Sephton. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 143 of Linux Gazette, October 2007


HelpDex

By Shane Collinge

These images are scaled down to minimize horizontal scrolling.


Click here to see the full-sized image

All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.

Talkback: Discuss this article with The Answer Gang

Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in his brightly-coloured underwear fighting criminals. During the day... well, he just runs around in his brightly-coloured underwear. He eats when he's hungry and sleeps when he's sleepy.

Copyright © 2007, Shane Collinge. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 143 of Linux Gazette, October 2007


Ecol

By Javier Malonda

The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author.

These images are scaled down to minimize horizontal scrolling.


Click here to see the full-sized image

All Ecol cartoons are at tira.escomposlinux.org (Spanish), comic.escomposlinux.org (English) and http://tira.puntbarra.com/ (Catalan). The Catalan version is translated by the people who run the site; only a few episodes are currently available.

These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.

Talkback: Discuss this article with The Answer Gang

Copyright © 2007, Javier Malonda. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 143 of Linux Gazette, October 2007