LINUX GAZETTE


"Linux Gazette...making Linux just a little more fun!"


Downloading LinuxToday links and Linux Gazette's TOC with Python (and Perl)

By Mark Nielsen


Contents

  1. Introduction
  2. The Python Script
  3. Setting up a cron job
  4. A Perl Script I wrote to download Linux Gazette TOC
  5. A Perl Script I wrote to download Debian Weekly News
  6. Conclusion
  7. References

Introduction

I wanted to add Linux Today's links to my website GNUJobs.com, just for the fun of it. Later, I want to add more headlines from other websites, and perhaps LG's latest edition. I had a choice of Perl or Python. I chose Python because I have been using it for quite a while for a mathematical project, and it has proven quite useful. I want to make a habit of using Python now, since it tends to be easier for me to program in Python than in Perl. Also, in the future, I wish to use threading to download many webpages at the same time, which Python does very well. I might as well do it in Python now, since I know I will use it later.

Both Perl and Python will let you download webpages off the internet. They can do more than just download webpages: they can also talk to ftp and gopher servers and connect to other services. Downloading a webpage is just one of the things these languages can do.
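
For example, grabbing a page in Python takes only a couple of lines with the urllib module. A minimal sketch (the URL here is just an example):
#!/usr/bin/python
import urllib

  ### Open the url and read the whole page into one string
Page = urllib.urlopen("http://linuxtoday.com/").read()
print Page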

There are several things the programming language has to do:

  1. Download the webpage with the headlines.
  2. Parse each entry into its header, link, and date.
  3. Write the entries out as an HTML list.
  4. Replace the old file with the new one, but only if the download produced valid entries.

This article isn't going to be too long. I commented the Python code.

The Python Script

If you want to include the output of this script in a webpage, you can use the Server-Side Includes (SSI) module in the Apache webserver and use a directive like:
<!--#include virtual="/lthead.html" -->
in your webpage. Various web programming environments (like PHP, Perl ASP, Perl Mason, etc.) can also include files.
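
If you generate the page from a script instead of using SSI, you can paste the file in yourself. A minimal Python CGI sketch (it assumes the file was already written to the location used by the script below):
#!/usr/bin/python

  ### Print the HTTP header, then wrap the downloaded headlines in a page
print "Content-type: text/html"
print
print "<html><body>"
print open("/tmp/lthead.html").read()
print "</body></html>"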

It is assumed you are using a GNU/Linux operating system. Also, I was using Python 1.5.2, which is not the latest version. You might have to do a

chmod 755 LinuxToday.py
on the script to make it executable.
#!/usr/bin/python

# One obvious thing to do is add error checking: that the url downloads,
# that the download contains at least one entry, and that we can create
# the new file. This will be done later.

  ### import the web module, the string module, the regular expression module,
  ### and the os module
import urllib, string, re, os

  ### define the new webpage we create and where to get the info
Download_Location = "/tmp/lthead.html"
Url = "http://linuxtoday.com/backend/lthead.txt"

#-----------------------------------------------------------
  ### Create a web object with the Url
LinuxToday = urllib.urlopen( Url )
  ### Grab all the info into an array (if big, change to do one line at a time)
Text_Array =  LinuxToday.readlines()

New_File = open(Download_Location + "_new", 'w')
New_File.write("<ul>\n") 
  ### Record the number of valid entries
Entry_No = 0
  ### Flag marking whether the current entry is valid
Entry_Valid = 0
  ### Setup the defaults
Date = ""
Link = ""
Header = ""
Count = 0
  ### Create the pattern matching expression
Match = re.compile ("^\&\&")

  ### Append && to make sure we parse the last entry
Text_Array.append('&&')
  ### For each line, do the following
for Line in Text_Array :
    ### If && exists, start from scratch, add last entry
  if Match.search(Line) :
      ### If the current entry is valid and we have skipped the first one, write it out
    if (Entry_No > 1) and (Entry_Valid > 0) :
        ### One thing that Perl does better than Python is the print command. I
        ### don't like how Python prints (no variable interpolation).
      New_File.write('<li> <a href="' + Link + '">' + Header + '</a>. ' + Date + "</li>\n")
      ## Reset the values to nothing.
    Header = ""; Link = ""; Date = ""; Entry_Valid = 0
    Count = 0 
    
    ### Delete whitespace at end of line
  Line = string.rstrip(Line)

    ### Count 1 is the header line, 2 the link, 3 the date
  if Count == 1:    Header = Line
  elif Count == 2:  Link = Line
  elif Count == 3:  
    Date = Line
      ### If all three fields are filled in, we have a valid entry
    if  (Header != "") and (Link != "") and (Date != "") :
      Entry_No = Entry_No + 1
      Entry_Valid = 1  

    ### Add one to Count
  Count = Count + 1

New_File.write("</ul>\n")

New_File.close()

  ### If we have valid entries, move the new file to the real location
if Entry_No > 0 :
    ### We could just do:
    ### os.rename(Download_Location + "_new", Download_Location)
    ### But here's how to do it with an external command.
  Command = "mv " + Download_Location + "_new " + Download_Location
  os.system( Command )
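
The comment at the top of the script promises error checking later. A minimal sketch of what that could look like, reusing the names from the script above (the parsing in the middle is elided):
#!/usr/bin/python
import urllib, os, sys

Download_Location = "/tmp/lthead.html"
Url = "http://linuxtoday.com/backend/lthead.txt"

  ### Bail out politely if the download fails
try:
  Text_Array = urllib.urlopen( Url ).readlines()
except IOError:
  sys.stderr.write("Could not download " + Url + "\n")
  sys.exit(1)

  ### ... parse Text_Array and write Download_Location + "_new" as above ...

  ### Bail out politely if the rename fails
try:
  os.rename(Download_Location + "_new", Download_Location)
except os.error:
  sys.stderr.write("Could not rename " + Download_Location + "_new\n")
  sys.exit(1)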

Setting up a cron job

Not the best crontab file, but it will do.
### Crontab file
### Name the file "Crontab" and install it with "crontab Crontab"

  ### Download every two hours, on the hour
0 */2 * * *   /www/Cron/LinuxToday.py >> /www/Cron/out  2>&1

A Perl Script I wrote to download Linux Gazette TOC

Just so you can compare the two languages, I created a Perl script which downloads the TOC of LG's latest edition.
#!/usr/bin/perl
# Copyright Mark Nielsen January 2001
# Copyrighted under the GPL license.

# I am proud of this script.
# I wrote it from scratch with only 2 minor errors when I first tested it.

system ("lynx --source http://www.linuxgazette.com/ftpfiles.txt > /tmp/List.txt");

  ### Open up the webpage we just downloaded and put it into an array.
open(FILE,'/tmp/List.txt'); my @Lines = <FILE>; close FILE; 
  ### Filter out lines that don't contain the magic filename pieces.
@Lines = grep(($_ =~ /lg\-issue/) || ($_ =~ /\.tar\.gz/), @Lines );

my @Numbers = ();
foreach my $Line (@Lines)
  {
    ## Throw away the stuff to the left
  my ($Junk,$Good) = split(/lg\-issue/,$Line,2);
    ## Throw away the stuff to the right
  ($Good,$Junk) = split(/\.tar\.gz/,$Good,2);
    ## If it is a valid number (greater than 0), save it
  if ($Good > 0) {push (@Numbers,$Good);}
  }

   ### Sort the numbers and pop off the highest
@Numbers = sort {$a<=>$b} @Numbers;
my $Highest = pop @Numbers;
   ## Create the url we are going to download
my $Url = "http://www.linuxgazette.com/issue$Highest/index.html"; 
   ## Download it
system ("lynx --source $Url > /tmp/LG_index.html");

   ### Open up the index.
open(FILE,"/tmp/LG_index.html"); @Lines = <FILE>; close FILE;
   ### Extract the parts that are between the beginning and end of the TOC.
my @TOC = ();
my $Count = 0;
my $Start = '<!-- *** BEGIN toc *** -->';
my $End = '<!-- *** END toc *** -->';
foreach my $Line (@Lines) 
  {
  if ($Line =~ /\Q$End\E/) {$Count = 2;}
  if ($Count == 1) {push(@TOC, $Line);}
  if ($Line =~ /\Q$Start\E/) {$Count = 1;}
  }

  ### Relink all the links to point to the Linux Gazette magazine
my $Relink = "http://www.linuxgazette.com/issue$Highest/";
grep($_ =~ s/HREF\=\"/HREF\=\"$Relink/g, @TOC);

  ### Save the output
open(FILE,">/tmp/TOC.html"); print FILE @TOC; close FILE;

  ### Done!
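
For comparison, the heart of this script, pulling the issue numbers out of the file list and keeping the highest, looks much the same in Python. A sketch, using urllib instead of lynx:
#!/usr/bin/python
import urllib, re, string

  ### Grab the ftp file list and pull the number out of every lg-issueNN.tar.gz
Lines = urllib.urlopen("http://www.linuxgazette.com/ftpfiles.txt").readlines()
Pattern = re.compile(r"lg-issue([0-9]+)\.tar\.gz")
Numbers = []
for Line in Lines :
  Found = Pattern.search(Line)
  if Found :
    Numbers.append(string.atoi(Found.group(1)))

  ### Sort numerically and print the url of the highest issue
Numbers.sort()
if Numbers :
  print "http://www.linuxgazette.com/issue%d/index.html" % Numbers[-1]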

A Perl Script I wrote to download Debian Weekly News

I like to keep track of Debian Weekly News, so I wrote this one also. One bad thing about programming is that when you get really good at doing things a certain way, it is hard to switch to another language. I wrote these two Perl scripts without looking at any reference code, while the Python code took me a while, because I am still not used to it.
#!/usr/bin/perl
# Copyright Mark Nielsen January 2001
# Copyrighted under the GPL license.

system ("lynx --source http://www.debian.org/News/weekly/index.html > /tmp/List2.txt");

  ### Open up the webpage we just downloaded and put it into an array.
open(FILE,'/tmp/List2.txt'); my @Lines = <FILE>; close FILE; 
   ### Extract the parts that are between the beginning and end of the TOC.
my @TOC = ();
my $Count = 0;
my $Start = 'Recent issues of Debian Weekly News';
my $End = '</p>';
foreach my $Line (@Lines) 
  {
  if (($Line =~ /\Q$End\E/i) && ($Count > 0)) {$Count = 2;}
  if ($Count == 1) {push(@TOC, $Line);}
  if ($Line =~ /^\Q$Start\E/i) {$Count = 1;}
  }

  ### Relink all the links to point to the DWN
my $Relink = "http://www.debian.org/News/weekly/";
grep($_ =~ s/HREF\=\"/HREF\=\"$Relink/ig, @TOC);
grep($_ =~ s/\"\>/\" target=_external\>/ig, @TOC);

  ### Save the output
open(FILE,">/tmp/D.html"); print FILE @TOC; close FILE;

  ### Done!
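
The only new trick here is the relinking at the end. In Python, that would be a couple of re.sub calls; a sketch on one made-up sample line:
#!/usr/bin/python
import re

Relink = "http://www.debian.org/News/weekly/"
Line = '<a href="2001/2/">Debian Weekly News - February</a>'
  ### Point the relative link at the DWN site, then open it in a new window
Line = re.sub('(?i)href="', 'href="' + Relink, Line)
Line = re.sub('">', '" target=_external>', Line)
print Line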

Conclusion

The Python script is actually more complex than it needs to be. I made it longer to introduce various modules and to stay flexible in case LinuxToday's format changes someday. The only thing the script lacks is error detection in case it can't download the web page, write the new file, or rename it (the sketch after the listing shows one way to add that). Also, watch the regular-expression modules in Python, because they have been changing in recent versions to increase efficiency and incorporate Unicode support.

Python rules as a programming language. I found it very easy to use the Python modules. The Python module for handling webpages seems easier to use than the LWP module in Perl. Because of the many possibilities of Python, I plan on creating a Python script which will download many webpages at the same time using Python's threading capabilities.
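
As a first taste of that, here is a minimal sketch using the threading module (the list of urls is just an example):
#!/usr/bin/python
import urllib, threading

Urls = ["http://linuxtoday.com/backend/lthead.txt",
        "http://www.linuxgazette.com/ftpfiles.txt"]

def Download (Url) :
    ### Each thread downloads one page and reports its size
  Page = urllib.urlopen(Url).read()
  print Url, len(Page), "bytes"

  ### Start one thread per url, then wait for them all to finish
Threads = []
for Url in Urls :
  T = threading.Thread(target = Download, args = (Url,))
  Threads.append(T)
  T.start()
for T in Threads :
  T.join()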

References

  1. LinuxToday's links
  2. Python's urllib module
  3. Original site for this article (any updates will be here)


Copyright © 2001, Mark Nielsen.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001
