Linux Setup

This document describes all of the steps that one would typically take to set up a Linux system (principally RedHat, CentOS, Ubuntu or SuSE) from scratch. The steps are organized in the order one might follow to get a working system up and running. Optional components are also discussed. The various topics covered include packet filtering and firewalls, printing, DHCP and DNS, X-windows, networking (including dialup and PPPoE), email delivery and filtering, time, uninterruptible power supplies, databases, Web servers, indexing, and partitions.

Driver Disks

A driver disk adds support for hardware that is not otherwise supported by the OS installation program. There is usually no need to use a driver disk unless you need a particular device in order to install the OS. Driver disks are most often used for non-standard or very new CD-ROM drives, SCSI adapters, or NICs. These are the only devices used during the installation that might require drivers not included on the OS install disk.

If a motherboard or other hardware vendor gives you a driver disk or you are told to download a driver disk from a Web site, you need to install it onto the system at OS install time (or afterwards, by booting from the OS install disk and doing a reinstall).

To begin, download the image, if you don't already have a disk, and burn it to a diskette or CD using a program like rawwritewin (under Windoze) or dd (under Linux). If you were given a diskette and need a CD, you'll probably need to first create a diskette and then copy all of the files on it to a CD using one of the CD burning programs. Be sure that you have the right driver for your version of the OS, the correct platform (e.g. i686 or x86_64), and the exact hardware that you're trying to support.
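For example, the image can be written to a floppy with dd like this (the image name drvdisk.img is hypothetical; substitute the file you actually downloaded):

```shell
su
# Write the raw driver-disk image to the first floppy drive
dd if=drvdisk.img of=/dev/fd0 bs=1440k
# If the vendor ships the image as an ISO instead, burn it with your
# usual CD burning program rather than dd.
```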

Incidentally, the driver disks for the r8168 driver for CentOS are listed on this page:

     http://wiki.centos.org/AdditionalResources/HardwareList/RealTekRTL8111b

Boot from the Linux install disk and at the first prompt, type the following:

     linux dd

Once you've done that, you should either proceed with a fresh install or, if you are simply adding the driver to an existing install (this probably only works for NICs), proceed with a reinstall. At some point in the install/reinstall, you will be prompted to insert the driver disk and the driver will be copied to your system.

After the OS is installed or reinstalled, you may have some additional work to do. If the new driver doesn't get added to the kernel dependencies properly, you can do it by hand. You may also have to do it by hand if you install any updates (especially to the kernel), since the driver installed by the install program is not tracked anywhere (nothing like the right hand keeping secrets from the left hand). Many updates will simply put the original (i.e. non-working) device driver back onto your system.

Begin by checking if the old, non-working driver is installed (if you're just installing a completely new driver, skip these steps). We'll look for the r8169 driver, which is broken when used with RTL8168/RTL8111 chips:

     lsmod | grep r8169

If the old driver is found, remove it:

     su
     rmmod r8169

Note that you should be doing this at your system's console since, at this point, your network (if it was working at all) will stop working.

Next, check to see if the new driver is there or not, for example the new r8168 driver:

     lsmod | grep r8168

If it isn't, install it from the install path where it was put, for example:

     su
     cp /lib/modules/2.6.18-53.el5/i686/r8168.ko \
        /lib/modules/`uname -r`/kernel/drivers/net
     insmod /lib/modules/`uname -r`/kernel/drivers/net/r8168.ko

If you don't know where the install program "hid" the driver, you may be able to find it fairly quickly like this:

     find /lib/modules -name r8168\*

Check again to see that it is now properly installed:

     lsmod | grep r8168

At this point, you should add an alias for any devices that will use the new device driver to /etc/modprobe.conf, while at the same time making sure that nothing references the old device driver. For example:

     su
     edit /etc/modprobe.conf
Then, comment out any reference to the old driver and add an alias for the new one:
     # alias eth0 r8169
     alias eth0 r8168

Now, you can update the module dependencies list using:

     su
     depmod -A

Also, the conflicting, original driver may still think that it knows how to handle the hardware (but it doesn't really). To get around this, you may have to blacklist it by adding its name to one of the blacklist files. For example, for network cards:

     su
     cd /etc/modprobe.d
     touch blacklist-network
     edit blacklist-network

Regardless of whether the blacklist file is new or if it already exists, add something that looks like the following lines to it:

     # Don't load the r8169 driver. It is replaced by the r8168 driver.
     blacklist r8169

Finally, some versions of Linux boot from a RAM image of the kernel which will still have the old driver (if any) installed and the new driver missing. If you are using one of these versions of Linux (e.g. Ubuntu, CentOS), you need to update this image or the incorrect driver will still get loaded and take over the show.

To update the RAM image on CentOS, use:

     su
     mkinitrd -f /boot/initrd-`uname -r`.img `uname -r`

To update the RAM image on Ubuntu, use:

     su
     update-initramfs -u

Reboot the machine to see if the changes are permanent and that the new piece of hardware now works.

You can check new ethernet drivers with:

     ethtool -i eth0

Problems With GRUB 2

Now that GRUB 2 is being used as the boot loader for several OS versions, you can expect all kinds of problems with booting your system. Our favorite is that the system comes up with the BIOS splash screen, you may or may not see the GRUB startup screen (where you get to pick which image to boot), and then the screen goes blank until the OS is completely up and running and X-Windows has started. This works especially swell when the boot sequence decides to do an fsck and then prompts you for what to do next. Lots of luck figuring out what's going on and answering the prompts correctly.

The blame for this problem is placed on the video card. Apparently, according to the GRUB developers, certain video cards (e.g. nVidia) do not play well with GRUB2. We prefer to put the blame squarely on the shoulders of those who caused it. Prior to installing GRUB2 (i.e. for an OS that uses GRUB), the system booted fine, with no video card problems. The system boots fine with Knoppix, from the standalone CD. Other operating systems, which shall remain nameless, boot fine. Any number of install disks boot fine. Get the idea?

If your system eventually does come up and you can get logged in (either through the GUI, on the desktop, or via ssh or telnet), you may be able to fix the problem.

From the command shell, using your favorite text editor, remove the hash mark (#) from the front of the line in the /etc/default/grub file that reads:

     #GRUB_GFXMODE=640x480

In other words, change it to read:

     GRUB_GFXMODE=640x480

Save the file and then update GRUB with the following commands:

     su
     update-grub

The next time that you boot the system, you may be able to see more of what is going on.

Further changes may be necessary to get GRUB to display the startup messages, especially with some nVidia graphics cards. Again, edit /etc/default/grub and change the line that reads:

     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

to read:

     GRUB_CMDLINE_LINUX_DEFAULT="splash nomodeset"

Save the file and then update GRUB with the following commands:

     su
     update-grub

An alternative to editing the GRUB configuration file from a working system is to use a standalone system such as Knoppix. Boot the Knoppix CD, mount the file system and using the editor, change the /etc/default/grub file, as noted above, and save it. Unmount the file system and shut Knoppix down.
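As a sketch, the whole sequence from a Knoppix shell might look like this (the partition name /dev/sda1 and its mount point are assumptions; use whatever Knoppix shows for your root file system):

```shell
su
# Mount the installed system's root partition (assumed to be /dev/sda1)
mount /dev/sda1 /media/sda1
# Edit the GRUB defaults file on the installed system
kate /media/sda1/etc/default/grub
# To make the changes permanent right away, you can chroot into the
# installed system and run update-grub from there, instead of using the
# recovery menu described below:
mount --bind /dev /media/sda1/dev
mount --bind /proc /media/sda1/proc
mount --bind /sys /media/sda1/sys
chroot /media/sda1 update-grub
# Clean up before shutting Knoppix down
umount /media/sda1/dev /media/sda1/proc /media/sda1/sys
umount /media/sda1
```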

Another alternative is to convince GRUB to show the recovery menu at startup time. This is done by pressing the down arrow key once, followed by the enter key once, when the system boots. There is a very short window when you can do this, as soon as the BIOS hands control to GRUB. Typically, this is when the BIOS splash screen disappears, the display goes blank, or the "VGA Mode not support" (or whatever broken English is used) message is shown by your display. If you get it right, you'll see a square box with a list of recovery choices, in the center of the screen.

By picking the "root" or "netroot" choice, you'll get dropped into a root shell, either with or without networking enabled (we use "root", since there is no need for the network). You'll need to enter the root password to get into the shell. From there, you can run your favorite CURSES-based editor (e.g. emacs or vi) to edit /etc/default/grub. After you've made your changes and saved them, you can exit the root shell by pressing Ctrl-D.

Whether you edit /etc/default/grub using a standalone OS or via the root shell, you will need to update GRUB so that the changes you've made become permanent. After dropping out of the root shell or by following the steps above, you need to get back to the recovery choices. At that point, pick the "grub" choice to update GRUB. This will run update-grub to make your changes permanent. You can then pick "resume" to resume booting normally.

Incidentally, if you can see the splash screen, with its list of kernels to boot, you can use the "e" option to temporarily edit the GRUB boot options. Begin by selecting the kernel that you'd like to boot. Then, type "e". This will bring up an EMACS-like editor that lets you edit the boot options for that kernel.

You'll see a line that looks something like:

     linux /vmlinuz-3.2.0-24-generic root=UUID=1234abcd-fedc-1234-5678-\
       12345678abcd ro quiet splash vt.handoff=7

By altering that line, you can change the "quiet splash" to "splash nomodeset", for example. Once you've made your changes, press F10 to continue booting. Maybe this will let you get the system up and running long enough to edit the /etc/default/grub file with your regular text editor, as described above.

While your system is booting, you should be able to press Esc to see the boot messages. This should especially work if your system is waiting for you to answer a prompt from fsck or another system startup task.

Additional notes about changing GRUB2 boot options can be found at:

     http://ubuntuforums.org/showthread.php?t=1613132

Getting To Runlevel 5

On RedHat or CentOS, when the system boots up, the first program to be run is "init" which tries to switch the system into the runlevel that is set in /etc/inittab. It runs all of the startup scripts that are defined for that level and then invokes the login shell or display manager that is used for that runlevel. Once these steps are taken, your system is up and running and you should be able to login.

The two most commonly used runlevels are 3 and 5. Runlevel 3 starts up without a windowed GUI (i.e. simple terminal login) while runlevel 5 starts up a display manager, window system and some type of desktop GUI. If you look at /etc/inittab, you'll see something that looks like this:

     id:5:initdefault:

In this case, this tells init to start up in runlevel 5.
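You can check which runlevel the system is currently in with the runlevel command (or "who -r"), and switch between runlevels by hand with init:

```shell
# Show the previous and current runlevels (e.g. "N 5")
runlevel
# Or, equivalently:
who -r
# Switch to runlevel 3 (as root)
su
init 3
```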

If you install a minimal system, a server, or one of the other predefined configurations, the install program may not install the display manager, window system or desktop GUI. When you boot such a system, it will come up in runlevel 3 and the console will show a simple login prompt. After working with the CLI for a while, you may decide that you really want the GUI back. Here are the simple steps necessary to switch your system to runlevel 5 and start the display manager, window system and Gnome desktop GUI.

Login to the console and install the missing packages. There are two ways to do this:

     su
     yum -y groupinstall basic-desktop desktop-platform x11 fonts

or, you can try:

     su
     yum -y groupinstall "GNOME Desktop Environment"

At this point, you should be able to start up the desktop by typing this at the command line:

     startx

If this goes well, you can proceed to the next step, which is convincing the display manager to run after init is all done starting the system. First, you should check that the Gnome Display Manager is actually installed:

     su
     yum -y install gdm

Once GDM is installed, use your favorite text editor to create the file that tells init which display manager to start:

/etc/sysconfig/desktop:

     DISPLAYMANAGER="GNOME"

To test whether you've got it right, you can enter the following at the command line (exit the desktop, if you still have it running as a result of running "startx"):

     su
     init 5

This should bring up the display manager, which will prompt you for a userid and password and then start the Gnome Desktop after you login. If init doesn't start the display manager properly, and you see a message like this in /var/log/messages:

     INIT: Id "x" respawning too fast: disabled for 5 minutes

then you've done something wrong. You should go back and check that all of the packages are installed properly, that startx works properly, and that /etc/sysconfig/desktop defines the GNOME display manager.
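For example, a quick way to run through those checks:

```shell
su
# Is the display manager actually installed?
rpm -q gdm
# Does /etc/sysconfig/desktop name the right display manager?
cat /etc/sysconfig/desktop
# Look for X server errors from the last startx attempt
less /var/log/Xorg.0.log
```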

Once you get "init 5" to work, you can safely edit /etc/inittab to start the system in runlevel 5:

/etc/inittab:

       .
       .
       .
     id:5:initdefault:

Do not alter /etc/inittab to runlevel 5 until you're sure that "init 5" works. If you do and "init 5" fails, your system won't reboot. If all is well, you can reboot the system and the familiar window manager login screen should appear automatically. If not, you're probably faced with booting a standalone OS, such as Knoppix, to repair the damage.

What To Do When XOrg Picks The Wrong Values

You just installed a brand new copy of the OS on your brand new system, removed the system DVD and tried to boot for the first time. The system appeared to boot up OK (you saw GRUB loading) and a couple of services started up (like udev), then the screen went blank and your display said (in broken Engrish) "VGA Mode Not Support". What do you do?

Lucky you. XOrg has used the EDID values from your display (or the lack of EDID values, if you have a really old display) to pick a set of display parameters that your display doesn't support. You now have to repair this infelicity using a standalone rescue disk (isn't it great that you just installed a brand new, fresh copy of the OS and you're already resorting to booting from a rescue disk and hacking obscure display manager parameters -- no wonder Linux is such a hit with Joe User).

Our favorite standalone rescue disk is Knoppix (hint to XOrg programmers: if Klaus Knopper can figure out the proper display parameters for the display, maybe you could too, if you just tried a bit harder). So, begin by getting yourself a copy of KNOPPIX (the CD works perfectly fine):

     http://knopper.net/knoppix/index-en.html

And burn the ISO image onto a CD (or DVD, if you took that route).

Then, boot the CD/DVD in the new machine and mount the machine's root file system. Look around in the /media directory for the subdirectory that matches the partition where the root file system resides (Knoppix thoughtfully creates subdirectories in /media for all of the disk partitions that it finds at startup). In order to mount things, you will need to issue the "su" command in a shell window first. You'll need to do something like this:

     su
     mount /dev/sda3 /media/sda3

To edit the XOrg config file, locate the file (usually in /etc/X11/xorg.conf) and invoke the Kate editor (there's no EMACS, sorry) on it:

     cd /media/sda3
     kate etc/X11/xorg.conf &

This will let you get at the offending display section of the file. A quick fix is to force the display resolution to one of the well-known VESA resolutions by putting the "Modes" line in the Display SubSection. You might also have some success with setting the HorizSync and VertRefresh parameters in the Monitor Section to the correct range for your monitor (although it does appear possible to use values that fall strictly within this range and still offend the monitor's sensibilities). Also, if EDID is not used to get the monitor's parameters, this could be the problem. Here's the basic idea:

/etc/X11/xorg.conf:

       .
       .
       .
     Section "Monitor"
         Identifier    "Monitor0"
         ModelName     "My LCD Panel 1024x768"
         HorizSync     31.5 - 48.0
         VertRefresh   56.0 - 65.0
         Option        "dpms"
     EndSection
       .
       .
       .
     Section "Screen"
         Identifier    "Screen0"
         Device        "Videocard0"
         Monitor       "Monitor0"
         DefaultDepth  24
         SubSection "Display"
             Viewport  0 0
             Depth     24
             Modes     "800x600" "640x480"
         EndSubSection
     EndSection

If worst comes to worst, you may have to define your own ModeLine information and then use it in the Display subsection. To do this, you must figure out one or more modelines that can be used to set the various screen resolutions supported by your display. There should be a chart in the back of your display's manual that lists all of the supported modes (beware that some of the charts we've seen have the vertical and horizontal frequencies exchanged - a good clue is that the vertical frequency is usually the one that matches the number in the VESA name). First off, you should try to pick one of the standard modes from the list below. Also, it is best to try and pick the highest vertical refresh rate (in Hz) supported by your display:

     # 640x400 @ 85Hz (VGA VESA 85) hsync: 37.9kHz
     ModeLine "640x400"    31.5    640  672  736  832  400  401  404  445 \
       -hsync +vsync
     # 640x480 @ 60Hz (Industry standard) hsync: 31.5kHz
     ModeLine "640x480"    25.2    640  656  752  800  480  490  492  525 \
       -hsync -vsync
     # 640x480 @ 72Hz (VGA VESA 72) hsync: 37.9kHz
     ModeLine "640x480"    31.5    640  664  704  832  480  489  491  520 \
       -hsync -vsync
     # 640x480 @ 75Hz (VGA VESA 75) hsync: 37.5kHz
     ModeLine "640x480"    31.5    640  656  720  840  480  481  484  500 \
       -hsync -vsync
     # 640x480 @ 85Hz (VGA VESA 85) hsync: 43.3kHz
     ModeLine "640x480"    36.0    640  696  752  832  480  481  484  509 \
       -hsync -vsync
     # 720x400 @ 85Hz (VESA 85) hsync: 37.9kHz
     ModeLine "720x400"    35.5    720  756  828  936  400  401  404  446 \
       -hsync +vsync
     # 800x600 @ 56Hz (SVGA VESA 56) hsync: 35.2kHz
     ModeLine "800x600"    36.0    800  824  896 1024  600  601  603  625 \
       +hsync +vsync
     # 800x600 @ 60Hz (SVGA VESA 60) hsync: 37.9kHz
     ModeLine "800x600"    40.0    800  840  968 1056  600  601  605  628 \
       +hsync +vsync
     # 800x600 @ 72Hz (SVGA VESA 72) hsync: 48.1kHz
     ModeLine "800x600"    50.0    800  856  976 1040  600  637  643  666 \
       +hsync +vsync
     # 800x600 @ 75Hz (SVGA VESA 75) hsync: 46.9kHz
     ModeLine "800x600"    49.5    800  816  896 1056  600  601  604  625 \
       +hsync +vsync
     # 800x600 @ 85Hz (SVGA VESA 85) hsync: 53.7kHz
     ModeLine "800x600"    56.3    800  832  896 1048  600  601  604  631 \
       +hsync +vsync
     # 1024x768 @ 43Hz (industry standard) hsync: 35.5kHz
     ModeLine "1024x768"   44.9   1024 1032 1208 1264  768  768  776  817 \
       +hsync +vsync Interlace
     # 1024x768 @ 60Hz (XGA VESA 60) hsync: 48.4kHz
     ModeLine "1024x768"   65.0   1024 1048 1184 1344  768  771  777  806 \
       -hsync -vsync
     # 1024x768 @ 70Hz (VESA 70, HP1070) hsync: 56.5kHz
     ModeLine "1024x768"   75.0   1024 1048 1184 1328  768  771  777  806 \
       -hsync -vsync
     # 1024x768 @ 75Hz (XGA VESA 75) hsync: 60.0kHz
     ModeLine "1024x768"   78.8   1024 1040 1136 1312  768  769  772  800 \
       +hsync +vsync
     # 1024x768 @ 85Hz (XGA VESA 85) hsync: 68.7kHz
     ModeLine "1024x768"   94.5   1024 1072 1168 1376  768  769  772  808 \
       +hsync +vsync
     # 1152x864 @ 75Hz (SXGA VESA 75) hsync: 67.5kHz
     ModeLine "1152x864" 108.0    1152 1216 1344 1600  864  865  868  900 \
       +hsync +vsync
     # 1280x960 @ 60Hz (SXGA VESA 60) hsync: 60.0kHz
     ModeLine "1280x960"  108.0   1280 1376 1488 1800  960  961  964 1000 \
       +hsync +vsync
     # 1280x960 @ 85Hz (SXGA VESA 85) hsync: 85.9kHz
     ModeLine "1280x960"  148.5   1280 1344 1504 1728  960  961  964 1011 \
       +hsync +vsync

If you didn't find your display's resolution in this list, there are more standard modes listed at:

     http://m.domaindlx.com/LinuxHelp/resources/modelines.htm

Another way to determine the display's modeline is to use the modeline calculator at:

     http://xtiming.sourceforge.net/cgi-bin/xtiming.pl

Instead of determining the maximum refresh rate for your display, this will let you put in an arbitrary refresh rate and calculate the modeline from that. You can try a few until you arrive at one that seems likely to work with your display (again, a higher refresh rate is more desirable, since it provides a more flicker-free display, but just getting the display to work is our goal here, so beggars can't be choosers).
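Incidentally, you can sanity-check any modeline's frequencies yourself: the horizontal sync rate is the dot clock divided by the total horizontal pixels (the last of the four horizontal timing numbers), and the vertical refresh rate is the horizontal sync rate divided by the total vertical lines (the last vertical timing number). For example, checking the XGA VESA 60 modeline from the list above:

```shell
# ModeLine "1024x768"  65.0  1024 1048 1184 1344  768 771 777 806
# dot clock = 65.0 MHz, htotal = 1344, vtotal = 806
awk 'BEGIN {
    clk = 65.0; htotal = 1344; vtotal = 806
    hsync = clk * 1000 / htotal          # horizontal sync, in kHz
    vrefresh = hsync * 1000 / vtotal     # vertical refresh, in Hz
    printf "hsync: %.1f kHz, vrefresh: %.1f Hz\n", hsync, vrefresh
}'
```

This prints "hsync: 48.4 kHz, vrefresh: 60.0 Hz", which matches the comment on that modeline.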

Now, you can hack the Monitor section of the xorg.conf file in /etc/X11 and set the device identification fields to something useful (here we're using the recalcitrant Syntax LT27HV as an example):

     Identifier   "Syntax LT27HV LCD Display"
     VendorName   "Syntax"
     ModelName    "Olevia LT27HV"

Next, you should probably set the minimum and maximum horizontal and vertical sync frequencies from the chart of your display's capabilities:

     HorizSync    56-87
     VertRefresh  24-72

These values are not used to calculate any actual display settings but the values in the modelines are checked against these limits to ensure that you do not exceed your display's capabilities (and possibly destroy it).

Then, add the modelines to the Monitor section:

     # 640x480 @ 85Hz (VGA VESA 85) hsync: 43.3kHz
     ModeLine "640x480-85"  36.0  640 696 752 832  480 481 484 509  -hsync -vsync
     # 800x600 @ 85Hz (SVGA VESA 85) hsync: 53.7kHz
     ModeLine "800x600-85"  56.3  800 832 896 1048  600 601 604 631  +hsync +vsync
     # 1024x768 @ 85Hz (XGA VESA 85) hsync: 68.7kHz
     ModeLine "1024x768-85"  94.5  1024 1072 1168 1376  768 769 772 808 \
       +hsync +vsync
     # 1280x720 @ 60Hz (1280x720-60) vsync: 60.39Hz, hsync: 45.72kHz, clk: 74.25
     # This is the only true native mode for the LT27HV display.
     Modeline "1280x720-60" 74.25 1280 1312 1592 1624 720 735 742 757 \
       +hsync +vsync

For the Syntax Olevia models, you may want to add the following, as well:

     # Power saving mode
     Option       "DPMS"
     # We know everything about this display
     Option       "UseEDID" "false"
     # We know what the dot pitch is
     Option       "DPI" "96 x 96"
     # Make sure the video card always picks the CRT.  Or, if you have the
     # display plugged in on the DVI connection, use "DFP" instead of "CRT-0".
     Option       "UseDisplayDevice" "CRT-0"

Next, hack the Screen section to use the display that you just defined by setting the Display SubSection to point to it:

     # The color depth
     Depth        24
     # All of the supported modes that we defined in modelines.  The first is
     # the default mode chosen at startup.
     Modes        "1280x720-60"  "1024x768-85"  "800x600-85"  "640x480-85"

Provided the display driver respects the ModeLine parameters (some of the nVidia drivers don't), you should now be in business. Unmount the partition from Knoppix, shut Knoppix down and reboot to see if it really works.

X Windows on SiS Graphics Cards

On older systems that use XFree86, X Windows can be run on top of the SiS graphics cards (despite the fact that they are basic junk) using the SiS driver (sis_drv.o) from http://www.winischhofer.at/linuxsisvga.shtml.

The driver is downloaded and installed into /usr/X11R6/lib/modules/drivers, replacing the old one. Be sure to get a copy of the driver that matches the version of X11 that you are using. You can determine this by typing "X -version" at the command line. For XFree86 Version 4.2.0 (Red Hat Linux release: 4.2.0-72), use the version marked "XFree86 4.2.1 (gcc 2.95)".

/etc/X11/XF86Config:

This file should be set up as follows (example is for a Samsung Syncmaster 172x running on a PowerColor Xaber 200). Note that you may have to turn off DRI, if you have one of the older SiS chipsets on your video card. Here is the example:

     Section "ServerLayout"
         Identifier    "Default Layout"
         Screen        0 "Screen0" 0 0
         InputDevice   "Mouse0" "CorePointer"
         InputDevice   "Keyboard0" "CoreKeyboard"
         InputDevice   "DevInputMice" "AlwaysCore"
     EndSection
     Section "Files"
         RgbPath       "/usr/X11R6/lib/X11/rgb"
         FontPath      "/usr/X11R6/lib/X11/fonts/local/"
         FontPath      "/usr/X11R6/lib/X11/fonts/misc/"
         FontPath      "/usr/X11R6/lib/X11/fonts/75dpi/:unscaled"
         FontPath      "/usr/X11R6/lib/X11/fonts/100dpi/:unscaled"
         FontPath      "/usr/X11R6/lib/X11/fonts/Type1/"
         FontPath      "/usr/X11R6/lib/X11/fonts/Speedo/"
         FontPath      "/usr/X11R6/lib/X11/fonts/75dpi/"
         FontPath      "/usr/X11R6/lib/X11/fonts/100dpi/"
     #   FontPath      "unix/:7100"
     EndSection
     Section "Module"
         Load          "dbe"
         Load          "extmod"
         Load          "fbdevhw"
         Load          "glx"
         Load          "record"
         Load          "freetype"
         Load          "type1"
     EndSection
     Section "InputDevice"
         Identifier    "Keyboard0"
         Driver        "keyboard"
         Option        "XkbRules" "xfree86"
         Option        "XkbModel" "pc101"
         Option        "XkbLayout" "us"
     EndSection
     Section "InputDevice"
         Identifier    "Mouse0"
         Driver        "mouse"
         Option        "Protocol" "PS/2"
         Option        "Device" "/dev/psaux"
         Option        "ZAxisMapping" "4 5"
         Option        "Emulate3Buttons" "no"
     EndSection
     Section "InputDevice"
         Identifier    "DevInputMice"
         Driver        "mouse"
         Option        "Protocol" "IMPS/2"
         Option        "Device" "/dev/input/mice"
         Option        "ZAxisMapping" "4 5"
         Option        "Emulate3Buttons" "no"
     EndSection
     Section "Monitor"
         Identifier    "SyncMaster172x"
         HorizSync     31.5-81.1
         VertRefresh   56-76
     EndSection
     Section "Device"
         Identifier    "Xaber200"
         Driver        "sis"
         VendorName    "PowerColor"
         BoardName     "Xaber 200 X20L-B1 8x AGP"
     EndSection
     Section "Screen"
         Identifier    "Screen0"
         Device        "Xaber200"
         Monitor       "SyncMaster172x"
         DefaultDepth  24
         SubSection "Display"
             Depth     8
             Modes     "1024x768" "800x600" "640x480"
             ViewPort  0 0
         EndSubSection
         SubSection "Display"
             Depth     16
             Modes     "1024x768" "800x600" "640x480"
             ViewPort  0 0
         EndSubSection
         SubSection "Display"
             Depth     24
             Modes     "1024x768" "800x600" "640x480"
             ViewPort  0 0
         EndSubSection
     EndSection
     Section "DRI"
         Group         0
         Mode          0666
     EndSection

On newer systems that use XOrg, SiS graphics cards should be supported automatically, but you can force the issue by editing the XOrg config file. Pay particular attention to the Device and Screen sections, which should look something like this:

/etc/X11/xorg.conf:

       .
       .
       .
     Section "Device"
         Identifier    "Videocard0"
         Driver        "sis"
     EndSection

     Section "Screen"
         Identifier    "Screen0"
         Device        "Videocard0"
         Monitor       "Monitor0"
         DefaultDepth  24
         SubSection "Display"
             Viewport  0 0
             Depth     24
             Modes     "800x600" "640x480"
         EndSubSection
     EndSection

Note that the Modes line can be used to force a recalcitrant display (as noted in the prior section) to do what you want it to do (i.e. work).

X Windows With Intel Built-In Processor Graphics

Xorg should work with the built-in graphics support that comes with the newer Intel processors (e.g. Haswell). Kernel support for these processors comes from the i915 driver, which is included in later kernels. The default configuration that is built in to Xorg will try the matching X driver (called "intel") before it tries fallback drivers such as the vesa driver. However, since the i915 kernel driver claims the hardware and can't be gotten rid of, the vesa driver is unlikely to work either if the intel driver doesn't load, so it is important that the intel driver work.

The most likely reason for the intel driver not to load is that it wasn't installed. To fix this problem, install the requisite packages:

     su
     yum install xorg-x11-drv-intel xorg-x11-drv-fbdev

XDMCP

XDMCP is the X Display Manager Control Protocol. It allows an X server running on a remote machine to request a login session from the display manager on the local machine. Typically, it is used to run an X Windows session on the local machine with the display appearing on the remote one. For example, one can run XFree86 under Cygwin on a Winduhs workstation and connect to the local machine. Once this is done, Gnome or KDE can be run from the remote machine.

XDMCP is a mondo huge security hole so it is never enabled on any system as it is shipped. It should only be turned on on machines that are on secure networks, behind a firewall. To enable XDMCP on machines that use XFree86 and gdm, do the following:

Under SuSE 8, the display manager seems to be some home-grown version ("susedm", I think). Turn on gdm as the display manager by hacking /etc/sysconfig/displaymanager and changing the following:

     DISPLAYMANAGER=""
     DISPLAYMANAGER_REMOTE_ACCESS="no"

changed to

     DISPLAYMANAGER="gdm"
     DISPLAYMANAGER_REMOTE_ACCESS="yes"

In the Linux X environment, you need to provide fonts using either the X font server (xfs) or a hard coded font path in the XF86Config and XF86Config-4 configuration files. Using the xfs font server is the way to go. To do this in RH 6.2 and Mandrake 8.x and 9.0, modify /etc/rc.d/init.d/xfs and make the following changes:

     daemon xfs -droppriv -daemon -port -1

to

     daemon xfs -droppriv -daemon -port 7100

In Mandrake 7.2 and SuSE 8, the port is already set to 7100.

In RH 7.x, 8.x and 9.x, all of the Enterprise RedHat versions and, hence, the CentOS versions, xfs no longer listens on its TCP port by default, for security reasons. To turn it on, change this line in /etc/rc.d/init.d/xfs:

     daemon xfs -droppriv -daemon

to

     daemon xfs -droppriv -daemon -port 7100

Then, in /etc/X11/fs/config, comment out this line:

     # don't listen to TCP ports by default for security reasons
     #no-listen = tcp

Also, for SuSE 8, xfs is not automatically started so turn it on with the command "chkconfig xfs on", as root.
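After changing the xfs configuration on any of these systems, restart the font server and check that it is actually listening on TCP port 7100:

```shell
su
service xfs restart
# The font server should now show up as listening on port 7100
netstat -an | grep 7100
```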

On all systems, in /etc/X11/xdm/Xaccess, change the following to allow all hosts to connect:

     #*    # any host can get a login window

to

     *    # any host can get a login window

Edit /etc/X11/gdm/gdm.conf to activate XDMCP, causing it to listen for requests:

     [xdmcp]
     Enable=false (may be 0 in some distributions)

to

     Enable=true (or 1 in some distributions)

Under SuSE 8, the file to edit is /etc/opt/gnome2/gdm/gdm.conf.

If you want to run XDMCP but not run X on a local display, you should comment out the startup of the local server in gdm.conf. Change:

     [servers]
     0=/usr/bin/X11/X
     1=/usr/bin/X11/X (may already be commented out)

to

     [servers]
     #0=/usr/bin/X11/X
     #1=/usr/bin/X11/X (may already be commented out)

Change the run level in /etc/inittab so that X comes up at startup.

     id:3:initdefault:

to

     id:5:initdefault:

Edit /etc/X11/XF86Config to change the font path (this may already be set on some systems):

     FontPath "unix/:-1"

to

     FontPath "unix/:7100"

Under SuSE 8, comment out all of the FontPath lines that hard code a font file name and insert the reference to xfs on port 7100.

If you have a system that uses XOrg, you can find instructions on setting XDMCP up at this URL:

     http://www.x.org/archive/X11R6.8.1/doc/xdm.1.html

In conjunction with these instructions, you may still find the instructions (above) for setting up the xfs font server useful.

Logging In As The Super User

Many newer operating system versions (e.g. Ubuntu 12.04, 12.10) seem to think that we're all children and can't be trusted to do anything as super user. They really go out of their way to prevent you from logging in as super user or running as super user, and spend all of their time nagging you if you do manage to get logged in as super user. As if running commands with sudo is going to prevent you from doing something stupid.

Maybe if the OS actually worked, we wouldn't have to do everything as super user. We could just go about our business without having to tweak things and adjust things, etc., etc. Who are the children anyway?

But, in the meantime, in order to get anything done, you're going to need to be able to log in as the super user. The first step is to set the root password to something you actually know instead of a randomly-generated one, if your version of the OS uses this clever trick to prevent root logins (e.g. Ubuntu).

Log in as any one of the users who has sudo privileges (e.g. the first userid you created when you installed the OS) and do the following:

     sudo passwd root
     (enter the current user's password)
     (enter the new root password twice)

Since it is the graphical user interface (and not the OS itself) that prevents you from logging in as the super user, you simply need to convince the display manager's greeter to let you login as root.

If your OS uses GDM as the display manager (e.g. CentOS or RedHat), edit the file /etc/pam.d/gdm with your favorite text editor:

Find the following line:

     auth required pam_succeed_if.so user != root quiet

Comment it out so that it reads:

     #auth required pam_succeed_if.so user != root quiet

Frequently, the /etc/pam.d/gdm-password file is a symlink to /etc/pam.d/gdm so that no further editing is necessary. However, if it is a separate file, you may have to make the same edit to /etc/pam.d/gdm-password, if the auth required line appears there too.
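
The PAM edit can also be scripted with sed. This sketch demonstrates the expression on a scratch copy; on a real system, run it against /etc/pam.d/gdm (and /etc/pam.d/gdm-password, if it is a separate file):

```shell
# Comment out the pam_succeed_if line that blocks root logins
f=$(mktemp)
echo 'auth required pam_succeed_if.so user != root quiet' > "$f"
sed -i '/pam_succeed_if.so user != root/s/^/#/' "$f"
cat "$f"   # -> #auth required pam_succeed_if.so user != root quiet
rm -f "$f"
```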

You will now be able to login to GDM as root the next time around. Click the "Other Users" choice, and enter "root". Then, you can enter the root password and login as usual.

There is more information and an alternate method for logging in as root at the following location:

     http://motorscript.com/enable-root-login-for-linux-systems/

If your OS uses LightDM as the display manager (e.g. later versions of Ubuntu), edit the file /etc/lightdm/lightdm.conf with your favorite text editor:

Add this line to the end of the file:

     greeter-show-manual-login=true

You will now be able to enter the root userid as the userid to log in the next time you reboot the system. Click the "login" choice, and enter "root". Then, you can enter the root password and login as usual.

You can find a more detailed description about logging in as root under LightDM at this location:

     http://handytutorial.com/login-as-root-ubuntu-12-04-12-10/

Getting In

Apparently, the latest trend is to disable all of the remote login services, as well as file transfer, and just let the user connect to the system via remote desktop. That's because we, once again, can't be trusted to use the evil telnet responsibly but we're all fine when we let Microsoft tell us how to connect to our servers.

To get telnet working, you need to install two packages: xinetd and telnet-server. While we're at it, we install the telnet package so that we can use the telnet client from the local machine, if need be. You can either install these packages when you build the system or you can install them via the package manager or command line. For example, on CentOS, you could try:

     su
     yum install xinetd telnet-server telnet

Or, on Ubuntu, you could try (note that, on Debian-based systems, the telnet server package is named "telnetd"):

     apt-get install xinetd telnetd telnet

Once the system is up and running, you need to enable xinetd because, even when it is installed by the package manager, the install does not turn it on or start it. You can do so like this:

     su
     /sbin/chkconfig xinetd on
     /etc/init.d/xinetd start

Or, you can pick the Services item from the System/Administration menu. From there, you can Enable the xinetd service and then start it.

Once xinetd is running, you need to edit the telnet configuration in the xinetd configuration directory. Change the line that reads "disable = yes" to "disable = no". The configuration file should look like this:

/etc/xinetd.d/telnet:

     # default: on
     # description: The telnet server serves telnet sessions; it uses \
     #       unencrypted username/password pairs for authentication.
     service telnet
     {
             disable = no
             flags           = REUSE
             socket_type     = stream        
             wait            = no
             user            = root
             server          = /usr/sbin/in.telnetd
             log_on_failure  += USERID
     }
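
If you prefer to flip the flag from the command line, here is a sed sketch, demonstrated on a scratch copy; on a real system, run it against /etc/xinetd.d/telnet:

```shell
# Toggle "disable = yes" to "disable = no" in an xinetd service file
f=$(mktemp)
printf 'service telnet\n{\n        disable = yes\n}\n' > "$f"
sed -i 's/disable[[:space:]]*=[[:space:]]*yes/disable = no/' "$f"
grep disable "$f"
rm -f "$f"
```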

You should now be able to telnet to the system from a local command window by typing:

     telnet localhost

If this doesn't work, you may need to restart the xinetd server to get it to reread the telnet config file:

     su
     /etc/init.d/xinetd restart

Once you get a local telnet connection working, you need to make sure that the firewall will let outside telnet connections through. If you won't be using the firewall, now is the time to disable it using the Firewall item on the System/Administration menu. Otherwise, you need to punch through port 23 on the firewall to let telnet in.

If you prefer to use ssh, instead of telnet, you simply need to install the openssh-server package. We also install the openssh client package to give us the clients, which can be used on the local system to access other systems. For example, on CentOS, you could try:

     su
     yum install openssh-clients openssh-server

Or, on Ubuntu, you could try:

     apt-get install openssh-client openssh-server

The sshd server should be enabled and started after the packages are installed. If not, you can start the server like this (on Debian-based systems, the init script is named "ssh" instead of "sshd"):

     su
     /etc/init.d/sshd start

Once again, you can test that you can log in via the local ssh client, before you go any further. Then, you need to make sure that the firewall will let outside ssh connections through. If you won't be using the firewall, now is the time to disable it using the Firewall item on the System/Administration menu. Otherwise, you need to punch through port 22 on the firewall to let ssh in.
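
If you're running the stock iptables firewall, punching the holes mentioned above might look something like this. This is a sketch; the chain names and the mechanism for saving rules vary by distribution:

```shell
# Allow inbound telnet (port 23) and ssh (port 22) connections
iptables -A INPUT -p tcp --dport 23 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Persist the rules across reboots (RedHat/CentOS style)
service iptables save
```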

EMACS

Probably the very first thing you will want to do is to configure EMACS not to leave turds lying around everywhere. This way, you can then edit files with impunity and not have to worry about cleaning up Stallman's mess everywhere you go:

~/.emacs:
/root/.emacs:

Add, at least:

     (custom-set-variables
     '(make-backup-files nil))
     (custom-set-faces)

Or, if you prefer, set the global default to prevent emacs from leaving turds lying around. To do this, hack the file default.el, which is usually installed in the directory /usr/local/share/emacs/site-lisp (or on some versions of RedHat or CentOS, /usr/share/emacs/site-lisp). If this directory doesn't exist, look for the directory in the library path that contains the EMACS lisp modules (".el" files).

/usr/local/share/emacs/site-lisp/default.el:
/usr/share/emacs/site-lisp/default.el:

Add, at least:

     (custom-set-variables
     '(make-backup-files nil))

Incidentally, if you are using CVS and you find EMACS trying to leave CVS turd files too, you might want to consider putting the following:

     (setq vc-cvs-stay-local nil)

in one or both of the places mentioned above. The CVS turd files look like "filename.~1.1~.".

Another annoying feature (or should we say, non-feature) is the behavior of the latest EMACS versions whereby filename completion, which used to be bound to the space bar, no longer works. Apparently, the whiners convinced the developers (in a rare display of political correctness stupidity) that an existing feature, which has been around forever, should be turned off, by default, to accommodate a small group of really annoying people who are dumb enough to want to put blanks in their file names. God help us. The Macintosh winkies are taking over.

At least we aren't so clueless as to not know how to fix default behavior that we don't like by editing the config file. Heaven help us if those few guys who actually use blanks in their file names should have to figure that one out. Better to inconvenience the rest of us. Well, if you don't like it, try adding the following to the EMACS config files mentioned above:

     (define-key minibuffer-local-filename-completion-map
       " " 'minibuffer-complete-word)
     (define-key minibuffer-local-must-match-filename-map
       " " 'minibuffer-complete-word) 

Ethernet

Install one or more ethernet cards according to the directions in the ethernet HowTo. Hack the following:

/etc/modules.conf:
/etc/modprobe.conf (later versions of RedHat/CentOS):

Alias the ethernet device name to the proper module for the card. If the card is an ISA card, you might need to set the options to use for that card. Here's an example of four cards:

     alias eth0 ne                         # Novell NE2000 compatible
     alias eth1 ne2k-pci                   # A Realtek PCI NE2000 compatible
     alias eth2 rtl8139                    # A Realtek PCI 10/100BaseT
     alias eth3 e1000                      # An Intel Pro 1000
     options eth0 io=0x300 irq=10          # NE2000 ISA bus card sets addr/irq

or

     options eth0 io=0x200 irq=5           # NE2000 ISA bus card sets addr/irq

Here's another example with three cards, two of which are handled by the same driver:

     alias eth0 e1000                      # An Intel Pro 1000
     alias eth1 e1000                      # Another Intel Pro 1000
     alias eth2 8139too                    # A Realtek 8139-based card

Incidentally, if you do have multiple cards aliased to the same driver, it is solely up to the driver to assign the cards' device names, in whatever order it sees fit. Often this depends on the slot order in which the cards are installed but sometimes it can be timing-dependent. This can result in a different card being assigned a given device name each time the system is booted.

If you don't wish to play device name roulette and you have one of the later versions of Linux that support udev, you can take care of this situation so that device name assignment is fully deterministic (for earlier versions of Linux, without udev, you can just cross your fingers).

Fire up your favorite text editor and take a look at the "/var/log/messages" file. Do a search for "eth". You should see the place in the boot sequence where the network driver or drivers bring up your network interfaces. This will list the MAC address of each card and its ethernet device name (e.g. "eth0"). Remember these two pieces of information for each NIC.

Continue searching from that point forward. You may see where udev is remapping your NIC to some other device name (e.g. "udev: renamed network interface eth0 to eth1"). If you don't want this to happen or if you want to assign specific device names to particular NICs, you should find where udev compares each NIC's MAC address in its rules and map the NIC to whatever name you want. For example:

/etc/udev/rules.d/70-persistent-net.rules:

     Find the spot in this file that maps MAC addresses to device names.  Using
     each NIC's MAC address, change the rules to map each NIC to the device
     name you want:
     # The line will look something like this
     SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", \
       ATTR{address}=="00:19:66:17:ea:ff", ATTR{type}=="1", KERNEL=="eth*", \
       NAME="eth0"

Change "eth0" to whatever name you really want. Duplicate the line for each of your NICs.
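
To avoid typos when duplicating the rule for several NICs, you can generate each line from the MAC address and the name you want. Here's a small sketch; the rule format follows the example above, so double check it against what your distribution actually writes:

```shell
# Print a persistent-net rule for a given MAC address and device name
make_net_rule() {
    mac="$1"; name="$2"
    printf 'SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="%s", ATTR{type}=="1", KERNEL=="eth*", NAME="%s"\n' "$mac" "$name"
}
make_net_rule 00:19:66:17:ea:ff eth0
make_net_rule 00:19:66:17:eb:00 eth1
```

Once you're happy with the output, append it to /etc/udev/rules.d/70-persistent-net.rules.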

Under the SuSE Linux variant, you are likely to see something that looks like this:

/etc/udev/rules.d/30-net-persistent-names.rules:

     Find the spot in this file that maps MAC addresses to device names.  Using
     each NIC's MAC address, change the rules to map each NIC to the device
     name you want:
     # The line will look something like this
     SUBSYSTEM=="net", ACTION=="add", SYSFS{address}=="00:19:66:17:ea:ff", \
       IMPORT="/lib/udev/rename_netiface %k eth0"

Again, change "eth0" to whatever name you really want and duplicate the line for each of your NICs.

Once you've got all the cards aliased to the appropriate drivers and assigned the right device names, reboot the system and then, from Gnome, run /usr/bin/netcfg (you should see this program under the menu tree "Programs/System/Network Configuration" or "System/Network"). This program will allow you to configure the machine's host and domain names, DNS lookup servers, /etc/hosts data, the network adapter information and the routing information.

Note that later versions of RedHat or CentOS may also include a fabulous little package for managing the network called Network Manager. You can see the discussion in "Networking Your Way" for our thoughts on why this package is a bad idea. In the meantime, you should uninstall the Network Manager package, using the Package Manager, before you proceed with the next steps. This is by far and away the easiest way to disable it, since it will yield nothing but grief, if left installed. Pick the "Networking" section and then do a search for "network-manager". You should see the package at the head of the list. Uninstall it.

Using netcfg, the following information should be added to the sections noted:

Names (or DNS)

     Hostname: mysys
     Domain: leave blank
     Search for hostnames in additional domains: leave blank
     Nameservers: DNS server1
                  DNS server2
                       .
                       .
                       .

Hosts

      IP                Name              Nickname
  127.0.0.1       localhost.homeworld     localhost
  192.168.1.1     stargate.homeworld      stargate    (packet shoveller)
  192.168.1.2     gabriella.homeworld     gabriella
  192.168.1.3     clara-bow.homeworld     clara-bow
  192.168.1.4     phoebe.homeworld        phoebe
  192.168.1.5     laboratory.homeworld    laboratory
  192.168.1.6     scada.homeworld         scada
  192.168.1.61    tivo2.homeworld         tivo2
  192.168.1.62    tivo1.homeworld         tivo1
  192.168.1.63    kinkos.homeworld        kinkos

Interfaces

     Interface        IP         Proto  Atboot   Active
        lo         127.0.0.1     none    yes     active
       eth0       192.168.1.x    none    yes     active
       eth1                      none    no     inactive  (only for DSL/cable)

When setting up IP local addresses, you should use Netmask: 255.255.255.0.

Routing

On the main packet shoveller, don't use anything (unless you have an always on connection, in which case, route out through eth1 to wherever).

On local machines, set:

     Default Gateway: 192.168.1.1
     Default Gateway Device: eth0

/etc/sysconfig/network-scripts/ifcfg-ethx or ifcfg.ethx:
/etc/sysconfig/networking/devices/ifcfg-ethx or ifcfg.ethx:
/etc/sysconfig/networking/profiles/default/ifcfg-ethx or ifcfg.ethx:

If the above setup doesn't give you what you want, or you need to hack the definition of your ethernet cards directly, on RedHat/CentOS you can look in /etc/sysconfig/network-scripts/ifcfg-ethx, /etc/sysconfig/networking/devices/ifcfg-ethx and /etc/sysconfig/networking/profiles/default/ifcfg-ethx (where "x" is your network adapter's number) for the NIC setup. Just bear in mind that, when you're hacking these files, you must not use uppercase letters in the hexadecimal MAC address set by the HWADDR parameter. If you do, the brain dead code in /sbin/ifup and /sbin/ifdown will not work properly. Here are a few samples:

/etc/sysconfig/network-scripts/ifcfg-eth0 or ifcfg.eth0 (regular NIC):

     USERCTL=no
     PEERDNS=no
     TYPE=Ethernet
     [IPV6INIT=no]        # Optional for systems that support IPV6
     DEVICE=eth0
     HWADDR=00:50:ba:52:5b:8d
     BOOTPROTO=none
     ONBOOT=yes
     IPADDR=192.168.1.1
     NETMASK=255.255.255.0
     BROADCAST=192.168.1.255

/etc/sysconfig/network-scripts/ifcfg-eth1 or ifcfg.eth1 (gig-o-bit NIC):

     USERCTL=no
     PEERDNS=no
     TYPE=Ethernet
     [IPV6INIT=no]        # Optional for systems that support IPV6
     DEVICE=eth1
     HWADDR=00:1b:21:20:f3:e3
     BOOTPROTO=none
     ONBOOT=yes
     IPADDR=192.168.11.1
     NETMASK=255.255.255.0
     BROADCAST=192.168.11.255

/etc/sysconfig/network-scripts/ifcfg-eth2 or ifcfg.eth2 (PPPoE):

The NIC is used for PPPoE with a modem or DSL/Cable modem. Note that the interface is not brought up at boot time, since it is left to the PPPoE daemon to start:

     USERCTL=no
     TYPE=Ethernet
     [IPV6INIT=no]        # Optional for systems that support IPV6
     DEVICE=eth2
     HWADDR=00:40:33:d3:02:d0
     BOOTPROTO=none
     ONBOOT=no

Once you have hacked the appropriate module in /etc/sysconfig/network-scripts, you should copy it to /etc/sysconfig/networking/devices with the same name:

     cp /etc/sysconfig/network-scripts/ifcfg-ethx \
       /etc/sysconfig/networking/devices

or

     cp /etc/sysconfig/network-scripts/ifcfg.ethx \
       /etc/sysconfig/networking/devices

After that, make a hard link to the just copied module in the remaining directory:

     cd /etc/sysconfig/networking/profiles/default
     ln /etc/sysconfig/networking/devices/ifcfg-ethx

or

     ln /etc/sysconfig/networking/devices/ifcfg.ethx

Be sure to replace the 'x' above with the NIC's actual number.
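
The copy-and-link sequence can be sanity checked end to end. This self-contained sketch uses scratch directories in place of the real /etc/sysconfig tree, and verifies that the hard link worked by checking the file's link count:

```shell
# Mimic the /etc/sysconfig layout in a scratch directory
root=$(mktemp -d)
mkdir -p "$root/network-scripts" "$root/networking/devices" \
         "$root/networking/profiles/default"
echo 'DEVICE=eth0' > "$root/network-scripts/ifcfg-eth0"

# Copy the config into devices, then hard link it into the default profile
cp "$root/network-scripts/ifcfg-eth0" "$root/networking/devices"
ln "$root/networking/devices/ifcfg-eth0" \
   "$root/networking/profiles/default/ifcfg-eth0"

# Both directory entries now share one inode, so the link count is 2
stat -c %h "$root/networking/devices/ifcfg-eth0"   # -> 2
rm -rf "$root"
```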

/etc/sysconfig/network/ifcfg-eth-id-*:

On SuSE, there is only one file to hack and it is named after the MAC address of the NIC. Here is an example of how to assign an IP address to the NIC at 00:40:F4:1D:37:46:

/etc/sysconfig/network/ifcfg-eth-id-00:40:f4:1d:37:46:

     BOOTPROTO='static'
     BROADCAST=''
     ETHTOOL_OPTIONS=''
     IPADDR='192.168.1.99'
     MTU=''
     NAME='Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+'
     NETMASK='255.255.255.0'
     NETWORK=''
     REMOTE_IPADDR=''
     STARTMODE='auto'
     USERCONTROL='no'

/etc/network/interfaces:

On Debian/Ubuntu-based systems, there is literally only a single file to hack for all of the NICs. You assign IP addresses (or choose DHCP, for that matter) by editing the "/etc/network/interfaces" file. If you want a static IP address, your file should look something like this:

     auto lo
     iface lo inet loopback
     auto eth0
     iface eth0 inet static
          address 192.168.1.8
          netmask 255.255.255.0
          gateway 192.168.1.1

If you wish to be certain that you always assign your chosen IP address to the same MAC address, you can force the issue by including the actual MAC address in the "/etc/network/interfaces" file, in which case you'll get an error in the log, when you try to bring up networking, if the MAC address ever changes. This may be preferable to udev just remapping your NIC to eth5 (or whatever), when you install a new one, and having DHCP take over. In that case, networking will appear to be working but when you try to reach the system with its supposed static IP address, you'll get a huge surprise. To force the MAC address to be checked, do this:

     auto eth0
     iface eth0 inet static
          hwaddress ether 00:19:66:17:ea:ff
          address 192.168.1.8
          netmask 255.255.255.0
          gateway 192.168.1.1
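
If you need the NIC's MAC address for the hwaddress line, it can be pulled out of "ip link show" output. Here is a sketch of the parsing, demonstrated on captured sample output so it runs anywhere; on a real system, pipe the command itself into awk:

```shell
# Sample "ip link show eth0" output, captured for demonstration
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast
    link/ether 00:19:66:17:ea:ff brd ff:ff:ff:ff:ff:ff'
# Print the second field of the link/ether line: the MAC address
echo "$sample" | awk '/link\/ether/ {print $2}'   # -> 00:19:66:17:ea:ff
```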

Networking Your Way

Some versions of Linux (e.g. Ubuntu) include a fabulous little package for managing the network called Network Manager. While this gem may be a good idea for someone's laptop (where the network address is acquired automagically by DHCP, and/or networking is done via multiple connections), it's probably going to be a real pain in the butt if you want your machine to have a single, static IP address. Basically, this package is a piece of scrap that does not work. Sure, as we said, it will configure your network for use by DHCP but, if you want to change one of the interfaces to a static IP address, you'll waste a lot of time before you figure out what a piece of junk it is. Sure, it looks like it is configuring your interfaces the way you want them but in reality it is leading you down the garden path. After you carefully configure the interface just the way you want it, nothing happens. And, when you reboot, all your changes are gone and you're back to DHCP. Who needs this?

So, if you don't want to bother with all of the grief caused by trying to set up static IP addresses via the Network Manager, begin by uninstalling the Network Manager package, using the Package Manager. This is by far and away the easiest way to disable it. Pick the "Networking" section and then do a search for "network-manager". You should see the package at the head of the list. Uninstall it.

/etc/network/interfaces:

Edit this file so that it looks something like this:

     auto lo
     iface lo inet loopback
     auto eth0
     iface eth0 inet static
          address 192.168.1.8
          netmask 255.255.255.0
          gateway 192.168.1.1
     auto ethx
          .
          .
          .

/etc/resolv.conf:

You'll also need to put your DNS servers in the resolv.conf file so that they look something like this:

     nameserver 151.203.0.84
     nameserver 151.203.0.85
     nameserver 204.122.16.8
     nameserver 216.231.41.2

Once you're done, a reboot should prove that everything is working OK.

Hostname/Gateway/Zeroconf

/etc/sysconfig/network:

This file can be edited to set the host name, like this, if all else fails:

     HOSTNAME=host-name

Note that you can also set the name or IP address of a gateway machine that is to forward all packets whose destination is unknown (that is, if you don't set the gateway address elsewhere, using the "route" or "ip" command). If you want to set the gateway address, do it like this:

     GATEWAY=192.168.2.1

It is also possible to use this same parameter in one of the "ifcfg.ethx" files or the "/etc/network/interfaces" file, if you'd rather.

Finally, if you happen to check your routing tables, you may see a route to 169.254.0.0/16 or "link-local". Huh? Where'd that come from? Well, it's a new feature that attempts to automatically discover and configure networked machines and devices, without you ever having set up the network.

Basically, the network startup scripts automatically add a default route to this bogus address so there will always be something in the routing table, even if you forget to configure things properly. The idea was to make all networks work the way Appletalk or NETBIOS did, in that all connected devices would be automagically discovered and connected (and, you wondered what AVAHI did, didn't you). Sweet! Stevie J. finally gets his revenge.

Unfortunately, all this does is make your network stop working while at the same time opening it up to all sorts of attacks. What a concept! You'll probably want to get rid of it and you can do so by adding a parameter to the network file in sysconfig:

/etc/sysconfig/network:

     NOZEROCONF=yes

You'll also want to get rid of the AVAHI daemon itself, since it will otherwise be trying to insinuate itself into all sorts of operations where it doesn't belong. Unfortunately, the smartest people in the world have decided that more than 50 other packages depend on AVAHI. Ripping it out with the package manager or yum will also remove a whole host of useful packages.

Luckily, the easiest way to disable AVAHI is just to not start the daemon. You can do this with these commands:

     su
     /sbin/chkconfig avahi-daemon off

AVAHI will still be installed but it will never run so it shouldn't be a problem.

IPV6

There are all sorts of reasons to disable IPV6 on your system so here is how we do it.

If your system has them (e.g. CentOS, RedHat), the individual network adapter configuration files can be hacked to add a line to turn off IPV6:

/etc/sysconfig/network-scripts/ifcfg.ethx:

          .
          .
     IPV6INIT=no
          .
          .
          .

If your system has the /etc/sysconfig/network file, you can set a parameter in it to turn off IPV6:

/etc/sysconfig/network:

          .
          .
     NETWORKING_IPV6=no
          .
          .
          .

Finally, the most successful approach may be to add a parameter to the /etc/sysctl.conf file to turn off IPV6:

/etc/sysctl.conf:

          .
          .
     # Disable IPV6
     net.ipv6.conf.all.disable_ipv6 = 1
          .
          .
          .

We don't recommend that you turn off IPV6 by aliasing the ipv6 module to "no" or "/bin/true" in /etc/modprobe.conf as this will lead to all sorts of problems. Certain packages, such as the fabulous selinux, depend on IPV6 being there to go about their business. If it isn't, they will try to load it and generate many bogus error messages when the load fails. By setting the flag in /etc/sysctl.conf, the IPV6 module is loaded but disabled. It has no effect on network routing, timeouts, etc. but no errors are given to any other packages that rely on IPV6.

Incidentally, if you wish to disable/enable IPV6 on the fly, you might try:

     echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6

and

     echo 0 > /proc/sys/net/ipv6/conf/all/disable_ipv6

This only works once the IPV6 module has been loaded.
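
Equivalently, if you prefer the sysctl utility to raw writes into /proc, you can flip the same switch with the following (a sketch; like the echo commands above, this requires root):

```shell
# Disable IPV6 on the fly via sysctl(8)
sysctl -w net.ipv6.conf.all.disable_ipv6=1
# ... and re-enable it
sysctl -w net.ipv6.conf.all.disable_ipv6=0
```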

Packet Forwarding and Source Routing

It is recommended that packet forwarding be turned off initially. If the Linux system that you are setting up won't be acting as a router, this is the proper setting anyway. If it is being used as a router or VPN gateway or in some other configuration that requires packet forwarding, it can be turned on dynamically, when the time comes, in the firewall or other script, after the proper packet filtering rules are set up.

There are a number of security exploits that can be used by an outside attacker to compromise a system that doesn't have reverse path filtering (or source route verification) turned on. Consequently, it is recommended that this be turned on by default.

Finally, by enabling source routing, it is possible for an outside attacker or a man in the middle to compromise security by redirecting conversations between trusted users to himself. Therefore, source routing should also be turned off by default.

All of these parameters can be set to take effect at startup, in sysctl.conf:

/etc/sysctl.conf:

          .
          .
     # Controls IP packet forwarding for IPV4
     net.ipv4.ip_forward = 0
     # Controls IP packet forwarding for IPV6
     net.ipv6.conf.default.forwarding = 0
     # Controls source route verification
     net.ipv4.conf.default.rp_filter = 1
     # Do not accept source routing for IPV4
     net.ipv4.conf.default.accept_source_route = 0
     # Do not accept source routing for IPV6
     net.ipv6.conf.default.accept_source_route = 0
          .
          .
          .
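
To make the settings in sysctl.conf take effect without a reboot, you can reload the file (as root):

```shell
# Re-read /etc/sysctl.conf and apply all of the parameters in it
sysctl -p /etc/sysctl.conf
```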

To dynamically turn packet forwarding on/off for IPV4, you can use:

     echo 1 > /proc/sys/net/ipv4/ip_forward

and

     echo 0 > /proc/sys/net/ipv4/ip_forward

Under IPV6, packet forwarding is turned on/off for each individual interface instead of for them all. So, for example, you might use:

     echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding

or

     echo 0 > /proc/sys/net/ipv6/conf/eth0/forwarding

To dynamically turn IPV6 packet forwarding on/off for all interfaces, you can use a subshell to perform each write (a plain "-exec echo 1 > \{\} \;" won't work, because the shell applies the redirection to the find command itself, before find ever runs):

     find /proc/sys/net/ipv6/conf -name forwarding \
       -exec sh -c 'echo 1 > "$0"' \{\} \;

and

     find /proc/sys/net/ipv6/conf -name forwarding \
       -exec sh -c 'echo 0 > "$0"' \{\} \;

Should you wish to see the current status of IPV6 packet forwarding for all of the configured interfaces, you can use:

     find /proc/sys/net/ipv6/conf -name forwarding \
       -exec echo -n \{\}" : " \; -exec cat \{\} \;

This should produce a listing something like this:

     /proc/sys/net/ipv6/conf/all/forwarding : 0
     /proc/sys/net/ipv6/conf/default/forwarding : 0
     /proc/sys/net/ipv6/conf/lo/forwarding : 0
     /proc/sys/net/ipv6/conf/eth0/forwarding : 1
     /proc/sys/net/ipv6/conf/eth1/forwarding : 0
     /proc/sys/net/ipv6/conf/eth2/forwarding : 0
     /proc/sys/net/ipv6/conf/tun0/forwarding : 0

Meanwhile, reverse path filtering only applies to IPV4 and is turned on/off for each individual network interface, so to dynamically turn it on/off you need to know the name of the interface in question. For example:

     echo 1 > /proc/sys/net/ipv4/conf/tun0/rp_filter

and

     echo 0 > /proc/sys/net/ipv4/conf/tun0/rp_filter

To dynamically turn reverse path filtering on/off for all interfaces, you can use a subshell for each write, as before:

     find /proc/sys/net/ipv4/conf -name rp_filter \
       -exec sh -c 'echo 1 > "$0"' \{\} \;

or

     find /proc/sys/net/ipv4/conf -name rp_filter \
       -exec sh -c 'echo 0 > "$0"' \{\} \;

Should you wish to see the current status of reverse path filtering for all of the configured interfaces, you can use:

     find /proc/sys/net/ipv4/conf -name rp_filter \
       -exec echo -n \{\}" : " \; -exec cat \{\} \;

This should produce a listing something like this:

     /proc/sys/net/ipv4/conf/all/rp_filter : 1
     /proc/sys/net/ipv4/conf/default/rp_filter : 1
     /proc/sys/net/ipv4/conf/lo/rp_filter : 1
     /proc/sys/net/ipv4/conf/eth0/rp_filter : 1
     /proc/sys/net/ipv4/conf/eth1/rp_filter : 1
     /proc/sys/net/ipv4/conf/eth2/rp_filter : 1
     /proc/sys/net/ipv4/conf/tun0/rp_filter : 0

Finally, source routing is turned on/off for each individual network interface under both IPV4 and IPV6, so to dynamically turn it on/off you once again need to know the name of the interface in question. For example:

     echo 1 > /proc/sys/net/ipv4/conf/eth1/accept_source_route

and

     echo 0 > /proc/sys/net/ipv4/conf/eth1/accept_source_route

or

     echo 1 > /proc/sys/net/ipv6/conf/eth2/accept_source_route

and

     echo 0 > /proc/sys/net/ipv6/conf/eth2/accept_source_route

To dynamically turn source routing on/off for all interfaces, you can use a subshell for each write, as before:

     find /proc/sys/net/ipv4/conf -name accept_source_route \
       -exec sh -c 'echo 1 > "$0"' \{\} \;

and

     find /proc/sys/net/ipv4/conf -name accept_source_route \
       -exec sh -c 'echo 0 > "$0"' \{\} \;

or

     find /proc/sys/net/ipv6/conf -name accept_source_route \
       -exec sh -c 'echo 1 > "$0"' \{\} \;

and

     find /proc/sys/net/ipv6/conf -name accept_source_route \
       -exec sh -c 'echo 0 > "$0"' \{\} \;

Should you wish to see the current status of source routing for all of the configured interfaces, you can use:

     find /proc/sys/net/ipv*/conf -name accept_source_route \
       -exec echo -n \{\}" : " \; -exec cat \{\} \;

This should produce a listing something like this:

     /proc/sys/net/ipv4/conf/all/accept_source_route : 0
     /proc/sys/net/ipv4/conf/default/accept_source_route : 0
     /proc/sys/net/ipv4/conf/lo/accept_source_route : 0
     /proc/sys/net/ipv4/conf/eth0/accept_source_route : 0
     /proc/sys/net/ipv4/conf/eth1/accept_source_route : 1
     /proc/sys/net/ipv4/conf/eth2/accept_source_route : 0
     /proc/sys/net/ipv4/conf/tun0/accept_source_route : 0
     /proc/sys/net/ipv6/conf/all/accept_source_route : 0
     /proc/sys/net/ipv6/conf/default/accept_source_route : 0
     /proc/sys/net/ipv6/conf/lo/accept_source_route : 0
     /proc/sys/net/ipv6/conf/eth0/accept_source_route : 0
     /proc/sys/net/ipv6/conf/eth1/accept_source_route : 0
     /proc/sys/net/ipv6/conf/eth2/accept_source_route : 1
     /proc/sys/net/ipv6/conf/tun0/accept_source_route : 0

DNS

/etc/resolv.conf:

To set the DNS name servers up directly, hack this file. Here's a sample:

     nameserver 151.203.0.84       # Verizon 1
     nameserver 151.203.0.85       # Verizon 2
     nameserver 204.122.16.8       # Speakeasy 1
     nameserver 216.231.41.2       # Eskimo 1
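
If you want to see what is actually configured, you can pull the nameserver list out of the file with a short shell function (the function name here is our own invention):

```shell
# Print the nameserver addresses configured in a resolv.conf-style file.
# Takes an optional path argument; defaults to /etc/resolv.conf.
list_nameservers() {
    awk '$1 == "nameserver" {print $2}' "${1:-/etc/resolv.conf}"
}
```

You can then probe each server in turn (e.g. with dig or nslookup) to confirm that it answers.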

Restoring The Gnome Shell

If you are using Ubuntu, later versions of the OS have a GUI (called Unity) that looks more like a video game (or Windoze 8) than a real GUI. Good luck trying to get any actual work done under Unity. You'll probably want to replace the Unity GUI with the 3.4 Gnome Shell or, even better, Gnome Classic, both of which have a GUI that is more like what you're used to.

Under Ubuntu 12.04, enter the following commands to install the Gnome-Shell 3.4:

     su
     add-apt-repository ppa:gnome3-team/gnome3
     apt-get update
     apt-get install gnome-shell

Under Ubuntu 12.10, you can simply install the Gnome-Shell, as above, but you will probably want to install the whole Gnome 3 desktop (which includes the 3.6 Gnome Shell) by entering the following commands:

     su
     apt-get install ubuntu-gnome-desktop ubuntu-gnome-default-settings

When you are prompted during the install, select GDM as the default display manager (supposedly, you can use LightDM, but Gnome doesn't appear to work as well with LightDM as it does with GDM).

If GDM is already installed and the package manager doesn't prompt you to choose between LightDM and GDM, or you selected LightDM by mistake, you can run the following commands to set things right:

     su
     dpkg-reconfigure gdm

This time, select GDM instead of LightDM.

The "ubuntu-settings" package is used to set various Ubuntu defaults, like the window button order or which Rhythmbox plugins are enabled by default, as well as a number of other annoying defaults, so it is also a good idea to remove the "ubuntu-settings" package like this:

     su
     apt-get remove ubuntu-settings

Note that, when you remove the "ubuntu-settings" package, the "ubuntu-desktop" package will be removed as well. This is just a meta package and your system shouldn't be affected by its removal.

Even after you remove the "ubuntu-settings" package, the Gnome Shell will continue to use Ubuntu's overlay scrollbars. Personally, we think these are about the most annoying scrollbars we've ever seen, so if you want to use the Gnome 3 scrollbars instead, remove the overlay scrollbars using the following commands:

     su
     apt-get remove overlay-scrollbar*

Once you're done with all of these changes, reboot the computer.

If you want Gnome Classic, when the login prompt appears, choose the "Not listed?" option. At that point, you'll see the "Session" choice. Click it and you'll see a list of sessions; pick "Gnome Classic" from this list. You make this choice as part of logging in.

There is more information at this location:

     http://www.webupd8.org/2012/10/how-to-get-complete-gnome-3-desktop-in.html

Man Pages

Some clever guy at RedHat has decided that we're all using X-terms out here and nobody should ever use a VT-100. This being the case, they've mucked up the way man pages display on a VT-100. All of the boldfaced and highlighted characters come out broken, etc. Here's how to fix it.

/etc/man.config or /usr/etc/man.config:

Change the config file (wherever it is kept) to read something like:

     NROFF     /usr/bin/nroff -Tascii -c -mandoc

Make sure you use "-Tascii".

Note that under later versions of RedHat (and CentOS), the nroff command is actually a shell script that invokes groff. And, bonus, they decided to throw away the "-T" option. So, you can use "-Tascii" until you're blue in the face and nothing happens. Instead, you should use the following, if your nroff is actually a shell script:

     NROFF     /usr/bin/groff -Tascii -c -mandoc 2>/dev/null
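
To find out which case applies on your system, you can check whether nroff begins with a "#!" line (i.e. is a wrapper script); the helper function name below is ours:

```shell
# Print "script" if the file begins with "#!" (a wrapper script around
# groff, say), or "binary" otherwise.
is_wrapper() {
    if [ "$(head -c 2 "$1" 2>/dev/null)" = "#!" ]; then
        echo script
    else
        echo binary
    fi
}
```

For example, "is_wrapper /usr/bin/nroff" tells you which form of the NROFF line to use.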

Initialization Changes

On RedHat 8.x and above, there is a Gnome UI for making initialization changes, under "Server Settings"/"Services". It can be used to install and remove services, much like the way chkconfig works.

/etc/rc.d/rc*.d:

Links in these directories are numbered. At system initialization, when the system reaches the run level that matches the number in the directory name (e.g. /etc/rc.d/rc2.d is run level 2), the directory is scanned and each of the links found in it is run in ascending order to start up the system services.

The links are soft links and are added with the "ln" command. For example, to add a link for the packetfilter at run level 3, do:

     ln -s ../init.d/packetfilter /etc/rc.d/rc3.d/S09packetfilter
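
The ordering described above can be sketched as a loop over the directory, which is roughly what the rc script does when it enters a run level (the function name is ours; pass the rc directory explicitly):

```shell
# Run the K* (kill) links and then the S* (start) links found in the given
# rc directory.  The shell glob expands in ascending order, which is exactly
# the numbered ordering described above.
run_level() {
    dir="$1"
    for script in "$dir"/K*; do
        [ -x "$script" ] && "$script" stop
    done
    for script in "$dir"/S*; do
        [ -x "$script" ] && "$script" start
    done
    return 0
}
```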

/etc/rc.d/rc0.d:

After K90network, add these links:

     K91packetfilter  -->  ../init.d/packetfilter
     K92ipchains      -->  ../init.d/ipchains

/etc/rc.d/rc1.d:

After K90network, add these links:

     K91packetfilter  -->  ../init.d/packetfilter
     K92ipchains      -->  ../init.d/ipchains

/etc/rc.d/rc2.d:

Before S10network, add this link:

     S08ipchains      -->  ../init.d/ipchains

/etc/rc.d/rc3.d:

After S05kudzu and before S10network, add the following links:

     S08ipchains      -->  ../init.d/ipchains
     S09antispoofing  -->  ../init.d/antispoofing
     S09packetfilter  -->  ../init.d/packetfilter

After S10network, add this link:

     S11dialdaemon    -->  ../init.d/dialdaemon

/etc/rc.d/rc4.d:

Before S10network, add this link:

     S08ipchains      -->  ../init.d/ipchains

/etc/rc.d/rc5.d:

Before S10network, add this link:

     S08ipchains      -->  ../init.d/ipchains

/etc/rc.d/rc6.d:

After K90network, add these links:

     K91packetfilter  -->  ../init.d/packetfilter
     K92ipchains      -->  ../init.d/ipchains

chkconfig

For those startup scripts that are properly defined, you can use chkconfig to add their symbolic links to the appropriate places in the rc.d directory structure. See "man chkconfig" for more info. In general, do the following:

     chkconfig --add startscript
     chkconfig startscript on
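
"Properly defined" means that the start script carries the special comment header that chkconfig parses. A sketch of what the top of such a script might look like (the priorities shown are just an example):

```
#!/bin/sh
# chkconfig: 345 09 91
# description: packet filter start script.  The "chkconfig:" line tells
#              chkconfig to enable this service in run levels 3, 4 and 5,
#              with start priority 09 and stop (kill) priority 91.
```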

Anti-spoofing

Note that anti-spoofing isn't needed if iptables (see below) is used by NARC (see below), since NARC provides anti-spoofing in its firewall rules.

/etc/rc.d/init.d/antispoofing:

Turns anti-spoofing on/off. Packets from outside the local network are checked to see that they don't have local return addresses (spoofing).

     "start" - Starts up anti-spoofing.
     "stop"  - Shuts anti-spoofing down.

You must create the antispoofing script, a sample of which is shown below:

     #! /bin/sh
     # Script to turn on anti-spoofing.
     # If no source address verification, do nothing.
     [ -f /proc/sys/net/ipv4/conf/all/rp_filter ] || exit 0
     case "$1" in
         start)
          echo -n "Turning on antispoofing:"
          for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
              echo 1 > $f
          done
          echo "."
          ;;
         stop)
          echo -n "Turning off antispoofing:"
          for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
              echo 0 > $f
          done
          echo "."
          ;;
         *)
          echo "Usage: /etc/rc.d/init.d/antispoofing {start|stop}"
          exit 1
          ;;
     esac
     exit 0

Firewall/Packetfilter (ipchains)

For kernels prior to 2.4, ipchains is used to do packet filtering. For kernels from 2.4 and up, iptables (see below) is used instead.

/etc/rc.d/init.d/packetfilter:

Turns packet filtering on/off, using ipchains.

Runs /etc/rc.d/init.d/firewall-rules, if it exists; otherwise, reloads the saved rules from /etc/ipchains.rules.

     "start" - Looks for firewall-rules and runs it or, if not found, restores
               previously-stored ipchains rulesets.
     "stop"  - Saves ipchains rules (just in case) and shuts down ipchains.

You must create the packetfilter script, a sample of which is shown below:

     #! /bin/sh
     # Script to start/stop packet filtering.
     #
     # If there is a set of firewall rules in "/etc/rc.d/init.d/firewall-rules"
     # then this shell script is invoked.  Otherwise, any saved rules that are
     # found in "/etc/ipchains.rules" are reinstated.
     #
     #
     # Set the name of the external network interface here.  If you are using
     # "diald", this should be "sl0".  For PPP, use "ppp0".  For a DSL line or
     # cable modem, use the ethernet card that they are connected to.  Usually
     # this will be "eth1" but for PPPoE it will be "ppp0".
     extint="sl0"
     #
     # If there are no firewall rules and no stored ipchains rules, do nothing.
     #
     [ -f /etc/rc.d/init.d/firewall-rules ] || [ -f /etc/ipchains.rules ] \
       || exit 0
     case "$1" in
         #
         # Upon startup, invoke the firewall rules or reinstate the ipchains
         # rules.
         #
         start)
          if [ -f /etc/rc.d/init.d/firewall-rules ]; then
              echo -n "Turning on the firewall:"
              /etc/rc.d/init.d/firewall-rules $extint start
          else
              echo -n "Turning on packet filtering and masquerading:"
              /sbin/ipchains-restore < /etc/ipchains.rules || exit 1
              echo 1 > /proc/sys/net/ipv4/ip_forward
          fi
             touch /var/lock/subsys/packetfilter
          echo "."
          ;;
         #
         # Upon shutdown, save the ipchains rules, then turn off packet
         # filtering.
         #
         stop)
          echo -n "Turning off firewalling, packet filtering and masquerading:"
          echo 0 > /proc/sys/net/ipv4/ip_forward
          /sbin/ipchains-save >/etc/ipchains.rules
          /sbin/ipchains -X
          /sbin/ipchains -F
          /sbin/ipchains -P input ACCEPT
          /sbin/ipchains -P output ACCEPT
          /sbin/ipchains -P forward ACCEPT
             rm -f /var/lock/subsys/packetfilter
          echo "."
          ;;
         #
         # For all other cases, give help.
         #
         *)
          echo "Usage: /etc/rc.d/init.d/packetfilter {start|stop}"
          exit 1
          ;;
     esac
     exit 0

/etc/ipchains.rules:

File where previously existing ipchains rulesets are stored upon system shutdown.

     /sbin/ipchains-save >/etc/ipchains.rules

Will save the current ipchains rulesets in /etc/ipchains.rules. This is done automatically at system shutdown by the packet filter.

/etc/rc.d/init.d/firewall-rules:

Setup rules for the firewall. Define which packets are to be filtered, how packets are to be routed, etc.

     "start"    - Perform a normal startup of packet filtering.
     "register" - Used to reregister the filtering rulesets when the IP address
                  of any of the links to the outside world is changed (e.g. when
                  diald brings up a PPP connection).

A sample of the firewall-rules script is shown below:

     #!/bin/sh
     #
     # /etc/rc.d/init.d/firewall-rules: Semi-Strong IPCHAINS firewall ruleset.
     #
     # This script takes two arguments.  The first is the name of the device that
     # connects to the Internet (e.g. ppp0, eth1, etc.).  The second is a
     # parameter that indicates whether we are initially starting the firewall or
     # just reregistering it because the link to the outside world changed (e.g.
     # in the case of diald bringing up the link).  By setting this parameter to
     # anything other than "start", the overhead of loading the masquerade modules
     # and creating the firewall chain can be avoided when the new link is simply
     # being registered.
     #
     # Note that the file /etc/rc.d/init.d/firewall-off may be defined to cause
     # this script to set up masquerade but leave the rest of the firewall off.
     # This may be useful for debugging.
     #
     PATH=/sbin:/bin:/usr/sbin:/usr/bin
     #
     # Specify your Static IP address here.
     #
     # If you have a DYNAMIC IP address, you need to tell this script about your
     # IP address every time you get a new one.  To do this, enable the following
     # line which sets "extip" from the ifconfig results of querying the PPP link.
     # (Please note that the different single and double quote characters MATTER).
     #
     #
     # DHCP users:
     # -----------
     #
     # If you get your TCP/IP address via DHCP, you will need to enable the
     # commented out command in the PPP/DHCP section AND replace the word "ppp0"
     # with the name of your EXTERNAL Internet connection (eth0, eth1, etc.) on
     # the lines for "ppp-ip" and "extip".  It should be also noted that the DHCP
     # server can change IP addresses on you.  To fix this, users should configure
     # their DHCP client to re-run the firewall ruleset every time the DHCP lease
     # is renewed.
     #
     # NOTE #1: Some newer DHCP clients like "pump" do NOT have this ability to
     # run scripts after a lease-renew.  Because of this, you need to replace them
     # with something like "dhcpcd" or "dhclient".
     #
     # NOTE #2: The syntax for "dhcpcd" has changed in recent versions.
     #
     # Older versions used syntax like:
     #
     #      dhcpcd -c /etc/rc.d/init.d/firewall-rules eth0
     #
     # Newer versions use syntax like:
     #
     #      dhcpcd eth0 /etc/rc.d/init.d/firewall-rules
     #
     #
     # PPP users:
     # ----------
     #
     # If you aren't already aware, the "/etc/ppp/ip-up script" is always run when
     # a PPP connection comes up.  Because of this, we can have "ip-up" invoke the
     # firewall ruleset script with the new PPP IP address and thereby update
     # the strong firewall ruleset.
     #
     # If the /etc/ppp/ip-up file already exists, you should edit it and add a
     # line containing "/etc/rc.d/init.d/firewall-rules $1 register" near the end
     # of the file.
     #
     # If you don't already have a /etc/ppp/ip-up script, you need to create one
     # that invokes "/etc/rc.d/init.d/firewall-rules $1 register".
     #
     # If you use "diald", you need to create an "addroute" script that will be
     # invoked whenever the PPP link is brought up or taken down.  In the
     # "addroute" script, use:
     #
     #      /etc/rc.d/init.d/firewall-rules $1 register
     #
     # At initialization time, the rules should be set up using "sl0".  This will
     # cause masquerade to aim outbound packets at "sl0" which will cause "diald"
     # to dial up the link.  Once PPP is up and running, the "addroute" script
     # will change masquerade to point to the active link on  "ppp0".  When the
     # link is dropped, the "addroute" script will again be invoked to change
     # masquerade so that it points back at "sl0", thereby restoring the initial
     # configuration that waits for more outbound packets.
     #
     #
     # PPP and DHCP Users:
     # -------------------
     #
     # Remove the '#' on the shell line below and place a '#' in front of the
     # shell line after that, unless you have a static IP address.
     #
     extip="`/sbin/ifconfig $1 | grep 'inet addr' | awk '{print $2}' \
       | sed -e 's/.*://'`"
     #
     #
     # Users with STATIC IP addresses:
     # -------------------------------
     #
     # For PPP with a static IP address or for any other Internet connection that
     # uses a static IP address (e.g. DSL or cable modem on eth1), place a '#' in
     # front of the shell line above and remove the '#' from the shell line below.
     # Also, fill in your correct static IP address.
     #
     # extip="your.static.PPP.address"
     #
     # Set the correct internal network information below.  Usually this is the
     # device "eth0" and the network defined by "192.168.1.0" (if you've been
     # following all of the examples).  The "24" indicates that the network
     # addresses have the top 24 bits on in their netmask (i.e. 255.255.255.0).
     #
     intint="eth0"
     intnet="192.168.1.0/24"
     #
     # Set the name of the fast Internet interface here (e.g. the ethernet
     # card that a Cable/DSL modem is connected to).  The rules below refer
     # to it as "inetint", so it must be set for them to work.
     #
     inetint="eth1"
     #
     # Perform startup processing, if this is the first time.
     #
     case "$2" in
         #
         # Upon startup, load all required IP MASQ modules.  Any other time, the
         # modules will still be loaded so there's no need to redo this.
         #
         # NOTE: Only load the IP MASQ modules you need.  Most current IP MASQ
         #       modules are shown below but are commented out to prevent loading.
         #
         # Set up the firewall rules once.  They will get reused from that point
         # on so that repeating the set up is superfluous.
         #
         start)
           #
           # Needed to initially load modules
           #
           depmod -a
           #
           # Supports the proper masquerading of FTP file transfers using the
           # PORT method
           #
           modprobe ip_masq_ftp
           #
           # Supports the masquerading of RealAudio over UDP.  Without this
           # module, RealAudio WILL function but in TCP mode.  This can cause a
           # reduction in sound quality
           #
           modprobe ip_masq_raudio
           #
           # Supports the masquerading of IRC DCC file transfers
           #
           #modprobe ip_masq_irc
           #
           # Supports the masquerading of the CuSeeme video conferencing software
           #
           #modprobe ip_masq_cuseeme
           #
           # Supports the masquerading of the VDO-live video conferencing software
           #
           #modprobe ip_masq_vdolive
           #
           # Set up the firewall chain to filter inbound packets.  Only packets
           # coming off of the external interface to the world are passed to this
           # chain so there's no need to check the interface or addresses.  All
           # that we check are port numbers.  Our aim is to only let through the
           # good stuff and reject all of the dangerous stuff.
           #
           ipchains -N fw-chain
           #
           # Start by letting responses to any of the previously masqueraded
           # packets through.
           #
           ipchains -A fw-chain -p TCP --dport 61000:65096 -j ACCEPT
           ipchains -A fw-chain -p UDP --dport 61000:65096 -j ACCEPT
           #
           # Also, let through packets bound for ports in the dynamic port
           # assignment range.  Any programs that are running on this machine
           # which send packet traffic out to the world will do so from ports
           # in this range (e.g. if you run the Netscape browser on this
           # machine).  In order for such programs to receive responses from
           # the Internet we must let the responses back in to the dynamic
           # ports that they are using.  This is akin to letting the masquerade
           # responses back in (above) but for programs on this machine instead
           # of for those on the local net.
           #
           # Make sure that you do not run any services that listen on these
           # ports.  Look at /etc/services and see which services use port
           # numbers > 1023.  Disable these services, either by editing the
           # startup files in /etc/rc.d or editing /etc/inetd.conf (for those
            # services run by inetd).  If you wish to be really paranoid and
            # don't care about running stuff from this machine (i.e. it's only
           # being used for routing), comment out these lines.
           #
           # Note that you can check the ports that your system will use in the
           # dynamic port assignment range by executing the following command:
           #
           #      cat /proc/sys/net/ipv4/ip_local_port_range
           #
           ipchains -A fw-chain -p TCP --dport 1024:4999 -j ACCEPT
           ipchains -A fw-chain -p UDP --dport 1024:4999 -j ACCEPT
           #
           # Let inbound DNS requests pass.  This is perfectly safe because
           # DNS is pretty well behaved.
           #
           ipchains -A fw-chain -p TCP --dport 53 -j ACCEPT
           ipchains -A fw-chain -p UDP --dport 53 -j ACCEPT
           #
            # Let returning DNS queries pass.  Note that if you didn't comment
           # out the lines above that pass packets to ports 1024-4999, you
           # needn't do anything here.
           #
           # You can uncomment the two lines below that accept general network
           # traffic from source port 53 (DNS).  This will certainly allow DNS
           # to work.  However, allowing traffic from a particular source port
           # is dangerous, since a spoofer can claim to be anything they want.
           # It is much safer to only allow traffic to a particular destination
           # port on our machine (we know what's there and what it will do with
           # the inbound packets).
           #
           # My DNS always makes requests to the outside world from port 1024.
           # I allow packets through bound for port 1024.  You can do the same,
           # if you figure out which port your DNS is requesting on.  Look in
           # /var/log/messages for the rejected packets coming from port 53.
           # The port that they are bound to is the one your DNS is listening
           # on.
           #
     #     ipchains -A fw-chain -p TCP --sport 53 -j ACCEPT
     #     ipchains -A fw-chain -p UDP --sport 53 -j ACCEPT
     #     ipchains -A fw-chain -p TCP --dport 1024 -j ACCEPT
     #     ipchains -A fw-chain -p UDP --dport 1024 -j ACCEPT
           #
           # Allow access to the Web server.  Depending on the holes in Apache,
           # this could be dangerous.
           #
     #     ipchains -A fw-chain -p TCP --dport 80 -j ACCEPT
           ipchains -A fw-chain -p TCP --dport 8180 -j ACCEPT
           ipchains -A fw-chain -p TCP --dport 8280 -j ACCEPT
           ipchains -A fw-chain -p TCP --dport 8380 -j ACCEPT
           #
            # Let inbound NTP requests pass.  This is perfectly safe because
            # NTP is pretty well behaved.  Note that NTP runs over UDP, not TCP.
            #
            ipchains -A fw-chain -p UDP --dport 123 -j ACCEPT
           #
           # Accept ping and responses to ping.
           #
           ipchains -A fw-chain -p icmp --icmp-type ping -j ACCEPT
           ipchains -A fw-chain -p icmp --icmp-type pong -j ACCEPT
           #
           # Packets which are one of the error ICMPs get accepted so that we can
           # see network errors being reported to us.
           #
           ipchains -A fw-chain -p icmp --icmp-type destination-unreachable \
             -j ACCEPT
           ipchains -A fw-chain -p icmp --icmp-type source-quench -j ACCEPT
           ipchains -A fw-chain -p icmp --icmp-type time-exceeded -j ACCEPT
           ipchains -A fw-chain -p icmp --icmp-type parameter-problem -j ACCEPT
           #
           # All other packets will reach the end of the chain.  Since it is a
           # user defined chain, it will return to the input chain from whence
           # it was called.  The input chain will deny and log the packets.
           #
           # By arranging the chain this way, we can dynamically add rules to
           # the end of the chain, where they can accept additional packets.
           # This allows us to determine which packets to accept on the fly.
           #
           ;;
         #
         # For all other cases, just fall through to the setting of IPCHAINS.
         #
         *)
           ;;
     esac
     #
     # It is CRITICAL that we enable IP forwarding since it is disabled by default
     #
     # Redhat users should also change the options in
     # /etc/sysconfig/network from:
     #
     #     FORWARD_IPV4=false to FORWARD_IPV4=true
     #
     echo "1" > /proc/sys/net/ipv4/ip_forward
     #
     # Set the masquerade timeouts:
     #
     #      2 hrs.    - for TCP session timeouts
     #      10 secs.  - for traffic after the TCP/IP "FIN" packet is received
     #      60 secs.  - for UDP traffic (masqueraded ICQ users must enable a
     #                  30 sec. firewall timeout in ICQ itself)
     #
     ipchains -M -S 7200 10 60
     #
     #                                RULE SETS
     #
     #############################################################################
     #
     # If the firewall is off, for incoming, flush and set the default policy of
     # accept.
     #
     if [ -f /etc/rc.d/init.d/firewall-off ]; then
         ipchains -F input
         ipchains -P input ACCEPT
     #
     # For incoming, flush and set the default policy of reject.  Actually the
     # default policy is irrelevant because there is a catch all rule at the end
     # of the chain which will reject and log all rejected packets.  However, the
     # default reject policy will prevent any packets from slipping through the
     # cracks while we are setting up the other rules.
     #
     else
         ipchains -F input
         ipchains -P input REJECT
         #
         # Any packets received from the fast Internet interface bound for local
         # addresses or with addresses that claim to be a local (i.e. IP
         # spoofing) can get lost.
         #
         # Note that you can comment out the following two lines to test the
         # fast Internet interface on the regular internal network to ensure
         # that it works prior to hooking it up to the Cable/DSL modem.
         #
         ipchains -A input -i $inetint -s 0.0.0.0/0 -d $intnet -l -j REJECT
         ipchains -A input -i $inetint -s $intnet -d 0.0.0.0/0 -l -j REJECT
         #
         # Any other packets from any source on the fast Internet interface bound
         # to any other destination are allowed.  The Internet is wide open
         # baby.  Let 'er rip!
         #
         # Actually, you should only use this rule for PPPoE.  In that case,
         # the inbound packets will get filtered by the rule for the PPP
         # interface.  If you aren't using PPPoE remove this rule and uncomment
         # the appropriate fw-chain rule (below).
         #
         ipchains -A input -i $inetint -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
         #
         # Special rules for local TiVo digital recorders.  If the TiVo tries
         # to send anything to the outside world, don't let it.  Its probably
         # calling home for a new version of the software that will fuck us.  If
         # it can't download the software, maybe it can't screw us.
         #
         ipchains -A input -i $intint -s 192.168.1.62 -d $intnet -j ACCEPT
         ipchains -A input -i $intint -s 192.168.1.62 -d 0.0.0.0/0 -j REJECT
         #
         # Any packets received on the local network interface, with local
         # addresses, going anywhere are allowed.
         #
         ipchains -A input -i $intint -s $intnet -d 0.0.0.0/0 -j ACCEPT
         #
         # Any packets received from the remote interface with addresses that
         # claim to be a local (i.e. IP spoofing) can get lost.
         #
         ipchains -A input -i $1 -s $intnet -d 0.0.0.0/0 -l -j REJECT
         #
         # Packets received from the loopback interface are always valid.
         #
         ipchains -A input -i lo -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
         #
         # Any packets received from the remote interface, with any source, that
         # are addressed to our externally visible IP address are generally valid
          # (that's how we receive incoming packets).  However, in the interest
         # of having a strong firewall, we will pass all of the inbound packets
         # through the firewall chain to apply the specific firewalling rules.
         #
         # This rule should appear second last, just before the catch all rule.
         # Any packets that are not specifically allowed by the firewall chain
         # will fall off the end of the chain and arrive at the catch all rule
          # when they return from falling off the end.  The catch all rule will
         # reject them.  Meanwhile, this allows us to dynamically add rules to
         # the firewall chain.
         #
         # Note that there is one rule for diald/PPPoE (the first) and one for
         # a bridging Cable/DSL modem (the second).  You should pick the one
         # that suits your purposes.
         #
         ipchains -A input -i $1 -s 0.0.0.0/0 -d $extip/32 -j fw-chain
         #ipchains -A input -i $inetint -s 0.0.0.0/0 -d $extip/32 -j fw-chain
         #
         # If you are running Samba on this machine, it can generate a lot of
         # packets bound for the external interface (i.e. three every 30 seconds)
         # that will be rejected and logged.  These packets are employed in its
         # feeble attempt to discover any other Mickeysoft networks in the outside
         # world.  Needless to say, we don't care about filling our log up with
         # this incessant drivel.
         #
         # The following two rules should ditch this junk nicely.  Basically, we
         # were going to toss any packets that made it this far anyway.  We just
         # check to see if the protocol is either IGMP or Mickeysoft's network
         # discovery protocol and if it is, we reject the packet silently.  If
         # you don't care about keeping your log clean or don't use Samba, you can
         # safely comment out these two rules.
         #
         ipchains -A input -p 2 -d 0.0.0.0/0 -j REJECT
         ipchains -A input -p 103 -d 0.0.0.0/0 -j REJECT
         #
         # This is the catch all rule.  All other incoming packets are denied and
         # logged.  Pity there is no log option on the default policy but this
         # does the job instead.
         #
         ipchains -A input -s 0.0.0.0/0 -d 0.0.0.0/0 -l -j REJECT
     fi
     #############################################################################
     #
     # If the firewall is off, for outgoing, flush and set the default policy of
     # accept.
     #
     if [ -f /etc/rc.d/init.d/firewall-off ]; then
         ipchains -F output
         ipchains -P output ACCEPT
     #
     # For outgoing, flush and set the default policy of reject.  Actually the
     # default policy is irrelevant because there is a catch all rule at the end
     # of the chain which will reject and log all rejected packets.  However, the
     # default reject policy will prevent any packets from slipping through the
     # cracks while we are setting up the other rules.
     #
     else
         ipchains -F output
         ipchains -P output REJECT
         #
         # Any packet addressed to the local network but being sent via the fast
         # Internet interface is a result of a broken routing and any packet with
         # a local address being sent anywhere via the fast Internet interface
         # signifies broken masquerading so deny them.
         #
         # Note that you can comment out the following two lines to test the
         # fast Internet interface on the regular internal network to ensure
         # that it works prior to hooking it up to the Cable/DSL modem.
         #
         ipchains -A output -s 0.0.0.0/0 -d $intnet -i $inetint -l -j REJECT
         ipchains -A output -s $intnet -d 0.0.0.0/0 -i $inetint -l -j REJECT
         #
         # Packets from any source going to the fast Internet interface are
         # valid.
         #
         ipchains -A output -s 0.0.0.0/0 -d 0.0.0.0/0 -i $inetint -j ACCEPT
         #
         # Packets from any source going to the local net via the local interface
         # are valid.
         #
         ipchains -A output -s 0.0.0.0/0 -d $intnet -i $intint -j ACCEPT
         #
         # Any packet addressed to the local network but being sent via the remote
         # interface is a result of a broken routing so deny it.
         #
         ipchains -A output -s 0.0.0.0/0 -d $intnet -i $1 -l -j REJECT
         #
         # Any packet with a local address being sent anywhere via the remote
         # interface signifies broken masquerading so deny it (remember that
         # masquerade replaces all internal addresses with our externally visible
         # address before packets get to the output chain).
         #
         ipchains -A output -s $intnet -d 0.0.0.0/0 -i $1 -l -j REJECT
         #
         # Any packet addressed from our externally visible address outgoing on
         # the remote interface is valid (that's how we send packets to the
         # world).
         #
         ipchains -A output -s $extip/32 -d 0.0.0.0/0 -i $1 -j ACCEPT
         #
         # The loopback interface is always valid.
         #
         ipchains -A output -i lo -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
         #
         # This is the catch all rule.  All other outgoing packets are denied and
     # logged.  Once again, it's a pity there is no log option on the default
         # policy but this does the job instead.
         #
         ipchains -A output -s 0.0.0.0/0 -d 0.0.0.0/0 -l -j REJECT
     fi
     #############################################################################
     #
     # For forwarding, flush and set the default policy of reject.  Actually the
     # default policy is irrelevant because there is a catch all rule at the end
     # of the chain which will reject and log all rejected packets.  However, the
     # default reject policy will prevent any packets from slipping through the
     # cracks while we are setting up the other rules.
     #
     ipchains -F forward
     ipchains -P forward REJECT
     #
     # Masquerade all packets addressed from the local net bound to anywhere via
     # the remote interface.
     #
     ipchains -A forward -s $intnet -d 0.0.0.0/0 -i $1 -j MASQ
     #
     # This is the catch all rule.  All other forwarded packets are denied and
     # logged.  The usual story about it being a pity that there is no log option
     # on the default policy.  This does the job instead.
     #
     ipchains -A forward -s 0.0.0.0/0 -d 0.0.0.0/0 -l -j REJECT

Firewall/Packetfilter (iptables)

Kernels prior to 2.4 use ipchains (see above) for packet filtering. Kernels 2.4 and later use iptables instead.

For iptables, some sort of firewall script (such as Lokkit or NARC) is the way to go, since many iptables commands must be issued to actually get a working packet filter going. This note describes how to set up NARC.

Get a copy of NARC from http://www.knowplace.org/netfilter/narc.html. Untar the distribution and follow the install steps in INSTALL.

You'll need to start NARC, either at boot time or when the external interface is brought up, through some sort of startup script, since the install does not make firewall startup automatic. If you want to always start NARC at boot time, you can use the iptables script supplied in the NARC distribution to replace the default iptables script (in /etc/init.d) supplied with the OS (typically, for RedHat distros, this runs the Lokkit firewall).

Before you replace the iptables script, you might want to consider changing the startup number from 11 to 08 in the script so that the firewall starts before the network comes up. Change:

     # chkconfig: 2345 11 92
to
     # chkconfig: 2345 08 92

This will ensure that there's no exposure to malicious traffic during the brief window after the external interface comes up but before the firewall has started. However, if you'll be using diald to manage demand dialing for a telephone connection to the Internet, you should change the startup number to something like:

     # chkconfig: 2345 64 36

This will give diald a chance to start up and define sl0, which can then be used by NARC to set up the firewall.
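
Note that simply editing the chkconfig line in the script is not enough; the service must be deleted and re-added so that the new start/stop numbers take effect. A minimal sketch, assuming a RedHat-style system with the service installed as "iptables":

```shell
# Re-register the service so that chkconfig re-reads the edited
# "# chkconfig:" line and rebuilds the rc.d symlinks.
chkconfig --del iptables
chkconfig --add iptables
# Verify the new start order (S08 or S64, depending on your choice):
ls /etc/rc.d/rc3.d/ | grep iptables
```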

An alternative to trying to figure out when to start the firewall during boot is to let the external interface handler (e.g. diald or ppp) start the firewall when it brings up the external interface. In this scenario, the iptables script is invoked by a script such as ip-up or ip-up.local in the /etc/ppp directory. In order for this to work, the iptables script must be altered to accept a couple of extra parameters.

We have found that full flexibility is provided if the iptables script is altered to accept two extra parameters (by convention, the first parameter passed to the script tells it what to do: start, stop or restart). The first extra parameter should be the external interface name. This allows startup scripts such as ip-up.local to start the firewall on external interfaces such as ppp0 or sl0.

The second extra parameter should be an optional IP address. If it isn't supplied, the firewall will start using masquerading on the dynamic external interface named by the first optional parameter. If it is supplied, the firewall will start using snat on the dynamic external interface named by the first optional parameter, using the supplied IP address as the snat address.
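
The resulting convention can be sketched as follows (this is an illustration of the parameter logic only; the function name is made up and does not appear in NARC):

```shell
# Hypothetical sketch of the two extra parameters described above:
# $1 = external interface name, $2 = optional static IP address.
pick_nat_mode() {
    if [ -z "$2" ]; then
        # No IP address given: dynamic interface, use masquerading.
        echo "MASQUERADE on $1"
    else
        # IP address given: use snat with that address.
        echo "SNAT on $1 to $2"
    fi
}
pick_nat_mode ppp0                # → MASQUERADE on ppp0
pick_nat_mode eth0 203.0.113.5    # → SNAT on eth0 to 203.0.113.5
```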

/etc/rc.d/init.d/iptables:

Here is the complete iptables script for NARC with diald/ppp support and snat support for other external interfaces such as eth0:

     #! /bin/sh
     #
     # iptables      This script loads the firewall rules and starts iptables,
     #               using those rules to filter all packet traffic on the server.
     #
     # chkconfig: 2345 08 92
     # description: Netfilter Automatic Rules Configurator (packet filtering \
     #              firewall with iptables).
     #
     # Revision History:
     #
     # Burchard Steinbild <bs@suse.de>   1996       Initial coding.
     # Ulrich Hecht <uli@suse.de>        1999       Adapted to scanlogd.
     # Shane Chen <shane@knowplace.org>  2001       Re-adapted to iptables.
     # ewilde                            2009Feb19  Add clustering support.
     #
     #
     # If there's no NARC configuration, we're all done.
     #
     if [ ! -f /etc/narc/narc.conf ]; then
         exit 0
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         WANCONNECTION=ADSL
     fi
     #
     # If this cluster doesn't use ADSL, Diald, or an ISP's WAN router for its WAN
     # connection, we're outta here.
     #
     DEVICE=`echo $WANCONNECTION | grep -e "eth[0-9]\+," -o`
     if [ x"$WANCONNECTION" != xADSL ] && [ x"$WANCONNECTION" != xDiald ] \
         && [ ! -n "$DEVICE" ]; then
         exit 0
     fi
     #
     # Load the NARC configuration.
     #
     . /etc/narc/narc.conf
     #
     # Set up to load iptables.
     
     base=${0##*/}
     link=${base#*[SK][0-9][0-9]}
     test $link = $base && START_IPTABLES=yes
     test "$START_IPTABLES" = "yes" || exit 0
     rc_done="  done"
     rc_failed="  failed"
     return=$rc_done
     #
     # Based on which operation we were asked to perform, have at it.
     #
     case "$1" in
         start)
             echo -n "Starting iptables"
             $NARC start $2 $3 || return=$rc_failed
             echo -e "$return"
             ;;
         stop)
             echo -n "Stopping iptables"
             $NARC stop || return=$rc_failed
             echo -e "$return"
             ;;
         restart|reload)
             $0 stop && $0 start $2 $3 || return=$rc_failed
             ;;
         status)
             $NARC status && echo OK || echo No process
             ;;
         *)
             echo "Usage: $0 {start|stop|restart|status}"
             exit 1
     esac
     test "$return" = "$rc_done" || exit 1
     exit 0

/usr/sbin/narc:

If you took our advice and modified the iptables script so as to pass an external interface name and IP address as the second and third parameters, you'll need to add some lines (around line 104) that read:

     # See if there is an interface specified on the command line.  If so, use it
     # instead.
     if test x"$2" != x; then
             EXTERNAL_INTERFACE=$2
     fi
     # See if there is an IP address for a static route specified on the command
     # line.  If so, use it instead.
     if test x"$3" != x; then
             DYNAMIC_EXTERNAL_IP="no"
             EXTERNAL_INTERFACE_IP=$3
     fi

This change is harmless: if no interface or IP address parameters are passed, nothing happens. Therefore, it is safe to make under all circumstances.

If this script fails to work properly, you might need to change the command that calls ifconfig to include the full path of your ifconfig (e.g. /sbin/ifconfig). After line 8, add a variable that defines the path to ifconfig:

     IFCONFIG="/sbin/ifconfig"

At approximately line 193, change:

     EXTERNAL_INTERFACE_IP=`ifconfig $EXTERNAL_INTERFACE | grep inet \
       | cut -d : -f 2 |cut -d " " -f 1`

to

     EXTERNAL_INTERFACE_IP=`$IFCONFIG $EXTERNAL_INTERFACE | grep inet \
       | cut -d : -f 2 |cut -d " " -f 1`
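
You can sanity-check the grep/cut pipeline by feeding it a sample line by hand (the address below is just a documentation example):

```shell
# A typical "inet addr:" line as printed by the old net-tools ifconfig:
sample="          inet addr:203.0.113.45  Bcast:203.0.113.255  Mask:255.255.255.0"
# Field 2 of the colon-split line is "203.0.113.45  Bcast"; the second
# cut then takes everything up to the first space.
ip=`echo "$sample" | grep inet | cut -d : -f 2 | cut -d " " -f 1`
echo $ip                          # → 203.0.113.45
```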

If you wish to include shell variables in the custom configuration, you'll need to change the lines around 1179 from:

     grep -v "^ *#" $CUSTOM_SCRIPT | \
         ( while read line ; do
             if [ "$line" != '' ] ; then
                 $line && $ECHO "$BOLD$line$DEF - $RCOK" \
                     || $ECHO "$BOLD$line$DEF - $RCFAIL"
             fi
         done )
     $ECHO "Finished executing custom commands"

to

     grep -v "^ *#" $CUSTOM_SCRIPT | \
         ( while read line ; do
             linesub=`eval echo $line`
             if [ "$linesub" != '' ] ; then
                 $linesub && $ECHO "$BOLD$linesub$DEF - $RCOK" \
                     || $ECHO "$BOLD$linesub$DEF - $RCFAIL"
             fi
         done )
     $ECHO "Finished executing custom commands"
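
The effect of this change can be seen with a small experiment (the variable and command here are illustrative, not from the NARC distribution):

```shell
EXTERNAL_INTERFACE="ppp0"
line='echo Blocking $EXTERNAL_INTERFACE'
# Without the eval, the variable reference is passed through literally:
$line                             # prints: Blocking $EXTERNAL_INTERFACE
# With the eval, the variable is substituted before the command is run:
linesub=`eval echo $line`
$linesub                          # prints: Blocking ppp0
```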

/etc/narc/narc.conf:

Hack the narc.conf file as described in the config file itself.

If you'll be starting NARC at boot time, in conjunction with diald, to enable masquerading and thereby allow locally-networked nodes to bring up the link, you should change:

     START_IPTABLES="no"
to
     START_IPTABLES="yes"

If you don't do this, NARC will not start at boot time, although it will say that it did in the log file.

Also, change:

     EXTERNAL_INTERFACE="eth0"               # Example: "eth0"
to
     EXTERNAL_INTERFACE="sl0"                # Slip must be masqueraded for
                                             # diald to work

Otherwise, if you are using some other dynamic firewall arrangement, set:

     EXTERNAL_INTERFACE=""                   # Use what we are passed.

This will allow the PPP up and down scripts to pass the interface name to NARC so that the firewall rules can be applied both when the PPP link is up and when it is down. You can also make this change if you are using PPPoE over a DSL modem or a cable modem.
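
As a sketch of this arrangement, an ip-up.local script that starts the firewall on whatever interface pppd just brought up might look something like the following (the path to the iptables script assumes the modified version described above):

```shell
#!/bin/sh
# /etc/ppp/ip-up.local (sketch) - pppd passes the interface name (e.g.
# ppp0 or sl0) as its first argument to the ip-up scripts, so we simply
# hand it on to the modified iptables script as the external interface.
/etc/rc.d/init.d/iptables start $1
```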

While you're at it, make sure to set the following:

     DYNAMIC_EXTERNAL_IP="yes"               # If this is set to "no", you'll need
                                             # to enter an IP address below
     EXTERNAL_INTERFACE_IP=""                # If DYNAMIC_EXTERNAL_IP is "yes",
                                             # NARC will attempt to auto-obtain
                                             # this

If you are making your WAN connection through an external router, this will allow you to pass the interface name and IP address to NARC from the startup script, if it is configured to do so.

Here is a sample of the NARC config file with no EXTERNAL_INTERFACE set and DYNAMIC_EXTERNAL_IP set to "yes", so that it can be modified by an ip-up script (which would be a typical DSL with PPPoE setup):

     #
     # NARC - Netfilter Automatic Rules Configurator v0.6.3
     #
     # Copyright (c) 2001, Shane Chen (shane@knowplace.org).  See the LICENSE
     # file for the (BSD) license.
     #
     CONF_VERSION=0.6.3    # DO NOT edit this line.
     EDITED="yes"                            # Edit the options below and change
                                             # this option to 'yes' once you're
                                             # satisfied with the changes
     #
     # Config options
     #
     # Start Iptables at boot up?
     START_IPTABLES="no"
     USE_COLOR="yes"                         # When possible
     # Path to the executables
     NARC="/usr/sbin/narc"                   # Location of the narc bash script
                                             # - edit this if the path is
                                             #   incorrect
     IPTABLES="/sbin/iptables"               # Make sure the path is correct!
     ECHO="/bin/echo"                        # Make sure the path is correct!
     # Load Netfilter modules - only necessary if you compiled netfilter as
     # modules
     LOAD_MODULES="yes"
     # Network parameters.
     #
     # If you're using PPPoE to connect through an ethernet card and then a DSL
     # modem, you should specify the PPP interface (e.g. ppp0) instead of the
     # ethernet interface, since the packet traffic gets routed through the PPP
     # interface (it's all done with mirrors).
     #
     # But, note that, if you're using a startup script (e.g. /etc/ppp/ip-up or
     # /etc/ppp/ip-up.local) that starts NARC when the interface is brought up,
     # and this script passes the external interface to NARC, you should set
     # EXTERNAL_INTERFACE to "".  This will prevent NARC from starting, except
     # when it is invoked from the ip-up script and passed the interface
     # information.
     #
     # Also, if your startup script can start either a dynamic external interface
     # (e.g. ppp0) or a static interface (e.g. eth0), depending on the state of
     # the network, and it will pass NARC the IP address to use on the static
     # interface, you should set DYNAMIC_EXTERNAL_IP to "yes" and then leave
     # EXTERNAL_INTERFACE_IP set to "".
     EXTERNAL_INTERFACE=""                   # Example: "eth0"
     DYNAMIC_EXTERNAL_IP="yes"               # If this is set to "no", you'll need
                                             # to enter an IP address below
     EXTERNAL_INTERFACE_IP=""                # If DYNAMIC_EXTERNAL_IP is "yes",
                                             # NARC will attempt to auto-obtain
                                             # this
     # The options immediately below control server services that you're offering
     # to the outside world - they do not limit the services available to your
     # localhost.
     #
     # Use comma separated names or numeric values from /etc/services.  If the
     # port is > 1024, use the numeric value instead of the name.
     #
     # Note: limited to 15 services - you shouldn't need more than 15 ports open,
     # especially on a firewall.
     ALLOW_TCP_EXT="ntp"                     # Example "ssh,smtp,http" - note the
                                             # lack of spaces
     ALLOW_UDP_EXT="ntp"                     # Example "domain,ntp" - note the
                                             # lack of spaces
     CHECK_SYN_PACKET_LENGTH="yes"           # Do not disable unless you must use
                                             # a stock kernel that does not
                                             # support length checking
     # The options immediately below here are similar to above, except that they
     # allow you to enter port ranges (and single ports) using space separated
     # numeric values. Enter as many as necessary (i.e. not limited to 15
     # entries).  Unless needed, use the above instead.
     #
     # The ports 8180, 8280 and 8380 are used by Apache to provide mirrors for
     # external Web sites.
     ALLOW_TCP_EXT_RANGE="8180 8280 8380"    # Example "6000:6010 6660:6669 3128"
     ALLOW_UDP_EXT_RANGE=""                  # Example "6000:6010 6660:6669 3128"
     # Note: If you simply wanted to firewall a single host, you can ~safely skip
     # the rest of the config options below.
     # MASQuerading section - This is the Linux equivalent of "Internet Connection
     # Sharing".  Don't turn on ALWAYS_FORWARD unless you know what you're doing.
     # ALWAYS_FORWARD will keep forwarding (and masq'ing) traffic even when there
     # are no firewall rules loaded.
     MASQUERADE="yes"                        # Turning this on will enable IP
                                             # forwarding automatically
     LAN_INTERFACE="eth0"                    # Example: "eth1"
     ALWAYS_FORWARD="no"                     # Don't turn this on unless you want
                                             # to forward traffic even when not
                                             # firewalling.
     PROTECT_FROM_LAN="no"                   # "yes" or "no" - Protect firewall
                                             # from internal network
     # The options immediately below control server services that you're offering
     # to your internal LAN - they do not limit the services available to your
     # localhost.  These are only needed if "PROTECT_FROM_LAN" is set to "yes",
     # above.
     #
     # Use comma separated names or numeric values from /etc/services.  If the
     # port is > 1024, use the numeric value instead of the name.
     #
     # Note: limited to 15 services - you shouldn't need more than 15 ports open,
     # especially on a firewall.
     #AL1="ftp,telnet,http,smtp,pop3,ntp"
     #AL2="netbios-ns,netbios-dgm,netbios-ssn"
     #ALLOW_TCP_LAN=${AL1}","${AL2}          # Example "ssh,smtp,http" - note the
                                             # lack of spaces
     #ALLOW_UDP_LAN=""                       # Example "domain,ntp" - note the
                                             # lack of spaces
     # The options immediately below here are similar to above, except that they
     # allow you to enter port ranges (and single ports) using space separated
     # numeric values. Enter as many as necessary (i.e. not limited to 15
     # entries).  Unless needed, use the above instead.
     #ALLOW_TCP_LAN_RANGE=""                 # Example "6000:6010 6660:6669 3128"
     #ALLOW_UDP_LAN_RANGE=""                 # Example "6000:6010 6660:6669 3128"
     # PortForwarding section - Requires masquerading and forwarding.
     PORT_FORWARD="no"                       # This will not have any effect
                                             # unless MASQUERADE is enabled
     DMZ_INTERFACE=""                        # DMZ interface (technically, you can
                                             # use your LAN interface as well -
                                             # bad security practice)
     PROTECT_FROM_DMZ=""                     # "yes" or "no" -  Protect firewall
                                             # from DMZ network
     FORWARD_LAN_TO_DMZ="no"                 # Forward traffic from LAN to DMZ
     FORWARD_CONF="/etc/narc/narc-forward.conf"
                                             # Edit this file for port forwarding
     # The options immediately below control server services that you're offering
     # to your DMZ network - they do not limit the services available to your
     # localhost.
     #
     # Use comma separated names or numeric values from /etc/services.  If the
     # port is > 1024, use the numeric value instead of the name.
     #
     # Note: limited to 15 services - you shouldn't need more than 15 ports open,
     # especially on a firewall.
     ALLOW_TCP_DMZ=""                        # Example "ssh,smtp,http" - note the
                                             # lack of spaces
     ALLOW_UDP_DMZ=""                        # Example "domain,ntp" - note the
                                             # lack of spaces
     # The options immediately below here are similar to above, except that they
     # allow you to enter port ranges (and single ports) using space separated
     # numeric values. Enter as many as necessary (i.e. not limited to 15
     # entries). Unless needed, use the above instead.
     ALLOW_TCP_DMZ_RANGE=""                  # Example "6000:6010 6660:6669 3128"
     ALLOW_UDP_DMZ_RANGE=""                  # Example "6000:6010 6660:6669 3128"
     # To enable traceroute from MS Windows to your firewall, enable ANSWER_PING.
     # To enable traceroute from UNIX hosts, enable ANSWER_TRACEROUTE.  Use
     # of either option is discouraged.
     ANSWER_PING="yes"
     PING_RATE="1/s"                         # Leave this alone unless you happen
                                             # to like flood pings
     ANSWER_TRACEROUTE="yes"
     # Auth port responds with reject instead of drop
     AUTH_REJECT="yes"                       # Disable this if you're running
                                             # identd or using IRC
     # Drop broadcasts
     DROP_BROADCASTS="yes"
     BROADCAST_NETWORKS="0.0.0.0/8 255.255.255.255 224.0.0.0/4"
     # Logging options.
     #
     # Logs to "kern.debug".  You must add "kern.=debug -/var/log/firewall.log" to
     # /etc/syslog.conf to actually write log entries to the firewall log file.
     LOG_DROPS="yes"                         # If this is turned off, the rest of
                                             # the log options have no effect.
     NORM_LOG_LEVEL="debug"                  # Log everything to "kern.debug"
     WARN_LOG_LEVEL="debug"                  # Change to "warning" if you want
                                             # more urgent logging to show up in
                                             # /var/log/warn
     LOG_PROBES="yes"                        # Uses the TCP/UDP_PROBE (below) to
                                             # monitor certain ports
     LOG_ILLEGAL="yes"                       # Logs packets defined by
                                             # ILLEGAL_TCP_FLAGS in the advanced
                                             # section below.
     LOG_INVALID="yes"                       # Logs packets that do not belong to
                                             # a valid connection
     LOG_SPOOF="no"                          # Logs packets defined by the
                                             # anti-spoof options in the advanced
                                             # section below.
     LOG_ICMP="no"                           # Logs packets not accepted by
                                             # ALLOW_ICMP_MESSAGE (below)
     LOG_PACKET_LENGTH="yes"                 # Logs TCP SYN packets that have bad
                                             # header length (PACKET_LENGTH)
     LOG_LIMIT_EXCEED="yes"                  # Logs TCP connections that exceed
                                             # LIMIT_RATE
     LOG_IPLIMIT_EXCEED="yes"                # Logs TCP connections that exceed
                                             # IPLIMIT_MAX_ACCEPT
     LOG_ALL_ELSE="yes"                      # This logs everything that we didn't
                                             # explicitly match (recommended)
     BURST_MAX="5"                           # default is 5
     LOG_RATE="1/s"                          # not impl (may not be a good idea)
     # Probable probes - Note: Add or remove entries as necessary but do not
     # exceed 15 ports per line!  Use comma separated values with no spaces (common
     # trojans - see http://www.simovits.com/sve/nyhetsarkiv/1999/nyheter9902.html).
     TCP_PROBE="113,135,137,138,139,161,445,515,524,555,666,1000,1001,1025,1026"
     TCP_PROBE2="1234,1243,1433,2000,2001,6346,8080"
     UDP_PROBE="22,135,137,138,139,161,445,1025,1026,1081,1082,1083,1085,1089,1093"
     UDP_PROBE2="1095,1096,1433,2000,2001"
     # Advanced options below - DO NOT edit unless you know what you are doing
     # Executes a custom script
     EXECUTE_CUSTOM_SCRIPT="yes"             # Default is "no"
     CUSTOM_SCRIPT="/etc/narc/narc-custom.conf"
     PRELOAD_IP_MODULES="ip_tables ip_conntrack ip_conntrack_ftp"
     NAT_MODULES="iptable_nat ip_nat_ftp"
     # Illegal TCP flag combinations
     IF1="SYN,FIN PSH,FIN SYN,ACK,FIN SYN,FIN,PSH SYN,FIN,RST"
     IF2="SYN,FIN,RST,PSH SYN,FIN,ACK,RST SYN,ACK,FIN,RST,PSH ALL"
     ILLEGAL_TCP_FLAGS=${IF1}" "${IF2}
     FINSCAN="FIN"
     XMASSCAN="URG,PSH,FIN"
     NULLSCAN="NONE"
     # SYN packet length (range in bytes)
     PACKET_LENGTH="40:68"
     # General rate limit
     ENABLE_LIMIT_RATE="no"
     LIMIT_RATE="30/s"
     LIMIT_BURST="50"
     # IP based TCP rate limit (requires CONFIG_IP_NF_MATCH_IPLIMIT/iplimit patch)
     ENABLE_IPLIMIT="no"                     # You better know what you're doing
                                             # - change the values below.
     IPLIMIT_MAX_ACCEPT="16"                 # accept only UP TO this many
                                             # connections per the netmask below.
     IPLIMIT_NETMASK="24"                    # netmask value
     # Drop "unclean" packets, packet sanity checking (EXPERIMENTAL - don't use
     # this)
     DROP_UNCLEAN_PACKETS="no"
     # Allowable ICMP messages -
     # see http://www.iana.org/assignments/icmp-parameters
     # Note: will accept numeric or name value - 'iptables -p icmp -h' to list
     AI1="echo-reply network-unreachable host-unreachable"
     AI2="port-unreachable fragmentation-needed time-exceeded"
     ALLOW_ICMP_MESSAGE=${AI1}" "${AI2}
     # Anti-spoofing options
     # see http://www.sans.org/dosstep/cisco_spoof.htm
     # and http://www.isi.edu/in-notes/rfc1918.txt
     #
     # 0.0.0.0/8                 - Broadcast (old)
     # 255.255.255.255(/32)      - Broadcast (all)
     # 127.0.0.0/8               - Loopback
     # 224.0.0.0/4               - Multicast
     # 240.0.0.0/5               - Class E reserved
     # 248.0.0.0/5               - Unallocated
     # 192.0.2.0/24              - NET-TEST (reserved)
     # 169.254.0.0/16            - LinkLocal (reserved)
     # 10.0.0.0/8                - Class A (private use)
     # 172.16.0.0/12             - Class B (private use)
     # 192.168.0.0/16            - Class C (private use)
     RESERVED_NETWORKS="127.0.0.0/8 240.0.0.0/5 248.0.0.0/5"
     PRIVATE_NETWORKS=" 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16"
     # Accept traffic to loopback
     LOOPBACK_ACCEPT="yes"
     LOOPBACK_MODE="normal"                  # values are paranoid|normal|loose.
     # Self-referenced firewall DNS workaround - leave this alone; if you're
     # having DNS problems on the firewall itself, but not from behind it, this
     # should probably make sense to you.  Otherwise, leave this blank.
     #
     # Use space separated dotted quad IP addresses if you need more than one.
     BIND_IP=""
     #
     # Kernel options - do not change unless you're sure what you're doing
     #
     SYNCOOKIES="no"
     ANTI_SMURF="yes"
     ACCEPT_SOURCE_ROUTE="no"
     # Ingress filtering: 1 for simple, 2 to comply with RFC1812 section 5.3.8.
     # See http://andrew2.andrew.cmu.edu/rfc/rfc1812.html
     INGRESS_FILTER="2"
     LOG_MARTIANS="yes"
     # TCP congestion notification - deprecated
     ENABLE_TCP_ECN="no"

/etc/narc/narc-custom.conf:

The narc-custom.conf file is executed by NARC at the end of firewall startup. You can add the following lines to this file to allow all packets from lo to be accepted. This is probably OK, since lo is one of the good guys. Changes are also shown to prevent the TiVos from calling home, and to quiet Samba down.

     # Allow all traffic from lo, not just that from 127.0.0.1
     $IPTABLES -R INPUT 4 -i lo -j ACCEPT
     #
     # Special rules for local TiVo digital recorders.  If the TiVo tries to send
     # anything to the outside world, don't let it.  It's probably calling home
     # for a new version of the software that will not make us happy.  If it
     # can't download the software, maybe it can't mess us around.
     #
     # Note that the rejected packets are logged with a prefix of TIVO.
     #
     $IPTABLES -N TIVO_REJECT
     $IPTABLES -A TIVO_REJECT -j LOG --log-level $NORM_LOG_LEVEL \
               --log-prefix \"TIVO \" --log-ip-options --log-tcp-options
     $IPTABLES -A TIVO_REJECT -j REJECT
     # Reject each of the known TiVo machines.
     $IPTABLES -I FORWARD -s tivo4.homeworld -o $EXTERNAL_INTERFACE -j TIVO_REJECT
     $IPTABLES -I FORWARD -s tivo3.homeworld -o $EXTERNAL_INTERFACE -j TIVO_REJECT
     $IPTABLES -I FORWARD -s tivo2.homeworld -o $EXTERNAL_INTERFACE -j TIVO_REJECT
     $IPTABLES -I FORWARD -s tivo1.homeworld -o $EXTERNAL_INTERFACE -j TIVO_REJECT
     #
     # If you are running Samba on this machine, it can generate a lot of
     # packets bound for the external interface (i.e. three every 30 seconds)
     # that will be rejected and logged.  These packets are employed in its
     # feeble attempt to discover any other Mickeysoft networks in the outside
     # world.  Needless to say, we don't care about filling our log up with
     # this incessant drivel.
     #
     # The following two rules should ditch this junk nicely.  Basically, we
     # were going to toss any packets that made it this far anyway.  We just
     # check to see if the protocol is either IGMP or Mickeysoft's network
     # discovery protocol and if it is, we reject the packet silently.  If
     # you don't care about keeping your log clean or don't use Samba, you can
     # safely comment out these two rules.
     #
     $IPTABLES -I OUTPUT -p pim -o $EXTERNAL_INTERFACE -j DROP
     $IPTABLES -I OUTPUT -p igmp -o $EXTERNAL_INTERFACE -j DROP

/etc/syslog.conf:

Add the following lines to the syslog configuration to cause the packet filter messages to be logged to their own file:

     # Log packet filter (firewall) messages to a special file.
     kern.=debug                                             -/var/log/firewall

/etc/logrotate.d/narc:

Add the file named above and include the following lines in it (or hack the /etc/logrotate.conf file itself):

     /var/log/firewall {
         missingok
         notifempty
     }
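
You can dry-run the new logrotate entry with the -d (debug) flag, which prints what logrotate would do without actually rotating anything:

```shell
logrotate -d /etc/logrotate.d/narc
```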

If you need to customize the firewall rules, you can add individual rules to the /etc/narc/narc-custom.conf file. Therein, you simply add calls to iptables to create the rules that you want. If you go down this path, you may find this description of the Netfilter architecture useful:

     http://www.netfilter.org/documentation/HOWTO//netfilter-hacking-HOWTO-3.html

Knowing how packets are routed by the kernel may provide you with insight about where you want to filter or alter them so that they can be dropped, rerouted or marked as you wish.
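
As a simple illustration (the address is an example, not taken from the configuration above), a custom rule in /etc/narc/narc-custom.conf might keep one misbehaving internal host off the Internet entirely:

```shell
# Hypothetical example: silently drop all forwarded traffic from
# 192.168.1.66 bound for the external interface.  $IPTABLES and
# $EXTERNAL_INTERFACE are the variables NARC defines for custom rules.
$IPTABLES -I FORWARD -s 192.168.1.66 -o $EXTERNAL_INTERFACE -j DROP
```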

DMZ

Set up the DMZ on a third NIC (to keep the DMZ packets separate from those on your regular network). Define the NIC (in this example eth2) with a different subnet than your regular network. If you are using the Buffalo wireless router on this subnet, it comes preconfigured with a 192.168.11.x address so that subnet is a good choice (assuming the regular network is 192.168.1.x).

/etc/sysconfig/network-scripts/ifcfg-ethx:
/etc/sysconfig/networking/devices/ifcfg-ethx:
/etc/sysconfig/networking/profiles/default/ifcfg-ethx:

To set up the NIC, you can look in /etc/sysconfig/network-scripts/ifcfg-ethx, /etc/sysconfig/networking/devices/ifcfg-ethx and /etc/sysconfig/networking/profiles/default/ifcfg-ethx (where "x" is your network adapter's number) for the NIC setup. Note that you must not use uppercase letters in the hexadecimal MAC address set by the HWADDR parameter. If you do, the brain dead code in /sbin/ifup and /sbin/ifdown will not work properly. Here's a sample for the DMZ NIC.

/etc/sysconfig/network-scripts/ifcfg-eth2:

     USERCTL=no
     PEERDNS=no
     TYPE=Ethernet
     DEVICE=eth2
     HWADDR=00:40:33:d3:02:d0
     BROADCAST=192.168.11.255
     IPADDR="192.168.11.1"
     NETMASK="255.255.255.0"
     NETWORK=192.168.11.0
     BOOTPROTO=none
     ONBOOT=yes

If you want to serve IP addresses via DHCP on the DMZ subnet, add a subnet to the DHCP configuration file for the DMZ subnet.

/etc/dhcpd.conf:

This is the complete DHCP daemon config file with the DMZ subnet included. It should be set up something like:

     authoritative;
     default-lease-time 7200;
     max-lease-time 86400;
     option subnet-mask 255.255.255.0;
     option domain-name-servers 151.203.0.84, 151.203.0.85;
     option domain-name "mydomain.com";
     ddns-update-style ad-hoc;
     subnet 192.168.1.0 netmask 255.255.255.0 {
         option broadcast-address 192.168.1.255;
         option routers 192.168.1.1;
         range 192.168.1.150 192.168.1.200;
     }
     subnet 192.168.11.0 netmask 255.255.255.0 {
         option broadcast-address 192.168.11.255;
         option routers 192.168.11.1;
         range 192.168.11.150 192.168.11.200;
     }

To provide packet forwarding from the DMZ to the outside world, the NARC firewall should be set up to allow a DMZ.

/etc/narc/narc.conf:

This just shows the changes you'll have to make to the standard NARC configuration to enable the DMZ:

     # PortForwarding section - Requires masquerading and forwarding.
     PORT_FORWARD="no"                       # This will not have any effect
                                             # unless MASQUERADE is enabled
     DMZ_INTERFACE="eth2"                    # DMZ interface (technically, you can
                                             # use your LAN interface as well -
                                             # bad security practice)
     PROTECT_FROM_DMZ="no"                   # "yes" or "no" -  Protect firewall
                                             # from DMZ network
     FORWARD_LAN_TO_DMZ="no"                 # Forward traffic from LAN to DMZ
     FORWARD_CONF="/etc/narc/narc-forward.conf"
                                             # Edit this file for port forwarding

Using this setup, machines from within the DMZ will be able to access machines on the internal network. If you'd rather not allow this to happen, you need to configure a custom NARC rule that prevents bridging between the DMZ subnet and the internal subnet.

/etc/narc/narc-custom.conf:

Adding the following lines to this file will prevent any packets originating in the DMZ from being bridged to the internal network:

     #
     # Rule to prevent packets from traversing from the DMZ subnet to the internal
     # subnet to keep viruses and other nasty stuff from getting at the good
     # stuff.  The other direction is OK, presumably.
     #
     # Note that all attempts to bridge from the DMZ to the internal subnet are
     # logged with a prefix of BRIDGE.
     #
     $IPTABLES -N BRIDGE_REJECT
     $IPTABLES -A BRIDGE_REJECT -j LOG --log-level $NORM_LOG_LEVEL \
               --log-prefix \"BRIDGE \" --log-ip-options --log-tcp-options
     $IPTABLES -A BRIDGE_REJECT -j REJECT
     # Hook the rule in to the forward chain.
     $IPTABLES -I FORWARD -i $DMZ_INTERFACE -o $LAN_INTERFACE -j BRIDGE_REJECT
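
If you want to preview what those NARC rules expand to, here is a hedged stand-alone sketch with the NARC variables filled in by hand (the LAN interface name and log level are assumptions; setting IPTABLES to "echo iptables" just prints the rules instead of loading them, so nothing needs root):

```shell
# Preview of the custom BRIDGE_REJECT rules with the NARC variables expanded.
# IPTABLES="echo iptables" prints each rule; change it to plain "iptables"
# and run as root to actually load them.
IPTABLES="echo iptables"
DMZ_INTERFACE=eth2      # from narc.conf above
LAN_INTERFACE=eth1      # assumption: your internal NIC
NORM_LOG_LEVEL=info     # assumption: NARC's normal log level
$IPTABLES -N BRIDGE_REJECT
$IPTABLES -A BRIDGE_REJECT -j LOG --log-level $NORM_LOG_LEVEL \
          --log-prefix "BRIDGE " --log-ip-options --log-tcp-options
$IPTABLES -A BRIDGE_REJECT -j REJECT
$IPTABLES -I FORWARD -i $DMZ_INTERFACE -o $LAN_INTERFACE -j BRIDGE_REJECT
```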

DMZ Bridge User

In order to pass files back and forth to machines in the DMZ, you may want to set up a DMZ bridge user. Essentially, this is a dummy user that has no permissions (and no login shell) whose directory tree can be accessed from the machines inside the DMZ via Samba. Machines outside of the DMZ can copy files into that tree, where they are visible to DMZ machines, but nothing inside the DMZ can access any files except the ones in the bridge user's tree. Also, since the bridge user has no login shell and no permissions, even knowing the userid's password will do a bad guy no good.

Begin by updating the Samba configuration to work on the DMZ subnet.

Add userid bridge:

     /usr/sbin/useradd -c "Bridge to DMZ" -m -s /sbin/nologin bridge

Since useradd likes to put a bunch of ka-ka in a regular user's home directory, delete all of the dot files in the bridge user's home dir:

     rm -f /home/bridge/.ba /home/bridge/.em /home/bridge/.gt*

Edit /etc/group and add important users to group bridge. This will allow regular users to copy files to the bridge user's directory tree.

Change the permissions on /home/bridge to allow group members access to the directory tree:

     chmod g+rwx /home/bridge

Add a password for user bridge to Samba:

     smbpasswd -a bridge secretpassword

or

     pdbedit -a -u bridge
     type secretpassword twice

In the Samba configuration file, add the following to all directories except the bridge user's home directory:

     invalid users = bridge

This will exclude the bridge user from all directories except the one they are meant to see. Probably not necessary but just in case.
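
The bridge-user steps above can be sketched as one script. This is a hedged recap, not a drop-in tool: the dot-file cleanup is generalized (the doc lists specific files), editing /etc/group is left as a comment, and RUN=echo previews the commands so you can inspect them before running for real (as root, with RUN set to empty):

```shell
# Recap of the DMZ bridge user setup.  RUN=echo (the default) just prints
# each command; run with RUN= as root to actually apply them.
RUN=${RUN:-echo}
$RUN /usr/sbin/useradd -c "Bridge to DMZ" -m -s /sbin/nologin bridge
$RUN rm -f /home/bridge/.[!.]*   # generic dot-file cleanup (see above)
# Next: edit /etc/group by hand and add your regular users to group bridge.
$RUN chmod g+rwx /home/bridge    # let group members at the tree
$RUN smbpasswd -a bridge         # then type the password twice
```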

GCC

Before proceeding any further, you may want to install a known-working version of the C compiler to use for building everything else that is built from source (for example, MySQL recommends building the database with gcc 3.3).

Known-working versions include 3.2, 3.3 and maybe the 4.x line. However, think about the whole zeitgeist very carefully before you decide to build a new compiler. If you break it, you could very well break a lot of other things, especially glibc. For example, if you choose the 3.4 version, you basically screw building any of the 2.2.x or 2.3.x versions of glibc because "__thread" became a keyword in gcc 3.4 but glibc 2.2.x and 2.3.x use it as a variable name.

Still not dissuaded? OK, begin by downloading the chosen compiler source from http://gcc.gnu.org into a source directory (e.g. /rpm/gcc).

Extract the source:

     cd /rpm/gcc
     tar -xvzf gcc-x.y.z.tar.gz

Note that you will have to have an existing copy of the compiler to bootstrap the build of the new compiler. If you do not, read the notes in the source directory under gcc-x.y.z/INSTALL/index.html. Click on the Prerequisites link to see which compiler you can use for bootstrapping.

Create an object directory for building the compiler in. Once you've done that, configure and build the compiler:

     mkdir gcc-x.y.z-obj
     cd gcc-x.y.z-obj
     ../gcc-x.y.z/configure --prefix=/usr --enable-threads
     make bootstrap

Note that the default installation directory for configure is "/usr/local". However, many systems come with gcc installed in "/usr". Hence, if you wish to replace an older gcc, you need to specify "--prefix=/usr". Also, if you are going to be doing anything with threads, you need to supply the enable parameter just in case. Sometimes, even if thread support is available on your platform, the configure script misses it and does not configure the compiler properly for thread support. So, it doesn't hurt to force the issue.

Install the completed build as super user. If you are overwriting the current version of the compiler with this new one, you may want to test it first:

     su
     make install

BINUTILS

The latest version of the binary utils package (which includes the assembler, linker, loader and archiver) may be required to build the latest glibc. If you need to build binutils, begin by downloading its source from http://www.gnu.org/software/binutils/.

Extract the source:

     cd /rpm/binutils
     tar -xvzf binutils-x.y.z.tar.gz

Then, configure and build the binary utilities:

     ./configure --prefix=/usr
     make

Note that the default installation directory for configure is "/usr/local". However, many systems come with the binary utilities installed in "/usr". Hence, if you wish to replace older ones, you need to specify "--prefix=/usr".

Install the completed build as super user:

     su
     make install
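
To confirm which assembler and linker are actually being picked up after the install (glibc's configure inspects these for required features), you can check their version banners:

```shell
# Show the GNU version banners of the assembler and linker on the PATH.
as --version | head -n 1
ld --version | head -n 1
```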

GLIBC

The latest applicable version of glibc may be required to build some of the components to be installed on your system. For example, if you are using "--with-mysqld-ldflags=-all-static" on the MySQL build, the earlier versions of glibc (e.g. 2.2.93) can cause it to segfault when linked to this library. However, glibc versions 2.4 and above are only meant to run on 2.6 and higher Linux kernels, using the NPTL implementation of pthreads, which is now the default configuration. The stable 2.3 release series continues to be maintained and can be used on the 2.4 kernels found in earlier versions of Linux (such as RedHat 8).

Consider the implications very carefully before beginning to build glibc. Getting it to build properly is a total pain in the butt, mainly because the library guys didn't bother to talk to the compiler guys, so the changes each side makes break the other, and vice versa. Also, threads support continues to be a major goat rope. I guess it must be hard, because it changes with every release and it never seems to work right.

Versions of glibc known to work with the required build tools are as follows:

     glibc      threads                       gcc        binutils
     2.3.6      glibc-linuxthreads-2.3.6      3.3.6      2.17

Finally, installing the new glibc on a running system is nigh-on impossible. The reason is that every command and service running on the system probably depends on glibc in one way or another and, since it is dynamically linked, as soon as you replace it, they all immediately start using it. And, in classic Unix package manager style, each of the system components is inextricably linked to all of the others, so it is not possible to replace one piece at a time. All must be replaced en masse.

Essentially, here's what happens with the typical "make install" of the new glibc into the running system. The install starts copying files into the running system until it gets to the point where it whacks the glibc dynamic library. The new library has links to symbols that are part of the loader, but the loader dynamic library hasn't been replaced yet (the same problem happens if the loader and glibc are replaced in the other order, so give up on that bright idea), so anything that tries to load now fails with an error that looks something like this: "relocation error: /lib/i686/libc.so.6: symbol _dl_starting_up, version GLIBC_PRIVATE not defined in file ld-linux.so.2 with link time reference". Essentially, the new glibc symbols do not match the old loader.

Of course, it's the loader you just effectively hosed, so everything stops working, including the copy command in the makefile which was about to copy the correct loader into the shared library directory. It fails. So does everything else, including init, and your system is as good as broken.

Recovery is possible by pulling the drive out of the system and mounting it on some other, working system. Then, the old glibc can be copied back on top of it and the system may come back. This method also gives a clue as to how to install the new glibc with a hope of success. See below for the answer.
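
Before going any further, it's worth confirming exactly which glibc the running system actually has; both getconf and ldd report it:

```shell
# Report the glibc version the running system is using.
getconf GNU_LIBC_VERSION       # e.g. "glibc 2.17"
ldd --version | head -n 1
```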

Meanwhile, if you still think it's a good idea to build the new glibc, begin by downloading the chosen source from http://www.gnu.org/software/libc/ into a source directory (e.g. /rpm/glibc).

Extract the source:

     cd /rpm/glibc
     tar -xvzf glibc-x.y.z.tar.gz

At the very least, you should get the thread add-on for earlier versions of glibc (i.e. before 2.4). To do this, download glibc-linuxthreads-x.y.z.tar.gz from the same place where you got the source for glibc.

The add-on source needs to be extracted in the top level source directory. Do it like so:

     cd /rpm/glibc/glibc-x.y.z
     tar -xvzf ../glibc-linuxthreads-x.y.z.tar.gz
     cd ..

Create an object directory for building the library in. Once you've done that, configure and build the library:

     mkdir glibc-x.y.z-obj
     cd glibc-x.y.z-obj
     ../glibc-x.y.z/configure --prefix=/usr --enable-add-ons \
                              --enable-kernel=2.4.18
     make

or, if TLS is a problem, you probably want to only enable the linuxthreads add-on and disable TLS:

     ../glibc-x.y.z/configure --prefix=/usr --enable-add-ons=linuxthreads \
                              --enable-kernel=2.4.18 --without-tls

Note that the default installation directory for configure is "/usr/local". However, many systems come with glibc installed in "/usr". Hence, if you wish to replace an older glibc, you need to specify "--prefix=/usr". Also, for Linux systems only, the minimum kernel version should be made known to configure. This will cause compatibility code to be included in glibc so that it can run on the kernel in question, plus all of the later kernels that it encounters.

If you have problems with the glibc build, the following might be useful.

  1. If the configure craps out because it says: "CFI directive support in assembler is required", you should get the latest binutils, and build and install them (see above).
  2. If the configure dies because it says "forced unwind support is required", you need to get a later version of the compiler. Just be careful that you get one that is compatible with the glibc that you're trying to build.
  3. If you see an error like "failed when setting up thread-local storage" in one of the build tests run by make, it's because TLS is not working. TLS requires kernel support but this support is generally not in the 2.4.x versions of the kernel. If that's the case, build your glibc configured "--without-tls".
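
Related to item 3, you can check which pthreads implementation the current system reports; NPTL generally implies a kernel/glibc combination with working TLS:

```shell
# "NPTL x.y" means the modern implementation (TLS available);
# "linuxthreads" means the older, pre-NPTL implementation.
getconf GNU_LIBPTHREAD_VERSION
```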

Since you are overwriting the current version of the library with this new one, you may want to test it first (hint, hint). To do this, type:

     make check

If the check worked OK, it is time to install the completed build onto your system. The best way to do this is with a second system. Remove the disk from the first system, where the newly built glibc resides and mount it on the second system. Find all occurrences of "/usr" in the build tree files and replace them with "/mnt/usr" (or whatever the mount point is). At that point, as super user, you should be able to install onto the mounted system drive and have things work OK when you reinstall it in the original system:

     su
     make install

One final parting shot. If you get failures during the copying of the locale files, having to do with invalid character sequences, you will have to edit the Makefile in the ../po directory. Add each of the broken locale files to the list under BROKEN_LINGUAS. The examples shown are the ones I've found to be broken:

     BROKEN_LINGUAS = cs da el es fi fr gl hu it ja ko nb pl pt_BR sv zh_CN zh_TW

Incidentally, it's a lot easier to upgrade to a new OS with the version of glibc that you want than to build and install a new one. Things that make you go, "Hmmmmm".

Samba

SMB-based file sharing for Unix. "Opening Windows to a Wider World".

The following URL has good notes about setting up Samba. Might want to read them, if you run into trouble:

     http://home.nyc.rr.com/computertaijutsu/samba.html

Also note that a lot of Samba problems can be caused by the firewall on the Samba server. For example, on some systems, even though the firewall config tool says that Samba is open on the firewall, not all packets may be getting through. So, for a lot less grief, you may want to disable the firewall before proceeding with the next steps.

To get the latest version of Samba, see www.samba.org. Download the tar ball, change to the directory where it was downloaded and unpack it:

     tar -xvzf samba-3.0.24.tar.gz

Change to the source directory, and configure and build the source:

     cd samba-3.0.24/source
     ./configure
     make

As super user, install Samba:

     su
     make install

If you want to be able to find the Samba tools using the usual PATH environment variable, you can put symlinks to them into /usr/bin (where the Samba tools are typically installed by RedHat RPMs) by doing something like this:

     find /usr/local/samba/bin -exec ln -f -s \{\} /usr/bin \;

Note that, if you make any changes to Samba, you should do the following:

  1. Reboot all NT and Win 98 workstations (Windows caches SMBs and this can lead to changes not taking effect).
  2. Restart the smbd and nmbd processes.

.../etc/hosts & .../etc/lmhosts:

Add the samba server name to these two files on all of the Win 98/NT/2000/XP boxes. On NT/2000, .../etc/hosts is in /WinNT/System32/Drivers/Etc. On XP, .../etc/hosts is in /WINDOWS/System32/Drivers/Etc (the lmhosts file is in the same place as the hosts file on all systems). For example:

     192.168.1.1          mysys
     192.168.1.1          stargate       # <==== Exact name used for server
          .
          .
          .
     #BEGIN_ALTERNATE
     #INCLUDE     C:\WINNT\system32\drivers\etc\lmhosts.npt
     #END_ALTERNATE

It is very important that you do this and that the name added to the hosts file matches the name the Samba server is advertising under exactly.

/etc/hosts:

If you wish to refer to the workstations by name on the Samba server (e.g. if you will be using Samba client), add the Win 98/NT/2000/XP workstation names to /etc/hosts on the Unix box. For example:

     127.0.0.1       localhost.homeworld     localhost
     192.168.1.1     stargate.homeworld      stargate
     192.168.1.2     gabriella.homeworld     gabriella  # <==== workstation
     192.168.1.3     clara-bow.homeworld     clara-bow  # <==== names
     192.168.1.3     clara_bow.homeworld     clara_bow  # <====

/etc/rc.d/init.d/smb:

If you do not already have a Samba services script installed in your init.d subdirectory, here is one that should work.

     #!/bin/sh
     #
     # chkconfig: 2345 91 35
     # description: Starts and stops the Samba smbd and nmbd daemons \
     #               used to provide SMB network services.
     #
     # pidfile: /var/run/samba/smbd.pid
     # pidfile: /var/run/samba/nmbd.pid
     # config:  /etc/samba/smb.conf
     #BinDir=/usr/sbin/
     BinDir=/usr/local/samba/sbin/
     ConfigFile=/etc/samba/smb.conf
     # Source function library.
     if [ -f /etc/init.d/functions ] ; then
       . /etc/init.d/functions
     elif [ -f /etc/rc.d/init.d/functions ] ; then
       . /etc/rc.d/init.d/functions
     else
       exit 0
     fi
     # Avoid using root's TMPDIR
     unset TMPDIR
     # Source networking configuration.
     . /etc/sysconfig/network
     if [ -f /etc/sysconfig/samba ]; then
        . /etc/sysconfig/samba
     fi
     # Check that networking is up.
     [ "${NETWORKING}" = "no" ] && exit 0
     # Check that smb.conf exists.
     #[ -f /etc/samba/smb.conf ] || exit 0
     [ -f ${ConfigFile} ] || exit 0
     # Check that we can write to it... so non-root users stop here
     #[ -w /etc/samba/smb.conf ] || exit 0
     [ -w ${ConfigFile} ] || exit 0
     RETVAL=0
     start() {
             KIND="SMB"
             echo -n $"Starting $KIND services: "
     #        daemon ${BinDir}smbd $SMBDOPTIONS
             ${BinDir}smbd -D -s ${ConfigFile} $SMBDOPTIONS
             RETVAL=$?
             echo
             KIND="NMB"
             echo -n $"Starting $KIND services: "
     #        daemon ${BinDir}nmbd $NMBDOPTIONS
             ${BinDir}nmbd  -D -s ${ConfigFile} $NMBDOPTIONS
             RETVAL2=$?
             echo
             [ $RETVAL -eq 0 -a $RETVAL2 -eq 0 ] && touch /var/lock/subsys/smb || \
                RETVAL=1
             return $RETVAL
     }
     stop() {
             KIND="SMB"
             echo -n $"Shutting down $KIND services: "
             killproc smbd
             RETVAL=$?
             echo
             KIND="NMB"
             echo -n $"Shutting down $KIND services: "
             killproc nmbd
             RETVAL2=$?
             [ $RETVAL -eq 0 -a $RETVAL2 -eq 0 ] && rm -f /var/lock/subsys/smb
             echo ""
             return $RETVAL
     }
     restart() {
             stop
             start
     }
     reload() {
             echo -n $"Reloading smb.conf file: "
             killproc smbd -HUP
             RETVAL=$?
             echo
             return $RETVAL
     }
     rhstatus() {
             status smbd
             status nmbd
     }
     case "$1" in
       start)
               start
             ;;
       stop)
               stop
             ;;
       restart)
               restart
             ;;
       reload)
               reload
             ;;
       status)
               rhstatus
             ;;
       condrestart)
               [ -f /var/lock/subsys/smb ] && restart || :
             ;;
       *)
             echo $"Usage: $0 {start|stop|restart|reload|status|condrestart}"
             exit 1
     esac
     exit $?

/etc/rc.d/rcx.d:

Add the smbd and nmbd services to the system startup by running chkconfig:

     /sbin/chkconfig --add smb
     /sbin/chkconfig smb on

This should add the services to runlevels 3, 4 & 5.

When you have finished all of the rest of the Samba configuration, you can start Samba via:

     /etc/rc.d/init.d/smb start

/var/log/samba:

You may need to create a log directory for Samba to write its logfiles into. The name of the logfile is defined in smb.conf (see below) but the traditional location is /var/log/samba. To create this directory:

     mkdir /var/log/samba
     chmod go= /var/log/samba

/etc/samba:

If the Samba configuration directory hasn't already been created, create one now:

     mkdir /etc/samba

The permissions should look like this:

     drwxr-xr-x     root     root

/etc/samba/lmhosts:

If you will be using Samba client (on the server, to access the Windows workstations), add the Win NT/98/2000/XP workstation names to /etc/samba/lmhosts on the Unix box. For example:

     127.0.0.1       localhost
     192.168.1.1     stargate
     192.168.1.2     gabriella           # <==== Workstation names
     192.168.1.3     clara-bow           # <==== Note that Unix uses '-'
     192.168.1.3     clara_bow           # <==== Note that Winduhs uses '_'

If you already have the workstations defined in /etc/hosts, note that the lmhosts file is basically a copy of it, so you can start by copying /etc/hosts to /etc/samba/lmhosts and then just editing it down:

     cp /etc/hosts /etc/samba/lmhosts

The permissions should look like:

     -rw-r--r--     root     root

On the other hand, if you won't be using the Samba client on the server, you can skip all of the steps having to do with /etc/samba/lmhosts.

/etc/samba/smbusers (or /etc/smbusers for older versions of Samba):

Set up equivalences between Windows login names and login names on the Samba server in this file. For example:

     # Unix_name = SMB_name1 SMB_name2 ...
     root = administrator admin
     nobody = guest pcguest smbguest
     joeblow = joe
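
The mapping reads "Unix name on the left, one or more SMB names on the right". As an illustrative sketch (not part of Samba; the file here is just a temporary copy of the sample above), this is how a given SMB name resolves to a Unix name:

```shell
# Resolve an SMB login name to its Unix name using the smbusers format.
cat > /tmp/smbusers.sample <<'EOF'
# Unix_name = SMB_name1 SMB_name2 ...
root = administrator admin
nobody = guest pcguest smbguest
joeblow = joe
EOF
smb_to_unix() {
    awk -v who="$1" '!/^#/ {
        for (i = 3; i <= NF; i++) if ($i == who) { print $1; exit }
    }' /tmp/smbusers.sample
}
smb_to_unix admin   # prints "root"
smb_to_unix joe     # prints "joeblow"
```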

/etc/samba/smbpasswd (or /etc/smbpasswd for older versions of Samba):

Older versions of Samba (less than 3.x) used this file. For newer versions of Samba, see pdbedit, below.

You must set up passwords for Samba if you are using Windows NT 4.0, SP3 and above. You should generate the smbpasswd file from your /etc/passwd file using the following command:

     cat /etc/passwd | mksmbpasswd.sh > /etc/samba/smbpasswd

If you are running on a system that uses NIS, use:

     ypcat passwd | mksmbpasswd.sh > /etc/samba/smbpasswd

Or, if you are running on a system that doesn't have mksmbpasswd.sh installed (e.g. SuSE), you can make up the password file by hand. You need entries that look like this:

     username:nnn:00000000000000000000000000000000:\
       00000000000000000000000000000000:[U]:LCT-00000000:comments

Where "username" and "nnn" are the user's name and userid from the /etc/passwd file.
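
Hand-building those entries is error-prone, so here is a small sketch that emits one disabled entry in the format shown above (the username, uid and comment are example values):

```shell
# Emit a disabled (all-zero hash) smbpasswd entry for one user.
mkentry() {
    zeros=00000000000000000000000000000000
    printf '%s:%s:%s:%s:[U]:LCT-00000000:%s\n' "$1" "$2" "$zeros" "$zeros" "$3"
}
mkentry joeblow 502 "disabled by hand"
```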

All of the users are given disabled passwords in this file, by default. To set passwords that Samba can use, run:

     smbpasswd username

Because this file contains secret passwords, you should make sure that it is not readable by regular users:

     chmod go= /etc/samba/smbpasswd

You can find the full description of how all this works in the Samba HOWTO, either the PDF or HTML versions, which is in the docs directory of the build tree:

     ../docs/Samba3-HOWTO.pdf
     ../docs/htmldocs/Samba3-HOWTO/index.html

pdbedit

For newer versions of Samba (3.x and above), it looks like passwords must be added by hand using pdbedit. Basically, you must run, as root:

     pdbedit -a -u username

for each user that you wish to add. Note that the username is the local userid, not the Windows userid (which is mapped to a local userid by /etc/samba/smbusers, above).

/etc/samba/smb.conf or /etc/smb.conf (for older versions of Samba):

Configure Samba by hacking /etc/samba/smb.conf. Pay attention to the following (especially the "interfaces" IP addresses, which should be set to your machine's IP address plus 127.0.0.1):

     workgroup = WORKGROUP
     # netbios name = MRSERVER  <== Set this only if you don't want to use the
                                    machine's name from /etc/sysconfig/network
     comment = mysys
     server string = Samba %v Server
     hosts allow = 192.168.1. 127.
     interfaces = 192.168.1.1/24 127.0.0.1    <==== Machine's IP address here
     remote browse sync = 192.168.1.255
     local master = yes
     domain master = yes
     [homes]
     [Root]

For versions of Samba before 3.x, you will probably want to pay attention to the password stuff:

     encrypt passwords = yes
     smb passwd file = /etc/samba/smbpasswd
     unix password sync = no
     username map = /etc/samba/smbusers

For versions of Samba 3.x and above, you simply need to use:

     passdb backend = tdbsam

You can copy the sample config file from the build directory tree (and, after any edits, run "testparm" to check it for syntax errors):

     cp ../examples/smb.conf.default /etc/samba/smb.conf

Or, if you'd like, here is a sample of a complete config file:

     # This is the main Samba configuration file. You should read the
     # smb.conf(5) manual page in order to understand the options listed
     # here. Samba has a huge number of configurable options (perhaps too
     # many!) most of which are not shown in this example
     #
     # Any line which starts with a ; (semi-colon) or a # (hash)
     # is a comment and is ignored. In this example we will use a #
     # for commentary and a ; for parts of the config file that you
     # may wish to enable
     #
     # NOTE: Whenever you modify this file you should run the command "testparm"
     # to check that you have not made any basic syntactic errors.
     #
     #======================= Global Settings ===================================
     [global]
     # workgroup = NT-Domain-Name or Workgroup-Name
         workgroup = WORKGROUP
         comment = mysys
     # server string is the equivalent of the NT Description field
         server string = Samba %v Server
     # This option is important for security. It allows you to restrict
     # connections to machines which are on your local network. The
     # following example restricts access to two C class networks and
     # the "loopback" interface. For more examples of the syntax see
     # the smb.conf(5) man page
         hosts allow = 192.168.1. 127.
     # if you want to automatically load your printer list rather
     # than setting them up individually then you'll need this
     ;    printcap name = /etc/printcap
     ;    load printers = yes
     # It should not be necessary to spell out the print system type unless
     # yours is non-standard. Currently supported print systems include:
     # cups, bsd, sysv, plp, lprng, aix, hpux, qnx
     ;   printing = cups
     # Uncomment this if you want a guest account, you must add this to
     # /etc/passwd otherwise the user "nobody" is used
     ;   guest account = pcguest
     # this tells Samba to use a separate log file for each machine
     # that connects
         log file = /var/log/samba/log.%m
     # Have no cap on log file size or put a cap on the size of the log
     # files (in Kb).
     ;   max log size = 0
         max log size = 50
     # Security mode. Most people will want user level security. See
     # security_level.txt for details.
         security = user
     # Use password server option only with security = server
     ;   password server = <NT-Server-Name>
     # For versions of Samba prior to 3.x.
     #
     # Username and password level allows the matching of up to n characters of
     # the username and password in mixed case.  All combinations of upper and
     # lower case with up to n letters mixed are tried.  A value of zero tries
     # two usernames and two passwords and is probably what you want.
         username level = 0
         password level = 0
     # Disallow access to accounts that have null passwords.
         null passwords = no
     # You may wish to use password encryption. Please read
     # ENCRYPTION.txt, Win95.txt and WinNT.txt in the Samba documentation.
     # Do not enable this option unless you have read those documents
         encrypt passwords = yes
         smb passwd file = /etc/samba/smbpasswd
     # The following are needed to allow password changing from Windows to
     # update the Linux system password also.
     # NOTE: Use these with 'encrypt passwords' and 'smb passwd file' above.
     # NOTE2: You do NOT need these to allow workstations to change only
     #        the encrypted SMB passwords. They allow the Unix password
     #        to be kept in sync with the SMB password.
         unix password sync = no
     ;   passwd program = /usr/bin/passwd %u
     ;   passwd chat = NewUNIXpassword %n\n ReTypenewUNIXpassword* %n\n \
                       passwd:allauthenticationtokensupdatedsuccessfully*
     # End of stuff for Samba versions < 3.x.
     # Unix users can map to different SMB User names
         username map = /etc/samba/smbusers
     # Using the following line enables you to customise your configuration
     # on a per machine basis. The %m gets replaced with the netbios name
     # of the machine that is connecting
     ;   include = /etc/samba/smb.conf.%m
     # Most people will find that this option gives better performance.
     # See speed.txt and the manual pages for details
     ;   socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
     # Configure Samba to use multiple interfaces
     # If you have multiple network interfaces then you must list them
     # here. See the man page for details.
     ;   interfaces = 192.168.1.1/24
         interfaces = 192.168.1.1/24 127.0.0.1
         bind interfaces only = True
     # Configure remote browse list synchronisation here
     #  request announcement to, or browse list sync from:
     #     a specific host or from / to a whole subnet (see below)
     ;   remote browse sync = 192.168.3.25 192.168.5.255
         remote browse sync = 192.168.1.255
     # Cause this host to announce itself to local subnets here
     ;   remote announce = 192.168.1.255 192.168.2.44
     # Browser Control Options:
     # set local master to no if you don't want Samba to become a master
     # browser on your network. Otherwise the normal election rules apply
     ;   local master = no
         local master = yes
     # OS Level determines the precedence of this server in master browser
     # elections. The default value should be reasonable
         os level = 33
     # Domain Master specifies Samba to be the Domain Master Browser. This
     # allows Samba to collate browse lists between subnets. Don't use this
     # if you already have a Windows NT domain controller doing this job
         domain master = yes
     # Preferred Master causes Samba to force a local browser election on
     # startup and gives it a slightly higher chance of winning the election
         preferred master = yes
     # Use only if you have an NT server on your network that has been
     # configured at install time to be a primary domain controller.
     ;   domain controller = <NT-Domain-Controller-SMBName>
     # Enable this if you want Samba to be a domain logon server for
     # Windows95 workstations.
     ;   domain logons = yes
     # if you enable domain logons then you may want a per-machine or
     # per user logon script
     # run a specific logon batch file per workstation (machine)
     ;   logon script = %m.bat
     # run a specific logon batch file per username
     ;   logon script = %U.bat
     # Where to store roving profiles (only for Win95 and WinNT)
     #        %L substitutes for this servers netbios name, %U is username
     #        You must uncomment the [Profiles] share below
     ;   logon path = \\%L\Profiles\%U
     # All NetBIOS names must be resolved to IP Addresses
     # 'Name Resolve Order' allows the named resolution mechanism to be specified
     # the default order is "host lmhosts wins bcast". "host" means use the unix
     # system gethostbyname() function call that will use either /etc/hosts OR
     # DNS or NIS depending on the settings of /etc/host.config,
     # /etc/nsswitch.conf and the /etc/resolv.conf file. "host" therefore is
     # system configuration dependant. This parameter is most often of use to
     # prevent DNS lookups in order to resolve NetBIOS names to IP Addresses.
     # Use with care! The example below excludes use of name resolution for
     # machines that are NOT on the local network segment
     # - OR - are not deliberately to be known via lmhosts or via WINS.
     ; name resolve order = wins lmhosts bcast
     # Windows Internet Name Serving Support Section:
     # WINS Support - Tells the NMBD component of Samba to enable it's WINS Server
         wins support = no
     # WINS Server - Tells the NMBD components of Samba to be a WINS Client
     #     Note: Samba can be either a WINS Server, or a WINS Client, but NOT both
     ;   wins server = w.x.y.z
     # WINS Proxy - Tells Samba to answer name resolution queries on
     # behalf of a non WINS capable client, for this to work there must be
     # at least one     WINS Server on the network. The default is NO.
     ;   wins proxy = yes
     # DNS Proxy - tells Samba whether or not to try to resolve NetBIOS names
     # via DNS nslookups. The built-in default for versions 1.9.17 is yes,
     # this has been changed in version 1.9.18 to no.
         dns proxy = no
          map to guest = never
          dead time = 0
          debug level = 0
     # Case Preservation can be handy - system default is no
     # NOTE: These can be set on a per share basis
     ;  preserve case = no
     ;  short preserve case = no
     # Default case is normally upper case for all DOS files
     ;  default case = lower
     # Be very careful with case sensitivity - it can break things!
     ;  case sensitive = no
     #============================ Share Definitions ============================
     [homes]
         comment = Home Directory
         browseable = no
         writable = yes
     [Root]
         comment = Root Directory
         path = /
         public = yes
         browseable = yes
         writeable = yes
         write list = @joeblow
     # Un-comment the following and create the netlogon directory for Domain
     # Logons
     ; [netlogon]
     ;   comment = Network Logon Service
     ;   path = /home/netlogon
     ;   guest ok = yes
     ;   writable = no
     ;   share modes = no
     # Un-comment the following to provide a specific roving profile share
     # the default is to use the user's home directory
     ;[Profiles]
     ;    path = /home/profiles
     ;    browseable = no
     ;    guest ok = yes
     # NOTE: If you have a BSD-style print system there is no need to
     # specifically define each individual printer
     #[printers]
     #    comment = All Printers
     #    path = /var/spool/samba
     #    browseable = no
     # Set public = yes to allow user 'guest account' to print
     #    guest ok = no
     #    writable = no
     #    printable = yes

CUPS

The Common Unix Printing System (CUPS) can be installed to provide better printer support than the usual BSD print system, if it is not already a part of your Linux distribution.

Download the latest version of CUPS from Easy Software (now Apple) at http://www.cups.org/software.php. Once you have downloaded the tarball, change to the directory where it was downloaded and unpack it:

     tar -xvzf cups-1.2.12-source.tar.gz

Change to the source directory, and configure and build the source (it is best to leave out SSL, unless you really need it):

     cd cups-1.2.12
     ./configure --disable-ssl
     make

As super user, install CUPS:

     su
     make install

The install should put a copy of the CUPS start script in your system's startup directory (typically /etc/rc.d/init.d) but it may not enable it. To do so, try the following:

     /sbin/chkconfig --add cups
     /sbin/chkconfig cups on
     /sbin/chkconfig --list cups

The result should show CUPS turned on at runlevels 2, 3, 4 and 5.

If you want to use CUPS right away, start it as follows:

     /etc/rc.d/init.d/cups start

Launch the CUPS UI via your Web browser. The first time around, you may need to do this from localhost (it depends on whether you are running a version of CUPS that supports the "WebInterface" setting in cupsd.conf and whether this setting is set to "Yes").

     http://localhost:631

Once you have the UI up and running, you can allow local machines to remotely administer CUPS by clicking the Administration tab and checking the "Allow remote administration" check box. Otherwise, you will always have to administer CUPS from localhost.
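
Behind the scenes, that check box simply toggles a few directives in cupsd.conf (usually /etc/cups/cupsd.conf). A sketch of what the relevant section looks like; the exact directives and defaults vary between CUPS versions, so treat this as illustrative rather than definitive:

```
# Listen on all interfaces instead of just localhost
Port 631
# Let machines on the local subnets reach the administration pages
<Location /admin>
  Order allow,deny
  Allow @LOCAL
</Location>
```

Restart the CUPS daemon after editing the file by hand, since it only rereads its configuration on startup.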

Pick the Administration tab.

Click "Add Printer"

     Printer Name: Laser
     Location: Office
     Description: NEC Silentwriter 95 LASER printer

Click "Continue" and then pick the device type as described below.

If you wish to set up a line printer server that uses the LPD protocol to print Postscript text, proceed as follows:

     Device: LPD/LPR server (from dropdown list)
     URI: lpd://kinkos/Laser (this example is for a Lantronix EPS-2 which has a
                              queue called "Laser" on Port_2)
     Note that you get the IP number and queue name from the server's setup but
     you should always use "lpd://".
     Make: Postscript
     Model: Generic postscript printer

If you wish to set up a line printer that uses the Raw protocol to print ASCII text, without modification (e.g. Genicom, DataProducts), proceed as follows:

     Device: AppSocket/JetDirect (from dropdown list)
     URI: socket://10.100.0.20:10001/PORT1 (this example is for a NetPrint 500/100
                                            which uses port 10001 [by default] and
                                            has a queue name of PORT1 [by
                                            default])
     Note that you get the IP number, port number and queue name from the server's
     setup but you should always use "socket://".
     Make: Raw
     Model: Raw queue

For each printer, you may want to test its printing capabilities after it is set up by clicking the Printers tab and then clicking its "Print Test Page" button.

Once you have set up all your printers, you should select one as the default by clicking the Printers tab and then clicking the "Set As Default" button for the printer that you wish to be the default. You can test the default print setting by doing an lpr command from the command prompt:

     lpr conf/cupsd.conf

CUPS Under Samba

You can use Samba to make all of the CUPS printers available to Windows/SMB clients. If your printer is a postscript printer, you just need to share it and then install it as described in the Windows 2000 or Windows XP steps below (there are more comprehensive notes at http://www.bsmdevelopment.com/Reference/Inst_Sys_CUPS.html). When it comes time to install the drivers, install the regular printer drivers from wherever you have them (e.g. manufacturer's install disk). These drivers will produce postscript output, which CUPS will handle just fine.

If your printer requires some kind of special driver (e.g. one of those cheap ink-dispensers, ooops, we meant inkjet printers, like the Canon printers), you will need to set up generic postscript drivers that can be used by any printer clients to create output to be spooled to the CUPS printers. Once that is done, the printers are defined under Samba and installed on any Windows client that wishes to use them.

Begin by getting the CUPS postscript drivers from the CUPS Web site (www.cups.org -- click the Windows tab to see the download page for the drivers -- you may have to look around for them a bit). The source tarball also includes precompiled DLLs. Once you've downloaded it, untar the files:

     tar -xvzf cups-windows-6.0-source.tar.gz

Create a drivers directory (under your CUPS install directory -- note that this is a different directory than the "driver" directory that is found in the CUPS library path) for the driver files and copy them into it:

     cd cups-windows-6.0
     su
     mkdir /usr/share/cups/drivers
     cp i386/* /usr/share/cups/drivers

Now, the CUPS postscript drivers also depend on the Windows postscript drivers, and the cupsaddsmb command on some versions of CUPS has a bug whereby it loops endlessly if the Windows postscript drivers cannot be found. To get them, you must copy the required files from a Windows 2000 or XP machine. If you are using Samba, this should be a snap. The directory to look in is:

     C:\WINNT\system32\spool\drivers\w32x86\3    (for Windows 2000)
     C:\WINDOWS\system32\spool\drivers\w32x86\3  (for Windows XP)

The four files that you want are:

     ps5ui.dll
     pscript.hlp
     pscript.ntf
     pscript5.dll

We created a directory named WindowsPSDrivers/i386 on our print server to hold them. Note that the same buggy cupsaddsmb (mentioned above) wants to use lowercase names for these drivers but some of the files may appear with uppercase names on Windoze. Windoze doesn't care but Unix/Linux sure does. So, lowercase all the names in the WindowsPSDrivers/i386 directory before proceeding.
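
A small shell loop will do the lowercasing for you. This sketch operates on a throwaway /tmp directory with dummy uppercase names standing in for the copied driver files; point it at your own WindowsPSDrivers/i386 instead:

```shell
# Demo directory; the uppercase names stand in for files copied from Windows
mkdir -p /tmp/psdrivers
touch /tmp/psdrivers/PSCRIPT5.DLL /tmp/psdrivers/PS5UI.DLL

# Rename every file in the directory to its lowercase equivalent
for f in /tmp/psdrivers/*; do
    base=$(basename "$f")
    lc=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
    [ "$base" = "$lc" ] || mv "$f" "/tmp/psdrivers/$lc"
done
ls /tmp/psdrivers
```

The loop is safe to run more than once; names that are already lowercase are left alone.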

Once you have these four files with lowercase names, copy them to the drivers directory that you created above:

     cd WindowsPSDrivers/i386
     su
     cp ps5ui.dll pscript.hlp pscript.ntf pscript5.dll /usr/share/cups/drivers
     chmod u-x /usr/share/cups/drivers/*

Note that on Windows 7, we first had to install a TCP/LPR printer with a bogus IP address and queue name to get the 64-bit versions of these files (they are not present on the system until a Postscript printer is installed). To do this, we fired up the Add Printer dialog and picked Network Printer. We immediately clicked Stop to stop the pointless search for printers and then clicked "The printer that I want isn't listed". Having done this, we were able to click "Add a printer using a TCP/IP address or hostname". From there, we picked a Device Type of TCP/IP Device, and filled in a Hostname and a Port Name. When the bogus port wasn't found, we selected Custom for the Device Type and then clicked Settings to set the Protocol to LPR and the Queue Name to something equally bogus. The next page allowed us to install the driver. We clicked the Windows Update button to search Windows Update for all available print drivers (this took about 10 minutes). From there, we selected Generic in the left column and Generic 35ppm PS in the right column. This caused the 64-bit drivers to be installed in the location noted below:

     C:\Windows\System32\spool\drivers\x64\3     (for Windows 7)

After we copied the drivers away (we created a directory named WindowsPSDrivers/x64 on our print server to hold them), we deleted the printer that we added. When you do this, you will first need to delete the TCP port. Begin by making sure that there are no print jobs pending on the printer (e.g. a spurious, failed test page that is hung). If any are pending, Windows will silently blow off all attempts to delete anything. Click on the bogus printer to select it on the Devices and Printers page. At the top of the page, "Print server properties" will appear. Click it to get a dialog box that has several tabs. Click the Ports tab and then find the port that you added. Select it and click Delete Port. Now, you can go back to the printer itself and right click on it. Pick Remove Device from the popup menu. If, for whatever reason, you do things in the reverse order, you can still get to the port to delete it by selecting any installed printer. The "Print server properties" selection will appear and you can proceed as noted above (the port and printer are unrelated as far as Windows 7 is concerned).

As above, the four files that you want are:

     ps5ui.dll
     pscript.hlp
     pscript.ntf
     pscript5.dll

However, these are the 64-bit versions (except for pscript.hlp) so you should put them in a different directory (as we stated above, we put them in WindowsPSDrivers/x64 on our print server). We still have to deal with the uppercase name thing so lowercase all the names before proceeding.

Once you have these four files with lowercase names, make a separate directory for them and then copy them into it:

     su
     mkdir /usr/share/cups/drivers/x64
     cd WindowsPSDrivers/x64
     cp ps5ui.dll pscript.hlp pscript.ntf pscript5.dll \
       /usr/share/cups/drivers/x64
     chmod u-x /usr/share/cups/drivers/x64/*

Add the printer definition information to the Samba config file. You'll have to figure out where to actually put the information but it should look something like this:

     [global]
       load printers = yes
       ; cups options = raw
        printcap name = cups
        printing = cups
     [printers]
       comment = All Printers
       path = /var/spool/samba
       browseable = no
       public = yes
       guest ok = yes
       writable = no
       printable = yes
       printer admin = root
     [print$]
       comment = Printer Drivers
       path = /etc/samba/drivers
       browseable = yes
       guest ok = no
       read only = yes
       write list = root

Also, make sure that the "cups options = raw" parameter is not present in the global options section of the config file or, if it is, that it is commented out. Some of the later default Samba config files have this parameter set.

If you didn't compile Samba against a version of libcups.so that will enumerate the printers for it (this could happen if CUPS wasn't installed on your system when you built Samba, for example), the "printcap name = cups" thing won't work. However, since CUPS exports all of its printer definitions to printcap automagically, you should be able to use "printcap name = /etc/printcap". Be sure that the smb.conf configuration file has these directives set/unset:

     load printers = yes
     ; cups options = raw
     printcap name = /etc/printcap
     printing = cups

Also, there may be a bug in some versions of Samba that causes printers not to be handled properly if the permissions of the user looking at the printer are not those of the printer administrator. The upshot is that the printer will appear to Winduhs as "permission denied", although the user will be able to print to it. Unfortunately, this precludes the user from setting up the printer, since Windoze will not proceed past the first step, once the printer is selected. The kludge is to add all of the users who will be setting up a printer to the following line in the "[printers]" or individual printer definition:

     printer admin = root, user1, user2, ...

Once the printer has been set up, the user names can be removed from the list. The users will get the annoying "permission denied" message but will still be able to print.

Be aware that, under Samba versions 3.x and above, "printer admin" is deprecated and should be left out of the configuration file. If the problem still exists, the following "[global]" option is reported to fix things:

     use client driver = Yes

Create a directory where Samba can put the printer drivers that it gets from CUPS:

     su
     mkdir /etc/samba/drivers

Finally, run the cupsaddsmb command to export the postscript printer drivers and the Windows PPD for the printer to Samba. To export a single printer, you might do this:

     su
     /usr/sbin/cupsaddsmb -H localhost -U root -v Photo

If you wish to export all of the printers that are defined on the system, you should do this:

     su
     /usr/sbin/cupsaddsmb -H localhost -U root -a -v

Note that you must supply the name of the Samba server (despite what the man page for cupsaddsmb says) or the command seems to fail. Also, you will be prompted for the root password under Samba (this may be different from your system's root password). On earlier versions of Samba, don't even think of running this command without the -v option. It is buggy and loops endlessly if there's any error from Samba, so you'll want to see WTF is going on.

Under some versions of Samba, this command bombs out while doing rpcclient. We suspect it is a problem with rpcclient, whereby it does not properly handle logins using a username file, supplied via the "-A" option in the cupsaddsmb code. Anyway, if cupsaddsmb gets an NT_STATUS_LOGON_FAILURE while trying to run rpcclient to register the newly-added driver (and loops endlessly), you can run the command yourself. Simply cut and paste the failed command from the error message and run it directly, after removing the -N and -A options and substituting -U root. It should look something like:

     rpcclient localhost -U root -c 'adddriver "Windows NT x86" \
       "Photo:pscript5.dll:Photo.ppd:ps5ui.dll:pscript.hlp:NULL:RAW:\
        pscript5.dll,Photo.ppd,ps5ui.dll,pscript.hlp,pscript.ntf,\
        cups6.ini,cupsps6.dll,cupsui6.dll"'

Then, you will need to manually connect the driver to the printer by running the setdriver command, which will look something like this:

     rpcclient localhost -U root -c 'setdriver Photo Photo'

In the setdriver command, the first name is the printer name and the second is the driver name (as registered by adddriver). In this example they are both the same but they can be different, if you want.

You can verify that the driver is installed correctly and that it is associated with the printer by doing these two commands:

     rpcclient localhost -U root -c 'enumdrivers 3'
     rpcclient localhost -U root -c 'getprinter Photo 2'

Note that on at least one version of Samba, the enumdrivers command segfaulted but the driver was still installed correctly. The ultimate test is if the getprinter command shows the driver listed under "drivername:". You may also be able to see what's going on with the drivers by trying enumdrivers at a less detailed level. For example:

     rpcclient localhost -U root -c 'enumdrivers 2'

For a more comprehensive description of how this works, you can surf over to this URL:

     http://www.linuxtopia.org/online_books/network_administration_guides/
         samba_reference_guide/29_CUPS-printing_105.html

To add the printer from Windows 2000 or Windows XP, do the following:

Windows 2000

  1. Launch the Printers dialog, under Control Panel
  2. Click "Add Printer"
  3. Pick "Network Printer"
  4. Select "Type the printer name, or click Next to ..." and then click Next.
  5. The browser will come up and let you surf over to the printer to be installed in the network tree. Click on it to select it and then click Next to proceed with the install.
  6. If, at this point, Windoze asks you if you want to set up the printer driver on your local machine because the server that has the printer on it does not have the print driver installed, click Cancel. Then, go back to the part (above) where we talk about how Samba is broke-dick and how you need to add the user setting up the printer to the "printer admin" option in the Samba config file, temporarily. Do that and try again.
  7. Answer No to the default printer question (unless you want it as your default) and click Next. Then click Finish.
  8. Your printer is installed. If you temporarily added the user to the "printer admin" option, you can remove them now. Note that if you click on Properties for this printer (e.g. to set the comments), you will be told that the driver for the printer is not installed, blah, blah, blah. It's a lie. Always click No, unless you want to go down the Windoze Rabbit Hole to Driverland. You should probably eat the mushrooms first.

Windows XP

     On later versions of XP (e.g. Service Pack 3), you may have to apply a
     couple of changes to the Registry before your system will let you log in
     to the CUPS server and thereby gain access to the printers.  If you try
     to open a share point on the CUPS server and you continually get the
     userid/password prompt, despite the fact that you are entering the correct
     userid and password (remember that it is the Samba userid/password which
     is not necessarily the login userid password), you may wish to try this
     fix.
     Open the Registry Editor by typing "regedt32" at a command prompt or into
     the Run box.  When the editor launches, add these two values under the key:
     HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
       DWORD  DomainCompatibilityMode = 1
       DWORD  DNSNameResolutionRequired = 0
     After you save these two parameters, you should be able to login and
     set up the printers.  Follow the steps below.
     1) Launch the Printers dialog, under Control Panel
     2) Click "Add Printer"
     3) Pick "Network Printer"
     4) Select "Browse for a printer" and click Next.
     5) The browser will come up and let you surf over to the printer to be
        installed in the network tree.  Click on it to select it and then click
        Next to proceed with the install.
     6) Answer Yes to the prompt about installing malicious, virus-infested
        drivers on your computer.
     7) Answer No to the default printer question (unless you want it as your
        default) and click Next.  Then click Finish.
     8) Your printer is installed.  Note that if you click on Properties for this
        printer (e.g. to set the comments), you will be told that the driver for
         the printer is not installed, blah, blah, blah.  It's a lie.  Always click
        No, unless you want to go down the Windoze Rabbit Hole to Driverland.
        You should probably eat the mushrooms first.
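
Incidentally, rather than typing the two values into the Registry Editor, you can save them in a .reg file and import it by double-clicking the file. A sketch of what such a file looks like:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"DomainCompatibilityMode"=dword:00000001
"DNSNameResolutionRequired"=dword:00000000
```

This is handy if you have more than one Windows client to patch.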

Windows 7

     Before you try to gain access to the printers on your CUPS server, you
     need to apply a couple of changes to the Registry on your Windows system
     so that it will let you log in to the server.  Without the patches, if you
     try to open a share point on the CUPS server, you will continually get the
     userid/password prompt, despite the fact that you are entering the correct
     userid and password (remember that it is the Samba userid/password which
     is not necessarily the login userid password).
     Open the Registry Editor by typing "regedt32" at a command prompt or into
     the Run box.  When the editor launches, add these two values under the key:
     HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
       DWORD  DomainCompatibilityMode = 1
       DWORD  DNSNameResolutionRequired = 0
     After you save these two parameters, you should be able to login and
     set up the printers.  Follow the steps below.
     1) Launch the Printers dialog, under Control Panel, Hardware and Sound,
        "View devices and printers"
     2) Click "Add Printer"
     3) Pick "Network Printer"
     4) Windows, being smarter than the rest of us, will immediately go
        screaming off into the weeds searching for printers, probably using
        that sad excuse of a feature called Bonjour.  You could let it do its
        thing for 10 or 15 minutes but we, having a short attention span,
        immediately click the Stop button.
     5) Select "Browse for a printer" and click Next.
     6) The browser will come up and let you surf over to the printer to be
        installed in the network tree.  Click on it to select it and then click
        Next to proceed with the install.
     7) Answer Yes to the prompt about installing malicious, virus-infested
        drivers on your computer.  This is another sad excuse of a feature that
        applies misguided security policies to practically everything you try
        to do.  Ignore it or you won't be able to get anything useful done.
     8) A dialog box will pop up asking you if you'd like to print a test page.
         It's probably a good idea to do it.  You know it's gonna work....
     9) Your printer is installed.  Note that if you click on Properties for
        this printer (e.g. to set the comments), you can unset it as the
        default (Windows seems to gratuitously make it the default -- ain't
        it grand being smarter than the rest of us).  You can also set other
        properties and even print another test page, if you wish.

On the other hand, if you just want simplicity, you can define your printer something like this in your Samba configuration file:

     [Laser]
       printable = yes
       print command = /usr/bin/lpr -P%p -r %s
       printer = Laser
       printing = BSD
       path = /var/tmp
       browseable = no
       public = yes
       writable = no
       printer admin = root

Then, add the printer (as described above) on Windows and pick a standard driver file from the list of already-supported drivers for any postscript printer such as the HP LaserJet 5P/5MP PostScript printer. They all use the same Microsoft postscript engine so anything that you choose should work.

Lastly, if you have a printer that is a total pain in the butt (like the Canon BubbleJet series printers), but you have a Windows driver for it, you can set up a RAW printer queue under CUPS and use it to send output to the printer. To do this, make the following choices when setting up the printer:

     Make: Raw
     Model: Raw queue

Then, on Windows, install the printer as described above but, when it comes time to install the drivers, pick the proper driver from the list of standard drivers or install the one supplied by the manufacturer. You should be able to print to the printer but all bets are off as to whether CUPS will correctly handle the interaction between the two printer queues, if print requests from the Unix/Linux side and the Windblows side are received at once.

Canon 850/860 BubbleJet Printers on CUPS

Download the CUPS print drivers from the Canon Web site. We found them at:

     ftp://download.canon.jp/pub/driver/bj/linux/

The instructions and readme files are all in Japanese. But, the source is written in English. So, press on. Get the CUPS drivers, a tarball that looks something like:

     bjcups-2.4-0.tar.gz

You may also want to get the CUPS monitor and the print filter but they aren't necessary. If you do, get the tarballs that look something like:

     bjcupsmon-2.4-1.tar.gz
     bjfilter-2.4-0.tar.gz

To install the CUPS printer driver, untar the tarball:

     tar -xvzf bjcups-2.4-0.tar.gz

Forget the directions. First build the libraries as follows:

     cd bjcups-2.4-0/libs
     make

Now move on to the CUPS print filter. Our copy of the source had a couple of bugs in it so you'll want to fix them up, if they're still there. Hack pstocanonbj.c in the filter directory and make the following changes:

     350,351c351,353
     <                       p_choice = ppdFindMarkedChoice(p_ppd, "Resolution");
     <                       reso = atoi(p_choice->choice);
     ---
     >                       if ((p_choice = ppdFindMarkedChoice(p_ppd, \
                                 "Resolution")) != NULL)
     >                               reso = atoi(p_choice->choice);
     >                       else reso = 360;
     354,355c356,358
     <                       p_choice = ppdFindMarkedChoice(p_ppd, "PageSize");
     <                       p_size = ppdPageSize(p_ppd, p_choice->choice);
     ---
     >                       if ((p_choice = ppdFindMarkedChoice(p_ppd, \
                                 "PageSize")) != NULL)
     >                               p_size = ppdPageSize(p_ppd, \
                                         p_choice->choice);
     >                       else p_size = ppdPageSize(p_ppd, "Letter");

Also, our version of GhostScript was not in a place that the CUPS scheduler could execute it from with its PATH variable set as it was (whatever it was, it didn't work). So, we found the GhostScript program (in our case /usr/local/bin/gs) and added/changed the following lines in pstocanonbj.c:

     42a43
     > #define GS_PATH                       "/usr/local/bin/gs"
     365,366c368,369
     <       "gs -r%d -g%dx%d -q -dNOPROMPT -dSAFER -sDEVICE=ppmraw \
                 -sOutputFile=- -| ",
     <       reso, (int)(p_size->width * (float)reso / 72.0),
     ---
     >       "%s -r%d -g%dx%d -q -dNOPROMPT -dSAFER -sDEVICE=ppmraw \
                 -sOutputFile=- -| ",
     >       GS_PATH, reso, (int)(p_size->width * (float)reso / 72.0),

Now, build the filter as follows:

     cd ../filter
     cc -c -O2 -Wall -I../libs/bjexec -I../libs/paramlist pstocanonbj.c
     cc -c -O2 -Wall canonopt.c
     cc -o pstocanonbj pstocanonbj.o canonopt.o -L../libs/bjexec \
        -L../libs/paramlist -lcups -lbjexec -lparamlist

Install the ppd and filter into your CUPS directories as super user:

     cd ..
     su
     cp filter/pstocanonbj /usr/lib/cups/filter
     chmod ugo+x /usr/lib/cups/filter/pstocanonbj
     cp ppd/*.ppd /usr/share/cups/model

Now, for the screwy part. The Canon kids messed up the distribution so that several libraries that are needed for all this stuff to work are not available in source form. "Sorry Charlie." You will have to get their RPM and install it. It contains the dynamic libraries that are used by the printer filter as well as bitmaps, fonts and other stuff needed by the printer. Download the RPM for the printer:

     bjfilterpixus860i-2.4-0.i386.rpm

To check what it will install before you let 'er rip, you can try:

     rpm -q -l -p bjfilterpixus860i-2.4-0.i386.rpm

To actually install the RPM, as super user, do the following:

     su
     rpm -i bjfilterpixus860i-2.4-0.i386.rpm

Note that the RPM wants old versions of libglade and libpng. If you have recent versions, it will fail on the dependency checks. To force it to install anyway, use:

     rpm -i --nodeps bjfilterpixus860i-2.4-0.i386.rpm

If you do force install the RPM, you may have to add some symlinks for some of the older libraries. To test if the symlinks are necessary, run the filter program and see if it complains about missing shared libraries (you'll have to do this iteratively, since the loader only tells you about one missing library each time the program is run). For example:

     /usr/local/bin/bjfilterpixus860i
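
Alternatively, ldd will list all of a binary's shared library dependencies at once, flagging any that cannot be found, which saves the one-at-a-time iteration. In this sketch, /bin/ls stands in for the Canon filter binary so the example can run anywhere; substitute /usr/local/bin/bjfilterpixus860i on your print server:

```shell
# Report any unresolved shared libraries for a binary.
# /bin/ls is a stand-in for /usr/local/bin/bjfilterpixus860i here.
BIN=/bin/ls
ldd "$BIN" | grep 'not found' || echo "all shared libraries resolved for $BIN"
```

Any line containing "not found" names a library that needs one of the symlinks described below.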

If you need to add symlinks, they'll probably look something like this (take a look at the other symlinks in /usr/lib that point to the current versions of the shared libraries and see where they point). Only add the ones you actually need:

     cd /usr/lib
     ln -s libpng.so.3.49.0 libpng.so.2
     ln -s libglade-2.0.so.0.0.7 libglade.so.0

Once all of the above is done, you should be able to define a new printer on your system using the Canon 860i. Launch the CUPS UI via your Web browser:

     http://printserver:631

Configure the "Canon" printer using the Administration tab and Add Printer button. Choose the following:

     Other Network Printers: LPD/LPR Host or Printer
     Connection: lpd://kinkos/Photo
     Name: Photo
     Description: Color, Canon i860 photo printer
     Location: Office
     Sharing: Share This Printer (checked)
     Make: Canon
     Model: Canon PIXUS 860i ver.2.4 (en)

Click the Continue button as necessary. At the end, click Add Printer.

Next, set the following options under each of the tabs shown:

     General
          Paper Size: Letter
          Media Type: Plain Paper
     Banners
          Starting Banner: none
          Ending Banner: none
     Policies
          Error Policy: stop-printer
          Operation Policy: default

When done, click on the Set Default Options button.

Now, from the lefthand menu, pick Print Test Page. You should see a nice test page printed on the Photo printer. If that doesn't work, let the debugging begin. The test page produces some heavy byte traffic to the printer so it should take a few minutes to print. If the print job ends immediately, you have probably screwed up the GhostScript path or the print filter itself. Check those things first. CUPS is pretty good at giving up silently, when an error occurs in those areas.

You can probably install any of the other Canon BubbleJet printers too, if you download the correct support RPM from their download site. Good luck. Note that if you are going to share this printer or one of its ilk with Winblows via Samba, it will not work. The postscript files created by the CUPS/Winduhs drivers are converted by GhostScript to something that the Canon filter chokes on. Best to set up a separate RAW printer under CUPS and share this with Windoze. You can then install the standard BubbleJet drivers on Windows and get on with your life.

DHCP

To run a DHCP server, get the latest version of dhcpd (dhcp-latest.tar.gz) from ftp://ftp.isc.org/isc/dhcp/. Unzip/tar it and run ./configure, followed by make and make install. Note that this isn't the standard configure script so don't try to give it any parameters. They won't work and will just cause the script to crap out.

The make install puts the executables in /usr/local/sbin. If you want them to be in /sbin (where everyone expects to find them), you can either copy them there or symlink them. I prefer the symlink, since when you install the next version of the software into /usr/local/sbin, the symlink automatically picks up the newer version instead of leaving an older copy hanging around:

     ln -s /usr/local/sbin/dhclient /sbin/dhclient
     ln -s /usr/local/sbin/dhcpd /sbin/dhcpd
     ln -s /usr/local/sbin/dhcrelay /sbin/dhcrelay

/etc/rc.d/init.d/dhcpd:

After you've installed dhcpd, if you'd like it to start automatically at boot time, you need a startup script in /etc/rc.d/init.d/dhcpd. You can use this script:

     #!/bin/sh
     #
     # dhcpd         This shell script takes care of starting and stopping
     #               dhcpd.
     #
     # chkconfig: 2345 65 35
     # description: dhcpd provides access to the Dynamic Host Configuration Protocol.
     # Source function library.
     . /etc/rc.d/init.d/functions
     # Source networking configuration.
     . /etc/sysconfig/network
     . /etc/sysconfig/dhcpd
     # Check that networking is up.
     [ "${NETWORKING}" = "no" ] && exit 0
     [ -f /sbin/dhcpd ] || exit 0
     [ -f /etc/dhcpd.conf ] || exit 0
     RETVAL=0
     prog="dhcpd"
     start() {
         # Start daemons.
         echo -n $"Starting $prog: "
         daemon /sbin/dhcpd ${DHCPDARGS}
         RETVAL=$?
         echo
         [ $RETVAL -eq 0 ] && touch /var/lock/subsys/dhcpd
         return $RETVAL
     }
     stop() {
         # Stop daemons.
         echo -n $"Shutting down $prog: "
         killproc dhcpd
         RETVAL=$?
         echo
         [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/dhcpd
         return $RETVAL
     }
     # See how we were called.
     case "$1" in
       start)
         start
         ;;
       stop)
         stop
         ;;
       restart|reload)
         stop
         start
         RETVAL=$?
         ;;
       condrestart)
         if [ -f /var/lock/subsys/dhcpd ]; then
             stop
             start
             RETVAL=$?
         fi
         ;;
       status)
         status dhcpd
         RETVAL=$?
         ;;
       *)
         echo $"Usage: $0 {start|stop|restart|condrestart|status}"
         exit 1
     esac
     exit $RETVAL

The permissions should look like:

     -rwxr-xr-x     root     root

If you use the above script or, if a DHCP startup script is already installed on your system, you may turn it on by doing:

     /sbin/chkconfig --add dhcpd
     /sbin/chkconfig dhcpd on

/etc/sysconfig/dhcpd:

Note that the standard startup script, or the script supplied above, will look in /etc/sysconfig/dhcpd for DHCP parameters. This is so the system admin tools can monkey with DHCP startup parameters. If this file is missing, the script won't start, so you'll need to supply one. Here's a sample:

     # Command line options here
     DHCPDARGS=

If you are running the latest version of DHCP, which supports both IPv4 and IPv6, you will need to choose one by supplying the correct value to DHCPDARGS. The default (for now) is "-4", which selects IPv4. However, it wouldn't hurt to specify "-4" explicitly if you want IPv4, just in case the code winkies decide to make IPv6 the default some day and your configuration stops working. If you want IPv6, you must specify "-6".
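
For example, here is a minimal /etc/sysconfig/dhcpd that pins dhcpd to IPv4 and to a single interface (eth0 is an assumption about your hardware; dhcpd treats trailing command-line arguments as interface names):

```
# Command line options here
DHCPDARGS="-4 eth0"
```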

The file permissions should look like:

     -rw-r--r--     root     root

/etc/dhcpd.conf:

This is the DHCP daemon config file. It should be set up something like:

     authoritative;
     default-lease-time 7200;
     max-lease-time 86400;
     option subnet-mask 255.255.255.0;
     option domain-name-servers 151.203.0.84, 151.203.0.85;
     option domain-name "mydomain.com";
     ddns-update-style ad-hoc;
     subnet 192.168.1.0 netmask 255.255.255.0 {
         option broadcast-address 192.168.1.255;
         option routers 192.168.1.1;
         range 192.168.1.150 192.168.1.200;
     }

You can do "man dhcpd.conf" for a definitive description of how to set up this config file.
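
As a sanity check on the config above, the broadcast-address value can be derived from the subnet and netmask (each octet of the broadcast address is the subnet octet ORed with the inverted netmask octet). A quick shell sketch:

```shell
# Derive the broadcast address for the sample subnet/netmask above.
subnet=192.168.1.0
netmask=255.255.255.0

# Split the dotted quads into octets.
oldIFS=$IFS
IFS=.
set -- $subnet;  i1=$1; i2=$2; i3=$3; i4=$4
set -- $netmask; m1=$1; m2=$2; m3=$3; m4=$4
IFS=$oldIFS

# OR each subnet octet with the inverted mask octet (255 - octet).
broadcast="$((i1 | 255 - m1)).$((i2 | 255 - m2)).$((i3 | 255 - m3)).$((i4 | 255 - m4))"
echo "broadcast-address: $broadcast"
```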

Once you've installed the startup script and configured DHCP via its config file, start DHCP via:

     /etc/rc.d/init.d/dhcpd start

/var/lib/dhcpd/dhcpd.leases:
/var/lib/dhcp/dhcpd.leases:
/var/state/dhcp/dhcpd.leases:
/var/db/dhcpd.leases:

The first time DHCP is run, it will whine about not being able to find the leases file (/var/lib/dhcpd/dhcpd.leases or, possibly, /var/lib/dhcp/dhcpd.leases for older versions of DHCP, and /var/state/dhcp/dhcpd.leases for newer versions of DHCP or, even, /var/db/dhcpd.leases for the newest versions). You simply need to create an empty file, wherever DHCP is looking for it. For example:

     touch /var/state/dhcp/dhcpd.leases

The permissions should look like:

     -rw-r--r--     root     root
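
Since the leases file location varies by version, a small shell sketch like this can pick the first candidate directory that actually exists (the list mirrors the paths above):

```shell
# leases_path: print the dhcpd.leases path under the first candidate
# directory that exists; fail if none of them do.
leases_path() {
    for dir in "$@"; do
        if [ -d "$dir" ]; then
            echo "$dir/dhcpd.leases"
            return 0
        fi
    done
    return 1
}

leases_path /var/lib/dhcpd /var/lib/dhcp /var/state/dhcp /var/db ||
    echo "no standard leases directory found"
```

On a real system, you would then touch the printed path and set the permissions shown above.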

Inetd

This is the Internet daemon which runs various services on demand when requests are detected on well-known ports.

/etc/inetd.conf:

You may want to disallow any services that you don't need. Examples are the finger, cfinger, etc. suite of utilities.

You should probably turn on one or the other of the mail services (either pop-2 and pop-3, or imap), depending on which one you want to use.

The Web-based linux configuration program, linuxconf, may be useful.

As a general rule, do not run any service from inetd that has a service number greater than 1023 in /etc/services.
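
One way to audit this rule is to cross-reference the service names in inetd.conf against their port numbers in /etc/services. A sketch of the idea (the embedded tables stand in for the real files; ingreslock is just a sample high-numbered service):

```shell
# Table of "service port/proto" lines, standing in for /etc/services.
SERVICES_TABLE='ftp 21/tcp
telnet 23/tcp
ingreslock 1524/tcp'

# check_ports: read service names on stdin and flag any whose
# well-known port is above 1023.
check_ports() {
    while read name rest; do
        port=$(printf '%s\n' "$SERVICES_TABLE" |
               awk -v s="$name" '$1 == s { split($2, a, "/"); print a[1]; exit }')
        if [ -n "$port" ] && [ "$port" -gt 1023 ]; then
            echo "$name ($port) is above 1023 -- do not run it from inetd"
        fi
    done
}

# Flags ingreslock; ftp and telnet pass.
printf 'ftp\ntelnet\ningreslock\n' | check_ports
```

On a real system, feed it the first column of the active (uncommented) lines of /etc/inetd.conf and point the lookup at /etc/services.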

Here is a sample inetd.conf:

     #
     # inetd.conf     This file describes the services that will be available
     #          through the INETD TCP/IP super server.  To re-configure
     #          the running INETD process, edit this file, then send the
     #          INETD process a SIGHUP signal.
     #
     # Version:     @(#)/etc/inetd.conf     3.10     05/27/93
     #
     # Authors:     Original taken from BSD UNIX 4.3/TAHOE.
     #          Fred N. van Kempen, <waltje@uwalt.nl.mugnet.org>
     #
     # Modified for Debian Linux by Ian A. Murdock <imurdock@shell.portal.com>
     #
     # Modified for RHS Linux by Marc Ewing <marc@redhat.com>
     #
     # <service_name> <sock_type> <proto> <flags> <user> <server_path> <args>
     #
     # Echo, discard, daytime, and chargen are used primarily for testing.
     #
     # To re-read this file after changes, just do a 'killall -HUP inetd'
     #
     echo      stream   tcp   nowait   root        internal
     echo      dgram    udp   wait     root        internal
     discard   stream   tcp   nowait   root        internal
     discard   dgram    udp   wait     root        internal
     daytime   stream   tcp   nowait   root        internal
     daytime   dgram    udp   wait     root        internal
     chargen   stream   tcp   nowait   root        internal
     chargen   dgram    udp   wait     root        internal
     time      stream   tcp   nowait   root        internal
     time      dgram    udp   wait     root        internal
     #
     # These are standard services.
     #
     ftp       stream   tcp   nowait   root        /usr/sbin/tcpd in.ftpd -l -a
     telnet    stream   tcp   nowait   root        /usr/sbin/tcpd in.telnetd
     #
     # Shell, login, exec, comsat and talk are BSD protocols.
     #
     shell     stream   tcp   nowait   root        /usr/sbin/tcpd in.rshd
     login     stream   tcp   nowait   root        /usr/sbin/tcpd in.rlogind
     #exec     stream   tcp   nowait   root        /usr/sbin/tcpd in.rexecd
     #comsat   dgram    udp   wait     root        /usr/sbin/tcpd in.comsat
     #talk     dgram    udp   wait     nobody.tty  /usr/sbin/tcpd in.talkd
     #ntalk    dgram    udp   wait     nobody.tty  /usr/sbin/tcpd in.ntalkd
     #dtalk    stream   tcp   wait     nobody.tty  /usr/sbin/tcpd in.dtalkd
     #
     # Pop and imap mail services et al
     #
     pop-2     stream   tcp   nowait   root        /usr/sbin/tcpd ipop2d
     pop-3     stream   tcp   nowait   root        /usr/sbin/tcpd ipop3d
     #imap     stream   tcp   nowait   root        /usr/sbin/tcpd imapd
     #
     # The Internet UUCP service.
     #
     #uucp     stream   tcp   nowait   uucp        /usr/sbin/tcpd \
     #                                               /usr/lib/uucp/uucico -l
     #
     # Tftp service is provided primarily for booting.  Most sites run this
     # only on machines acting as "boot servers." Do not uncomment this
     # unless you need it.
     #
     #tftp     dgram    udp   wait     root        /usr/sbin/tcpd in.tftpd
     #bootps   dgram    udp   wait     root        /usr/sbin/tcpd bootpd
     #
     # Finger, systat and netstat give out user information which may be
     # valuable to potential "system crackers."  Many sites choose to disable
     # some or all of these services to improve security.
     #
     #finger   stream   tcp   nowait   nobody      /usr/sbin/tcpd in.fingerd
     #cfinger  stream   tcp   nowait   root        /usr/sbin/tcpd in.cfingerd
     #systat   stream   tcp   nowait   guest       /usr/sbin/tcpd /bin/ps -auwwx
     #netstat  stream   tcp   nowait   guest       /usr/sbin/tcpd \
     #                                                 /bin/netstat -f inet
     #
     # Authentication
     #
     auth      stream   tcp   wait     root        /usr/sbin/in.identd \
                                                     in.identd -e -o
     #
     # Linux configuration via HTTP.
     #
     linuxconf stream   tcp   wait     root        /bin/linuxconf linuxconf --http
     #
     # Rsync server.
     #
     rsync     stream   tcp   nowait   root        /usr/bin/rsync rsync --daemon
     #
     # End of inetd.conf

Xinetd

A replacement for inetd. This daemon is configured by inserting individual configuration files, one for each service to be run under xinetd, into the xinetd directory (/etc/xinetd.d). Typically (e.g. under RedHat), there is also a Gnome application that configures services; adding a service via this tool enables the service entry in the xinetd directory. RPMs and other install programs add service entries to this directory (in either enabled or disabled form).

If you wish to add a service that isn't installed in the usual way, you can fabricate an entry in this directory by hand.

/etc/xinetd.d/servname:

Here is a sample service entry for rsync (note that the description comment has a continuation at the end of the line so that the description can be longer than a single line):

     # default: off
     # description: The rsync server is a good addition to an ftp server, as it \
     #       allows crc checksumming etc.
     service rsync
     {
          disable = no
          socket_type     = stream
          wait            = no
          user            = root
          server          = /usr/bin/rsync
          server_args     = --daemon
          log_on_failure  += USERID
     }

FTP

On the older Linuxes, the FTP daemon is wu-ftpd, from Washington University. To change its parameters under inetd, just hack them in /etc/inetd.conf.

/etc/xinetd.d/wu-ftpd:

To change the FTP parameters for WU FTP under xinetd, change this file. The most common change is to set the timeout to some value longer than five minutes (e.g. "-t 7200" gives two hours).

     # default: on
     # description: The wu-ftpd FTP server serves FTP connections. It uses \
     #       normal, unencrypted usernames and passwords for authentication.
     service ftp
     {
             socket_type             = stream
             wait                    = no
             user                    = root
             server                  = /usr/sbin/in.ftpd
             server_args             = -l -a -t 7200
             log_on_success          += DURATION USERID
             log_on_failure          += USERID
             nice                    = 10
     }

On the newer Linuxes, you should be using vsftpd instead of the stock ftpd or wu-ftpd, since it is the most secure of the three.

Under the Red Hats greater than 9 and CentOS, vsftpd is started by the /etc/rc.d/init.d/vsftpd script. This script will read all of the config files in /etc/vsftpd and start one FTP service for each of them. Usually, there is only one config file (named /etc/vsftpd.conf) which starts the regular FTP service.

Under earlier Linuxes (e.g. Red Hat 8), vsftpd is started by xinetd. There is an entry in /etc/xinetd.d (vsftpd) that starts vsftpd on demand, if it is enabled. Multiple ports may be defined by starting one service for each. The config file is /etc/vsftpd.conf.

/etc/vsftpd/*.conf:

As with wu-ftpd, the most common change is to set the timeout to some value longer than five minutes (e.g. "idle_session_timeout=7200" gives two hours).

     #
     # The default compiled in settings are very paranoid. This sample file
     # loosens things up a bit, to make the ftp daemon more usable.
     #
     # Allow anonymous FTP?
     anonymous_enable=YES
     #
     # Uncomment this to allow local users to log in.
     local_enable=YES
     #
     # Uncomment this to enable any form of FTP write command.
     write_enable=YES
     #
     # Default umask for local users is 077. You may wish to change this to 022,
     # if your users expect that (022 is used by most other ftpd's)
     local_umask=022
     #
     # Uncomment this to allow the anonymous FTP user to upload files. This only
     # has an effect if the above global write enable is activated. Also, you will
     # obviously need to create a directory writable by the FTP user.
     #anon_upload_enable=YES
     #
     # Uncomment this if you want the anonymous FTP user to be able to create
     # new directories.
     #anon_mkdir_write_enable=YES
     #
     # Activate directory messages - messages given to remote users when they
     # go into a certain directory.
     dirmessage_enable=YES
     #
     # Activate logging of uploads/downloads.
     xferlog_enable=YES
     #
     # Make sure PORT transfer connections originate from port 20 (ftp-data).
     connect_from_port_20=YES
     #
     # If you want, you can arrange for uploaded anonymous files to be owned by
     # a different user. Note! Using "root" for uploaded files is not
     # recommended!
     #chown_uploads=YES
     #chown_username=whoever
     #
     # You may override where the log file goes if you like. The default is shown
     # below.
     #xferlog_file=/var/log/vsftpd.log
     #
     # If you want, you can have your log file in standard ftpd xferlog format
     xferlog_std_format=YES
     #
     # You may change the default value for timing out an idle session.
     idle_session_timeout=7200
     #
     # You may change the default value for timing out a data connection.
     #data_connection_timeout=120
     #
     # It is recommended that you define on your system a unique user which the
     # ftp server can use as a totally isolated and unprivileged user.
     #nopriv_user=ftpsecure
     #
     # Enable this and the server will recognise asynchronous ABOR requests. Not
     # recommended for security (the code is non-trivial). Not enabling it,
     # however, may confuse older FTP clients.
     #async_abor_enable=YES
     #
     # By default the server will pretend to allow ASCII mode but in fact ignore
     # the request. Turn on the below options to have the server actually do ASCII
     # mangling on files when in ASCII mode.
     # Beware that turning on ascii_download_enable enables malicious remote parties
     # to consume your I/O resources, by issuing the command "SIZE /big/file" in
     # ASCII mode.
     # These ASCII options are split into upload and download because you may wish
     # to enable ASCII uploads (to prevent uploaded scripts etc. from breaking),
     # without the DoS risk of SIZE and ASCII downloads. ASCII mangling should be
     # on the client anyway..
     #ascii_upload_enable=YES
     #ascii_download_enable=YES
     #
     # You may fully customise the login banner string:
     #ftpd_banner=Welcome to blah FTP service.
     #
     # You may specify a file of disallowed anonymous e-mail addresses. Apparently
     # useful for combatting certain DoS attacks.
     #deny_email_enable=YES
     # (default follows)
     #banned_email_file=/etc/vsftpd.banned_emails
     #
     # You may specify an explicit list of local users to chroot() to their home
     # directory. If chroot_local_user is YES, then this list becomes a list of
     # users to NOT chroot().
     #chroot_list_enable=YES
     # (default follows)
     #chroot_list_file=/etc/vsftpd.chroot_list
     #
     # You may activate the "-R" option to the builtin ls. This is disabled by
     # default to avoid remote users being able to cause excessive I/O on large
     # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
     # the presence of the "-R" option, so there is a strong case for enabling it.
     #ls_recurse_enable=YES
     pam_service_name=vsftpd
     userlist_enable=YES
     #enable for standalone mode
     listen=YES
     tcp_wrappers=YES
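
Since the newer init script starts one FTP service per config file in /etc/vsftpd, you can add a second listener on an alternate port just by dropping in another config file. A hypothetical /etc/vsftpd/alt.conf (the port number is only an example; listen_port is a standard vsftpd option):

```
# Standalone listener on an alternate port.
listen=YES
listen_port=2121
local_enable=YES
idle_session_timeout=7200
```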

Rsync

Rsync is useful for copying directory structures to/from other machines and could aid in the installation/setup process.

Rsync can work on its own via rsh, or an rsync server can be used. I prefer the rsync server, since access to the service can be permitted without turning on all of the features (security holes) of rsh.

Once rsync is installed, per the instructions, make sure the server is run via inetd or xinetd (see above). Create a config file that describes the "modules" that are available to the rsync users.

/etc/rsyncd.conf:

Describes the "modules" (i.e. sharepoints) that are available to the rsync users. Here is an example of one "module" that can be used to store original files from another machine, when setting up a mirror machine:

     uid = nobody
     gid = nobody
     [orig]
          path = /orig
          read only = false
          comment = Original server files

This will let anybody copy files to/from the path "/orig" with a username of "nobody" and a group name of "nobody", providing the permissions allow it. To copy all of the files and subdirectories from /etc to a new directory /orig/etc under "orig", one might do:

     rsync -avz /etc/ newserver::orig/etc
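
The module above is wide open to anyone who can reach the rsync port. If that's a concern, rsyncd.conf supports host restrictions and per-module authentication; here is a sketch of tightening the same module (the user name, subnet, and secrets path are hypothetical):

```
[orig]
     path = /orig
     read only = false
     comment = Original server files
     hosts allow = 192.168.1.0/24
     auth users = mirror
     secrets file = /etc/rsyncd.secrets
```

/etc/rsyncd.secrets holds lines of the form "mirror:password" and must not be world-readable, or rsync will refuse to use it. The copy command then becomes "rsync -avz /etc/ mirror@newserver::orig/etc".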

Serial Ports

If an extra serial port board is added, you may need to configure the serial ports via the setserial command. This can be done at boot time by putting the setserial commands in /etc/rc.serial. For example:

     setserial /dev/ttyS2 port 0xcc00 UART 16550A irq 11 Baud_base 115200
     setserial /dev/ttyS3 port 0xd000 UART 16550A irq 11 Baud_base 115200

This would configure two serial ports on a PCI card, using IRQ 11, and attach them to /dev/ttyS2 and /dev/ttyS3.

The IRQ and port address can be obtained by reading /proc/pci or, on later versions of the OS, by running lspci.

Modem

If you are installing a modem, see http://start.at/modem first to make sure the modem you are going to use is a real modem and that it will work in your machine, under Linux.

If you get the right type of modem, Kudzu may configure it automatically for you, in which case, the serial port will be set up with the correct line speed and UART type and /dev/modem will be linked to the port.

If Kudzu doesn't set up the modem, the procedure for configuring a PCI modem is as follows. First, you need to find the IRQ and I/O address of the device from the /proc/pci file and then use setserial to configure the serial port:

     cat /proc/pci
     Bus  0, device  10, function  0:
       Communication controller: PCI device 151f:0000
                                 (TOPIC SEMICONDUCTOR Corp) (rev 0).
       IRQ 12.
       I/O at 0xbc00 [0xbc07].
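
The IRQ and I/O base can be pulled out of that output with a little awk. A sketch, run here against the sample text above (on a live system, set PCI_INFO from /proc/pci itself):

```shell
# Sample /proc/pci output; on a real system: PCI_INFO=$(cat /proc/pci)
PCI_INFO='Bus  0, device  10, function  0:
  Communication controller: PCI device 151f:0000
  IRQ 12.
  I/O at 0xbc00 [0xbc07].'

# Grab the IRQ number (dropping the trailing dot) and the I/O base.
irq=$(printf '%s\n' "$PCI_INFO" | awk '/IRQ/ { sub(/\.$/, "", $2); print $2; exit }')
port=$(printf '%s\n' "$PCI_INFO" | awk '/I\/O at/ { print $3; exit }')

# Emit the matching setserial command (as in the example below).
echo "setserial /dev/ttyS3 uart 16550A port $port irq $irq baud_base 115200"
```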

Create a tty device to access the modem and link /dev/modem to it:

     cd /dev; test -c ttyS3 || ./MAKEDEV ttyS3; chmod 666 /dev/ttyS3
     ln -sf /dev/ttyS3 /dev/modem

Set the properties of the serial port:

     setserial /dev/ttyS3 uart 16550A port 0xbc00 irq 12 baud_base 115200 \
       spd_vhi skip_test

You can use any other ttyS? device as well. Make sure that the setserial command is issued at each boot by making the necessary addition to /etc/rc.serial.

Dialdaemon

Diald will bring up a dialup link whenever there is traffic bound for the outside world. To make this sleight of hand work, it defines a fake device, "sl0", at startup and sets the default routing table to point to this device (the device is also given a bogus IP address, chosen from among the non-routable addresses of a subnet that doesn't otherwise exist at your site).

Whenever anyone sends a packet to the outside world, the default routing will send it to "sl0". Diald catches this packet and begins dialing the phone (using the dialer of your choice, usually wvdial or chat). When the phone is answered and the login sequence completed, pppd (or whatever other link manager is being used) is spawned and a new device of the appropriate type is created.

Then, the routing tables are altered to make the new device the default and the packet that diald intercepted is rerouted to the new device. So far, so good.

When the link is up and the new device created, a new set of firewall rules must be defined, to point to the new device, etc. This is done by running the usual ip-up script or diald's addroute script.

Normally, that's all that need be done but, if you are using masquerade to deliver packets from network-attached workstations, through the system, the masquerading is usually done by the firewall. This means that a second copy of the firewall rules must run when the link is down to send the masqueraded packets to "sl0". Otherwise, packets bound for the outside world from the network-attached workstations will not cause diald to bring up the link because they will never be sent to "sl0".

Hence, if you'll be using masquerading, you'll need to start your firewall at boot time (pointing it to "sl0" but only after diald has been allowed to run and create the device) and you'll need an ip-down script to put it back in place when the link comes down. In the case of NARC, this particularly means that you should be sure to set the NARC configuration to bring up the firewall at boot time and masquerade on the external interface sl0.

/etc/rc.d/init.d/dialdaemon:

Brings up diald to automatically dial up PPP connections to the Internet on an as-needed basis.

     "start" - Starts up diald.
     "stop"  - Shuts diald down.

You must create the diald script, a sample of which is shown below, since it is not usually supplied with the OS or the diald install. This script can be installed using chkconfig, whereupon it will run at startup and bring up diald to listen for outbound traffic that can't be delivered locally:

     #! /bin/sh
     #
     # dialdaemon - Script to start/stop the PPP dialer daemon.
     #
     # chkconfig: 2345 11 89
     # description: PPP dialer automatically brings up dialup link on demand
     #
     # Define the serial port for modem or PCI modem.
     #
     COMPORT=/dev/ttyS3
     case "$1" in
         #
         # Upon startup, fire up the PPP dialer daemon.  This will set up sl0
         # and wait for traffic on the link.  When traffic is detected, the PPP
         # link will be dialed up and routing will be switched to ppp0.
         #
         # If the port with the modem is a serial device on com3 or com4, use
         # the setserial command (or something like it) to set up the port.
         # Note that setserial is used to set up the port for proper use of an
         # alternate interrupt request line, due to conflicts.
         #
         # If the port is a PCI device, use the 3ComMdm command to set up the
         # port.
         #
         start)
          echo -n "Starting up PPP automatic dialing:"
          #
          # If the modem port is not set up by startup, you may want to try one
          # of the following commands, as noted above, to set it up.
          #
          # /bin/3ComMdm ${COMPORT}
          # /bin/setserial ${COMPORT} port 0x3E8 irq 7
          rm -f /dev/modem
          ln -s ${COMPORT} /dev/modem
          /usr/sbin/diald
          echo "."
          ;;
         #
         # Upon shutdown, find the dialer daemon's pid and kill it.
         #
         stop)
          echo -n "Stopping PPP automatic dialing:"
          # The [d] trick keeps grep from matching its own process entry.
          ddpid="`/bin/ps -A | grep '[d]iald' | awk '{print $1}'`"
          kill $ddpid
          echo "."
          ;;
         #
         # For all other cases, give help.
         #
         *)
          echo "Usage: /etc/rc.d/init.d/dialdaemon {start|stop}"
          exit 1
          ;;
     esac
     exit 0

Additional information can be found in:

     diald.unix.ch/FAQ/diald-faq.html
     www.loonie.net/~eschenk/diald.html
     diald.sourceforge.net

Install this script in /etc/rc.d/init.d but do not enable it, if you will be using the transport switcher (below). Otherwise, install it with:

     chkconfig --add dialdaemon
     chkconfig dialdaemon on

/usr/sbin/diald:

The diald program itself.

/etc/diald.conf:

Diald configuration file. You must supply this file, a sample of which is shown below. Note that, if you change this file, you must stop and then start the diald task for the changes to take effect. Do this with the dialdaemon script (above). Here is a sample in which the port being used is /dev/ttyS3:

     mode ppp
     connect /usr/lib/diald/connect-eskimo
     device /dev/ttyS3
     speed 115200
     modem
     # lock
     crtscts
     local 192.168.2.253
     remote 192.168.2.254
     netmask 255.255.255.0
     dynamic
     defaultroute
     mtu 1500
     include /usr/lib/diald/standard.filter
     # addroute /usr/lib/diald/connect-addroute

If you will use wvdial to establish the connection, you shouldn't use the "lock" parameter, as this locks the modem and prevents wvdial from working. If you're going to use chat, "lock" may be appropriate.

If you have a standard ip-up and ip-down script (or ip-up.local and ip-down.local, see below) in /etc/ppp, you probably don't need the addroute parameter. The job of the script run by this parameter is to alter the firewall rules and bring up anything that only runs when PPP is up. It also shuts things down and sets the firewall rules back to what they were when the PPP connection comes down. Hence, if your regular ip-up and ip-down scripts do everything, it is unnecessary.

/usr/lib/diald/connect-*:

Connection parameters used to connect to various dialup services. A sample for eskimo is shown below. In order to figure out how the diald script should be written, you can run wvdial manually to connect to the host and observe the entire modem dialing and host login sequence the first time. From there, you should be able to determine what the chat sequences should be for diald.

However, you can also use wvdial as your dialer and dispense with all the crap-oh-la. Once you get wvdial working, set up the following simple script and aim diald at it. Trust me. You'll thank me for this. Here is an example of how to use an already-working wvdial configuration to dial eskimo:

     #!/bin/sh
     #
     # Run wvdial as the dialer for diald to connect to Eskimo
     #
     /usr/bin/wvdial --chat eskimo

If you haven't had enough aggravation in your day or wvdial is incapable of figuring out the screwy logon negotiation that your ISP uses, you may have to hand build a chat script. Here is a sample of one that might work to connect to Eskimo:

     #!/bin/sh
     #
     # This script will dial to Eskimo.
     #
     # The "message" facility of diald is used to communicate progress through
     # the dialing process to a diald monitoring program such as dctrl or diald-top.
     # It also reports progress to the system logs. This can be useful if you
     # are seeing failed attempts to connect and you want to know when and why
     # they are failing.
     #
     # This script requires the use of chat-1.9 or greater for full
     # functionality. It should work with older versions of chat,
     # but it will not be able to report the reason for a connection failure.
     # Configuration parameters
     # The initialization string for the modem.  An initial "ATZ" will be sent
     # prior to sending this string
     MODEM_INIT="AT E1 M0 Q0 S0=0 S11=55 V1 &C1 &K3 &D2 +FCLASS=0"
     # The phone number to dial
     PHONE_NUMBER="nnn-nnn-nnnn"
     # The chat sequence to recognize that the remote system
     # is asking for your user name.
     USER_CHAT_SEQ="ogin:--ogin:"
     # The string to send in response to the request for your user name.
     USER_NAME="username"
     # The chat sequence to recognize that the remote system
     # is asking for your password.
     PASSWD_CHAT_SEQ="word:"
     # The string to send in response to the request for your password.
     PASSWORD="password"
     # The prompt the remote system will give once you are logged in
     # If you do not define this then the script will assume that
     # there is no command to be issued to start up the remote protocol.
     #PROMPT="annex:"
     # The command to issue to start up the remote protocol
     #PROTOCOL_START="ppp"
     # The string to wait for to see that the protocol on the remote
     # end started OK. If this is empty then no check will be performed.
     START_ACK="Switching to PPP."
     # Pass a message on to diald and the system logs.
     function message () {
     [ $FIFO ] && echo "message $*" >$FIFO
     logger -p local2.info -t connect "$*"
     }
     # Reset the modem.
     message "Resetting Modem"
     /usr/sbin/chat TIMEOUT 5 "" ATZ TIMEOUT 45 OK ""
     if [ $? != 0 ]; then
         message "Failed to reset modem"
         exit 1
     fi
     # Initialize the modem. It's already reset.
     message "Initializing Modem"
     /usr/sbin/chat TIMEOUT 5 "" "$MODEM_INIT" TIMEOUT 45 OK ""
     if [ $? != 0 ]; then
         message "Failed to initialize modem"
         exit 1
     fi
     # Dial the remote system.
     message "Dialing system"
     /usr/sbin/chat \
          TIMEOUT 45 \
          ABORT "NO CARRIER" \
          ABORT BUSY \
          ABORT "NO DIALTONE" \
          ABORT ERROR \
          "" ATDT$PHONE_NUMBER \
          CONNECT ""
     case $? in
        0) message Connected;;
        1) message "Chat Error"; exit 1;;
        2) message "Chat Script Error"; exit 1;;
        3) message "Chat Timeout"; exit 1;;
        4) message "No Carrier"; exit 1;;
        5) message "Busy"; exit 1;;
        6) message "No DialTone"; exit 1;;
        7) message "Modem Error"; exit 1;;
        *)
     esac
     # We're connected; try to log in.
     message "Logging in"
     /usr/sbin/chat \
          TIMEOUT 5 \
          $USER_CHAT_SEQ \\q$USER_NAME \
          TIMEOUT 45 \
          $PASSWD_CHAT_SEQ $PASSWORD
     if [ $? != 0 ]; then
         message "Failed to log in"
         exit 1
     fi
     # We logged in, try to start up the protocol (provided that the
     # user has specified how to do this)
     if [ $PROMPT ]; then
         message "Starting Comm Protocol"
         /usr/sbin/chat TIMEOUT 15 $PROMPT $PROTOCOL_START
         if [ $? != 0 ]; then
             message "Prompt not received"
             exit 1
         fi
     fi
     if [ $START_ACK ]; then
         /usr/sbin/chat TIMEOUT 15 $START_ACK ""
         if [ $? != 0 ]; then
          message "Failed to start Protocol"
          exit 1
         fi
     fi
     # Success!
     message "Protocol started"

/etc/ppp/ip-up.local:

The local script that is run whenever the PPP link comes up. For use with diald and wvdial, the following script will start the NARC firewall and register the PPP link as the gateway to the world. It will also run the PropagateIP script to notify the world where the Web site and other services are:

     #!/bin/sh
     #
     # This shell script is called by pppd whenever it brings up a PPP connection
     # to the remote host.  Its purpose is to add into the router's routing
     # tables a default routing to the gateway machine at the other end of the
     # PPP link.  This will cause all non-specifically routed packets to be
     # passed to the gateway at the other end of the PPP link for forwarding to
     # the Internet.
     #
     # This script also registers the ppp0 device via the firewall rules script to
     # change the rules so that they use the correct active device (ppp0 when the
     # link is up).
     #
     # The parameters that pppd passes to this script are (see pppd(8)):
     #
     #      <iface> <ttydev> <speed> <local-ip> <remote-ip> <ipparam>
     #
     #
     # When the PPP link comes up, add the default route.
     #
     /sbin/route add default gw $5
     #
     # Register the new interface with the firewall and masquerade.  This requires
     # us to restart the firewall, specifying which interface to use.  NARC (the
     # firewall) will figure out which address to use by querying the interface
     # directly.
     #
     /etc/rc.d/init.d/iptables restart $1
     #
     # Set up all of the dynamically addressed Web server links and advertise
     # our WAN IP address.
     #
     /etc/dyndns/PropagateIP
     exit 0

Note that you'll need to make the changes, mentioned in the iptables portion of the firewall/packetfilter section, that allow the external device address to be passed to the iptables and NARC scripts.

/etc/ppp/ip-down.local:

The local script that is run whenever the PPP link goes down. For use with diald, the following script will restart the NARC firewall, telling it to masquerade and deliver packets to "sl0", by default. This will cause any Internet traffic to bring up the link. Without this script, only Internet traffic from the local machine will bring up the link:

     #!/bin/sh
     #
     # This shell script is called by pppd whenever it shuts down a PPP
     # connection to the remote host.  Its purpose is to unregister the ppp0 device
     # via the firewall rules script to change the rules so that they won't block
     # traffic within the network.
     #
     # The parameters that pppd passes to this script are (see pppd(8)):
     #
     #      <iface> <ttydev> <speed> <local-ip> <remote-ip> <ipparam>
     #
     #
     # Restart the firewall using sl0 as the interface so that traffic bound for
     # the net will bring up the connection.
     #
     /etc/rc.d/init.d/iptables restart sl0

/usr/lib/diald/connect-addroute:

Executed when a dialup route is added. Configures the packet filter for new IP addresses, etc. May not be necessary if you already have an ip-up and ip-down script in /etc/ppp. Here is a sample:

     #!/bin/sh
     #
     # This shell script is called by diald once it establishes the PPP connection
     # to the remote host.  Its purpose is to add into the router's routing
     # tables a default routing to the gateway machine at the other end of the
     # PPP link.  This will cause all non-specifically routed packets to be
     # passed to the gateway at the other end of the PPP link for forwarding to
     # the Internet.
     #
     # This script also registers the ppp0 and sl0 devices via the firewall
     # rules script to change the rules so that they use the correct active
     # device (ppp0 when the link is up and sl0 when it is down).  The script
     # is invoked by diald both when the link is brought up and also when it is
     # brought down.  Thus, whether the link is up or down, the firewall and
     # masquerade will continue to work properly.
     #
     # The parameters that diald passes to this script are (see diald(8)):
     #
     #      <iface> <netmask> <local-ip> <remote-ip> <metric>
     #
     #
     # If this is the PPP link coming up, add the default route.
     #
     if [ $1 != "sl0" ]; then
         /sbin/route add default gw $4
     fi
     #
     # Register the new interface with the firewall and masquerade.
     #
     /etc/rc.d/init.d/firewall-rules $1 register
     exit 0

/etc/ppp/options:

Contains basic options (not set by diald when it invokes pppd) for pppd. Should be set to the following:

     noauth
     defaultroute
     idle 600
     lock

/etc/wvdial.conf:

If the chat script shown in /usr/lib/diald/connect-* doesn't work, you may need to configure a new one. Seeing the actual dialog that passes between your machine and your ISP can be valuable in designing this dialog. To see the dialog, configure wvdial and use it to dial the ISP. Watch what takes place carefully and note the sequence of negotiations for use in your diald chat script.

Note that you should put the ip-up script (above) in place before you fire up wvdial, so that the firewall starts when the dialup connection comes up. If you don't, you'll be running unprotected to the world, which is not a good thing.

Of course, once you get wvdial working, you can just use it for your dialer and skip chat altogether (you'd be smart to listen to me).

Here's a sample wvdial config file:

     [Dialer Defaults]
     Modem = /dev/ttyS0
     Baud = 57600
     Init1 = ATZ
     Init2 = AT E1 Q0 S0=0 S11=55 V1 &C1 &K3 &D2 +FCLASS=0
     SetVolume = 0
     Dial Command = ATDT
     [Dialer Eskimo]
     Phone = nnn-nnn-nnnn
     Username = username
     Password = password
     Inherits = Dialer Defaults
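
Once the config file is in place, you bring up a connection by giving wvdial the name of a dialer section. Here is a dry-run sketch that just prints the command instead of dialing (the section name "Eskimo" is taken from the example above):

```shell
# Compose the wvdial invocation for the "Eskimo" dialer section defined above.
# On the real system, you would run the command itself rather than echoing it.
SECTION="Eskimo"
echo "/usr/bin/wvdial $SECTION"
```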

/usr/bin/wvdial:

The wvdial program.

PPPoE

If PPPoE is necessary to make a DSL connection work, get the latest rp-pppoe from www.roaringpenguin.com/pppoe. Even if you have a copy of this software already installed on your system (e.g. RedHat 8 comes with it), you may need to get the RP version, since on some versions, RedHat has screwed up their implementation of it.

Download the tar file, then unpack it:

     tar xzvf rp-pppoe-x.y.tar.gz

Change to the source directory, run configure and make the source:

     cd rp-pppoe-x.y/src
     ./configure
     make

Install rp-pppoe as root:

     su
     make install

Next, make sure the Ethernet card you intend to use with the DSL modem is visible to the Linux kernel. You should see interface information if you query the card directly. For example:

     /sbin/ifconfig ethx

Should produce results that look like this:

     ethx      Link encap:Ethernet  HWaddr 00:40:F4:2D:73:64
               BROADCAST MULTICAST  MTU:1500  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:100
               RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
               Interrupt:9 Base address:0xf000

Note that "x" is the actual number of the network card, such as 0, 1, or 2. Of course, the HWaddr will be different but, as long as you see a valid MAC address in the HWaddr line, your card should be working.
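
If you'd rather script the check than eyeball it, the MAC address can be pulled out of the ifconfig output with sed. A sketch, run here against the sample output line above so that it is self-contained (pipe real ifconfig output through the sed command instead):

```shell
# Extract the MAC address from an ifconfig output line.  The sample line is
# hard-coded here for illustration; substitute real ifconfig output.
LINE='ethx      Link encap:Ethernet  HWaddr 00:40:F4:2D:73:64'
MAC=$(echo "$LINE" | sed -n 's/.*HWaddr \([0-9A-Fa-f:]*\).*/\1/p')
echo "MAC=$MAC"
```

If the MAC comes back empty, the card is not being seen by the kernel.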

DO NOT assign an IP address to the Ethernet card. You can configure the card to come up at boot time or not, as you prefer. If you see the card and it has an IP address when you do:

     /sbin/ifconfig

there is something wrong. Either use the network control GUI to disable the card at boot time or hack the appropriate files in some or all of these (note that they are often hard links of each other):

/etc/sysconfig/network-scripts/ifcfg-ethx:
/etc/sysconfig/networking/devices/ifcfg-ethx:
/etc/sysconfig/networking/profiles/default/ifcfg-ethx:

The basic ifcfg-ethx file should look like this:

     DEVICE=ethx
     TYPE=Ethernet
     USERCTL=no
     BOOTPROTO=none
     ONBOOT=yes|no

There is, however, one instance where you might want to configure the NIC for regular TCP communication. Some ADSL modems come configured as routers with PPPoE and all sorts of other protocols enabled on the WAN side, along with a firewall, DHCP server, etc. This type of modem can only be used with rp-pppoe if it is put into bridge mode. To do this, one typically aims a Web browser at some well-known IP address which is defined by the modem and then uses its Web administration UI to set it up in bridge mode.

If this is the case, first define the ifcfg-ethx file something like this:

     DEVICE=ethx
     TYPE=Ethernet
     USERCTL=no
     BOOTPROTO=none
     BROADCAST=10.255.255.255
     IPADDR=10.0.0.4
     NETMASK=255.0.0.0
     NETWORK=10.0.0.0
     PEERDNS=no
     ONBOOT=yes

This sets the NIC up so that it can talk to the modem. Once you've done that, restart networking ("/etc/rc.d/init.d/network restart") and use the local Web browser to set up the ADSL modem (for example, you'd use the file above and browse to 10.0.0.2:80 to set up an Encore ENDSL-AR). After the router is disabled and/or the modem is bridged, put the ifcfg-ethx file back the way it should be and restart networking. You can then proceed with the rest of the setup.

Note that you may have to disconnect the WAN side of the modem from the DSL line, if it is already in bridged mode, since the modem will echo the WAN IP address on the LAN side, if it is connected to the DSLAM. If you cannot get the modem to talk to the Web browser, you may need to reset to the factory configuration first, via the Factory Reset button, if there is one.

The settings for a configurable modem's WAN interface should be something like this, for use with rp-pppoe:

     Virtual Circuits: Disabled
     Bridge Mode:      Enabled
     IGMP:             Enabled|Disabled  (depending on whether you want your modem
                                         to respond to pings -- probably not)
     Encapsulation:    RFC 1483 Bridged IP, LLC  (<-- most US & Canada telcos)
                       RFC 1483 Routed IP, LLC
                       RFC 1483 Bridged IP, VC-Mux
                       RFC 1483 Routed IP, VC-Mux
     DHCP Client:      Disabled
     MAC Spoofing:     Disabled  (unless you really need it, which is unlikely)
     VPI:              0  (see chart of ADSL Settings)
     VCI:              35  (see chart of ADSL Settings)

A chart of ADSL Settings by Country is provided at:

     http://www.routertech.org/pages.php?page=43

Now, several config files need to be hacked. The easiest way to do this is to run the setup script as root:

     cd ../scripts
     chmod ugo+x *
     ./pppoe-setup

Answer the questions and you should be all set. If you care what the setup script did, it modified the files chap-secrets, pap-secrets, pppoe.conf and pppoe-server-options in the /etc/ppp directory. Usually, it gets things right so that there's no need to monkey with what it did. However, if your connection does not work or you need to know more for some other reason, the rp-pppoe-x.y/doc/how-to-connect file has the whole story.
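
For reference, the key settings that end up in /etc/ppp/pppoe.conf look something like the following (illustrative values only; the exact set of keys varies with the rp-pppoe version):

```
ETH=eth1                    # NIC connected to the DSL modem
USER=yourname@yourisp.net   # PPPoE login name supplied by your ISP
DEMAND=no                   # bring the link up at startup, not on demand
DNSTYPE=SERVER              # take DNS server addresses from the ISP
```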

/etc/ppp/ip-up-adsl:

Create a script to be run when the ADSL link comes up. If you are using ipchains, uncomment the line that starts firewall-rules (below), otherwise uncomment the line that starts iptables (below):

     #!/bin/sh
     #
     # This shell script is called by pppd whenever it brings up a PPP connection
     # to the remote host.  Its purpose is to add into the router's routing
     # tables a default routing to the gateway machine at the other end of the
     # PPP link.  This will cause all non-specifically routed packets to be
     # passed to the gateway at the other end of the PPP link for forwarding to
     # the Internet.
     #
     # This script also registers the ppp0 device via the firewall rules script to
     # change the rules so that they use the correct active device (ppp0 when the
     # link is up).
     #
     # The parameters that pppd passes to this script are (see pppd(8)):
     #
     #      <iface> <ttydev> <speed> <local-ip> <remote-ip> <ipparam>
     #
     #
     # When the PPP link comes up, add the default route.
     #
     /sbin/route add default gw $5
     #
     # Register the new interface with the firewall and masquerade.
     #
     #/etc/rc.d/init.d/firewall-rules $1 register
     #/etc/rc.d/init.d/iptables reload $1
     #
     # Restart the NTP daemon in case the lease on our IP address has expired.
     # NTP needs to receive packets sent to the address it registered with the
     # stratum 2 NTP server.  If the IP address changes, this won't happen and
     # NTP will cease to work (silently, of course).
     #
     /etc/rc.d/init.d/ntpd restart
     #
     # Set up all of the dynamically addressed Web server links and advertise
     # our WAN IP address.
     #
     /etc/dyndns/PropagateIP
     exit 0

Note that, if you are using iptables/NARC as your firewall/packetfilter, you'll need to make the changes, mentioned in the iptables portion of the firewall/packetfilter section, that allow the external device address to be passed to the iptables and NARC scripts.

/etc/ppp/ip-up.local:

Copy the ip-up-adsl script to this file. This mimics what the switch-ppp script does when it switches the system to using the ADSL connection. It sets things up to run the ADSL connection by default.

/etc/ppp/options:

The default version of this file, created at installation, has the "lock" option set. It seems that this is not a good choice, so it's probably best to clear the options file for use with PPPoE:

     echo "" >/etc/ppp/options

/etc/rc.d/init.d/adsl or /etc/rc.d/init.d/pppoe (for later versions):

If you will be using the transport switcher (below), install the supplied script but do not enable it. Otherwise, install and enable it with:

     chkconfig --add adsl
     chkconfig adsl on

or

     chkconfig --add pppoe
     chkconfig pppoe on

However, before doing this, you might want to change the start and stop levels for the script, so that the DSL connection will come up before the other networking stuff that might depend on it. Change the chkconfig line to:

     # chkconfig: 2345 11 89
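
The three fields on the chkconfig line are the default runlevels, the start priority and the stop priority; with 11/89 the link comes up early (S11) and goes down late (K89) relative to other services. A small sketch that splits the header the same way chkconfig does (pure string handling, shown only for illustration):

```shell
# Parse a chkconfig header line into runlevels, start and stop priorities.
HDR="# chkconfig: 2345 11 89"
set -- $HDR                 # word-split: $3=runlevels, $4=start, $5=stop
LEVELS=$3; START=$4; STOP=$5
echo "runlevels=$LEVELS start=S$START stop=K$STOP"
```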

PPP Transport Switcher

If you would like to be able to switch back and forth between PPPoE and dialup as your transport modes, install the two startup scripts (adsl or pppoe and diald) in /etc/rc.d/init.d but do not enable them (via chkconfig). Copy the script that you are likely to use the most (or wish to use the first time) to /etc/rc.d/init.d/ppptransport and then do the following:

     chkconfig --add ppptransport
     chkconfig ppptransport on

/etc/ppp/switch-ppp:

Create this script to switch between the ADSL and PPP transport modes:

     #!/bin/sh
     # Script to switch PPP to dialup or adsl.
     #
     # Select the transport method that the user wanted.
     #
     case "$1" in
         #
         # If the caller picked dialup, set up diald as the outbound ppp transport
         # method.
         #
         dialup)
             #
             # Stop whatever PPP transport is currently running.
             #
             if [ -f /etc/rc.d/init.d/ppptransport ]; then
                 /etc/rc.d/init.d/ppptransport stop
             fi
             #
             # Switch the transport method to use diald and the modem.
             #
             cp /etc/rc.d/init.d/dialdaemon /etc/rc.d/init.d/ppptransport
             rm -f /etc/ppp/options
             cp /etc/ppp/options-diald /etc/ppp/options
             rm -f /etc/ppp/ip-up.local
             echo "PPP transport switched to dialup."
             #
             # Start the new transport method.
             #
             /etc/rc.d/init.d/ppptransport start
             ;;
         #
         # If the caller picked adsl, set up adsl as the outbound ppp transport
         # method.
         #
         adsl)
             #
             # Stop whatever PPP transport is currently running.
             #
             if [ -f /etc/rc.d/init.d/ppptransport ]; then
                 /etc/rc.d/init.d/ppptransport stop
             fi
             #
             # Switch the transport method to use adsl and pppoe.
             #
             cp /etc/rc.d/init.d/adsl /etc/rc.d/init.d/ppptransport
             echo "" > /etc/ppp/options
             rm -f /etc/ppp/ip-up.local
             cp /etc/ppp/ip-up-adsl /etc/ppp/ip-up.local
             echo "PPP transport switched to adsl."
             #
             # Start the new transport method.
             #
             /etc/rc.d/init.d/ppptransport start
             ;;
         #
         # For all other cases, give help.
         #
         *)
             echo "Usage: /etc/ppp/switch-ppp {dialup|adsl}"
             exit 1
             ;;
     esac
     exit 0

Routed WAN Connection

With the advent of high-speed Internet services such as fibre-to-the-premises, a WAN connection through a proprietary or ISP-supplied router is now another available option, alongside dialup, PPP and PPPoE.

Such a WAN connection is usually configured as a standard Ethernet connection to the WAN router. If firewalling or masquerading on this connection is desirable, the NARC packetfilter/firewall can be deployed. A WAN connection script such as the following will prove useful:

/etc/init.d/wanconnect:

     #!/bin/sh
     #
     # wanconnect    This script starts or stops a WAN connection, via the primary
     #               server in the cluster, over the local LAN, or via an ISP's
     #               WAN modem.
     #
     # chkconfig: 2345 11 89
     # description: Connects to the Internet, over the LAN, via the primary \
     #              server in the cluster or an ISP's local router.
     #
     # Revision History:
     # ewilde      2008Mar24  Initial coding.
     # ewilde      2010Apr17  Connect to the WAN via an ISP's router.
     #
     #
     # Define the install path for the binaries, etc.
     #
     INSTALL_PATH="/sbin"
     #
     # Define the paths to the programs used herein.
     #
     ARPING=${INSTALL_PATH}/arping
     IP=${INSTALL_PATH}/ip
     ROUTE=${INSTALL_PATH}/route
     #
     # Define the network prefix length to use when setting up a local network
     # address to be used with an ISP's WAN router.  Nearly all WAN routers use
     # some variation of a local IP address like 192.168.x.y, which implies a
     # 24-bit network prefix (i.e. 255.255.255.0).  You can set this value to
     # something else, if your router is so defined, but this should work for
     # pretty much everyone.
     #
     PREFIX=24
     #
     # Load the function library if it exists.
     #
     if [ -f /etc/rc.d/init.d/functions ]; then
         . /etc/rc.d/init.d/functions
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ]; then
         . /etc/sysconfig/clustering
     else
         WANCONNECTION=ADSL
     fi
     #
     # If this cluster uses ADSL or Diald for its WAN connection, we're outta here.
     #
     if [ x"$WANCONNECTION" == xADSL ] || [ x"$WANCONNECTION" == xDiald ]; then
         exit 0
     fi
     #
     # The user can configure a single IP address as the WAN gateway, in which
     # case we simply route all WAN traffic to that address over the LAN.
     #
     # Alternately, the user can specify a tuple consisting of the address of a
     # dedicated network device that connects to a WAN gateway router, an IP
     # address for that local network device, and an IP address for the WAN
     # gateway router.  Typically, the WAN gateway router will be an ISP's router
     # (such as an EVDO or FIOS router) that is set to bridge packets, sent to it
     # on one of its ports, to the WAN.
     #
     DEVICE=`echo $WANCONNECTION | grep -e "eth[0-9]\+," -o`
     if [ -n "$DEVICE" ]; then
         DEVICE=${DEVICE%,}
         LOCALADDR=`echo $WANCONNECTION | grep -e ",[^,]\+," -o`
         LOCALADDR=${LOCALADDR#,}
         LOCALADDR=${LOCALADDR%,}
         WANADDR=`echo $WANCONNECTION | grep -e ",[^,]\+\$" -o`
         WANADDR=${WANADDR#,}
     else
         DEVICE=""
         LOCALADDR=""
         WANADDR=$WANCONNECTION
     fi
     #
     # If a local network device is used to talk to a WAN, we need to bring it
     # up and assign an IP address to it.
     #
     # Note that we must do this because we assume that the dedicated network
     # device is not brought up at boot time, nor is it assigned an IP address,
     # because the intention was to use the device for PPP or some other, as
     # yet undefined, purpose.
     #
     # Incidentally, much of this code was cribbed from the device startup code
     # in /etc/sysconfig/network-scripts/ifup-eth.  So, you should check there
     # for changes, if this code fails to bring the device up properly.
     #
     StartEth()
         {
         #
         # Bring up the network device.
         #
         if ! $IP link set dev $1 up ; then
             echo $"Failed to bring up $1."
             return 1
         fi
         #
         # Make sure that there's no other host already using our local IP
         # address.
         #
         if ! $ARPING -q -c 2 -w 3 -D -I $1 $2 ; then
             echo $"Error, some other host already uses address $2."
             return 1
         fi
         #
         # Set the IP address into the network device.
         #
         if ! $IP addr add $2/${PREFIX} brd + dev $1 scope link label $1 ; then
             echo $"Error adding address $2 for $1."
             return 1
         fi
         #
         # Update the ARP cache of the ISP's WAN router.
         #
         $ARPING -q -A -c 1 -I $1 $2
         ( sleep 2; $ARPING -q -U -c 1 -I $1 $2 ) >/dev/null 2>&1 < /dev/null &
         #
         # Looks like everything went well.
         #
         return 0
         }

     #
     # Routine to start up the WAN connection.
     #
     start()
         {
         #
         # If need be, bring up the local network device and assign an
         # address to it.
         #
         ASSIGNOK=1
         if [ x"$DEVICE" != x ]; then
             echo -n "Assigning local IP address $LOCALADDR to $DEVICE "
             StartEth $DEVICE $LOCALADDR
             if [ $? = 0 ]; then
                 echo_success
             else
                 echo_failure
                 ASSIGNOK=0
             fi
             echo ""
         fi
         #
         # Bring up the WAN connection.
         #
         ROUTEOK=0
         if [ $ASSIGNOK = 1 ]; then
             echo -n "Bringing up an Internet connection via $WANADDR "
             $ROUTE add default gw $WANADDR >/dev/null 2>&1
             if [ $? = 0 ]; then
                 echo_success
                 ROUTEOK=1
             else
                 echo_failure
             fi
             echo ""
         fi
         #
         # If need be, bring up the firewall on the local network device.
         #
         FIREWALLOK=1
         if [ $ROUTEOK = 1 ] && [ x"$DEVICE" != x ]; then
             echo -n "Bringing up firewall on $DEVICE, SNAT IP address $LOCALADDR "
             /etc/init.d/iptables start $DEVICE $LOCALADDR >/dev/null 2>&1
             if [ $? = 0 ]; then
                 echo_success
             else
                 echo_failure
                 FIREWALLOK=0
             fi
             echo ""
         fi
         #
         # If everything went OK, create a lock file.
         #
         if [ $ASSIGNOK = 1 ] && [ $ROUTEOK = 1 ] && [ $FIREWALLOK = 1 ]; then
             touch /var/lock/subsys/wanconnect
         fi
         }

     #
     # Routine to stop the WAN connection.
     #
     stop()
         {
         #
         # If need be, shut down the firewall.
         #
         if [ x"$DEVICE" != x ]; then
             echo -n "Shutting down the firewall "
             /etc/init.d/iptables stop >/dev/null 2>&1
             if [ $? = 0 ]; then
                 rm -f /var/lock/subsys/wanconnect
                 echo_success
             else
                 echo_failure
             fi
             echo ""
         fi
         #
         # Clear out the routing table.
         #
         echo -n "Shutting down connection to the Internet via $WANADDR "
         $ROUTE del default gw $WANADDR >/dev/null 2>&1
         if [ $? = 0 ]; then
             rm -f /var/lock/subsys/wanconnect
             echo_success
         else
             echo_failure
         fi
         echo ""
         #
         # If need be, shut down the local network device.
         #
         if [ x"$DEVICE" != x ]; then
             echo -n "Shutting down device $DEVICE "
             $IP addr flush dev $DEVICE >/dev/null 2>&1
             $IP link set dev $DEVICE down >/dev/null 2>&1
             if [ $? = 0 ]; then
                 rm -f /var/lock/subsys/wanconnect
                 echo_success
             else
                 echo_failure
             fi
             echo ""
         fi
         }

     #
     # Based on which operation we were asked to perform, have at it.
     #
     case "$1" in
         #
         # Fire up the Great Link (thanks, Odo).
         #
         start)
             start
             ;;
         #
         # Bye, bye Great Link.
         #
         stop)
             stop
             ;;
         #
         # Refresh the Great Link.
         #
         restart)
             echo "Restarting WAN connection to the Internet"
             stop
             start
             ;;
         #
         # Waaaaa 'sappenin'?
         #
         status)
             if [ -f /var/lock/subsys/wanconnect ]; then
                 echo "Connected to the Internet through $WANADDR"
             else
                 echo "Not connected to the Internet"
             fi
             ;;
         #
         # Help text.
         #
         *)
             echo "Usage: wanconnect {start|stop|restart|status}"
             exit 1
             ;;
     esac
     #
     # Heading home.
     #
     exit 0

This script should be enabled to start at boot time with the following commands:

     chkconfig --add wanconnect
     chkconfig wanconnect on

/etc/sysconfig/network-scripts/ifcfg-ethx:
/etc/sysconfig/networking/devices/ifcfg-ethx:
/etc/sysconfig/networking/profiles/default/ifcfg-ethx:

To use this WAN connection script, the basic ifcfg-ethx file should look like this:

     DEVICE=ethx
     TYPE=Ethernet
     USERCTL=no
     BOOTPROTO=none
     ONBOOT=yes|no

By defining the Ethernet interface in this manner, it can be used as a PPPoE interface to bring up an ADSL connection or it can be used as a connection to a WAN router. If the connection is to a WAN router, the wanconnect script will configure the WAN connection, through the Ethernet interface, using the information provided by the WANCONNECTION parameter in the clustering configuration file. You should set it to something like this:

/etc/sysconfig/clustering:

     WANCONNECTION=eth1,192.168.5.2,192.168.5.1

This tells wanconnect to set up the WAN connection on eth1. This interface will be given an IP address of 192.168.5.2. The routing table will be set to route all packets through the gateway at 192.168.5.1 (which is presumably the WAN router). The IP address 192.168.5.2 will be used to SNAT all packets that pass through to the WAN router.
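
You can see how wanconnect will break the tuple apart with a quick shell sketch (a simplified stand-in for the grep-based parsing that the script itself performs):

```shell
# Split a WANCONNECTION tuple into device, local address and gateway,
# mirroring what the wanconnect script's parsing produces.
WANCONNECTION="eth1,192.168.5.2,192.168.5.1"
IFS=',' read DEVICE LOCALADDR WANADDR <<EOF
$WANCONNECTION
EOF
echo "device=$DEVICE local=$LOCALADDR gateway=$WANADDR"
```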

Note that you must use iptables/NARC as your firewall/packetfilter and you'll need to make the changes, mentioned in the iptables portion of the firewall/packetfilter section, that allow the external device address and IP address to be passed to the iptables and NARC scripts.

The WAN router should be set up in the usual manner. In all probability, the router will be delivered by the ISP properly set up. You can leave it as is or switch it to bridge mode but note that, if you do switch it to bridge mode, you may need to handle the remote WAN protocol (such as PPPoE) yourself. If you don't switch it to bridge mode, you may want to punch through the firewall so that it delivers all packets from/to the WAN to/from the internal LAN. Otherwise, you need to set up the WAN router's firewall to allow the proper external services through to the LAN side.

The Linux system is plugged into one of the LAN ports of the WAN router. The IP address of the WAN router should be set to one that is in the same subnet as that used for the Linux system's external IP interface. This address should also match that set as the gateway address in the clustering configuration. In the above example, if WANCONNECTION was set to "eth1,192.168.5.2,192.168.5.1", the IP address of the WAN router's LAN interface would be set to "192.168.5.1".
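
As a quick sanity check, you can confirm that the local address and the gateway really are in the same /24 subnet (matching the PREFIX=24 assumption baked into wanconnect; this simple string comparison only works for a 24-bit prefix):

```shell
# Compare the first three octets of the local address and the gateway.
LOCALADDR=192.168.5.2
WANADDR=192.168.5.1
if [ "${LOCALADDR%.*}" = "${WANADDR%.*}" ]; then
    echo "same /24 subnet"
else
    echo "subnet mismatch"
fi
```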

OpenVPN Client

The OpenVPN Client can be used to create a VPN tunnel between the client system and any VPN server that uses the OpenVPN protocol. This will allow you to route traffic through the VPN tunnel to the VPN server from the client system.

To build the OpenVPN Client, we must first ensure that the prerequisite modules are available. These are lzo, openssl and pam. If your package manager has them available, you can install them like this on an RPM-based system like RedHat or CentOS:

     su
     yum install lzo
     yum install lzo-devel
     yum install openssl
     yum install openssl-devel
     yum install pam-devel

For a Debian-based system such as Ubuntu, the package names differ; you can try:

     su
     apt-get install liblzo2-dev libssl-dev libpam0g-dev

If your system is like some earlier RedHat/CentOS systems, the openssl, openssl-devel and pam-devel packages will already be installed and the lzo/lzo-devel packages will not be available.

But, even if your OS does have a version of LZO available, it is usually obsolete by quite a bit (e.g. for CentOS 6.3, in 2013 May, lzo-2.03 is supplied, whereas lzo-2.06 has been available since 2011 Aug 12). Given that the whole point of LZO is speed, and since the later versions of LZO have optimizations that improve speed, the latest LZO will run cooler, cleaner, quieter, longer. In other words, the performance of your VPN connection will be better and it will use fewer resources. So, for our part, we always begin by building LZO.

If you surf over to http://www.oberhumer.com/opensource/lzo/download, you can select and download the latest source version of LZO from the list. Once you have the source, change to the directory where it was downloaded and unpack it:

     tar -xvzf lzo-2.06.tar.gz

Change to the package directory, and configure and build the source:

     cd lzo-2.06
     ./configure
     make
     make check
     make test     (run a full test)

Once the tests run, install the LZO library:

     su
     make install

Now that you have all of the prerequisites installed, you can build and install the OpenVPN package. Begin by downloading the latest OpenVPN source from http://openvpn.net/index.php/open-source/downloads.html. After you obtain the source, change to the directory where it was downloaded and unpack it:

     tar xfz openvpn-2.3.1.tar.gz

Then cd to the top-level directory and type:

     ./configure --enable-password-save
     make

OpenVPN can be installed like this:

     su
     make install

Now that you've installed the OpenVPN package and before you proceed any further, you should check that the iproute package is installed on your system because this package and the commands therein are very useful when running VPN tunnels. You can simply look for /bin/ip or /sbin/ip to verify that the iproute package is installed. If it isn't, install it like this:

     su
     yum install iproute

Or, if you are using a Debian-based system such as Ubuntu, install it like this:

     sudo apt-get install iproute

Before you switch OpenVPN into production mode and start paying for a VPN provider (if that is your intention), you may want to test your OpenVPN installation against an OpenVPN server that is readily available and doesn't require you to sign up and pay for the service. You can surf over to http://www.bestfreevpn.com/ where you may be able to find a free OpenVPN server that will allow you to test your connection. Another tip is to search with your favorite search engine for 'free vpn "openvpn"' and see what it finds.

Whichever service you choose, be sure you pick a server that supports OpenVPN. You'll need to download a set of credentials that can be used to set up a free client connection, and possibly a username and password. If need be, unzip the downloaded file or extract the files (one way or another) to get something that looks like these files:

     openvpn.conf
     ca.crt
     client.crt
     client.key

And possibly, if the VPN provider uses a TLS authorization key, a file that looks like this:

     ta.key

Alternately, some of the free VPN providers supply a simple configuration file that includes everything (certificates and keys) all in the config file itself. If you choose one of these VPN providers, all you need is the config file, for example:

     simplevpn.conf

We used to use Hostizzle for testing because they offered a 30-day free trial account. This is no longer the case but they still offer a cheap account (currently $3/month). You could choose this route instead of a free VPN provider to test your connection against a real, production VPN service. In this discussion, we'll use Hostizzle and VPNBook (a provider that we found through our search engine) as examples of how to set up two different VPN connections.

It makes a nice, clean installation if you put all of the OpenVPN configuration files in a common directory such as /etc/openvpn, but such a directory is not created by the OpenVPN install. So, before proceeding, create it:

     su
     mkdir /etc/openvpn
     chown root:root /etc/openvpn
     chmod u=rwx,go=rx /etc/openvpn
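
The mkdir/chown/chmod sequence above can also be done in one step with install(1). A sketch, using a hypothetical make_openvpn_dir function; when run as root, the directory comes out owned by root:root, just as above:

```shell
#!/bin/sh
# Create an OpenVPN configuration directory in one step, with the same
# mode as the chmod above (u=rwx,go=rx).  When run as root, the
# directory is owned by root:root, matching the chown above.
make_openvpn_dir() {
    install -d -m u=rwx,go=rx "$1"
}
```

For example, as root: make_openvpn_dir /etc/openvpn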

VPNBook supplies everything you'll need in its zip file as a set of config files, one for each connection type that they offer, with the certificates and keys embedded in each config file, whereas Hostizzle supplies separate config and certificate/key files in its zipped archive. Either pick the single VPNBook config file for the connection type you wish to test (we start out simple with TCP on port 80, so our favorite ISP won't get their greasy little fingers on anything and we have a good chance of it working) or unzip the Hostizzle files to the /etc/openvpn directory, naming or renaming them along the way to identify their origin and keep them separate:

     su
     (extract VPNBook Euro 1, TCP, 80 as) /etc/openvpn/VPNBook-Euro1-TCP80.conf
     chown root:root /etc/openvpn/VPNBook-Euro1-TCP80.conf
     chmod u=rw,go= /etc/openvpn/VPNBook-Euro1-TCP80.conf

or

     su
     cp ca.crt /etc/openvpn/Hostizzle_ca.crt
     cp client.crt /etc/openvpn/Hostizzle_client.crt
     cp client.key /etc/openvpn/Hostizzle_client.key
     cp ta.key /etc/openvpn/Hostizzle_ta.key
     cp 3dbf5515f8a7a248e3559ae8534cfe44.ovpn /etc/openvpn/Hostizzle.conf
     chmod u=rw,go= /etc/openvpn/Hostizzle*

Since you renamed the Hostizzle files, you need to alter the reference to them in the conf file. Use your favorite editor to change Hostizzle.conf to read:

          .
          .
          .
     tls-auth /etc/openvpn/Hostizzle_ta.key 1
     ca /etc/openvpn/Hostizzle_ca.crt
     cert /etc/openvpn/Hostizzle_client.crt
     key /etc/openvpn/Hostizzle_client.key
          .
          .
          .
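
If you'd rather not hand-edit the file, the same path changes can be applied with GNU sed. This is just a sketch; the fix_cert_paths function name is ours, and it assumes the config file uses one "ca", "cert", "key" and (optionally) "tls-auth" directive per line, as the stock files do:

```shell
#!/bin/sh
# Point the ca/cert/key (and, if present, tls-auth) directives of an
# OpenVPN config file at the renamed copies in /etc/openvpn.
# $1 = config file, $2 = the prefix used when renaming (e.g. Hostizzle).
fix_cert_paths() {
    conf=$1 prefix=$2
    sed -i \
        -e "s|^ca .*|ca /etc/openvpn/${prefix}_ca.crt|" \
        -e "s|^cert .*|cert /etc/openvpn/${prefix}_client.crt|" \
        -e "s|^key .*|key /etc/openvpn/${prefix}_client.key|" \
        -e "s|^tls-auth [^ ]*|tls-auth /etc/openvpn/${prefix}_ta.key|" \
        "$conf"
}
```

For example: fix_cert_paths /etc/openvpn/Hostizzle.conf Hostizzle (note that the tls-auth rule only replaces the file name, so a trailing direction parameter such as "1" is preserved).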

Save the edited config file. You can then run the OpenVPN client against the VPNBook or Hostizzle server to test that all is well:

     su
     /usr/local/sbin/openvpn /etc/openvpn/VPNBook-Euro1-TCP80.conf

or

     su
     /usr/local/sbin/openvpn /etc/openvpn/Hostizzle.conf

You'll see a series of messages detailing the steps that OpenVPN takes to establish the connection to the server. If all goes well, you can see that the connection is properly set up by opening a separate terminal window and entering:

     /sbin/ifconfig

You should see something like:

     tun0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-...
           inet addr:10.60.0.1  P-t-P:10.60.0.1  Mask:255.255.0.0
           UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:100 
           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

When the VPN tunnel comes up, OpenVPN automatically adds a route to the tunnel and makes it the default gateway. You can see this with:

     /sbin/route

Your results should look something like this:

     Kernel IP routing table
     Destination     Gateway        Genmask         Flags Metric Ref   Use Iface
          .
          .
          .
     10.60.0.0       *              255.255.0.0     U     0      0       0 tun0
     default         10.60.0.1      128.0.0.0       UG    0      0       0 tun0
     128.0.0.0       10.60.0.1      128.0.0.0       UG    0      0       0 tun0

If all is well, you should be able to ping the gateway:

     ping -c 2 10.60.0.1

A real test is to do a traceroute to a node that is omnipresent, for example:

     traceroute google.com

If the traceroute returns nodes that aren't normally in your route to the big eye in the sky, OpenVPN is working. You can verify this fact by looking up the IP addresses in the traceroute results at either of these Web sites:

     http://ip2location.com/demo
     http://www.geobytes.com/IpLocator.htm?GetLocation

To shut down the tunnel, simply enter <Ctrl-C> on the terminal that is connected to the openvpn command and it should shut down cleanly, removing the routes through the tunnel device and the tunnel device itself. You can check this by replaying the /sbin/route and /sbin/ifconfig commands and observing that the tun device and routing is gone.

For each VPN connection that you wish to use, you will need to duplicate the steps above, copying the certificates and key that are supplied to you by the VPN service or that you get from your system administrator (or that you created when you set up the VPN server, if you are the system administrator). For example:

     su
     cp ca.crt /etc/openvpn/MyVPN_ca.crt
     cp client.crt /etc/openvpn/MyVPN_client.crt
     cp client.key /etc/openvpn/MyVPN_client.key
     chmod u=rw,go= /etc/openvpn/MyVPN*

Then, you'll have to create a config file for the VPN connection (if one isn't supplied by the VPN service). If a config file is supplied by the VPN service, you should use your favorite editor to change MyVPN.conf, renaming the certificate/key files to reflect the names used when they were copied to /etc/openvpn. Otherwise, if you need to start from scratch, copy the client sample configuration file from the OpenVPN install directory:

     su
     cp .../openvpn-2.3.1/sample-config-files/client.conf /etc/openvpn/MyVPN.conf
     chmod u=rw,go= /etc/openvpn/MyVPN.conf

Once you've copied the sample configuration file, use your favorite editor to make any changes necessary, as noted in the file's comments, but especially the changes to the certificate file names:

          .
          .
          .
     ca /etc/openvpn/MyVPN_ca.crt
     cert /etc/openvpn/MyVPN_client.crt
     key /etc/openvpn/MyVPN_client.key
          .
          .
          .

Save the edited config file. As above, you can then run the OpenVPN client against the new VPN server like this:

     su
     /usr/local/sbin/openvpn --config /etc/openvpn/MyVPN.conf

If you're happy with simply starting and stopping the tunnel whenever you need it, you can do it from the command line using the command shown above. To stop the client, either press <Ctrl-C> or kill the PID, if you started the tunnel using "&".

Note that OpenVPN adds routes through the tunnel for all traffic when it comes up. Maybe this will work for you but, in all probability you'll want to set up routing yourself. If this is the case, you can start the tunnel like this:

     su
     /usr/local/sbin/openvpn --route-noexec --config /etc/openvpn/MyVPN.conf

Then, consult the section "Routing Traffic Through a VPN Connection" for notes on how to selectively route only the traffic that you want through your VPN connection, instead of all the traffic.

Also, starting and stopping your OpenVPN tunnel using commands from the command line is fairly rudimentary. The "Starting/Stopping an OpenVPN Tunnel and Routing Traffic Through It" section shows the scripts that will allow you to bring up the tunnel and shut it down, as well as altering the routing table to route traffic through the tunnel and modifying the firewall rules to ensure that bringing up the tunnel doesn't open up the system to any security breaches.

PPTP Client

The PPTP Client can be used to create a VPN tunnel between the client system and a Microsoft (or other) VPN server that uses the PPTP protocol. This will allow you to route traffic through the VPN tunnel to the VPN server from the client system.

Before you proceed any further, if you have an OS with a kernel version below 2.6.15 or a pppd below version 2.4.2, see the notes at http://pptpclient.sourceforge.net/#tryit for more information on what to do. Alternately, the latest versions of commonly-available operating systems (e.g. CentOS 5.5 or Ubuntu 10.04) have kernels that are fine so you might want to consider an upgrade as your first step.

Begin by downloading the latest PPTP Client source from the downloads page at:

     http://sourceforge.net/projects/pptpclient/files/

Note that you'll have to look in the "All Files" section under the portion of the tree that contains the version you want to use. At the bottom of the tree you should see the straight source tar file which will look something like this:

     pptp-1.7.2.tar.gz

Once you've downloaded the package, build and install it like this:

     tar -xvzf pptp-1.7.2.tar.gz
     cd pptp-1.7.2
     make
     su
     make install

After you've installed the pptp package and before proceeding any further, you should check that the iproute package is installed on your system, since pptp uses the ip command, which is included in the iproute package. You can simply look for /bin/ip or /sbin/ip to verify that the iproute package is installed. If it isn't, install it like this:

     su
     yum install iproute

Or, if you are using a Debian-based system, such as Ubuntu, install it like this:

     sudo apt-get install iproute

Next, add a symlink for /bin/ip that points to /sbin/ip, if there isn't one already or if ip isn't already installed in /bin (i.e. on Debian-based systems):

     su
     ln -s /sbin/ip /bin/ip

This is required because the pptp code includes a hard-coded reference to /bin/ip in the module routing.c. On Debian-based systems, ip is actually installed in /bin, and a symlink from /sbin/ip is thoughtfully provided for those of us who know that ip is really supposed to be in /sbin. However, on Red Hat-based systems such as CentOS, ip is installed in /sbin, where it is supposed to be, but no symlink is provided from /bin. Consequently, pptp does not work properly and you will see the message "sh: /bin/ip: No such file or directory" when you start up the pptp tunnel via pppd. The alternative to adding the symlink is to fix the hard-coded reference to /bin/ip in routing.c and rebuild/reinstall pptp.
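
A guarded version of the symlink command, which only acts when ip exists under sbin but is missing from bin, can be sketched as follows (the ensure_ip_symlink function name and its directory parameters are ours, added so the logic can be exercised outside /):

```shell
#!/bin/sh
# Create the bin/ip -> sbin/ip symlink only when it is actually needed:
# ip exists under sbin but nothing named ip exists under bin.  The
# directories are parameters (normally /bin and /sbin) so the logic can
# be tested against scratch directories first.
ensure_ip_symlink() {
    bindir=$1 sbindir=$2
    if [ ! -e "$bindir/ip" ] && [ -x "$sbindir/ip" ]; then
        ln -s "$sbindir/ip" "$bindir/ip"
    fi
}
```

For example, as root: ensure_ip_symlink /bin /sbin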

The install step will copy an "options.pptp" file into /etc/ppp. You should edit this file and check to make sure that the following options are present (they usually are):

     lock
     noauth
     nobsdcomp
     nodeflate

You may need to comment out some or all of these options, depending on which protocol the server is using to authenticate the username/password used to set up the tunnel:

     refuse-pap
     refuse-eap
     refuse-chap
     refuse-mschap

With all of the above options left in place, pptp will only accept MSCHAP-V2.

If you wish to use MPPE encryption, which you probably will (note that MPPE requires the use of MSCHAP-V2 during authentication), be aware that there have been multiple versions of PPP with encryption support deployed. You should be able to figure out which one (CentOS uses ppp_mppe.ko, for example) by running this command as root:

     find /lib/modules -name ppp_mppe\*

If the find turns up one or more modules named "ppp_mppe.ko" you should include this option in your "options.pptp" file:

     # Require MPPE 128-bit encryption
     require-mppe-128

If the find turns up one or more modules named "ppp_mppe_mppc.ko" you should include this option in your "options.pptp" file:

     # Require MPPE 128-bit encryption
     mppe required,stateless
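
The choice between the two option lines can be automated. A sketch, with the module path passed as a parameter (the mppe_option function name is ours):

```shell
#!/bin/sh
# Map the name of the MPPE kernel module found under /lib/modules to
# the option line that belongs in /etc/ppp/options.pptp.
mppe_option() {
    case $1 in
        *ppp_mppe_mppc.ko*) echo "mppe required,stateless" ;;
        *ppp_mppe.ko*)      echo "require-mppe-128" ;;
        *)                  echo "no MPPE module found" >&2; return 1 ;;
    esac
}
```

For example, feed it the result of the find shown above: mppe_option "$(find /lib/modules -name 'ppp_mppe*' | head -n 1)"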

If it doesn't exist, create a /etc/ppp/chap-secrets file. It should only have permissions for root, since it holds usernames and passwords. The file should look like this:

     -rw------- 1 root root 256 Aug 23  2008 /etc/ppp/chap-secrets

Add a line to this file with the username and password that is used to login to the VPN server. If the username requires a domain name the line should look like this:

     vpndomain\\vpnuser PPTP secretpassword *

If the username does not require a domain name the line should look like this:

     vpnuser PPTP secretpassword *

In both cases, above, substitute your actual VPN authentication domain (if used), VPN username and the actual secret password for the three placeholders shown.

Note that, if the passwords contain any special characters, quote them in double quotes (as described in "man pppd").
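
Creating the chap-secrets file with the right permissions and adding the line can be sketched as follows; the add_chap_secret function name is ours, and the file path is a parameter so you can test against a scratch file before touching /etc/ppp/chap-secrets:

```shell
#!/bin/sh
# Add a username/password line to a chap-secrets file, creating the
# file with root-only permissions if it doesn't already exist.
# $1 = secrets file (normally /etc/ppp/chap-secrets), $2 = username
# (with the vpndomain\\ prefix, if one is required), $3 = password.
add_chap_secret() {
    secrets=$1 user=$2 pass=$3
    touch "$secrets"
    chmod u=rw,go= "$secrets"              # -rw-------, as shown above
    # Quote the password so special characters survive (see "man pppd").
    printf '%s PPTP "%s" *\n' "$user" "$pass" >> "$secrets"
}
```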

Create a file for the tunnel in the /etc/ppp/peers subdirectory, for example as shown for the "mytunnel" connection here:

/etc/ppp/peers/mytunnel:

     pty "/usr/sbin/pptp vpnserver --nolaunchpppd"
     name vpndomain\\vpnuser
     remotename PPTP
     require-mppe-128
     file /etc/ppp/options.pptp
     ipparam mytunnel

The "vpnserver" parameter should be replaced with the actual server name or IP address of the VPN server.

On the "name" line, the VPN authentication domain (if used) and the VPN user should look exactly as they are entered in the chap-secrets file. Pick one of:

     vpndomain\\vpnuser

or

     vpnuser

If you won't be using MPPE encryption, omit the line that reads:

     require-mppe-128

Finally, the "mytunnel" parameter should be replaced by the actual name of the tunnel used to name this tunnel in the /etc/ppp/peers subdirectory.

And, as with the chap-secrets file, this file should only have permissions for root, since it holds information about the VPN tunnel. The file should look like this:

     -rw------- 1 root root 57 Oct 25 23:11 mytunnel
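
Generating a peers file like the one above, with root-only permissions from the start, can be sketched as follows (the make_peer function name is ours; drop the require-mppe-128 line from the here-document if you won't be using MPPE):

```shell
#!/bin/sh
# Write a PPTP peers file like the "mytunnel" example above.
# $1 = peers directory (normally /etc/ppp/peers), $2 = tunnel name,
# $3 = VPN server name or IP, $4 = name entry, exactly as it appears
# in chap-secrets.
make_peer() {
    dir=$1 tunnel=$2 server=$3 user=$4
    umask 077    # so the file comes out -rw------- (call the function
                 # in a subshell if you don't want the umask to stick)
    cat > "$dir/$tunnel" <<EOF
pty "/usr/sbin/pptp $server --nolaunchpppd"
name $user
remotename PPTP
require-mppe-128
file /etc/ppp/options.pptp
ipparam $tunnel
EOF
}
```

For example, as root: make_peer /etc/ppp/peers mytunnel vpnserver 'vpndomain\\vpnuser'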

At this point, you should have everything set up that is necessary to establish a VPN tunnel with your remote VPN server. You can test it with the following command:

     /usr/sbin/pppd call mytunnel mtu 1435 mru 1435 debug dump logfd 2 nodetach

The "mytunnel" parameter should be replaced by the actual name of the tunnel that you used in the /etc/ppp/peers subdirectory. The debug output will show the tunnel coming up, the logon being negotiated, etc. If anything goes wrong, the tunnel will be terminated and you'll see an error message. Otherwise, the tunnel will stay up, waiting for you to make the next move. Terminate the tunnel by pressing <Ctrl-C>.

If you're happy with simply starting and stopping the tunnel whenever you need it, you can copy the "pon" and "poff" scripts that are supplied with PPP:

     su
     cp /usr/share/doc/ppp-2.4.4/scripts/pon /etc/ppp
     cp /usr/share/doc/ppp-2.4.4/scripts/poff /etc/ppp
     chmod ugo+x /etc/ppp/pon /etc/ppp/poff

Don't forget to give them execute permissions. Then, you can start/stop the VPN tunnel like this:

     /etc/ppp/pon mytunnel mtu 1435 mru 1435
     /etc/ppp/poff mytunnel

Once the VPN tunnel is set up, routing packets through the VPN tunnel can be as simple as setting up the default gateway:

     su
     /sbin/route add default gw 195.12.34.56

In this example, we're assuming that, when the VPN tunnel came up, the actual IP address of the tunnel was "195.12.34.56". You can check that the route to the VPN tunnel is correctly set up like this:

     /sbin/route

Note that, if you do happen to check your routing tables, you may see a route to 169.254.0.0. Huh? Where'd that come from? Well, it's a new feature brought to you by those ever-thinking, clever guys in distroland. Somebody thought it would be real cool to add a default route to this bogus address so there would always be something in the routing table, even if you forget to configure things properly (it probably has something to do with that gem of an application, Network Manager, or maybe even Windoze). Nice! Thanks for helping me out. All this does is confuse the issue. If you want to get rid of it, you can add a parameter to the network file in sysconfig.

/etc/sysconfig/network:

     NOZEROCONF=yes

This will get rid of the 169.254.0.0 route and make sure that all of your packets only go where you say they should.
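
If you script your setup, the parameter can be added idempotently; a sketch (the disable_zeroconf function name is ours, and the file is a parameter so it can be tested on a scratch copy):

```shell
#!/bin/sh
# Append NOZEROCONF=yes to a sysconfig network file unless some
# NOZEROCONF setting is already present.
# $1 = the file (normally /etc/sysconfig/network).
disable_zeroconf() {
    grep -q '^NOZEROCONF=' "$1" || echo 'NOZEROCONF=yes' >> "$1"
}
```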

Incidentally, if you don't want to add a permanent route (by adding a line to your rc.local script, for example) or want to add the route by hand every time you bring up the VPN tunnel, you can automate the process of adding/removing such a route by modifying the ip-up.local and ip-down.local scripts.

These two scripts are called every time pppd either brings up or shuts down a ppp connection. You can simply add a few lines at the start of each one to check the ppp connection name and, if it is greater than 1, call a special VPN up/down script. This will allow you to have a couple of regular PPP connections and then start up a VPN tunnel on a special PPP connection (which may even route the tunnel through one of the regular PPP connections). To start the tunnel, you would use:

     su
     /usr/sbin/pppd call mytunnel unit 2

The modifications to the two scripts (whose full description is provided elsewhere in these notes), are as follows:

/etc/ppp/ip-up.local:

     #!/bin/sh
     #
     # This shell script is called by pppd whenever it brings up a PPP connection
     # to the remote host.  Its purpose is to do any work that is required to
     # complete the PPP connection such as adding firewall rules specific to the
     # connection or routes to remote host.
     #
     # Since pppd is also used to bring VPN tunnels up and down via pptp, as well
     # as for regular ppp connections, the first thing this script does is check
     # to see if the connection is a VPN tunnel.  By convention, we will always
     # start VPN tunnels so that they use an interface name of ppp2 or greater.
     # This is typically done with a command like this:
     #
     #      /usr/sbin/pppd call my-tunnel unit 2
     #
     # If this script sees such a ppp connection, it will invoke vpn-updown to
     # take care of starting up the VPN tunnel.  Once vpn-updown completes,
     # the script exits.
     #
     # Otherwise, for regular ppp connections this script's purpose is to add
     # into the router's routing tables a default routing to the gateway
     # machine at the other end of the PPP link.  This will cause all
     # non-specifically routed packets to be passed to the gateway at the other
     # end of the PPP link for forwarding to the Internet.
     #
     # This script also registers the regular ppp connection's device via the
     # firewall rules script to change the rules so that they use the correct
     # active device when the link is up.
     #
     # The parameters that pppd passes to this script are (see pppd(8)):
     #
     #      <iface> <ttydev> <speed> <local-ip> <remote-ip> <ipparam>
     #
     #
     # First, since pppd can start VPN tunnels via pptp, we'll check to see if
     # this is a VPN tunnel and act accordingly, if it is.
     #
     if [ "$1" = ppp2 ] ; then
         /etc/ppp/vpn-updown $1 $4 $5
         exit 0
     fi
          .
          .
          .

Note that, for many PPP configurations, the ip-down.local script is not necessary.  If that's the case, you simply use the code shown here, up to the ellipsis, as the entire script.

/etc/ppp/ip-down.local:

     #!/bin/sh
     #
     # This shell script is called by pppd whenever it brings down a PPP
     # connection to a remote host.  Its purpose is to undo anything that was
     # done by ip-up.local when the connection was brought up.
     #
     # There is nothing to be done for regular PPP connections but pppd is also
     # used to bring VPN tunnels up and down via pptp.  By convention, we always
     # start VPN tunnels so that they use an interface name of ppp2 or greater.
     # If this script sees such a PPP connection, it will invoke vpn-updown to
     # take care of shutting down the VPN tunnel.
     #
     # The parameters that pppd passes to this script are (see pppd(8)):
     #
     #      <iface> <ttydev> <speed> <local-ip> <remote-ip> <ipparam>
     #
     #
     # If this is a VPN tunnel, invoke vpn-updown to handle it.
     #
     if [ $1 = 'ppp2' ] ; then
         /etc/ppp/vpn-updown $1
         exit 0
     fi
          .
          .
          .

All of the work of bringing up and shutting down the VPN connection is then done by a single script, which decides on what to do based on the parameters passed to it, either by ip-up.local or ip-down.local. You should put this script in the PPP directory, with all of the other PPP scripts.

/etc/ppp/vpn-updown:

     #!/bin/bash
     #
     # Script to handle the bringup and shutdown of a VPN tunnel started by
     # pppd/pptp.
     #
     # This script is called by the ppp ip-up.local script when it determines
     # that a VPN tunnel is being brought up.  It decides this when it sees an
     # interface with a number greater than one (e.g. ppp2, ppp3).  This number
     # is chosen when the VPN tunnel is started by passing a number to pppd via
     # the unit parameter, like this:
     #
     #      /usr/sbin/pppd call my-tunnel unit 2
     #
     # When ip-up.local calls this script, it must pass the VPN interface name
     # (e.g. ppp2), the interface's local IP address (e.g. 192.168.1.99), and
     # VPN tunnel's remote IP address (e.g. 93.182.128.101) as the first three
     # parameters.  In that case, this script will set up the default route
     # through the VPN tunnel.
     #
     # This script is also called by the ppp ip-down.local script when it
     # determines that a VPN tunnel is being shut down.  As with ip-up.local,
     # ip-down.local decides this when it sees an interface with a number
     # greater than one (e.g. ppp2, ppp3).
     #
     # When ip-down.local calls this script, it must pass the VPN interface
     # name (e.g. ppp2) as the only parameter.  The fact that the interface's
     # local IP address is not passed indicates to this script that it should
     # shut down the VPN tunnel, rather than start it up.  In that case, this
     # script will remove the default route through the VPN tunnel.
     #
     #
     # Pick up the VPN interface that was created by pppd/pptp.
     #
     VPN_INTERFACE=$1
     VPN_INTERFACE_IP=$2
     VPN_REMOTE_IP=$3
     #
     # If the VPN tunnel is being brought up, set everything up.
     #
     if [ x"$VPN_INTERFACE_IP" != x ] ; then
         /sbin/route add default gw $VPN_REMOTE_IP
     #
     # Otherwise, shut everything down.
     #
     else
         /sbin/route del default
     fi

Starting and stopping your PPTP VPN tunnel using commands from the command line and adding a default route via the above script is fairly rudimentary. You can consult the section "Routing Traffic Through a VPN Connection" for notes on how to selectively route only the traffic that you want through your VPN connection, instead of all the traffic. And, the "Starting/Stopping a PPTP Tunnel and Routing Traffic Through It" section shows the scripts that you'll need to bring up the tunnel and shut it down, as well as to alter the routing table to route traffic through the tunnel and modify the firewall rules to ensure that bringing up the tunnel doesn't open up the system to any security breaches.

Routing Traffic Through a VPN Connection

If the system on which PPTP or OpenVPN is running is a router, and you only want to route packets from it or from certain machines connected to it or perhaps only certain types of traffic (e.g. all mail traffic outbound to SMTP on port 25) through the VPN tunnel, you can use iproute2 to accomplish this. This method works, regardless of what type of VPN tunnel you are using but, if you are using OpenVPN, you must use the "--route-noexec" parameter when you bring up the tunnel to prevent OpenVPN from setting up the routes itself.

The first step, which is the same regardless of which types of packets you wish to route through the VPN tunnel, consists of adding a special table for the VPN tunnel to the routing tables. Use your favorite editor to update or add:

/etc/iproute2/rt_tables:

     #
     # reserved values
     #
     255     local
     254     main
     253     default
     0       unspec
     #
     # local
     #
     #1      inr.ruhep
     200     vpn.tunnel
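
If you script this step, the table entry can be appended idempotently; a sketch (the add_rt_table function name is ours):

```shell
#!/bin/sh
# Append a routing table entry to rt_tables unless the table name is
# already listed.  $1 = rt_tables file, $2 = table number, $3 = name.
add_rt_table() {
    grep -qw "$3" "$1" || printf '%s\t%s\n' "$2" "$3" >> "$1"
}
```

For example, as root: add_rt_table /etc/iproute2/rt_tables 200 vpn.tunnel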

Then, add a default gateway to the "special" table so that any packets that are routed through it will actually be sent down the VPN tunnel. For PPTP, this will look something like this:

     su
     /sbin/ip route add default via 195.12.34.56 dev ppp2 table vpn.tunnel
     /sbin/ip route flush cache

Or, for OpenVPN, this will look something like this:

     su
     /sbin/ip route add default via 10.1.1.11 dev tun0 table vpn.tunnel
     /sbin/ip route flush cache

You should be able to observe that this default routing is in effect as follows:

     /sbin/ip route list table vpn.tunnel

Which should produce a result like this (for PPTP):

     default via 195.12.34.56 dev ppp2

or like this (for OpenVPN):

     default via 10.1.1.11 dev tun0

Now that you have a "special" routing table that can be used to route only the packets that you select for transport through the VPN tunnel, one method that you can use to choose those packets is to tell iproute2 that all packets from certain machines should be routed through that table. In this case, you will be routing packets based on their source IP address. By way of example, we'll assume that you want to route all packets from a single machine that is connected to the router:

     su
     /sbin/ip rule add from 192.168.11.4 table vpn.tunnel

You can verify that the special table is in effect as follows:

     /sbin/ip rule ls

You should see something like this:

     0:      from all lookup local
     32765:  from 192.168.11.4 lookup vpn.tunnel
     32766:  from all lookup main
     32767:  from all lookup default

Note that, in the above examples, we assumed that, when the VPN tunnel came up, the interface assigned to it was either "ppp2" or "tun0" and that the corresponding IP address of the tunnel was either "195.12.34.56" or "10.1.1.11", depending on which type of tunnel was started. Naturally, whenever you bring up a VPN tunnel, the actual device name and IP address will be assigned dynamically. While you can certainly add and remove the default route and routing rule or rules by hand whenever you bring up a VPN tunnel, the vpn-updown script (shown in either the "Starting/Stopping a PPTP Tunnel and Routing Traffic Through It" or "Starting/Stopping an OpenVPN Tunnel and Routing Traffic Through It" sections) can be used to add and remove such rules and the default route automatically.

If, as above, the system on which PPTP or OpenVPN is running is a router but, instead of routing all traffic from certain machines, you only want to route certain types of traffic passing through it (e.g. all SMTP traffic), you can still use iproute2 to accomplish this. In fact, we prefer to use this approach to route traffic through the VPN tunnel even when the routing selection is based solely on source IP address and could be accomplished by a routing rule, as shown above, because this method is much more flexible, in our opinion.

Plus, it has one extra advantage that using a routing rule does not. This technique allows packets that are supposed to be routed through the VPN tunnel to be dumped instead of being delivered through some other route or specifically routed somewhere else, if and when the VPN tunnel is down. This feature is especially important if, for example, you are using the VPN tunnel to obscure the source of your Internet traffic by routing it to a remote location. You certainly don't want packets being delivered through your regular Internet connection, if your VPN tunnel disconnects.

As above, you should begin by adding a special table for the VPN tunnel to the routing tables, with your favorite editor. If you won't be simultaneously employing the previous approach and this approach, you can reuse the same table name as above, otherwise you should pick a different name, as we've shown in this example:

/etc/iproute2/rt_tables:

     #
     # reserved values
     #
     255     local
     254     main
     253     default
     0       unspec
     #
     # local
     #
     #1      inr.ruhep
     200     vpn.tunnel
     201     vpn.marked

Now that you again have your "special" routing table, you can use netfilter to mark all of the packets that should be routed through the VPN tunnel so that they can be properly routed, based on their marks, by iproute2. Let's assume, you're looking to route most of the traffic from a particular machine through the tunnel but, as an added wrinkle, you want traffic bound for a couple of specific IP addresses (in this case your DNS servers) to be left alone. Add some iptables rules like this:

     su
     /sbin/iptables -N VPN_TUNNEL -t mangle
     # Allow our DNS servers.
     /sbin/iptables -A VPN_TUNNEL -t mangle -d 151.203.0.84 -j RETURN
     /sbin/iptables -A VPN_TUNNEL -t mangle -d 204.122.16.8 -j RETURN
     /sbin/iptables -A VPN_TUNNEL -t mangle -d 216.231.41.2 -j RETURN
     # Mark the packet for tunneling.
     /sbin/iptables -A VPN_TUNNEL -t mangle -j MARK --set-mark 1
     # Add each of the systems that are to be routed through the VPN tunnel.
     /sbin/iptables -A PREROUTING -t mangle -i eth0 -s 192.168.1.84 -j VPN_TUNNEL
     /sbin/iptables -A PREROUTING -t mangle -i eth1 -s 192.168.11.4 -j VPN_TUNNEL

Once you have the rules set as you wish, you can save them with:

     su
     iptables-save >mytables.sav

If you need to restore the configuration, you can do so with:

     su
     iptables-restore <mytables.sav

Marking packets using other criteria is also possible. For example, if you wanted to route all of your incoming SMTP traffic through the VPN tunnel to an email server at a secure/remote location, you can mark the packets like this:

     su
     /sbin/iptables -A PREROUTING -t mangle -i eth0 -p tcp --dport 25 \
         -j VPN_TUNNEL
     /sbin/iptables -A PREROUTING -t mangle -i eth1 -p tcp --dport 25 \
         -j VPN_TUNNEL

Now that you have marking rules in place, you can cause all of the marked packets to be routed through your "special" routing table, using the mark that iptables (known to ip as the firewall, hence the tag "fwmark") adds to each packet. For PPTP, you should do something like this:

     su
     /sbin/ip route add default via 195.12.34.56 dev ppp2 table vpn.marked
     /sbin/ip route flush cache
     /sbin/ip rule add fwmark 1 table vpn.marked

Or, if you have an OpenVPN tunnel, you might do something like this:

     su
     /sbin/ip route add default via 10.1.1.11 dev tun0 table vpn.marked
     /sbin/ip route flush cache
     /sbin/ip rule add fwmark 1 table vpn.marked

You could even get creative by setting up multiple VPN tunnels, multiple "special" routing tables and default routes, and then marking different types of packets with different firewall marks. This would route one type of traffic down one VPN tunnel and another type of traffic down another VPN tunnel. Complicated network topologies and routes are quite possible.

If you are using NARC as your firewall, you can add rules, such as those shown above, to your narc-custom.conf file. The following lines will mark any packets from the machines whose traffic is to be routed through the VPN tunnel. As discussed above, if the packets aren't routed, they will be dumped:

/etc/narc/narc-custom.conf:

          .
          .
          .
     #
     # Prerouting rule to tag certain kinds of packets from certain machines
     # in the DMZ with a flag that can be used by iproute2 to route them via
     # a VPN tunnel to a remote system.
     #
     # First, the rule looks for all packets that aren't to be routed through
     # the tunnel.  Typically, these are packets that use protocols that
     # shouldn't be routed (e.g. DNS lookups or those that are already being
     # tunneled somewhere else) or packets bound for systems known to be OK.
     # For all such packets, the rule simply returns.
     #
     # If the packets are still of interest, all those from the machines to
     # be rerouted are marked with a flag of 1 to tell iproute2 to reroute
     # them through the tunnel.
     #
     $IPTABLES -N VPN_TUNNEL -t mangle
     #
     # Allow our DNS servers.
     #
     # Note that this can be dangerous if your intention is to hide who you
     # are, to the outside world, by virtue of tunneling all of your packet
     # traffic off to some remote VPN service.  If the system that is supposed
     # to be hidden inadvertently asks for an address from one of these DNS
     # servers and then starts accessing that address through the VPN, a clever
     # observer could put two and two together and figure out the real IP
     # address of the hidden system.
     #
     # Therefore, make sure that all systems that are supposed to be hidden do
     # not access any DNS servers except those that are approved for use with
     # the VPN provider, or disable this feature herein.  Otherwise, you may
     # experience a DNS leak that could expose your hidden system to the world.
     #
     $IPTABLES -A VPN_TUNNEL -t mangle -d 151.203.0.84 -j RETURN
     $IPTABLES -A VPN_TUNNEL -t mangle -d 151.203.0.85 -j RETURN
     $IPTABLES -A VPN_TUNNEL -t mangle -d 204.122.16.8 -j RETURN
     $IPTABLES -A VPN_TUNNEL -t mangle -d 216.231.41.2 -j RETURN
     #
     # Allow other VPN tunnels that are directly connected.
     #
     $IPTABLES -A VPN_TUNNEL -t mangle -d 123.45.67.89 -j RETURN
     $IPTABLES -A VPN_TUNNEL -t mangle -d 123.45.67.98 -j RETURN
     #
     # Since all packets from certain machines are sent here, just in case
     # locally-bound packets from those machines somehow get past this router,
     # don't tunnel any packets that are bound for our local network segments.
     #
     $IPTABLES -A VPN_TUNNEL -t mangle -d 192.168.1.0/24 -j RETURN
     $IPTABLES -A VPN_TUNNEL -t mangle -d 192.168.11.0/24 -j RETURN

     # Mark the packet for tunneling.
     $IPTABLES -A VPN_TUNNEL -t mangle -j MARK --set-mark 1
     #
     # Add a rule for each of the systems that are to be routed through the
     # VPN tunnel.  This will send all of those systems' packets through the
     # VPN prerouting chain of the mangle table.
     #
     # Note that you should not route packet traffic through the VPN tunnel
     # for any systems that are supposed to be hidden, if they have any
     # external ports open to the outside network (either through port
     # forwarding or through a direct connection, such as this system itself
     # has).  Pay particular attention that you don't route any packet traffic
     # for this system itself through the VPN connection, if it is routing
     # packets for any attached hidden system.
     #
     # If you do this, there is a vulnerability that can allow an outside
     # observer to determine the actual IP address of the hidden system by
     # probing its external ports with UDP packets that will be answered
     # through the VPN tunnel.  See the full description of the problem in the
     # /etc/openvpn/tun-updown script.
     #
     $IPTABLES -A PREROUTING -t mangle -i eth0 -s 192.168.1.84 -j VPN_TUNNEL
     $IPTABLES -A PREROUTING -t mangle -i eth1 -s 192.168.11.4 -j VPN_TUNNEL
     #
     # If you wish, you can send a whole range of systems through the VPN
     # tunnel.  For example, this rule routes all of the hosts that are
     # assigned IP addresses by DHCP, on a DMZ segment, through the VPN tunnel
     # by checking for any source address in the range used by DHCP.
     #
     $IPTABLES -A PREROUTING -t mangle -i eth1 -m iprange \
         --src-range 192.168.11.150-192.168.11.200 -j VPN_TUNNEL
     #
     # If the VPN tunnel is ever down, we don't want packets marked for the
     # tunnel to go anywhere since, presumably, they were being tunneled for
     # a good reason.  If we ever get as far as the forward queue (i.e. the
     # packets are just about to be delivered) and we find a marked packet
     # bound for the external interface, we should reject it.
     #
     # Normally, marked packets shouldn't make it this far, bound for the
     # external interface, because iproute2 will take note of their marking
     # and reroute them to the VPN tunnel (when it is up).
     #
     # Note that all attempts to deliver VPN packets to the external interface
     # are logged with a prefix of VPN.
     #
     $IPTABLES -N VPN_REJECT
     $IPTABLES -A VPN_REJECT -j LOG --log-level $NORM_LOG_LEVEL \
         --log-tcp-options --log-ip-options --log-prefix "VPN_REJECT "
     $IPTABLES -A VPN_REJECT -j REJECT
     # Hook the rule in to the forward chain.
     $IPTABLES -I FORWARD -m mark --mark 1 -o $EXTERNAL_INTERFACE -j VPN_REJECT
          .
          .
          .

Note that in the above example, the changes that we've made to narc-custom.conf allow other VPN tunnels that are bound directly from the machines we're checking to pass directly through to outside VPN servers without being routed through a second tunnel (i.e. the one we're setting up).

If you use this feature, you'll note that the rules that check for other, directly connected VPN tunnels use specific IP addresses, not symbolic host names. This is because using symbolic names in an iptables rule could cause problems (consider what would happen to the packets bound for a DNS server, requesting the lookup of a name, as they traversed the iptables filters -- can you say "recursion"?). However, VPN connections are usually established using a symbolic name, and the IP address that the symbolic name maps to could change (after you used DNS to look up its mapped IP address and set it in your NARC custom configuration).

This being the case, if you want to punch holes through for directly connected VPN tunnels, you might want to create a script that checks to see if the VPN servers are all where you think they should be and sends you email if anything changes. The script can be run at regular intervals as a cron job:

/etc/ppp/vpncheck or /etc/openvpn/vpncheck:

     #!/bin/bash
     #
     # Check that the VPN tunnel servers still have the same IP addresses as
     # expected.  The reason we do this is because the rules used by iptables to
     # allow unhindered routing to the VPN tunnel servers should not use
     # symbolic machine names (think about doing a remote DNS lookup in the
     # component that is routing all of the system's packets -- potentially not
     # a good situation).  Instead, the rules use exact IP addresses so, if an
     # IP address changes, unhindered routing to one or more of the VPN tunnel
     # servers can fail silently.
     #
     # If any IP doesn't match what is expected, send email to root.
     #
     # Note that this script may be passed an argument of "debug" to cause
     # debugging information to be dumped to stdout.  Normally, it runs silently
     # and sends email to root if an error occurs.
     #
     # Note that this script uses the SERVERROLE setting in
     # /etc/sysconfig/clustering to determine which server it is running on.  If
     # it determines that it is running on a secondary server, it does nothing.
     # If no setting is found, the default is to act as a standalone server.
     #
     ############################################################################
     #
     # The list of VPN tunnel servers and their IP addresses (see
     # /etc/narc/narc-custom.conf).
     #
     VPNServs=("vpn1.mydomain.com" "vpn2.mydomain.com")
     VPNAddrs=(  "123.45.67.89"      "123.45.67.98"   )
     #
     # Debugging flag.
     #
     DebugFl="no"
     ############################################################################
     #
     # Check if a VPN server's DNS entry maps its expected IP address.
     #
     IsMappedOK()
     {
     #
     # Get the IP address that the server name maps to.  See if it is mapped OK.
     # We need to check this because the narc-custom.conf file must use real IP
     # addresses, not DNS addresses.  However, some VPN servers are subject to
     # remapping, so we need to check that they are where we think they are.
     #
     # Note that piping the result of host through sed twice handles the case
     # where the output of host looks like:
     #
     #      vpn.myhost.com is an alias for myhost.com.
     #      myhost.com has address 71.183.239.131
     #
     # Sometimes, the DNS address for the VPN server is aliased and can change
     # so we want to be able to look up the alias herein to ensure that the
     # actual VPN server is at the address we expected, not what it's aliased to.
     #
     ServMapped=`host $1 | sed "s/^. \([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\)/\1/"`
     ServMapped=`echo $ServMapped | sed "s/^. \([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\)/\1/"`
     if test $3 == "yes"; then
         echo Server $1, expected $2, got $ServMapped
     fi
     if test x"$ServMapped" == x"$2"; then
         return 0  #Success
     else
         return 1  #Failure
     fi
     }
     ############################################################################
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     #
     # If this is the Secondary server, we're all done.
     #
     if [ x"$SERVERROLE" == xSecondary ]; then
         exit 0
     fi
     #
     # See if we're debugging.
     #
     if test x"$1" == x"debug"; then
         DebugFl="yes"
     fi
     #
     # Loop through all the VPN servers and check them all.
     #
     Curr=0
     ServFail=0
     FailMsg=""
     for ServName in ${VPNServs[*]}; do
         #
         # Check if the server is mapped OK.
         #
         if (! IsMappedOK $ServName ${VPNAddrs[$Curr]} $DebugFl); then
             let ServFail+=1
             FailMsg="${FailMsg}Server $ServName is not mapped to expected address ${VPNAddrs[$Curr]}"$'\n'
         fi
         let Curr+=1
     done
     #
     # See how everything went.
     #
     if test $ServFail -gt 0; then
         if test $DebugFl == "yes"; then
             echo $FailMsg
         else
             /bin/mail -s "Problem with VPN tunneling" root <<ENDMSG
     There seems to be a problem with VPN tunneling.  Here's a synopsis:
     $FailMsg
     Perhaps you should check /etc/narc/narc-custom.conf
     ENDMSG
         fi
     fi

Starting/Stopping an OpenVPN Tunnel and Routing Traffic Through It

The previous sections showed how to bring up an OpenVPN tunnel and optionally how to use iproute2 to route traffic through the VPN tunnel. However, the setting up or knocking down of a VPN tunnel is actually a lot more complicated than was alluded to in those sections. This section outlines all of the steps that are necessary to route traffic from one or more locally-connected systems through an OpenVPN tunnel on a router system and back.

Starting the VPN tunnel on the router itself is simple enough but if packets from other locally-connected systems are also to be routed through it, the router's kernel packet filters must be adjusted to masquerade the outbound packets so that they all appear to be coming from the tunneled client (i.e. the router). This is because the remote VPN server has no knowledge of the other locally-connected, routed systems, just the tunneled client.

However, if masquerading is used, confusion within the kernel code about the proper route being used for the masqueraded/tunneled reply packets causes the router to incorrectly apply reverse path filtering to all packets received on the tunnel interface and, hence, drop them as Martians. Consequently, reverse path filtering must be disabled on the tunnel interface, each time the tunnel is brought up, or response packets will just disappear. The VPN tunnel will appear to be set up correctly but no connections from the locally-connected systems will work (although traffic from/to the router itself will work fine).

In many instances, a VPN tunnel can be used to create a connection to a remote location that is wild and woolly (e.g. a connection to the Internet that is located in a faraway place, far from prying eyes). But, bringing up such a VPN tunnel, especially if reverse path filtering is disabled, can allow all sorts of nasty stuff into the local system. This makes it prudent to alter the firewall (iptables) rules, whenever the VPN tunnel is brought up, to do spoof checking and handle the inbound reception and forwarding of VPN packets.

Once the VPN tunnel is set up, as noted in "Routing Traffic Through a VPN Connection", iptables can also be used to figure out which packets should be routed through the VPN tunnel and mark them accordingly. Then, iproute2 can be used to establish the routing tables that actually route the marked packets through the VPN tunnel and any replies back to the local systems. This means that the routing tables must be set up and knocked down whenever the VPN tunnel is brought up or down.

Finally, it would be great if the operation of the VPN tunnel could be automated so that it is started on the router whenever the router system is brought up and stopped whenever it is shut down. Thus, we need a startup script that ties all of these steps together.

In preparation for running an OpenVPN tunnel, if you haven't already done so, obtain the actual credentials that you will be using to make your OpenVPN tunnel connection and put them in the /etc/openvpn directory, as described in the "OpenVPN Client" section. You can test the tunnel connection manually before you proceed any further, if you wish. It's probably a good idea to keep things simple until the connection is known to be working OK.
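For example, a manual foreground test might look like this (the configuration file name is just an illustration; use whichever .conf file your provider gave you, and press Ctrl-C to shut the tunnel down when you're done):

```shell
# Run OpenVPN in the foreground with moderate verbosity so that connection
# progress (and any certificate or authentication errors) is visible.
cd /etc/openvpn
openvpn --config Cryptocloud.conf --verb 3
```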

The next preparatory step is to set up the "special" routing table that will be used by iproute2 to route packets through the VPN tunnel, as outlined in the "Routing Traffic Through a VPN Connection" section. The table itself will not be activated until the tunnel is actually brought up but it must be configured ahead of time.
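As a reminder, the table name must be registered with iproute2 before any ip commands can reference it. A one-time setup might look like the following sketch, where the table number 200 is an arbitrary choice (any unused number between 1 and 252 will do):

```shell
# Register the vpn.tunnel routing table name (one-time setup).  Without this
# entry, "ip route ... table vpn.tunnel" fails with an unknown-table error.
grep -q 'vpn\.tunnel' /etc/iproute2/rt_tables || \
    echo '200     vpn.tunnel' >> /etc/iproute2/rt_tables
```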

The last preparatory step is to determine what criteria you will use to select the packets that will be routed through the VPN tunnel and decide how you will implement the selection. As we said in the section on "Routing Traffic Through a VPN Connection", we prefer to use iptables to mark the packets that are to be routed through the OpenVPN tunnel and then direct the marked packets to the "special" routing table and thence through the VPN tunnel.

You can certainly set the rules directly with the iptables command and then save/restore the configuration with iptables-save and iptables-restore. Since our preferred method of setting up iptables is the NARC firewall package, we instead add the rules to the /etc/narc/narc-custom.conf file and allow NARC to set them up when the firewall is brought up, presumably at system startup.
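If you go the plain iptables route, the save/restore cycle might look something like this sketch (on RedHat-style systems, /etc/sysconfig/iptables is the conventional save file; adjust the path for your distribution, and note that the chain and addresses shown are just the examples used earlier):

```shell
# Create the marking chain and hook it up by hand, then save the whole
# ruleset so that it can be restored later.
iptables -t mangle -N VPN_TUNNEL
iptables -t mangle -A VPN_TUNNEL -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -i eth1 -s 192.168.11.4 -j VPN_TUNNEL
iptables-save > /etc/sysconfig/iptables
# At startup (or after a flush), restore the saved rules.
iptables-restore < /etc/sysconfig/iptables
```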

We now turn our attention to the tun-updown script which handles all of the alterations to the firewall rules and routing tables required when an OpenVPN tunnel is started or stopped. It includes code to: update the firewall rules in a manner that is consistent with the NARC firewall; masquerade the outbound traffic; and establish the routing of marked packets through the VPN tunnel. You may wish to put it in the /etc/openvpn directory which you already created to hold the OpenVPN credentials.

/etc/openvpn/tun-updown:

     #!/bin/bash
     #
     # Script to handle the bringup and shutdown of a VPN tunnel started by
     # OpenVPN.
     #
     # This script is called by the openvpn command as a result of its name being
     # passed to openvpn via the --up and --down parameters.  The openvpn command
     # should be invoked like this:
     #
     #      openvpn --route-noexec --script-security 2
     #              --up /etc/openvpn/tun-updown
     #              --down /etc/openvpn/tun-updown --down-pre ...
     #
     # Using these parameters, the openvpn command invokes tun-updown after the
     # "tun" device is configured and connected.  It also invokes tun-updown just
     # before the "tun" device is torn down, upon shutdown.  The environment
     # variable "script_type" is consulted to determine whether the tunnel is
     # being brought up or shut down, thereby allowing a single script to do all
     # of the work.
     #
     # Note that this script is meant only to be used with "tun" devices, not
     # "tap" devices.  If you are using a "tap" device for the tunnel, you will
     # need to alter this script (perhaps significantly).
     #
     # When OpenVPN runs this script, it calls it like this:
     #
     #      tun-updown tun_dev tun_mtu link_mtu ifconfig_local_ip ifconfig_rmt_ip
     #                 {init|restart}
     #
     # As a consequence of being called by the "--up" parameter, this script will
     # add special rules to the iptables netfilter tables that will assist with
     # firewalling all traffic on the VPN tunnel.  In addition, the regular
     # firewall rules will still be in effect and, in conjunction with the
     # special rules, they will ensure that all inbound VPN tunnel traffic is
     # properly firewalled.
     #
     # This script will then add a rule to the routing tables that will send
     # all traffic marked as VPN traffic to a special routing table which will
     # be set up with a default gateway that points to the VPN tunnel.  This
     # will route all non-local traffic marked as VPN traffic out the VPN
     # tunnel.
     #
     # As a result of being called by the "--down" parameter, this script undoes
     # all of the work it did during startup.  It removes the route to the
     # special VPN routing table that it added to the system's routing table and
     # then removes all of the special rules from the iptables netfilter tables
     # that were added to assist with firewalling traffic on the VPN tunnel.
     #
     ##########################################################################
     #
     # We add rules to the netfilter tables in the same manner that NARC does
     # so we need to know where its config file is.
     #
     CONFIG="/etc/narc/narc.conf"
     #
     # Minimum acceptable NARC config file version.
     #
     MINCONF_VERSION=0.6.3
     #
     # Pick up the VPN interface that was created by OpenVPN.
     #
     VPN_INTERFACE=$1
     VPN_INTERFACE_IP=$4
     VPN_REMOTE_IP=$5
     ##########################################################################
     #
     # Routine to allow us to bail out of failed rules, etc.
     #
     abortexit()
         {
         echo "Failed to set up iptables and/or iproute2."
         exit 1
         }
     ##########################################################################
     #
     # Load the NARC Configuration file.
     #
     if test -f $CONFIG ; then
         . $CONFIG
     else
         echo "Cannot find the NARC config file $CONFIG."
         exit 1
     fi
     #
     # Check for the iptables binary.
     #
     if ! test -f "$IPTABLES" ; then
         echo "The iptables binary $IPTABLES is missing."
         exit 1
     else
         $IPTABLES -V >/dev/null 2>&1 || BADBIN='yes'
         if [ "$BADBIN" == 'yes' ] ; then
             echo "The iptables binary $IPTABLES exists but it failed to run."
             echo "Try `which iptables`."
             exit 1
         fi
     fi
     #
     # Check if the config file is compatible with this script.
     #
     if [ `expr $CONF_VERSION \< $MINCONF_VERSION` == 1 ] ; then
         echo "The NARC config file $CONFIG is at an incompatible version."
         echo "The minimum version is $MINCONF_VERSION."
         echo "The config file is at $CONF_VERSION.  Please upgrade it."
         exit 1
     fi
     #
     # Set the SPOOF_CHK target.
     #
     if [[ "$LOG_DROPS" == 'yes' && "$LOG_SPOOF" == 'yes' ]] ; then
         SPOOF_TARGET='CUST_LOG'
     else
         SPOOF_TARGET='DROP'
     fi
     #
     # If the VPN tunnel is being brought up, set everything up.
     #
     if [ x"$script_type" == xup ] ; then
         #
         # Create the VPN_CHK chain.
         #
         # This chain does anti-spoofing checking for the reserved and private
         # IP addresses on the VPN tunnel.  It also checks for established
         # connections and lets their packets in.  All other packets are
         # dumped as invalid.
         #
         # This chain is hung off the INPUT chain for all inbound traffic on
         # the tunnel device.
         #
         $IPTABLES -N VPN_CHK || abortexit
         #
         # Check reserved networks.
         #
         if [ "$RESERVED_NETWORKS" != '' ] ; then
             echo -n "Enabling spoof checking on $VPN_INTERFACE for reserved \
                 network(s): "
             for network in $RESERVED_NETWORKS ; do
                 $IPTABLES -A VPN_CHK -s $network -i $VPN_INTERFACE \
                     -j $SPOOF_TARGET || abortexit
                 echo -n "$network "
             done
             echo "."
         fi
         #
         # Check private networks.
         #
         if [ "$PRIVATE_NETWORKS" != '' ] ; then
             echo -n "Enabling spoof checking on $VPN_INTERFACE for private \
                 network(s): "
             for network in $PRIVATE_NETWORKS ; do
                 IPVAL2=`echo $VPN_INTERFACE_IP | cut -d . -f 1`
                 NETVAL2=`echo $network | cut -d . -f 1`
              if [[ $IPVAL2 == 10 && $NETVAL2 == 10 ]] ; then
                  DONOTHING=1
              else
                  IPVAL=`echo $VPN_INTERFACE_IP | cut -d . -f 1,2`
                  NETVAL=`echo $network | cut -d . -f 1,2`
                  if [ $IPVAL != $NETVAL ] ; then
                      DONOTHING=0
                      $IPTABLES -A VPN_CHK -s $network -i $VPN_INTERFACE \
                         -j $SPOOF_TARGET || abortexit
                  else
                      DONOTHING=1
                  fi
              fi
              if [ "$DONOTHING" != 1 ] ; then
                  echo -n "$network "
              fi
          done
          echo "."
      fi
      #
      # Add the local IP address of the VPN tunnel too.
      #
      echo "Enabling spoof checking on $VPN_INTERFACE for tunnel IP: \
          $VPN_INTERFACE_IP."
      $IPTABLES -A VPN_CHK -s $VPN_INTERFACE_IP -i $VPN_INTERFACE \
          -j $SPOOF_TARGET || abortexit
      #
      # At this point, we can let responses to any packets that we sent out
      # back in through the tunnel.
      #
      $IPTABLES -A VPN_CHK -i $VPN_INTERFACE -m state \
          --state RELATED,ESTABLISHED -j ACCEPT || abortexit
      #
      # Anything that makes it this far is logged as failing the VPN check
      # and then dropped.  For the VPN tunnel, only established sessions are
      # allowed.
      #
      $IPTABLES -A VPN_CHK -i $VPN_INTERFACE -j LOG \
          --log-level $WARN_LOG_LEVEL --log-tcp-options --log-ip-options \
          --log-prefix "VPN_CHK " || abortexit
      $IPTABLES -A VPN_CHK -i $VPN_INTERFACE -j DROP || abortexit
      #
      # Hang the VPN tunnel spoof checking chain off the INPUT filter
      # chain.
      #
      # This chain also accepts any established packets, which lets them
      # in if they are OK.
      #
      echo "Hooking VPN rules into the INPUT and FORWARD filter chains."
      $IPTABLES -I INPUT -i $VPN_INTERFACE -j VPN_CHK || abortexit
      #
      # Now that we are checking to make sure that all incoming packets on
      # the VPN tunnel are legit, we can allow forwarding from the tunnel
      # to our internal network.  This turns on the tunnel.
      #
      $IPTABLES -I FORWARD -i $VPN_INTERFACE -j ACCEPT || abortexit
      #
      # Packets sent out on the VPN tunnel must be masqueraded so that the
      # recipient at the other end can send the answers back to this system.
      # It will then forward them to the original sender.
      #
      # Note that we actually use SNAT, which allows us to specify the IP
      # address to masquerade to, and which doesn't monitor the state of
      # the connection.  Otherwise, SNAT works just like MASQUERADE (or
      # vice versa).
      #
      $IPTABLES -A POSTROUTING -t nat -o $VPN_INTERFACE \
          -j SNAT --to $VPN_INTERFACE_IP || abortexit
      #
      # For some reason, when packets are masqueraded and sent down the
      # tunnel, the router gets confused and doesn't add the proper route
      # to its forwarding table.  This means that, if reverse path filtering
      # is turned on, replies to any packets sent out the VPN tunnel will be
      # dropped as Martians.  To fix this problem, we turn off reverse path
      # filtering on the VPN tunnel.
      #
      # Note that other firewall rules that we add herein should take care
      # of any real Martians.
      #
      # Also note that there's no need to undo this step upon shutdown, since
      # the rp_filter file is deleted when the VPN tunnel is torn down.
      #
      # However, beware that there is a VPN tunnel vulnerability that is
      # enabled by turning off reverse path filtering.  When it is turned
      # off, any applications (e.g. BitTorrent) that listen on an external
      # UDP port which is port forwarded from the edge router or otherwise
      # visible to the regular (i.e. not tunneled) Internet can be tricked
      # into revealing the actual IP address of its system, if its system
      # is routed through the VPN tunnel.
      #
      # This being the case, it is imperative that any systems whose packet
      # traffic is routed through the VPN tunnel do not have an
      # outward-facing port of any kind visible through the regular
      # Internet (i.e. not the VPN tunnel).  This especially includes the
      # system that is doing the routing and running the VPN tunnel.
      #
      # To ensure anonymity, do not route any of this system's packets
      # through the VPN tunnel, for any reason, and do not expose any ports
      # on any hidden system to the outside Internet via any kind of port
      # forwarding, mapping, proxying, etc.
      #
      echo 0 >/proc/sys/net/ipv4/conf/${VPN_INTERFACE}/rp_filter
      #
      # Use iproute2 to add a default route to the vpn.tunnel table to send
      # all packets bound for the WAN out through the VPN tunnel.  Then,
      # send all packets with the firewall (i.e. iptables) mark value of 1
      # to this table.
      #
      # Note that sometimes, depending on the type of connection, OpenVPN
      # will assign the same IP address to the local end of the tunnel as
      # it does to the remote end of the tunnel.  When it does this, it
      # passes a remote IP address to us that looks like 255.255.255.0.
      # So, we need to check for this situation and use the local interface
      # IP address instead of the remote address.
      #
      if [ x"${VPN_REMOTE_IP:0:7}" == "x255.255" ]; then
          echo "Routing all marked packets to the VPN tunnel via local IP."
          /sbin/ip route add default via $VPN_INTERFACE_IP dev $VPN_INTERFACE \
              table vpn.tunnel
      else
          echo "Routing all marked packets to the VPN tunnel via remote IP."
          /sbin/ip route add default via $VPN_REMOTE_IP dev $VPN_INTERFACE \
              table vpn.tunnel
      fi
      /sbin/ip route flush cache
      /sbin/ip rule add fwmark 1 table vpn.tunnel

     #
     # Otherwise, shut everything down.
     #
     else

      #
      # Tell iproute2 not to do anything special with marked packets and
      # not to send WAN packets out through the VPN tunnel.
      #
      echo "Suspending routing of all marked packets to the VPN tunnel."
      /sbin/ip rule del fwmark 1 table vpn.tunnel
      /sbin/ip route del default table vpn.tunnel
      /sbin/ip route flush cache
      #
      # Turn off masquerading for the VPN tunnel packets.
      #
      $IPTABLES -D POSTROUTING -t nat -o $VPN_INTERFACE \
          -j SNAT --to $VPN_INTERFACE_IP
      #
      # Disallow forwarding from the VPN tunnel.
      #
      echo "Unhooking VPN rules from the INPUT and FORWARD filter chains."
      $IPTABLES -D FORWARD -i $VPN_INTERFACE -j ACCEPT
      #
      # Unhook the VPN tunnel spoof checking chain from the INPUT chain.
      #
      $IPTABLES -D INPUT -i $VPN_INTERFACE -j VPN_CHK
      #
      # Delete all of the rules from the VPN tunnel spoof checking chain and
      # then get rid of the chain itself.
      #
      echo "Deleting all rules from VPN tunnel spoof checking chain."
      FOUNDRULE=1
      while [ "$FOUNDRULE" != 0 ] ; do
          $IPTABLES -D VPN_CHK 1 >/dev/null 2>&1 || FOUNDRULE=0
      done
      #
      # Get rid of the now-empty chain.
      #
      $IPTABLES -X VPN_CHK

     fi
     #
     # That's all she wrote.
     #
     exit 0

The next step is to create a startup script that will bring the OpenVPN tunnel up and down at system startup and shutdown. Or, if you prefer, you can simply use this script to start/stop the OpenVPN tunnel as required. While we're at it, we can set this script up to start/stop PPTP tunnels as well, since the process is basically the same. This yields a more versatile start/stop script.

/etc/init.d/vpnconnect:

     #!/bin/sh
     #
     # vpnconnect    This script starts or stops an OpenVPN or PPTP connection to
     #               a faraway land, via the primary server in the cluster, over
     #               the Internet.  Its purpose is to tunnel packets from/to
     #               certain local IP addresses (or even just certain types of
     #               traffic) transparently to those addresses' related local
     #               systems.  This lets the local systems send/receive packets
     #               from the remote, VPN-connected world without even knowing
     #               that a VPN tunnel is involved.  Also, packets from more
     #               than one local system can be tunneled to the remote world
     #               over a single VPN tunnel.
     #
     #               Since the VPN tunnel must run over a pre-existing
     #               Internet connection, this script must start after the WAN
     #               connection is brought up (i.e. after either the adsl or
     #               wanconnect script has run).  And, since starting up the
     #               VPN connection also alters the routing tables and changes
     #               the filtering done by iptables, this script must
     #               obviously be run after the iptables script.
     #
     #               Additionally, we may want the VPN connection to be up and
     #               running before we start any of the application-type
     #               services (e.g. sendmail, smb), if those services expect to
     #               be able to talk over the VPN tunnel to systems at the
     #               remote location.  Thus, we must carefully choose the
     #               sequence in which the VPN connection is started/stopped.
     #
     #               Finally, all of the real work is done by iptables, which
     #               marks the actual packets bound for the VPN tunnel with a
     #               firewall mark, and by iproute2, which routes the marked
     #               packets through the tunnel.
     #
     # chkconfig: 2345 51 59
     # description: Connects to a remote system, via an OpenVPN or PPTP tunnel, \
     #              over a preexisting Internet connection, on the primary \
     #              server in the cluster.
     #
     # Revision History:
     # ewilde      2011Nov6   Initial coding.
     # ewilde      2011Dec4   Add PPTP support.
     #
     #
     # Define the type of VPN connection.
     #
     VPNCONN=OpenVPN
     #VPNCONN=PPTP
     #
     # Define paths to the programs used herein.
     #
     OPENVPN=/usr/local/sbin/openvpn
     PPTP=/usr/sbin/pppd
     #
     # If logging is desired, define the path to the logfile here.  Otherwise,
     # set this variable to empty.
     #
     #LOG_FILE=""
     LOG_FILE="/var/log/vpnconnect"
     #
     # Start/stop scripts used to configure/deconfigure the OpenVPN tunnel
     # after it is brought up or before it is shut down.
     #
     # Typically, these scripts can add or remove rules with iptables to
     # control the flow of packets to/from the tunnel, for example by marking
     # packets from one or more IP addresses for tunneling or alternately only
     # by marking certain types of traffic (e.g. all sendmail traffic).  The
     # added rules can also be used to firewall the VPN tunnel, in case the
     # other end of the tunnel leads to a non-secure location.
     #
     # These scripts are also frequently tasked with setting up special routing
     # tables that direct marked packets down the VPN tunnel for delivery to the
     # remote location.
     #
     # Note that, through clever coding, you may be able to use a single script
     # to handle both the tunnel up and tunnel down events.
     #
     # Also note that there is an equivalent script to the OpenVPN script or
     # scripts chosen here but it is not chosen by parameters within this script.
     # Rather the PPP local start/stop scripts ip-up.local and ip-down.local call
     # it directly and its name is hard-coded therein.  By convention, the name
     # of the script is /etc/ppp/vpn-updown.
     #
     TUNUP="/etc/openvpn/tun-updown"
     TUNDOWN="/etc/openvpn/tun-updown"
     #
     # OpenVPN configuration file for the remote VPN server or servers.  This
     # file lists all of the tunnels that are possible and contains the
     # parameters that give the connection password, certificates and keys,
     # along with the tunnel configuration information.
     #
     # Note that this script assumes that the config file name ends with ".conf"
     # and that the full path name to the file is given below.
     #
     #TUNCONFIG="/etc/openvpn/Hostizzle.conf"
     TUNCONFIG="/etc/openvpn/Cryptocloud.conf"
     #
     # Name of PPTP tunnel.  This name actually selects the tunnel configuration
     # file from the /etc/ppp/peers directory.
     #
     VPNNAME=ipredator-tunnel
     #
     # Load the function library if it exists.
     #
     if [ -f /etc/rc.d/init.d/functions ]; then
         . /etc/rc.d/init.d/functions
     fi
     #
     # Source networking configuration.
     #
     if [ -f /etc/sysconfig/network ]; then
         . /etc/sysconfig/network
     else
         NETWORKING="no"
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ]; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     if [ x"$SERVERROLE" == x ]; then
         SERVERROLE=Standalone
     fi
     #
     # Check that networking is up.
     #
     if [ ${NETWORKING} = "no" ]; then
         exit 0
     fi
     #
     # If this isn't the primary server in the cluster or this isn't a standalone
     # server, we're outta here.
     #
     if [ x"$SERVERROLE" != xPrimary ] && [ x"$SERVERROLE" != xStandalone ]; then
         exit 0
     fi
     #
     # Get the name of the VPN connection from the config file.
     #
     TUNNAME="generic"
     if [ x"$VPNCONN" == xOpenVPN ]; then
         TUNNAME=`echo ${TUNCONFIG} | /bin/grep -e "\/[-_0-9A-Za-z]\+\.conf" -o | \
             /bin/grep -e "\/[-_0-9A-Za-z]\+" -o`
         TUNNAME=${TUNNAME:1}
         PIDFILE="/var/lock/subsys/vpnconnect.${TUNNAME}"
     fi
      if [ x"$VPNCONN" == xPPTP ]; then
          TUNNAME=${VPNNAME}
          if echo ${TUNNAME} | /bin/grep -q -e "-tunnel\$" ; then
              TUNNAME=${TUNNAME:0:${#TUNNAME}-7}
          fi
          PIDFILE="/var/run/ppp-${TUNNAME}.pid"
      fi
      #
      # Routine to start up the VPN connection.
      #
      start()
      {
      if [ ! -f /var/lock/subsys/vpnconnect.${TUNNAME} ]; then
          #
          # Bring up the VPN connection.  We redirect the output depending
          # on whether the user wants a logfile or not.
          #
          echo -n "Bringing up a VPN connection to $TUNNAME "
          if [ -z "$LOG_FILE" ]; then
              #
              # Bring up an OpenVPN connection without a log file.
              #
              if [ x"$VPNCONN" == xOpenVPN ]; then
                  ${OPENVPN} --route-noexec --script-security 2 \
                      --up ${TUNUP} --down ${TUNDOWN} --down-pre \
                      --config ${TUNCONFIG} >/dev/null 2>&1 &
              fi
              #
              # Bring up a PPTP connection without a log file.
              #
              if [ x"$VPNCONN" == xPPTP ]; then
                  ${PPTP} call mytunnel mtu 1435 mru 1435 \
                      linkname ${TUNNAME} >/dev/null 2>&1
              fi
          else
              #
              # Bring up an OpenVPN connection with a log file.
              #
              if [ x"$VPNCONN" == xOpenVPN ]; then
                  ${OPENVPN} --route-noexec --script-security 2 \
                      --up ${TUNUP} --down ${TUNDOWN} --down-pre \
                      --config ${TUNCONFIG} >>${LOG_FILE} 2>&1 &
              fi
              #
              # Bring up a PPTP connection with a log file.
              #
              if [ x"$VPNCONN" == xPPTP ]; then
                  ${PPTP} call mytunnel mtu 1435 mru 1435 \
                      linkname ${TUNNAME} logfile ${LOG_FILE} >/dev/null 2>&1
              fi
          fi
          #
          # If everything went OK, create a PID file.
          #
          if [ $? = 0 ]; then
              echo_success
              if [ x"$VPNCONN" == xOpenVPN ]; then
                  echo $! >${PIDFILE}
              fi
          else
              echo_failure
          fi
          echo ""
      fi
      }

      #
      # Routine to stop the VPN connection.
      #
      stop()
      {
      #
      # If the VPN connection is up, shut it down.  Sending OpenVPN a SIGTERM
      # (signal 15) causes it to gracefully shut down.
      #
      if [ -f ${PIDFILE} ]; then
          echo -n "Shutting down the VPN connection to $TUNNAME "
          TUNPID=`cat ${PIDFILE}`
          # kill -15 $TUNPID >/dev/null 2>&1
          if [ x"$VPNCONN" == xOpenVPN ]; then
              killproc -p ${PIDFILE} ${OPENVPN}
          else
              killproc -p ${PIDFILE} ${PPTP}
          fi
          #
          # If everything went OK, delete the PID file.
          #
          if [ $? = 0 ]; then
              rm -f ${PIDFILE} >/dev/null 2>&1
  #           echo_success
          #
          # Otherwise, if the PID doesn't exist, ditch the PID file.
          #
          else
              if ! checkpid $TUNPID ; then
                  rm -f ${PIDFILE} >/dev/null 2>&1
              fi
              echo_failure
          fi
          echo ""
      fi
      }

      #
      # Based on which operation we were asked to perform, have at it.
      #
      case "$1" in

      #
      # Fire up the VPN connection.
      #
      start)
          start
          ;;
      #
      # Bye, bye VPN connection.
      #
      stop)
          stop
          ;;
      #
      # Refresh the VPN connection.
      #
      restart)
          echo "Restarting the VPN connection to $TUNNAME"
          stop
          start
          ;;
      #
      # Waaaaa 'sappenin'?
      #
      status)
          if [ -f ${PIDFILE} ]; then
              echo "Connected to $TUNNAME via VPN"
          else
              echo "No VPN connection to $TUNNAME"
          fi
          ;;
      #
      # Help text.
      #
      *)
          echo "Usage: vpnconnect {start|stop|restart|status}"
          exit 1
          ;;
      esac
      #
      # Heading home.
      #
      exit 0

Once we have the script created, we need to give it execute permissions and make sure that it is started/stopped at the proper runlevels:

     su
     chown root:root /etc/init.d/vpnconnect
     chmod ugo+x /etc/init.d/vpnconnect
     /sbin/chkconfig --add vpnconnect
     /sbin/chkconfig vpnconnect on
     /sbin/chkconfig --list vpnconnect

You should see a result that looks like this:

     vpnconnect      0:off   1:off   2:on    3:on    4:on    5:on    6:off

Incidentally, although this script will establish the OpenVPN tunnel properly and ensure the smooth flow of traffic, just in case you need them, here are a few useful commands for debugging or just observing the operation of the tunnel and the packet filters:

     /usr/local/sbin/openvpn --route-noexec --script-security 2 \
         --up /etc/openvpn/tun-updown --down /etc/openvpn/tun-updown \
         --down-pre --config /etc/openvpn/mytunnel.conf
     /sbin/iptables -v [-t (nat|filter|mangle)] -L [NAME]
     /sbin/iptables -v -t filter -L INPUT
     /sbin/iptables -v -t filter -L FORWARD
     /sbin/iptables -v -t filter -L VPN_CHK
     /sbin/iptables -v -t nat -L POSTROUTING
     /sbin/ip route list table vpn.tunnel
     cat /proc/sys/net/ipv4/conf/*/rp_filter
     dmesg | tail -20

Once everything is working properly and the OpenVPN tunnel is started and stopped automatically (you can verify this under real operating circumstances by rebooting the system or you can just check that "vpnconnect start" and "vpnconnect stop" work from the command line), the final step is to set up logrotate to rotate the logfiles.

You can add the log file named above to an individual configuration file in the logrotate directory or include the following lines in the /etc/logrotate.conf file itself. We prefer to use the individual configuration file so that setting up logrotate can be accomplished by simply dropping the file into the logrotate directory, just like the various install programs do.

/etc/logrotate.d/vpnconnect:

      /var/log/vpnconnect {
          missingok
          notifempty
          copytruncate
      }
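Before the nightly cron run picks the new stanza up, you can sanity-check it by hand. A quick sketch (the logrotate binary is assumed to live in /usr/sbin, as it does on most of the distributions discussed herein):

```shell
# Dry run: parse the config and show what would be done, without rotating.
/usr/sbin/logrotate -d /etc/logrotate.d/vpnconnect
# Force an immediate rotation to prove that the whole setup works end-to-end.
/usr/sbin/logrotate -f /etc/logrotate.d/vpnconnect
# The rotated copy should appear alongside the (truncated) original.
ls -l /var/log/vpnconnect*
```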

Starting/Stopping a PPTP Tunnel and Routing Traffic Through It

In preceding sections, we discussed how to bring up a PPTP VPN tunnel and how to use iproute2 to route traffic through the VPN tunnel. The setting up and tearing down of the tunnel was handled by a rudimentary vpn-updown script that was invoked by the ip-up.local and ip-down.local scripts when the ppp daemon brought up or shut down the tunnel.

However, the setting up or tearing down of a VPN tunnel is actually a lot more complicated than was shown in the vpn-updown script. A more complete list of the steps involved follows.

Starting the VPN tunnel on the router itself is simple enough but if packets from other locally-connected systems are also to be routed through it, the router's kernel packet filters must be adjusted to masquerade the outbound packets so that they all appear to be coming from the tunneled client (i.e. the router). This is because the remote VPN server has no knowledge of the other locally-connected, routed systems, just the tunneled client.

However, if masquerading is used, confusion within the kernel code about the proper route being used for the masqueraded/tunneled reply packets causes the router to incorrectly apply reverse path filtering to all packets received on the tunnel interface and, hence, drop them as Martians. Consequently, reverse path filtering must be disabled on the tunnel interface, each time the tunnel is brought up, or response packets will just disappear. The VPN tunnel will appear to be set up correctly but no connections from the locally-connected systems will work (although traffic from/to the router itself will work fine).
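Reverse path filtering is controlled per-interface through /proc (or sysctl). A minimal sketch, assuming the tunnel comes up as ppp2 (substitute whatever interface name pppd actually assigns on your system):

```shell
# Show the current reverse path filtering setting for each interface.
for f in /proc/sys/net/ipv4/conf/*/rp_filter ; do
    echo "$f = `cat $f`"
done
# Turn off reverse path filtering on the tunnel interface only.
echo 0 >/proc/sys/net/ipv4/conf/ppp2/rp_filter
```

The vpn-updown script shown below performs this same step automatically each time the tunnel comes up, since the per-interface /proc entry vanishes (and the setting with it) whenever the tunnel is torn down.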

In many instances, a VPN tunnel can be used to create a connection to a remote location that is wild and woolly (e.g. a connection to the Internet that is located in a faraway place, far from prying eyes). But, bringing up such a VPN tunnel, especially if reverse path filtering is disabled, can allow all sorts of nasty stuff into the local system. This makes it prudent to alter the firewall (iptables) rules, whenever the VPN tunnel is brought up, to do spoof checking and handle the inbound reception and forwarding of VPN packets.

Once all of the VPN tunnel's ducks are in a row, iptables can then be used to figure out which packets should be routed through the VPN tunnel and mark them accordingly, as noted in "Routing Traffic Through a VPN Connection". When that is done, iproute2 can be used to establish special routing tables that will route the marked packets through the VPN tunnel and route any replies back to the local systems. This means that the routing tables must be set up and torn down whenever the VPN tunnel is brought up or down.
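In outline, the marking and routing steps look like this (the mark value of 1 and the vpn.tunnel table name match the scripts in this section; the 192.168.1.20 client address and the ppp2 interface name are illustrative assumptions):

```shell
# Mark packets from one local system so that they can be recognized later.
/sbin/iptables -t mangle -A PREROUTING -s 192.168.1.20 -j MARK --set-mark 1
# Route anything carrying mark 1 via the special table ...
/sbin/ip rule add fwmark 1 table vpn.tunnel
# ... whose default route leads down the VPN tunnel.
/sbin/ip route add default dev ppp2 table vpn.tunnel
# Flush the route cache so the new rule takes effect immediately.
/sbin/ip route flush cache
```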

Lastly, it would be great if the operation of the VPN tunnel could be automated so that it is started on the router whenever the router system is brought up and stopped whenever it is shut down. Thus, we need a startup script that ties all of these steps together.

To begin with, in preparation for running a PPTP VPN tunnel, if you haven't already done so, obtain the actual username and password along with the IP address of the PPTP server that you will be using to make the PPTP VPN tunnel connection and put them in the /etc/ppp/chap-secrets and /etc/ppp/peers/mytunnel files, as described in the "PPTP Client" section. You can test the tunnel connection manually before you proceed any further, if you wish. It's probably a good idea to keep things simple until the connection is known to be working OK.

The next preparatory step is to set up the "special" routing table that will be used by iproute2 to route packets through the VPN tunnel, as outlined in the "Routing Traffic Through a VPN Connection" section. The table itself will not be activated until the tunnel is actually brought up but it must be configured ahead of time.
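Configuring the table ahead of time just means registering its name and number with iproute2 (the number 100 is an arbitrary, unused value; the vpn.tunnel name is the one used by the scripts in this section):

```shell
# Register the special routing table, once, if it isn't already there.
# Table numbers 1-252 are available for local use.
grep -q "vpn\.tunnel" /etc/iproute2/rt_tables || \
    echo "100     vpn.tunnel" >>/etc/iproute2/rt_tables
```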

The last preparatory step is to determine what criteria you will use to select the packets that will be routed through the VPN tunnel and decide how you will implement the selection. As we said in the section on "Routing Traffic Through a VPN Connection", we prefer to use iptables to mark the packets that are to be routed through the PPTP VPN tunnel and then direct the marked packets to the "special" routing table and thence through the VPN tunnel.

It is definitely possible to set the rules directly with the iptables command and then save/restore them with the iptables-save and iptables-restore commands. However, our preferred method of setting up iptables is the NARC firewall package so instead we add the rules to the /etc/narc/narc-custom.conf file and allow NARC to set them up when the firewall is brought up, presumably at system startup.
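For instance, packet-marking rules along these lines could go into /etc/narc/narc-custom.conf (this sketch assumes that the custom file accepts plain iptables commands and that $IPTABLES is defined by the NARC configuration, as it is in narc.conf; the SMTP port and the mark value of 1 are illustrative):

```shell
# Mark all SMTP traffic so that it is routed through the VPN tunnel.
$IPTABLES -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 1
$IPTABLES -t mangle -A PREROUTING -p tcp --dport 25 -j MARK --set-mark 1
```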

In the "PPTP Client" section, we already put forth a simple vpn-updown script that is invoked by either ip-up.local or ip-down.local. Its purpose was solely to alter the routing of all packets through the VPN tunnel by setting up a route using iproute2. Here, we show a vpn-updown script which may be used to handle the starting and stopping of PPTP tunnels, including updating the firewall rules in a manner that is consistent with the NARC firewall, masquerading the outbound traffic, and establishing the routing of marked packets through the VPN tunnel.

/etc/ppp/vpn-updown:

     #!/bin/bash
     #
     # Script to handle the bringup and shutdown of a VPN tunnel started by
     # pppd/pptp.
     #
     # This script is called by the ppp ip-up.local script when it determines
     # that a VPN tunnel is being brought up.  It decides this when it sees an
     # interface with a number greater than one (e.g. ppp2, ppp3).  This number
     # is chosen when the VPN tunnel is started by passing a number to pppd via
     # the unit parameter, like this:
     #
     #      /usr/sbin/pppd call my-tunnel unit 2
     #
     # When ip-up.local calls this script, it must pass the VPN interface name
     # (e.g. ppp2), the interface's local IP address (e.g. 192.168.1.99), and
     # VPN tunnel's remote IP address (e.g. 93.182.128.101) as the first three
     # parameters.
     #
     # This script will add special rules to the iptables netfilter tables
     # that will assist with firewalling of traffic on the VPN connection.  In
     # addition, the regular firewall rules will still be in effect and, in
     # conjunction with the special rules, they will ensure that all inbound
     # VPN tunnel traffic is properly firewalled.
     #
     # This script will then add a rule to the routing tables that will send
     # all traffic marked as VPN traffic to a special routing table which will
     # be set up with a default gateway that points to the VPN tunnel.  This
     # will route all non-local traffic marked as VPN traffic out the VPN
     # tunnel.
     #
     # This script is also called by the ppp ip-down.local script when it
     # determines that a VPN tunnel is being shut down.  As with ip-up.local,
     # ip-down.local decides this when it sees an interface with a number
     # greater than one (e.g. ppp2, ppp3).
     #
     # When ip-down.local calls this script, it must pass the VPN interface
     # name (e.g. ppp2) as the only parameter.  The fact that the interface's
     # local IP address is not passed indicates to this script that it should
     # shut down the VPN tunnel, rather than start it up.
     #
     # During shutdown, this script undoes all of the work it did during
     # startup that added a rule and route, to the routing tables, to send
     # all traffic marked as VPN traffic out through the VPN tunnel.  It also
     # deletes all of the special rules from the iptables netfilter tables
     # that assist with firewalling of traffic on the VPN connection.
     #
     # Note that you must modify /etc/ppp/ip-up.local and /etc/ppp/ip-down.local
     # to call this script when either of these scripts determines that a VPN
     # tunnel is being brought up or shut down by pppd.
     #
     ##########################################################################
     #
     # We add rules to the netfilter tables in the same manner that NARC does
     # so we need to know where its config file is.
     #
     CONFIG="/etc/narc/narc.conf"
     #
     # Minimum acceptable NARC config file version.
     #
     MINCONF_VERSION=0.6.3
     #
     # Pick up the VPN interface that was created by pppd/pptp.
     #
     VPN_INTERFACE=$1
     VPN_INTERFACE_IP=$2
     VPN_REMOTE_IP=$3
     ##########################################################################
     #
     # Routine to allow us to bail out of failed rules, etc.
     #
     abortexit()
         {
         echo "Failed to set up iptables and/or iproute2."
         exit 1
         }
     ##########################################################################
     #
     # Load the NARC Configuration file.
     #
     if test -f $CONFIG ; then
         . $CONFIG
     else
         echo "Cannot find the NARC config file $CONFIG."
         exit 1
     fi
     #
     # Check for the iptables binary.
     #
     if ! test -f "$IPTABLES" ; then
         echo "The iptables binary $IPTABLES is missing."
         exit 1
     else
         $IPTABLES -V >/dev/null 2>&1 || BADBIN='yes'
         if [ "$BADBIN" == 'yes' ] ; then
             echo "The iptables binary $IPTABLES exists but it failed to run."
             echo "Try `which iptables`."
             exit 1
         fi
     fi
     #
     # Check if the config file is compatible with this script.
     #
     if [ `expr $CONF_VERSION \< $MINCONF_VERSION` == 1 ] ; then
          echo "The NARC config file $CONFIG is at an incompatible version."
          echo "The minimum version is $MINCONF_VERSION."
          echo "The config file is at $CONF_VERSION.  Please upgrade it."
         exit 1
     fi
     #
     # Set the SPOOF_CHK target.
     #
     if [[ "$LOG_DROPS" == 'yes' && "$LOG_SPOOF" == 'yes' ]] ; then
         SPOOF_TARGET='CUST_LOG'
     else
         SPOOF_TARGET='DROP'
     fi
     #
     # If the VPN tunnel is being brought up, set everything up.
     #
     if [ x"$VPN_INTERFACE_IP" != x ] ; then
         #
         # Create the VPN_CHK chain.
         #
         # This chain does anti-spoofing checking for the reserved and private
         # IP addresses on the VPN tunnel.  It also checks for established
         # connections and lets their packets in.  All other packets are
         # dumped as invalid.
         #
         # This chain is hung off the INPUT chain for all inbound traffic on
         # the tunnel device.
         #
         $IPTABLES -N VPN_CHK || abortexit
         #
         # Check reserved networks.
         #
         if [ "$RESERVED_NETWORKS" != '' ] ; then
             echo -n "Enabling spoof checking on $VPN_INTERFACE for reserved \
                 network(s): "
             for network in $RESERVED_NETWORKS ; do
                 $IPTABLES -A VPN_CHK -s $network -i $VPN_INTERFACE \
                     -j $SPOOF_TARGET || abortexit
                 echo -n "$network "
             done
             echo "."
         fi
         #
         # Check private networks.
         #
         if [ "$PRIVATE_NETWORKS" != '' ] ; then
             echo -n "Enabling spoof checking on $VPN_INTERFACE for private \
                 network(s): "
             for network in $PRIVATE_NETWORKS ; do
                 IPVAL2=`echo $VPN_INTERFACE_IP | cut -d . -f 1`
                 NETVAL2=`echo $network | cut -d . -f 1`
                  if [[ $IPVAL2 == 10 && $NETVAL2 == 10 ]] ; then
                      DONOTHING=1
                  else
                      IPVAL=`echo $VPN_INTERFACE_IP | cut -d . -f 1,2`
                      NETVAL=`echo $network | cut -d . -f 1,2`
                      if [ $IPVAL != $NETVAL ] ; then
                          DONOTHING=0
                          $IPTABLES -A VPN_CHK -s $network -i $VPN_INTERFACE \
                              -j $SPOOF_TARGET || abortexit
                      else
                          DONOTHING=1
                      fi
                  fi
                  if [ "$DONOTHING" != 1 ] ; then
                      echo -n "$network "
                  fi
              done
              echo "."
          fi
      #
      # Add the local IP address of the VPN tunnel too.
      #
      echo "Enabling spoof checking on $VPN_INTERFACE for tunnel IP: \
          $VPN_INTERFACE_IP."
      $IPTABLES -A VPN_CHK -s $VPN_INTERFACE_IP -i $VPN_INTERFACE \
          -j $SPOOF_TARGET || abortexit
      #
      # At this point, we can let responses to any packets that we sent out
      # back in through the tunnel.
      #
      $IPTABLES -A VPN_CHK -i $VPN_INTERFACE -m state \
          --state RELATED,ESTABLISHED -j ACCEPT || abortexit
      #
      # Anything that makes it this far is logged as failing the VPN check
      # and then dropped.  For the VPN tunnel, only established sessions are
      # allowed.
      #
      $IPTABLES -A VPN_CHK -i $VPN_INTERFACE -j LOG \
          --log-level $WARN_LOG_LEVEL --log-tcp-options --log-ip-options \
          --log-prefix "VPN_CHK " || abortexit
      $IPTABLES -A VPN_CHK -i $VPN_INTERFACE -j DROP || abortexit
      #
      # Hang the VPN tunnel spoof checking chain off the INPUT filter
      # chain.
      #
      # This chain also accepts any established packets, which lets them
      # in if they are OK.
      #
      echo "Hooking VPN rules into the INPUT and FORWARD filter chains."
      $IPTABLES -I INPUT -i $VPN_INTERFACE -j VPN_CHK || abortexit
      #
      # Now that we are checking to make sure that all incoming packets on
      # the VPN tunnel are legit, we can allow forwarding from the tunnel
      # to our internal network.  This turns on the tunnel.
      #
      $IPTABLES -I FORWARD -i $VPN_INTERFACE -j ACCEPT || abortexit
      #
      # Packets sent out on the VPN tunnel must be masqueraded so that the
      # recipient at the other end can send the answers back to this system.
      # It will then forward them to the original sender.
      #
       # Note that we actually use SNAT, which allows us to specify the IP
      # address to masquerade to, and which doesn't monitor the state of
      # the connection.  Otherwise, SNAT works just like MASQUERADE (or
      # vice versa).
      #
      $IPTABLES -A POSTROUTING -t nat -o $VPN_INTERFACE \
          -j SNAT --to $VPN_INTERFACE_IP || abortexit
      #
      # For some reason, when packets are masqueraded and sent down the
      # tunnel, the router gets confused and doesn't add the proper route
      # to its forwarding table.  This means that, if reverse path filtering
      # is turned on, replies to any packets sent out the VPN tunnel will be
      # dropped as Martians.  To fix this problem, we turn off reverse path
      # filtering on the VPN tunnel.
      #
      # Note that other firewall rules that we add herein should take care
      # of any real Martians.
      #
      # Also note that there's no need to undo this step upon shutdown, since
      # the rp_filter file is deleted when the VPN tunnel is torn down.
      #
      echo 0 >/proc/sys/net/ipv4/conf/${VPN_INTERFACE}/rp_filter
      #
      # Use iproute2 to add a default route to the vpn.tunnel table to send
      # all packets bound for the WAN out through the VPN tunnel.  Then,
      # send all packets with the firewall (i.e. iptables) mark value of 1
       # to this table.
      #
      echo "Routing all marked packets to the VPN tunnel."
      /sbin/ip route add default via $VPN_REMOTE_IP dev $VPN_INTERFACE \
          table vpn.tunnel
      /sbin/ip route flush cache
      /sbin/ip rule add fwmark 1 table vpn.tunnel

      #
      # Otherwise, shut everything down.
      #
      else
      #
      # Tell iproute2 not to do anything special with marked packets and
      # not to send WAN packets out through the VPN tunnel.
      #
      echo "Suspending routing of all marked packets to the VPN tunnel."
      /sbin/ip rule del fwmark 1 table vpn.tunnel
      /sbin/ip route del default table vpn.tunnel
      /sbin/ip route flush cache
      #
      # We need the IP address of the VPN interface so that we can delete the
      # masquerading rule from iptables.  However, the ip-down.local script
      # didn't pass it to us.  But, since the masquerade rule in the
      # POSTROUTING table is the only one in effect for the VPN interface, we
      # can simply look up the IP address bound to it.
      #
      VPN_INTERFACE_IP=`$IPTABLES -v -t nat -L POSTROUTING | grep $VPN_INTERFACE | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}'`
      #
      # Turn off masquerading for the VPN tunnel packets.
      #
      $IPTABLES -D POSTROUTING -t nat -o $VPN_INTERFACE \
          -j SNAT --to $VPN_INTERFACE_IP
      #
      # Disallow forwarding from the VPN tunnel.
      #
      echo "Unhooking VPN rules from the INPUT and FORWARD filter chains."
      $IPTABLES -D FORWARD -i $VPN_INTERFACE -j ACCEPT
      #
      # Unhook the VPN tunnel spoof checking chain from the INPUT chain.
      #
      $IPTABLES -D INPUT -i $VPN_INTERFACE -j VPN_CHK
      #
      # Delete all of the rules from the VPN tunnel spoof checking chain and
      # then get rid of the chain itself.
      #
      echo "Deleting all rules from VPN tunnel spoof checking chain."
      FOUNDRULE=1
      while [ "$FOUNDRULE" != 0 ] ; do
          $IPTABLES -D VPN_CHK 1 >/dev/null 2>&1 || FOUNDRULE=0
      done
      #
      # Get rid of the now-empty chain.
      #
      $IPTABLES -X VPN_CHK

      fi
      #
      # That's all she wrote.
      #
      exit 0

Now that we have the VPN tunnel all set up and ready to go, we'll create a startup script that will start and stop the PPTP tunnel at system startup and shutdown. Or, if you'd rather that operation was not automatic, you can simply use this script to start/stop the PPTP tunnel manually. But, wait a second. We already created this script, back in the "Starting/Stopping an OpenVPN Tunnel and Routing Traffic Through It" section. We created a script that starts either an OpenVPN connection or a PPTP connection so here's a reprint of it, just in case you're too lazy to go look it up.

/etc/init.d/vpnconnect:

     #!/bin/sh
     #
     # vpnconnect    This script starts or stops an OpenVPN or PPTP connection to
     #               a faraway land, via the primary server in the cluster, over
     #               the Internet.  Its purpose is to tunnel packets from/to
     #               certain local IP addresses (or even just certain types of
     #               traffic) transparently to those addresses' related local
     #               systems.  This lets the local systems send/receive packets
     #               from the remote, VPN-connected world without even knowing
     #               that a VPN tunnel is involved.  Also, packets from more
     #               than one local system can be tunneled to the remote world
     #               over a single VPN tunnel.
     #
     #               Since the VPN tunnel must run over a pre-existing
     #               Internet connection, this script must start after the WAN
     #               connection is brought up (i.e. after either the adsl or
     #               wanconnect script has run).  And, since starting up the
     #               VPN connection also alters the routing tables and changes
      #               the filtering done by iptables, this script must obviously
      #               be run after the iptables script.
     #
      #               Additionally, we may want the VPN connection to be up and
     #               running before we start any of the application-type
     #               services (e.g. sendmail, smb), if those services expect to
     #               be able to talk over the VPN tunnel to systems at the
     #               remote location.  Thus, we must carefully choose the
     #               sequence in which the VPN connection is started/stopped.
     #
      #               Finally, all of the real work is done by iptables, which
      #               marks the actual packets bound for the VPN tunnel with a
      #               firewall mark; iproute2 then routes the marked packets
      #               through the VPN tunnel.
      #
     # chkconfig: 2345 51 59
     # description: Connects to a remote system, via an OpenVPN or PPTP tunnel, \
      #              over a preexisting Internet connection, on the primary \
     #              server in the cluster.
     #
     # Revision History:
     # ewilde      2011Nov6   Initial coding.
     # ewilde      2011Dec4   Add PPTP support.
     #
     #
     # Define the type of VPN connection.
     #
     VPNCONN=OpenVPN
     #VPNCONN=PPTP
     #
     # Define paths to the programs used herein.
     #
     OPENVPN=/usr/local/sbin/openvpn
     PPTP=/usr/sbin/pppd
     #
     # If logging is desired, define the path to the logfile here.  Otherwise,
     # set this variable to empty.
     #
     #LOG_FILE=""
     LOG_FILE="/var/log/vpnconnect"
     #
     # Start/stop scripts used to configure/deconfigure the OpenVPN tunnel
     # after it is brought up or before it is shut down.
     #
     # Typically, these scripts can add or remove rules with iptables to
     # control the flow of packets to/from the tunnel, for example by marking
     # packets from one or more IP addresses for tunneling or alternately only
     # by marking certain types of traffic (e.g. all sendmail traffic).  The
     # added rules can also be used to firewall the VPN tunnel, in case the
     # other end of the tunnel leads to a non-secure location.
     #
     # These scripts are also frequently tasked with setting up special routing
      # tables that direct marked packets down the VPN tunnel for delivery to the
     # remote location.
     #
     # Note that, through clever coding, you may be able to use a single script
     # to handle both the tunnel up and tunnel down events.
     #
     # Also note that there is an equivalent script to the OpenVPN script or
     # scripts chosen here but it is not chosen by parameters within this script.
     # Rather the PPP local start/stop scripts ip-up.local and ip-down.local call
     # it directly and its name is hard-coded therein.  By convention, the name
     # of the script is /etc/ppp/vpn-updown.
     #
     TUNUP="/etc/openvpn/tun-updown"
     TUNDOWN="/etc/openvpn/tun-updown"
     #
     # OpenVPN configuration file for the remote VPN server or servers.  This
     # file lists all of the tunnels that are possible and contains the
     # parameters that give the connection password, certificates and keys,
     # along with the tunnel configuration information.
     #
     # Note that this script assumes that the config file name ends with ".conf"
     # and that the full path name to the file is given below.
     #
     #TUNCONFIG="/etc/openvpn/Hostizzle.conf"
     TUNCONFIG="/etc/openvpn/Cryptocloud.conf"
     #
     # Name of PPTP tunnel.  This name actually selects the tunnel configuration
     # file from the /etc/ppp/peers directory.
     #
     VPNNAME=ipredator-tunnel
     #
     # Load the function library if it exists.
     #
     if [ -f /etc/rc.d/init.d/functions ]; then
         . /etc/rc.d/init.d/functions
     fi
     #
     # Source networking configuration.
     #
     if [ -f /etc/sysconfig/network ]; then
         . /etc/sysconfig/network
     else
         NETWORKING="no"
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ]; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     if [ x"$SERVERROLE" == x ]; then
         SERVERROLE=Standalone
     fi
     #
     # Check that networking is up.
     #
     if [ ${NETWORKING} = "no" ]; then
         exit 0
     fi
     #
     # If this isn't the primary server in the cluster or this isn't a standalone
     # server, we're outta here.
     #
     if [ x"$SERVERROLE" != xPrimary ] && [ x"$SERVERROLE" != xStandalone ]; then
         exit 0
     fi
     #
     # Get the name of the VPN connection from the config file.
     #
     TUNNAME="generic"
     if [ x"$VPNCONN" == xOpenVPN ]; then
         TUNNAME=`echo ${TUNCONFIG} | /bin/grep -e "\/[-_0-9A-Za-z]\+\.conf" -o | \
             /bin/grep -e "\/[-_0-9A-Za-z]\+" -o`
         TUNNAME=${TUNNAME:1}
         PIDFILE="/var/lock/subsys/vpnconnect.${TUNNAME}"
     fi
     if [ x"$VPNCONN" == xPPTP ]; then
         TUNNAME=${VPNNAME}
         if echo ${TUNNAME} | /bin/grep -q -e "-tunnel\$" ; then
             TUNNAME=${TUNNAME:0:${#TUNNAME}-7}
         fi
         PIDFILE="/var/run/ppp-${TUNNAME}.pid"
     fi
#
# Routine to start up the VPN connection.
#
start()
{
      if [ ! -f /var/lock/subsys/vpnconnect.${TUNNAME} ]; then
          #
          # Bring up the VPN connection.  We redirect the output depending
          # on whether the user wants a logfile or not.
          #
          echo -n "Bringing up a VPN connection to $TUNNAME "
          if [ -z "$LOG_FILE" ]; then
              #
              # Bring up an OpenVPN connection without a log file.
              #
              if [ x"$VPNCONN" == xOpenVPN ]; then
                  ${OPENVPN} --route-noexec --script-security 2 \
                      --up ${TUNUP} --down ${TUNDOWN} --down-pre \
                      --config ${TUNCONFIG} >/dev/null 2>&1 &
              fi
              #
              # Bring up a PPTP connection without a log file.
              #
              if [ x"$VPNCONN" == xPPTP ]; then
                  ${PPTP} call mytunnel mtu 1435 mru 1435 \
                      linkname ${TUNNAME} >/dev/null 2>&1
              fi
          else
              #
              # Bring up an OpenVPN connection with a log file.
              #
              if [ x"$VPNCONN" == xOpenVPN ]; then
                  ${OPENVPN} --route-noexec --script-security 2 \
                      --up ${TUNUP} --down ${TUNDOWN} --down-pre \
                      --config ${TUNCONFIG} >>${LOG_FILE} 2>&1 &
              fi
              #
              # Bring up a PPTP connection with a log file.
              #
              if [ x"$VPNCONN" == xPPTP ]; then
                  ${PPTP} call mytunnel mtu 1435 mru 1435 \
                      linkname ${TUNNAME} logfile ${LOG_FILE} >/dev/null 2>&1
              fi
          fi
          #
          # If everything went OK, create a PID file.
          #
          if [ $? = 0 ]; then
              echo_success
              if [ x"$VPNCONN" == xOpenVPN ]; then
                  echo $! >${PIDFILE}
              fi
          else
              echo_failure
          fi
          echo ""
      fi
}

#
# Routine to stop the VPN connection.
#
stop()
{
      #
      # If the VPN connection is up, shut it down.  Sending OpenVPN a SIGTERM
      # (signal 15) causes it to gracefully shut down.
      #
      if [ -f ${PIDFILE} ]; then
          echo -n "Shutting down the VPN connection to $TUNNAME "
          TUNPID=`cat ${PIDFILE}`
  #       kill -15 $TUNPID >/dev/null 2>&1
          killproc -p ${PIDFILE}
          #
          # If everything went OK, delete the PID file.
          #
          if [ $? = 0 ]; then
              rm -f ${PIDFILE} >/dev/null 2>&1
  #           echo_success
          #
          # Otherwise, if the PID doesn't exist, ditch the PID file.
          #
          else
              if ! checkpid $TUNPID ; then
                  rm -f ${PIDFILE} >/dev/null 2>&1
              fi
              echo_failure
          fi
          echo ""
      fi
}

#
# Based on which operation we were asked to perform, have at it.
#
case "$1" in

      #
      # Fire up the VPN connection.
      #
      start)
          start
          ;;
      #
      # Bye, bye VPN connection.
      #
      stop)
          stop
          ;;
      #
      # Refresh the VPN connection.
      #
      restart)
          echo "Restarting the VPN connection to $TUNNAME"
          stop
          start
          ;;
      #
      # Waaaaa 'sappenin'?
      #
      status)
          if [ -f ${PIDFILE} ]; then
              echo "Connected to $TUNNAME via VPN"
          else
              echo "No VPN connection to $TUNNAME"
          fi
          ;;
      #
      # Help text.
      #
      *)
          echo "Usage: vpnconnect {start|stop|restart|status}"
          exit 1
          ;;
esac
#
# Heading home.
#
exit 0

Now that we've created the script, we need to give it execute permissions and make sure that it is started/stopped at the proper runlevels:

     su
     chown root:root /etc/init.d/vpnconnect
     chmod ugo+x /etc/init.d/vpnconnect
     /sbin/chkconfig --add vpnconnect
     /sbin/chkconfig vpnconnect on
     /sbin/chkconfig --list vpnconnect

You should see a result that looks like this:

     vpnconnect      0:off   1:off   2:on    3:on    4:on    5:on    6:off

Incidentally, although the script shown will bring up the VPN tunnel and ensure the smooth flow of traffic, here are some useful commands, in case you need them, for debugging or simply observing the operation of the tunnel and the packet filters:

     /usr/sbin/pppd call mytunnel debug dump logfd 2 nodetach unit 2
     /sbin/iptables -v [-t (nat|filter|mangle)] -L [NAME]
     /sbin/iptables -v -t filter -L INPUT
     /sbin/iptables -v -t filter -L FORWARD
     /sbin/iptables -v -t filter -L VPN_CHK
     /sbin/iptables -v -t nat -L POSTROUTING
     /sbin/ip route list table vpn.tunnel
     cat /proc/sys/net/ipv4/conf/*/rp_filter
     dmesg | tail -20

When you have the script working properly and it is starting and stopping the PPTP connection automatically, the final step is to set up logrotate to rotate the logfiles. (Rebooting the system verifies that the script works under real operating circumstances, or you can just check that "vpnconnect start" and "vpnconnect stop" work from the command line.)
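If you would rather not reboot just to exercise the script, the service wrapper (standard on RedHat/CentOS) runs it the same way the init process does:

```shell
su
/sbin/service vpnconnect start
/sbin/service vpnconnect status
/sbin/service vpnconnect stop
```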

Either hack the /etc/logrotate.conf file itself to include the following lines or add the log file named above to an individual configuration file in the logrotate directory. We prefer the latter approach so that setting up logrotate can be accomplished by simply dropping the file into the logrotate directory, just as the various install programs do.

/etc/logrotate.d/vpnconnect:

     /var/log/vpnconnect {
         missingok
         notifempty
         copytruncate
     }
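Since each file in the logrotate directory is a complete configuration in its own right, you can sanity-check the new entry with a dry run, or force an immediate rotation to watch copytruncate in action:

```shell
/usr/sbin/logrotate -d /etc/logrotate.d/vpnconnect    # dry run; shows what would be done
su
/usr/sbin/logrotate -f /etc/logrotate.d/vpnconnect    # force a rotation right now
```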

Although we have used this arrangement with multiple PPTP servers and it works quite well, you may still run into trouble when running a PPTP VPN connection through your firewall. Probably the biggest showstopper is the problem with reverse path filtering, which we expressly handle in our examples. However, other problems could be lurking, in which case this chapter on "Troubleshooting Linux Firewalls/Running a PPTP Server Behind a NAT Firewall" may prove useful:

     http://flylib.com/books/en/3.105.1.141/1/

Various and sundry other notes deal with problems with PPTP and netfilter connection tracking. If you think you have a connection tracking problem, make sure that ip_nat_pptp and ip_conntrack_pptp modules are being loaded into the kernel by doing:

     /sbin/lsmod | grep pptp

You should see something like this:

     ip_nat_pptp             9797  0 
     ip_conntrack_pptp      15441  1 ip_nat_pptp

If you don't, you can force the modules to be loaded at boot by listing them in /etc/sysconfig/iptables-config (on RedHat/CentOS), so that the iptables startup script pulls them in:

     IPTABLES_MODULES="ip_nat_pptp ip_conntrack_pptp"

(Beware of advice to add "install ip_nat_pptp /bin/true" lines to /etc/modprobe.conf; an "install" directive tells modprobe to run the given command instead of loading the module, so /bin/true effectively blacklists it rather than loading it.)
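If you'd rather not wait for a reboot, you can load the helper modules by hand and confirm that they took:

```shell
su
/sbin/modprobe ip_nat_pptp    # pulls in ip_conntrack_pptp as a dependency
/sbin/lsmod | grep pptp
```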

You may be able to get a picture of what's happening by running a packet sniffer such as Wireshark on the various network devices on the firewall. If you prefer the command line approach, try this tcpdump command (tracing the packets on the router 192.168.11.1, from the locally-connected system 192.168.11.101):

     /usr/sbin/tcpdump -i any -n -nn host 192.168.11.1 or \
         host 192.168.11.101

Adding rules to iptables that log packets as they pass through various stages of netfilter may also show you where things are going wrong. And, looking at any messages regarding packets being dumped, with dmesg, may shed some light.
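As a sketch of that logging approach (temporary rules, inserted ahead of the existing ones; adjust the tables and chains to match your own rule set), you might try:

```shell
su
# Log GRE and PPTP control-channel packets as they hit the filter table.
/sbin/iptables -t filter -I FORWARD 1 -p gre -j LOG --log-prefix "FWD-GRE: "
/sbin/iptables -t filter -I INPUT 1 -p tcp --dport 1723 -j LOG --log-prefix "IN-PPTP: "
dmesg | tail -20
# When you're done debugging, remove the temporary rules.
/sbin/iptables -t filter -D FORWARD -p gre -j LOG --log-prefix "FWD-GRE: "
/sbin/iptables -t filter -D INPUT -p tcp --dport 1723 -j LOG --log-prefix "IN-PPTP: "
```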

Although we think that these notes referring to connection tracking and GRE are probably red herrings, one never knows. Take them with a grain of salt but they could be helpful.

Sendmail

Before you begin trying to build and install the latest sendmail on CentOS or RedHat, be aware that some genius has decided that the dependencies for postfix should now include cron. This being the case, you can't install the cron daemon without postfix being present. The reason is probably that this genius has decided that cron must be able to send email if a job fails. Ain't that swell? It may be a perfectly good sentiment but it should be the user's business whether they want email when a job fails. It's not up to the smartest people in the world to tell them how stuff must work.

Anyway, the upshot of this is that, on the surface, you cannot get rid of postfix without also getting rid of cron. But, if you are planning on building and installing sendmail, you really don't want postfix anywhere near your system. Which is unfortunate because cron is probably the most useful package that you can install on your system.

If you are using the package manager or yum, there is no way to fix this idiotic situation. The smartest people in the world know what's best for you and their package management tool is going to enforce their will. Luckily for us, we can simply use rpm to rip postfix out and ignore the misguided dependencies. So, if you find yourself in this bind, here's how to do it:

     su
     rpm -q postfix
     rpm -e --nodeps postfix-2.6.6-2.2.el6_1.i686

Use the result of the first rpm command to obtain the package name. Then, use the second rpm command to rip postfix out but leave cron on the system.
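To confirm that the surgery took, check that postfix is gone while cron survived (the cron package is named cronie on newer CentOS/RedHat releases and vixie-cron on older ones):

```shell
rpm -q postfix                        # should now report "package postfix is not installed"
rpm -q cronie || rpm -q vixie-cron    # whichever cron package your release uses
```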

Now, on to building /usr/sbin/sendmail. To build sendmail from scratch, obtain a fresh distribution from http://www.sendmail.org/releases/. Unzip the distribution:

     tar xzvf sendmail.8.14.2.tar.gz

Create a sendmail userid and group:

     su
     /usr/sbin/groupadd -g 51 -r smmsp
     /usr/sbin/useradd -c "Sendmail MTA" -g smmsp -M -N -r \
       -s /sbin/nologin -u 51 smmsp
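You can quickly verify that the account and group came out as intended:

```shell
/usr/bin/id smmsp
/bin/grep smmsp /etc/passwd /etc/group
```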

Then, create a "site.config.m4" file in .../devtools/Site. You can copy the sample file in that directory (but be aware that you must then select or comment out the appropriate options) or just create an empty file.

If you are planning on using a milter to filter mail, add the following line to this file:

     APPENDDEF(`conf_sendmail_ENVDEF', `-DMILTER')dnl

Also, if you don't have the Berkeley DB installed, the sendmail build screws up when using DB1 for the database. You may need the following, in the above file:

     APPENDDEF(`confMAPDEF',`-DNEWDB')dnl
     APPENDDEF(`confINCDIRS', `-I/usr/include/db1')dnl
     APPENDDEF(`confLIBS', `-ldb1')dnl

In the top directory, run the build command:

     sh Build

Prior to doing this you may want to look at the instructions in the various README and INSTALL files (the top level INSTALL file in the install directory is a good place to start). If you add the line above to the "site.config.m4" file after the fact, rebuild sendmail with "sh Build -c".

If you need to build the milter library, switch to the libmilter directory and build "libmilter.a" by typing:

     sh Build

Building the milter library is required before the components like the sendmail filter can be built.

To install sendmail, run "sh Build install" as root in the top directory. Note that you may need to create the /usr/man directory structure on some systems to hold the sendmail man pages. The directories required are "man1", "man8" and possibly "man5". Their permissions should be set the same as the standard man directories (e.g. /usr/share/man/manx).

You may also want to get rid of any sendmail man pages that are in the /usr/share/man tree, since these will override the true sendmail man pages that are installed by "sh Build install". The pages to delete are:

     /usr/share/man/man1/sendmail.1*
     /usr/share/man/man8/mailstats.8*
     /usr/share/man/man8/makemap.8*
     /usr/share/man/man8/praliases.8*
     /usr/share/man/man8/smrsh.8*
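A sketch of the cleanup; the trailing globs also catch compressed pages (e.g. ".gz" suffixes), so double-check each path before deleting:

```shell
su
rm -f /usr/share/man/man1/sendmail.1*
rm -f /usr/share/man/man8/mailstats.8*
rm -f /usr/share/man/man8/makemap.8*
rm -f /usr/share/man/man8/praliases.8*
rm -f /usr/share/man/man8/smrsh.8*
```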

/etc/sendmail.cf or /etc/mail/sendmail.cf:

The sendmail configuration file is famous for being impossible to hack. Instead of hacking it, a M4 macro file (e.g. sendmail.mc) in /etc/mail, /usr/share/sendmail-cf/cf, /usr/lib/sendmail-cf/cf or the sendmail build directory .../cf/cf is hacked and then passed through the macro processor to yield /etc/sendmail.cf or /etc/mail/sendmail.cf.

The macro file should allow you to make all of the changes that you want, to the sendmail configuration, without touching sendmail.cf so no more needs to be said about how to hack sendmail.cf.

However, if you are relaying mail (via smart host) to the Cleopatra 2525 version of SMTP (at a remote ISP), because some pinhead at your local ISP decided to dump all packet traffic outbound for port 25 into the bit bucket because some other asshole was spamming them, the easy way to fix this is to hack sendmail.cf slightly. On the other hand, if you ever anticipate having to change the sendmail configuration using sendmail.mc, see the changes to /usr/share/sendmail-cf/mailer/smtp.m4, /usr/lib/sendmail-cf/mailer/smtp.m4 or in the sendmail build directory .../cf/mailer/smtp.m4 below for a much superior, alternative approach.

Having been warned about the pitfalls of hacking sendmail.cf directly and having been informed about the enlightened way to do it (below), if you still feel the urge to hack sendmail.cf (e.g. to relay to Cleopatra), first build your sendmail.cf using the M4 macros below or grab a copy of the existing sendmail.cf. Then, hack all mail gateways that will be listening on port 2525 to use "smtpcleo" instead of "smtp". An example for the smart host is as follows:

     DSsmtpcleo:mail.mailgw.com

In the mailers section, add the "smtpcleo" mailer similar to that shown (you should just copy whatever is there for the "smtp" mailer and add port 2525):

     Msmtpcleo,      P=[IPC], F=mDFMuX, S=EnvFromSMTP/HdrFromSMTP, R=EnvToSMTP,
                     E=\r\n, L=990,
                     T=DNS/RFC822/SMTP,
                     A=TCP $h 2525

In this manner, you can continue to deliver mail to other SMTPs, if any can be reached from your network (e.g. an internal server elsewhere or one reachable via a different link), while also delivering mail to unreachable servers via Cleopatra SMTP.

Similarly, if you want to use the mail redirector to send mail through your ISP but make it look like it is coming from somewhere else (e.g. to send mail through xyz.net but make it appear to be coming from abc.com), you'll, once again, need to hack sendmail.cf slightly (or see the changes to /usr/share/sendmail-cf/mailer/smtp.m4, /usr/lib/sendmail-cf/mailer/smtp.m4 or in the sendmail build directory .../cf/mailer/smtp.m4 below for a much superior, alternative approach).

First build your sendmail.cf using the M4 macros below or grab a copy of the existing sendmail.cf. Then, hack all mail gateways that will be sent through the redirector to use "smtpredir" instead of "smtp". An example for the smart host is as follows:

     DSsmtpredir:mail.mailgw.com/username

In the mailers section, add the "smtpredir" mailer similar to that shown (you should just copy whatever is there for the "smtp" mailer and replace the "A=..." with the line shown):

     Msmtpredir,     P=/usr/sbin/SMTPRedirect, F=DFMnSu,
                     S=EnvFromSMTP/HdrFromSMTP, R=EnvToSMTP,
                     T=DNS/RFC822/SMTP,
                     A=SMTPRedirect $h $f $u

Another instance where you might want to hack the sendmail.cf file directly is to quickly turn off canonicalization of names, if you experience a problem with DNS (see the discussion below about how canonicalization is related to DNS). To stop sendmail from doing DNS lookups, one of the rules of ruleset 96 should be commented out in the sendmail.cf file.

Edit the sendmail.cf file and look for the rule in ruleset 96 that looks like this and comment it out:

     # pass to name server to make hostname canonical
     R$* $| $* < @ $* > $*          $: $2 < @ $[ $3 $] > $4

In all of these examples, save the changed file and restart sendmail.

/usr/share/sendmail-cf/cf/*.mc:
/usr/lib/sendmail-cf/cf/*.mc:
.../cf/cf/*.mc (in the sendmail build directory):

This file is a macro file, typically named sendmail.mc, that is used to generate sendmail.cf. Hacking this file is far and above THE preferred method for configuring sendmail, since sendmail.cf is so impossible to change. Once this file is hacked, it is passed through the M4 macro processor to yield the sendmail.cf file. The simple way to invoke the macro processor is:

     cd .../cf/cf
     /usr/bin/m4 sendmail.mc >sendmail.cf

You can use this to test your sendmail.mc file before you build it. Or, you can copy the resulting file directly to /etc/mail, if you like doing everything by hand.

To hack sendmail.mc, start by copying the canned configuration supplied with the OS. For example:

     cd /usr/share/sendmail-cf/cf

or

     cd /usr/lib/sendmail-cf/cf

or

     cd .../cf/cf
     cp redhat.mc sendmail.mc

Alternately, if you unpacked and installed a distribution of sendmail, you can build the sendmail.cf file in that directory tree instead. Change to the directory .../sendmail-8.12.x/cf/cf (or whatever yours is called). There, you can use the command "sh Build sendmail.cf" and copy the resulting sendmail.cf to /etc/mail by hand. To do this, you will need to copy or link your choice for the starting macro file to sendmail.mc:

     cd .../cf/cf
     cp redhat.mc sendmail.mc

or

     ln -s mysys.mc sendmail.mc

Once you've created the sendmail.mc file, you can hack it with your favorite text editor. You may specifically want to add or make the following changes. If you would like the aliases file to be in /etc/mail (with all of the other sendmail files), you should add or change:

     define(`ALIAS_FILE',`/etc/mail/aliases')dnl

For delivery of all outbound mail to Cleopatra, you should have something like this:

     define(`confDELIVERY_MODE',`interactive')dnl
     define(`SMART_HOST',`smtpcleo:mail.myisp.com')dnl
     MASQUERADE_AS(`domain1.com')dnl

Or for redirection of all outbound mail to the redirector, try this:

     define(`confDELIVERY_MODE',`interactive')dnl
     define(`SMART_HOST',`smtpredir:mail.mailgw.net/username')dnl
     MASQUERADE_AS(`domain1.com')dnl

You can supply whatever parameters the redirector needs, such as a port number, like this:

     define(`SMART_HOST',`smtpredir:mail.mailgw.net:587|user|passwd')dnl

Note that, if you will be using alternate mailers in this manner, you must add them to the smtp.m4 file (see below) before you build the sendmail.cf file.

If you wish to run the sendmail filter (checks for inbound viruses, etc.) then add the following lines:

     define(`FFRMILTER', `1')dnl
     INPUT_MAIL_FILTER(`filter1',`S=inet:2526@localhost, F=R, \
                       T=S:10s;R:60s;E:5m')dnl
     define(`confINPUT_MAIL_FILTERS',`filter1')dnl

If you will be delivering any mail to programs (e.g. mail handling robots), you should change the queueing parameters for such messages, otherwise the mail to programs can sit in a queue for up to an hour before being delivered. The following definition will change this:

     define(`LOCAL_SHELL_FLAGS', `u9')dnl

If you wish to define more than one domain as the local domain, you can do so by placing a definition for each of the additional domains, as it would appear in the sendmail.cf file, at the end of the macro file. For example:

     CNdomain2.com

If your sendmail is simply relaying mail to a mail gateway, you probably should not be trying to canonify names through DNS. Whoever you are relaying to will take care of it. Meanwhile, DNS lookups on email messages with lots of "to names" can take forever, if canonicalization is enabled, especially if your system does not run a local DNS server or have access to a reliable one. This can actually result in timeouts that are problematic, for example causing fetchmail to drop messages when it is trying to pass on messages with many "to names".

To disable canonicalization, add this feature to the sendmail.mc file along with all the other features and particularly before you define the SMART_HOST:

     FEATURE(`nocanonify')dnl

Here is an example of the complete M4 file (for relaying to Cleopatra):

     divert(-1)
     dnl This is the macro config file used to generate the /etc/sendmail.cf
     dnl file for mysys. If you modify this file you will have to regenerate
     dnl /etc/sendmail.cf by running this macro config through the m4
     dnl preprocessor:
     dnl
     dnl      m4 /usr/lib/sendmail-cf/cf/sendmail.mc > /etc/sendmail.cf
     dnl
     dnl You will need to have the sendmail-cf package installed for this to
     dnl work.
     dnl
     dnl You can do everything in one fell swoop by running:
     dnl
     dnl      /usr/lib/sendmail-cf/cf/newsendmail
     include(`../m4/cf.m4')
     VERSIONID(`linux setup for Red Hat Linux')dnl
     define(`confDEF_USER_ID',``8:12'')dnl
     OSTYPE(`linux')
     undefine(`UUCP_RELAY')dnl
     undefine(`BITNET_RELAY')dnl
     define(`confAUTO_REBUILD')dnl
     define(`confTO_CONNECT', `1m')dnl
     define(`confTRY_NULL_MX_LIST',true)dnl
     define(`confDONT_PROBE_INTERFACES',true)dnl
     define(`confDELIVERY_MODE',`interactive')dnl
     define(`PROCMAIL_MAILER_PATH',`/usr/bin/procmail')dnl
     define(`ALIAS_FILE',`/etc/mail/aliases')dnl
     define(`STATUS_FILE', `/var/log/sendmail.st')dnl
     define(`UUCP_MAILER_MAX', `2000000')dnl
     define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl
     dnl define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn')dnl
     dnl define(`confTO_QUEUEWARN', `4h')dnl
     dnl define(`confTO_QUEUERETURN', `5d')dnl
     dnl define(`confQUEUE_LA', `12')dnl
     dnl define(`confREFUSE_LA', `18')dnl
     define(`LOCAL_SHELL_FLAGS', `u9')dnl
     FEATURE(`smrsh',`/usr/sbin/smrsh')dnl
     FEATURE(`mailertable',`hash -o /etc/mail/mailertable')dnl
     FEATURE(`virtusertable',`hash -o /etc/mail/virtusertable')dnl
     FEATURE(redirect)dnl
     FEATURE(always_add_domain)dnl
     FEATURE(use_cw_file)dnl
     FEATURE(local_procmail)dnl
     FEATURE(`access_db')dnl
     FEATURE(`blacklist_recipients')dnl
     FEATURE(`nocanonify')dnl
     dnl We strongly recommend to comment this one out if you want to protect
     dnl yourself from spam. However, the laptop and users on computers that do
     dnl not have 24x7 DNS do need this.
     dnl FEATURE(`accept_unresolvable_domains')
     dnl FEATURE(`relay_based_on_MX')
     define(`SMART_HOST',`smtp:mail.myisp.com')dnl
     MASQUERADE_AS(`domain1.com')dnl
     FEATURE(`masquerade_envelope')dnl
     define(RELAY_MAILER, TCP)dnl
     FEATURE(`accept_unqualified_senders')dnl
     define(`FFRMILTER', `1')dnl
     INPUT_MAIL_FILTER(`filter1',`S=inet:2526@localhost, F=R, T=S:10s;R:60s;E:5m')dnl
     define(`confINPUT_MAIL_FILTERS',`filter1')dnl
     MAILER(local)dnl
     MAILER(smtp)dnl
     MAILER(procmail)dnl
     CNdomain2.com

/usr/share/sendmail-cf/mailer/smtp.m4:
/usr/lib/sendmail-cf/mailer/smtp.m4:
.../cf/mailer/smtp.m4 (in the sendmail build directory):

If you used any alternate mailers in the sendmail.mc file (above), for example to set up relaying to Cleopatra or redirection through the redirector, you will have to hack this file to define them.

To set up relaying to Cleopatra SMTP in the sendmail.mc (above), add the following new mailer to the smtp.m4 file after the "smtp" mailer (basically, you just copy the smtp mailer and add the 2525 port number):

     Msmtpcleo,      P=[IPC], F=_MODMF_(CONCAT(_DEF_SMTP_MAILER_FLAGS,
                     SMTP_MAILER_FLAGS), `SMTP'), S=EnvFromSMTP/HdrFromSMTP,
                     R=ifdef(`_ALL_MASQUERADE_', `EnvToSMTP/HdrFromSMTP',
                     `EnvToSMTP'), E=\r\n, L=990,
                     _OPTINS(`SMTP_MAILER_MAX',
                       `M=', `, ')_OPTINS(`SMTP_MAILER_MAXMSGS',
                       `m=', `, ')_OPTINS(`SMTP_MAILER_MAXRCPTS',
                       `r=', `, ')_OPTINS(`SMTP_MAILER_CHARSET',
                       `C=', `, ')T=DNS/RFC822/SMTP,
                     A=SMTP_MAILER_ARGS 2525

Then, change sendmail.mc to read:

     define(`SMART_HOST',`smtpcleo:mail.myisp.com')dnl

To set up relaying to the redirector for SMTP in the sendmail.mc (above), add the following new mailer to the smtp.m4 file after the "smtp" mailer (basically, just copy whatever is there for the "smtp" mailer and replace the "A=..." with the line shown):

     Msmtpredir,     P=/usr/sbin/SMTPRedirect, F=DFMnSu,
                     S=EnvFromSMTP/HdrFromSMTP, R=ifdef(`_ALL_MASQUERADE_',
                       `EnvToSMTP/HdrFromSMTP', `EnvToSMTP'),
                     T=DNS/RFC822/SMTP,
                     A=SMTPRedirect $h $f $u

Then, change sendmail.mc to read something like this:

     define(`SMART_HOST',`smtpredir:mail.mailgw.com/username')dnl

/usr/lib/sendmail-cf/cf/submit.mc

Newer versions of sendmail use a second non-privileged mail-submission program to submit mail for delivery, thereby plugging a security hole. They use a second config file called submit.cf. This file is built by the macro file submit.mc, which embeds sendmail.mc.

So, beware of a few things. First, make sure that sendmail.mc is either the correct file for building your sendmail.cf or that it is symlinked to the correct file. Second, make sure that submit.cf is built (e.g. "sh Build submit.cf") and copied to /etc/mail.

Third, some versions of the submit.mc file put the PID file in the queue directory instead of /var/run. You might want to change the default location for the PID file to /var/run so that the standard startup script will be able to find sm-client and start/stop it properly. Add the following line:

     define(`confPID_FILE', `/var/run/sm-client.pid')dnl

Also, some later versions of sendmail and/or combinations of certain versions of Unix have problems setting the permissions when submit runs so that non-root users can submit mail. This problem manifests itself by displaying the following message when non-root users try to use "mail" to send mail locally:

     WARNING: RunAsUser for MSP ignored, check group ids (egid=51, want=12)
       can not write to queue directory /var/spool/clientmqueue/
       (RunAsGid=0, required=51): Permission denied

Although the submit.cf file correctly has:

     # what user id do we assume for the majority of the processing?
     O RunAsUser=smmsp

And, all of the permissions on the client mail queue are correct but the problem still persists.

Apparently, submit is trying to look up the group permissions to use for the submit operation, based on the username. However, this fails and the group permissions (which are needed to access the client mail queue) are not set correctly. Consequently, the submit fails.

The solution is to change RunAsUser to:

     # what user id do we assume for the majority of the processing?
     O RunAsUser=smmsp:smmsp

This can be done by adding the following lines to the end of submit.mc:

     dnl
     dnl Fix a bug in later versions of submit that don't set the group properly
     dnl from the RunAs username.
     define(`confRUN_AS_USER', `smmsp:smmsp')dnl

/etc/mail/sendmail.cf or submit.cf:

As mentioned above, if you are building sendmail from scratch in a source directory tree, you can build and install the sendmail.cf and submit.cf files using the following commands:

     cd .../cf/cf
     sh Build sendmail.cf
     sh Build submit.cf
     su
     sh Build install-cf

Otherwise, you want sendmail.cf and submit.cf to end up in the /etc/mail directory so copy them there by hand:

     su
     cd .../cf/cf
     cp sendmail.cf /etc/mail
     cp submit.cf /etc/mail

/etc/mail/aliases or /etc/aliases (on older systems):

Aliases for mail delivery on this system. Probably all we care about is to get all mail being sent to root. Here's the clip from /etc/aliases:

       .
       .
       .
     # Person who should get root's mail
     root:              joeblow
       .
       .
       .

Run /usr/bin/newaliases to rebuild the alias file, once it is changed. Or, if you use the /etc/init.d/sendmail startup script that we show below, the rebuild will be done automatically at startup.
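For example, after changing the alias you can rebuild the database and send yourself a test message (joeblow being the example recipient from above):

```shell
su
/usr/bin/newaliases
echo "alias test" | /bin/mail -s "root alias test" root
# The message should land in joeblow's local mailbox (e.g. /var/spool/mail/joeblow).
```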

/etc/mail/relay-domains:

A list of domains from which sendmail will relay (i.e. accept) messages.

Add your local machines to the list of domains. For example:

     stargate.homeworld
     gabriella.homeworld
     clara-bow.homeworld
     phoebe.homeworld

Or, alternately, you can allow hosts based on IP addresses:

     # Allow all local machines to relay
     192.168

/etc/mail/local-host-names:

With later versions of sendmail, this is a list of local host/domain names that will be considered as local to the machine running sendmail. Any mail to a user in these domains will be delivered to the same user locally. For example:

     mydomain.com
     myotherdomain.com

/etc/mail/access:

Since sendmail already uses /etc/mail/relay-domains to control who can relay mail, why not have a second control method? Why not indeed. This is the purpose of /etc/mail/access.db: to be the belt to go with the suspenders. You'd like to use just one method but sendmail won't deliver local mail without the /etc/mail/access.db database. So, you must create the file /etc/mail/access and then build the database.

We populate /etc/mail/access like this:

     # Check the /usr/share/doc/sendmail/README.cf file for a description
     # of the format of this file. (search for access_db in that file)
     # The /usr/share/doc/sendmail/README.cf is part of the sendmail-doc
     # package.
     #
     # by default we allow relaying from localhost...
     localhost.localdomain           RELAY
     localhost                       RELAY
     127.0.0.1                       RELAY

The access.db map file is built from this file automagically by the startup script, /etc/rc.d/init.d/sendmail, that we show below. Set the permissions on the access file like so:

     su
     chown root:root /etc/mail/access
     chmod u=rw,g=r,o= /etc/mail/access
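If you'd rather build the map by hand than wait for the startup script, the standard makemap invocation is:

```shell
su
/usr/sbin/makemap hash /etc/mail/access.db < /etc/mail/access
chown root:root /etc/mail/access.db
chmod u=rw,g=r,o= /etc/mail/access.db
```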

/etc/mail/domaintable:
/etc/mail/mailertable:
/etc/mail/virtusertable:

These three files are used to build the map files that sendmail uses to check domains and mailers, and implement virtual users. Normally, you won't need any of these so just use touch to create empty files:

     su
     touch /etc/mail/domaintable
     touch /etc/mail/mailertable
     touch /etc/mail/virtusertable
     chown root:root /etc/mail/*table
     chmod u=rw,g=r,o= /etc/mail/*table

/etc/sendmail.cw (earlier versions of sendmail use this instead):

The name of any domains, such that any mail directed to them is actually local. DNS lookup won't be done on the domain name and mail will be delivered directly to the user on this machine.

Add your domain name to this list. For example:

     # sendmail.cw - include all aliases for your machine here.
     domain1.com

/var/spool/mqueue:

For some reason, sendmail can create the files and directories that it needs (such as /var/spool/clientmqueue) but not /var/spool/mqueue. This leaves it to us to create the directory by hand, before sendmail is run. The sendmail install documentation suggests that this directory should be solely owned by root but we prefer root:smmsp. If you set the permissions like this, it seems to work:

     su
     mkdir -p /var/spool/mqueue
     chown root:smmsp /var/spool/mqueue
     chmod ug=rwx,o= /var/spool/mqueue

Alternately, if you prefer to go with sendmail's advice, set the permissions like this:

     su
     mkdir -p /var/spool/mqueue
     chown root:root /var/spool/mqueue
     chmod u=rwx,go= /var/spool/mqueue

/var/spool/clientmqueue:

While we're at it, the clientmqueue directory is usually created by the sendmail install but sometimes with the wrong permissions. Check that it exists and that the permissions are properly set, like this:

     su
     mkdir -p /var/spool/clientmqueue
     chown smmsp:smmsp /var/spool/clientmqueue
     chmod ug=rwx,o= /var/spool/clientmqueue

/usr/adm/sm.bin (RedHat):
/usr/lib/sendmail.d/bin (SuSE):

Create this directory, owned by root:root with permissions 755. In it, put soft links to any programs that will be executed by a .forward file. Be careful; read "man smrsh" for more information.
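The steps above can be sketched as follows. This uses a scratch directory so it can be tried unprivileged; on a real RedHat system SM_BIN would be /usr/adm/sm.bin, the commands would be run as root, and /usr/local/myprograms/myprog.pl is a hypothetical program of your own:

```shell
# Sketch of setting up the smrsh program directory.  A scratch location
# is used so this can run unprivileged; on a real system, use
# /usr/adm/sm.bin (RedHat) or /usr/lib/sendmail.d/bin (SuSE) as root.
SM_BIN="$(mktemp -d)/sm.bin"
mkdir -p "$SM_BIN"
chmod u=rwx,go=rx "$SM_BIN"
# The symlink must keep the same basename as the real program.
ln -s /usr/local/myprograms/myprog.pl "$SM_BIN/myprog.pl"
ls -l "$SM_BIN"
```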

Note: the sendmail documentation claims that only programs in this directory can be executed from a .forward file and that the actual path of the program doesn't matter. This is simply not true. Sendmail checks the path that is mentioned in the .forward file to see if the program exists and fails if it does not. Furthermore, the symlinked name of the program in this directory must be the same as the name of the actual program. For example:

     /usr/adm/sm.bin/myprog.pl -> /usr/local/myprograms/myprog.pl  (symlink)
     |"/usr/local/myprograms/myprog.pl"  (.forward)

Do not try anything clever like this:

     /usr/adm/sm.bin/myprog -> /usr/local/myprograms/myprog.pl  (symlink)

or this:

     |"myprog.pl"  (.forward)

~/.forward:

The .forward file is placed in the user's home directory. Any mail that is to be delivered to the user is instead forwarded according to the rules found in the .forward file. The permissions of the .forward file should be set to:

     -rw-------   1 joeuser joeuser    161 Aug 24  2010 .forward

Be sure that ownership is given to the user, as shown above. Sendmail will access this file as the user. If the user doesn't have permission to read this file, you will see a message in /var/log/maillog that reads something like this:

     Sep 28 23:30:02 mtabox sendmail[1234]: s8T3U2Hx013969: forward 
       /home/joeuser/.forward: Permission denied

If you wish to deliver mail to a mail robot program, you will first need to add the program to the sendmail permissions directory (see above). Once you've done that, make sure you can execute the program using the symlink in the permissions directory. The executable, and all of the directories, files, etc. that it uses, must be accessible to the user whose mail is being forwarded, since sendmail runs everything setuid to that user. Make sure all of this works, because you won't get any error messages; delivery just silently fails.

You may include anything in the .forward file that you would include in the aliases file (see "man aliases" for more information). However, care must be taken to avoid alias loops, so one significant divergence from the format of the aliases file is that usernames may be prefixed with '\' to inhibit further aliasing. It's a good idea to always use the '\' on user names, unless you specifically want the forwarded mail to go to an alias.

In general, you will probably want to forward mail to another user or a program. If you have an account that you rarely use and want to see its mail under another account, try this:

     \otheruser

If you want to send mail to a mail robot program, try this:

     "|/usr/local/myprograms/mailrobot --options"

Note the use of the double quotes around the entire command line for the robot. These are very important. You may specify options, etc. along with the program name, if you wish, but always use the quotes.

You can also send mail to a user and a program. For example, the vacation program does this:

     \thisuser, "|/usr/bin/vacation thisuser"

Getting .forward files to work is often a whole lot of grief but if you follow these simple rules you should be fine. Otherwise, if you experience the following errors, they usually mean what is shown:

     Error 127 - Sendmail couldn't find the program being forwarded to, the
                 program has the wrong permissions or the name in the .forward
                 file doesn't match the name in the permissions directory or the
                 path/filename given doesn't exist (even though the program is
                 supposed to be looked up in the permissions directory).
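Since sendmail gives you so little to go on, a quick pre-flight check of a .forward program entry can save a lot of grief. This sketch (the check_forward helper and the sample paths are ours, not part of sendmail) applies the rules above: the program path must exist, and the permissions directory must contain a symlink of the same basename:

```shell
# Check a .forward program path against the smrsh permissions directory:
# the program must exist and a symlink with the same basename must be
# present in the directory.  (Helper and paths are illustrative only.)
check_forward()
    {
    prog="$1" ; sm_bin="$2"
    [ -e "$prog" ] || { echo "$prog: program does not exist" ; return 1 ; }
    [ -L "$sm_bin/$(basename "$prog")" ] || \
        { echo "$prog: no matching symlink in $sm_bin" ; return 1 ; }
    echo "$prog: OK"
    }

# Demonstration in a scratch directory.
DEMO="$(mktemp -d)"
mkdir "$DEMO/sm.bin"
touch "$DEMO/myprog.pl"
ln -s "$DEMO/myprog.pl" "$DEMO/sm.bin/myprog.pl"
check_forward "$DEMO/myprog.pl" "$DEMO/sm.bin"
```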

/etc/init.d/sendmail:

Here is a new and improved startup script for sendmail. It will rebuild the sendmail configuration files, if they exist, and then start sendmail. Next, it will start the sendmail filter and the sendmail client queue runner. If you don't need the filter, you can turn it off in /etc/sysconfig/sendmail.

The clamav and spamassassin configurations are also used to decide whether the filter should use clamd and spamd for virus and spam scanning.

     #!/bin/bash
     #
     # sendmail - Shell script to take care of starting and stopping sendmail.
     #
     # chkconfig: 2345 80 30
     # description: Sendmail is a Mail Transport Agent, which is the program \
     #              that moves mail from one machine to another.
     #
     # processname: sendmail
     # pidfile: /var/run/sendmail.pid
     # config: /etc/mail/sendmail.cf
     #
     #
     # Define the install path for the sendmail binaries, etc.  Define the
     # programs to run.
     #
     INSTALL_PATH="/usr/sbin"
     CONFIG_PATH="/etc/mail"
     MTA_PROG="sendmail"
     FILTER_PROG="sendmailfilter"
     #
     # Load the functions.
     #
     if [ -f /etc/init.d/functions ] ; then
         . /etc/init.d/functions
     fi
     #
     # Source the networking configuration.
     #
     if [ -f /etc/sysconfig/network ] ; then
         . /etc/sysconfig/network
     else
         NETWORKING=no
     fi
     #
     # Source the sendmail configuration or take the defaults.
     #
     if [ -f /etc/sysconfig/sendmail ] ; then
         . /etc/sysconfig/sendmail
     else
         DAEMON=no
         QUEUE=1h
         FILTER=no
     fi
     [ -z "$SMQUEUE" ] && SMQUEUE="$QUEUE"
     [ -z "$SMQUEUE" ] && SMQUEUE=1h
     if [ "x$FILTER" == xyes ] ; then
         [ -z "$FILTER_OPTIONS" ] && FILTER_OPTIONS="-h -i -k -p24 -s1 -t -um"
         [ -z "$FILTER_PORT" ] && FILTER_PORT=2526
     fi
     #
     # Source the clamd configuration.  If it exists, it turns on filtering using
     # clamd.
     #
     if [ -f /etc/sysconfig/clamd ] ; then
         . /etc/sysconfig/clamd
         [ -z "$CLAMD_PORT" ] && CLAMD_PORT=2528
     fi
     #
     # Source the spamd configuration.  If it exists, it turns on filtering using
     # spamd.
     #
     if [ -f /etc/sysconfig/spamd ] ; then
         . /etc/sysconfig/spamd
         [ -z "$SPAMD_PORT" ] && SPAMD_PORT=783
     fi
     #
     # Check that networking is up.
     #
     [ "x$NETWORKING" != xyes ] && exit 0
     #
     # Make sure we have something to run.
     #
     [ -f $INSTALL_PATH/$MTA_PROG ] || exit 0
     RETVAL=0
     #
     # Upon startup, first start sendmail.
     #
     # Then, start the sendmail filter, if we were asked to do so.
     #
     start()
         {
         #
         # As a favor to the world, rebuild the various databases that sendmail
         # uses as configuration files.
         #
         echo -n $"Rebuilding the $MTA_PROG config tables: "
      /usr/bin/newaliases >/dev/null 2>&1
      if test -x /usr/bin/make -a -f $CONFIG_PATH/Makefile ; then
          make -C $CONFIG_PATH -s
      else
          for i in virtusertable access domaintable mailertable ; do
              if [ -f $CONFIG_PATH/$i ] ; then
                  makemap hash $CONFIG_PATH/$i < $CONFIG_PATH/$i
              fi
          done
      fi
      echo "[ DONE ]"
      #
      # Start sendmail.
      #
      echo -n $"Starting $MTA_PROG: "
      daemon $INSTALL_PATH/$MTA_PROG $([ "x$DAEMON" == xyes ] && echo -bd) \
          $([ -n "$QUEUE" ] && echo -q$QUEUE)
      RETVAL=$?
      echo
      [ $RETVAL -eq 0 ] && touch /var/lock/subsys/sendmail
      #
      # Start the mail filter
      #
      if [ "x$FILTER" == xyes ] ; then
          if [ -x $INSTALL_PATH/$FILTER_PROG ] ; then
              echo -n $"Starting $FILTER_PROG: "
                  $([ "x$FILTER_STRACE" != x ] && echo -e \
                      "/usr/bin/strace $FILTER_STRACE" ) \
                  $INSTALL_PATH/$FILTER_PROG -A $CONFIG_PATH/aliases \
                      $([ "x$FILTER_DOMAINS" != x ] && echo -e \
                          "-D \"$FILTER_DOMAINS\"" ) \
                      $FILTER_OPTIONS \
                      $([ "x$FILTER_PORT" != x ] && echo -e \
                          "-p inet:${FILTER_PORT}@localhost" ) \
                      $([ "x$CLAMD_PORT" != x ] && echo -e \
                          "-v clamd:${CLAMD_PORT}@localhost" ) \
                      $([ "x$SPAMD_PORT" != x ] && echo -e \
                          "-s spamd:${SPAMD_PORT}@localhost" ) \
                      $FILTER_DEBUG >/dev/null 2>&1 &
              FILTVAL=$?
              [ $FILTVAL -eq 0 ] && success "sendmailfilter startup" \
                  || failure "sendmailfilter startup"
              echo
          else
              echo -n $"Starting $FILTER_PROG, bypassed: "
              failure "sendmailfilter startup"
              echo
          fi
      fi
      #
      # Start the sendmail client, if not already running.
      #
      if ! test -f /var/run/sm-client.pid ; then
          echo -n $"Starting $MTA_PROG client: "
          touch /var/run/sm-client.pid
          chown smmsp:smmsp /var/run/sm-client.pid
          daemon --check sm-client $INSTALL_PATH/$MTA_PROG -L sm-msp-queue \
              -Ac -q$SMQUEUE
          RETVAL=$?
          echo
          [ $RETVAL -eq 0 ] && touch /var/lock/subsys/sm-client
      fi
      return $RETVAL
      }

     #
     # Upon shutdown, do everything in reverse (except the client is shut
     # down last).
     #
     stop()
      {
      #
      # Shut down the sendmail client.
      #
      # Note that, most of the time, the sendmail client doesn't exist so the
      # call to killproc fails.  We don't really want to see any whining so we
      # send the results to the bit bucket.
      #
      if test -f /var/run/sm-client.pid ; then
          echo -n $"Shutting down $MTA_PROG client: "
          killproc sm-client >/dev/null 2>&1
          RETVAL=$?
          echo "[  OK  ]"
          [ $RETVAL -eq 0 ] && rm -f /var/run/sm-client.pid
          [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/sm-client
      fi
      #
      # Stop the daemons.  Sendmail requires a big hammer.
      #
      if [ "x$FILTER" == xyes ] ; then
          echo -n $"Shutting down $FILTER_PROG: "
          killproc $FILTER_PROG
          echo
      fi
      echo -n $"Shutting down $MTA_PROG: "
      killproc $MTA_PROG
      RETVAL=$?
      echo
      killproc $MTA_PROG >/dev/null 2>&1
      [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/sendmail
      return $RETVAL
      }

     #
     # See how we were called.
     #
     case "$1" in

      #
      # Start.
      #
      start)
          start
          RETVAL=$?
          ;;
      #
      # Stop.
      #
      stop)
          stop
          RETVAL=$?
          ;;
      #
      # Restart or reload (whatever).
      #
      restart|reload)
          stop
          start
          RETVAL=$?
          ;;
      #
      # Conditional restart.
      #
      condrestart)
          if [ -f /var/lock/subsys/sendmail ] ; then
              stop
              start
              RETVAL=$?
          fi
          ;;
      #
      # Give the status of all of the sendmail components that are running.
      #
      status)
          status $MTA_PROG
          RETVAL=$?
          if [ "x$FILTER" == xyes ] ; then
              status $FILTER_PROG
          fi
          ;;
      #
      # Help text.
      #
      *)
          echo $"Usage: $0 {start|stop|restart|condrestart|status}"
          exit 1

     esac

     exit $RETVAL

/etc/sysconfig/sendmail:

If you use the above startup script, you will need to either create or update /etc/sysconfig/sendmail. This file contains the options that tell sendmail how to run. By default, the startup script runs sendmail once and then has it exit. This is probably not what you intended; rather, you probably want it to run as a daemon. So, set /etc/sysconfig/sendmail to at least the following options:

     DAEMON=yes
     QUEUE=1h

You may also want to set some of the filtering options, such as:

     FILTER=yes
     FILTER_DOMAINS=
     FILTER_OPTIONS=
     FILTER_PORT=2526

The permissions on this file should be:

     -rw-r--r--    1 root     root

Here is a sample of the complete file:

     #
     # Configuration for sendmail and its various filters and classifiers.
     #
     # Sendmail parameters.
     #
     # Daemonize and set the queue interval to one hour.
     #
     DAEMON=yes
     QUEUE=1h
     #
     # Filter parameters.
     #
     # Yes, we want filtering.
     #
     FILTER=yes
     #
     # Local domains.
     #
     FILTER_DOMAINS="192.168.1.0/24,stargate.homeworld"
     #
     # Filtering options.
     #
     FILTER_OPTIONS="-h -i -k -p24 -s1 -t -um"
     #
     # Sendmail port (as set in the sendmail config file)
     #
     FILTER_PORT=2526
     #
     # Debugging options.
     #
     FILTER_DEBUG="-d /var/log/sendmailfilter -d3"
     #
     # Strace options.  Set this parameter to cause the filter to be run with
     # strace so that system call problems, etc. can be debugged.  If non-blank,
     # the options specified will be passed to strace and tracing will be enabled.
     #
     # FILTER_STRACE="-f -ff -o /var/log/smfstrace"

Simple Forwarding From Local Machines

If you have local machines that you wish to have deliver mail via your sendmail server, you need to hack sendmail.mc on them and restart their local sendmail. Uncomment the SMART_HOST line in the macro file (probably /etc/mail/sendmail.mc) and aim it at your sendmail server:

     define(`SMART_HOST',`stargate')

Rebuild the sendmail.cf file:

     /usr/bin/m4 sendmail.mc >sendmail.cf

Restart sendmail:

     /etc/rc.d/init.d/sendmail stop
     /etc/rc.d/init.d/sendmail start

You should now be able to send mail to the main mail server from the local machine (if relaying is denied, you may need to add the local machine's address to relay-domains on the mail server). You might want to alias the following users in /etc/aliases on the local machine:

     root: joeblow@mydomain.com

As long as the aliased name contains a domain name, it will be forwarded by the local sendmail to the SMART_HOST. If the domain name is marked as a local domain on the main mail server, the mail will get delivered there. Note that you need to run newaliases (or restart sendmail) on the local machine whenever you add aliases to /etc/aliases.

Cleopatra Sendmail

To run Cleopatra 2525 sendmail, set up sendmail_cleo in the init.d directory and then put in all of the symbolic links in the rcN.d directories. Also, add a symlink named "sendmail_cleo" pointing to "sendmail" in the sendmail binary directory.

/etc/rc.d/init.d/sendmail_cleo:

     #!/bin/sh
     #
     # sendmail_cleo This shell script takes care of starting and stopping
     #               the Cleopatra 2525 version of sendmail.  Other than the
     #               fact that it listens on port 2525 instead of port 25, it
     #               is identical to sendmail and uses all of the same config
     #               files, etc.  It may be run in parallel with a port 25
     #               sendmail.
     #
     # chkconfig: 2345 80 30
     # description: Sendmail is a Mail Transport Agent, which is the program
     #              that moves mail from one machine to another.
     # processname: sendmail
     # config: /etc/sendmail.cf
     # pidfile: /var/run/sendmail_cleo.pid
     # Source function library.
     . /etc/rc.d/init.d/functions
     # Source networking configuration.
     . /etc/sysconfig/network
     # Source sendmail configuration.
     if [ -f /etc/sysconfig/sendmail ] ; then
             . /etc/sysconfig/sendmail
     else
             DAEMON=yes
             QUEUE=1h
     fi
     # Check that networking is up.
     [ "${NETWORKING}" = "no" ] && exit 0
     # Check for the Cleopatra version of sendmail.  This is just a symbolic
     # link to sendmail but it lets Cleo run as an easily-distinguished, separate
     # task (so we know which is which and can kill it, etc.).
     [ -f /usr/sbin/sendmail_cleo ] || exit 0
     RETVAL=0
     # See how we were called.
     case "$1" in
       start)
             # Start daemons.
          echo -n "Starting Cleopatra sendmail: "
          /usr/bin/newaliases >/dev/null 2>&1
          for i in virtusertable access domaintable mailertable ; do
              if [ -f /etc/mail/$i ] ; then
                  makemap hash /etc/mail/$i < /etc/mail/$i
              fi
          done
          daemon /usr/sbin/sendmail_cleo $([ "$DAEMON" = yes ] && echo -bd) \
                                    $([ -n "$QUEUE" ] && echo -q$QUEUE) \
                                    -ODaemonPortOptions=Port=2525
          RETVAL=$?
          echo
          [ $RETVAL -eq 0 ] && touch /var/lock/subsys/sendmail_cleo
          ;;
    stop)
          # Stop daemons.
          echo -n "Shutting down Cleopatra sendmail: "
          killproc sendmail_cleo
          RETVAL=$?
          echo
          [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/sendmail_cleo
          ;;
    restart|reload)
          $0 stop
          $0 start
          RETVAL=$?
          ;;
    status)
          status sendmail_cleo
          RETVAL=$?
          ;;
    *)
          echo "Usage: sendmail_cleo {start|stop|restart|status}"
          exit 1

     esac

     exit $RETVAL

/etc/sysconfig/sendmail:

If you use the above startup script, you should see the notes in the sendmail section about /etc/sysconfig/sendmail.

Queueing and Delivery by Cron Job

On machines that use dialup links, it may not be desirable to bring up the link every time a mail message is sent. Instead, it would be a better idea to put the message into the queue and then deliver the mail all at once under control of a cron job (possibly when the link is already up at some other time or in the wee hours of the morning when the phone line is likely to be free).

The sendmail configuration file may be altered to achieve this queueing and then a cron job may be run to flush the queue at the appropriate time. Make the following changes to the sendmail config file:

/usr/lib/sendmail-cf/cf/*.mc:

See the instructions for /usr/lib/sendmail-cf/cf/*.mc in the Sendmail section, above, for information on how to hack this file.

If you'll be leaving mail in the queue for longer than four hours before it is delivered, you should add or change this line:

     dnl define(`confTO_QUEUEWARN', `24h')dnl

In this case, 24 hours was picked so that mail can be delivered once a day.

Somewhere in the config file, add the following flags:

     define(`confCON_EXPENSIVE', `True')dnl
     define(SMTP_MAILER_FLAGS, e)dnl

The first defines the "Oc" parameter in the sendmail.cf file, which indicates that sendmail should queue all mail bound for expensive mailers. The second indicates that the SMTP mailers should be marked as expensive by adding the "e" parameter to their command line parameters.

If your system does not have a local copy of DNS, you should add this line to the config file to prevent names from being resolved via DNS (thereby requiring the link to be brought up before the mail can be queued):

     FEATURE(nocanonify)dnl

Also, if the link will not be up when the cron job schedules the queued messages for delivery, you may want to increase the dialing delay that sendmail is prepared to accept so that it will not fail when delivering the queued messages. To do this, set the following to an acceptable value:

     define(`confDIAL_DELAY',`60s')dnl

You can include the following line to define a local mailer that isn't affected by the expensive parameter, although the local mailer is supposed to be included by default:

     MAILER(local)dnl

Be sure that you include at least one SMTP-based mailer, for example:

     MAILER(smtp)dnl

/usr/lib/sendmail-cf/mailer/smtp.m4:

If you are using the SMTP redirector and you hacked smtp.m4, you will need to add the expensive flag to the mailer's parameters. Change "F=DFMnSu" to read "F=DFMnSue", as follows:

     Msmtpredir,     P=/usr/sbin/SMTPRedirect, F=DFMnSue,
                     S=EnvFromSMTP/HdrFromSMTP, R=ifdef(`_ALL_MASQUERADE_',
                         `EnvToSMTP/HdrFromSMTP', `EnvToSMTP'),
                     T=DNS/RFC822/SMTP,
                     A=SMTPRedirect $h $f $u

/usr/lib/sendmail-cf/cf/submit.mc:

If your system does not have a local copy of DNS, you should add these lines to the submit config file to prevent names from being resolved via DNS (thereby throwing mail into the queue for all eternity, since the main sendmail will refuse to resolve the name and the mail will get deferred):

     FEATURE(`nocanonify', `canonify_hosts')dnl
     define(`confDIRECT_SUBMISSION_MODIFIERS', `C')dnl

Once you've done this, build and install submit.cf according to the instructions above.

/etc/rc.d/init.d/sendmail:

You should alter the startup parameters for sendmail to omit automatically running the queue at periodic intervals. If you are using a recent version of sendmail (e.g. 8.12.10) that uses the sm-client queue runner to submit mail without needing sendmail to be setuid root, you should modify only the startup of sendmail itself, not sm-client (sm-client still needs to periodically run its queue to properly handle local mail delivery -- you cannot set its queue time to zero or remove it because sm-client refuses to start). Change:

     daemon /usr/sbin/sendmail $([ "x$DAEMON" = xyes ] && echo -bd) \
                               $([ -n "$QUEUE" ] && echo -q$QUEUE)

to

     daemon /usr/sbin/sendmail $([ "x$DAEMON" = xyes ] && echo -bd)

This will remove the queue option from sendmail startup but not from sm-client, which will keep the same queue option as usual. If you are using an earlier version of sendmail, you should make similar changes to eliminate the queue option from sendmail startup.

/etc/mail/checkmail:

You may want sendmail to send all of the mail in its queue and fetchmail to get all of the mail in the ISP's POP mailboxes at regular intervals but only if the link is already up for some other reason. If so, the following script may be useful:

     #!/bin/sh
     #
     # Script to check whether the PPP link is presently up and, if it is, send
     # all the mail in the queue to the ISP's SMTP server and fetch all available
     # mail from the ISP's POP server and pass it through sendmail on the local
     # machine.
     #
     # Should be run at regular intervals from cron.  However, please note that
     # the interval used should be greater than the interval used by diald to
     # determine link inactivity.  This will allow diald to shut down the link,
     # if it isn't active for some reason other than sending/receiving mail.  If
     # the interval is too short, the act of sending/receiving mail will keep the
     # link up forever, which isn't what we want.  A good suggestion is that the
     # interval for this task be 50% more than the interval for diald link
     # timeout.
     #
     # Note that this script will find all of the .fetchmailrc files in the tree
     # under /home and fetch the mail for every user that has one.
     #
     #
     # Check to see if the link is up.
     #
     IsLinkUp=`/sbin/ifconfig | grep ppp`
     if test x"$IsLinkUp" != x; then
         /usr/sbin/sendmail -q
         /etc/mail/fetchmail-poll
     fi
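The link test in this script just looks for a ppp interface in the ifconfig output. Factored out, it can be tried against any interface listing (on newer systems without ifconfig, "ip -o link show" produces equivalent output):

```shell
# The PPP link test from checkmail, factored out so that it can be fed
# any interface listing (e.g. from ifconfig or "ip -o link show").
link_is_up()
    {
    echo "$1" | grep -q ppp
    }

link_is_up "ppp0  Link encap:Point-to-Point Protocol" && echo "link up"
link_is_up "eth0  Link encap:Ethernet" || echo "link down"
```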

/etc/crontab:

At a time when the link will be up or when it is convenient to bring the link up for the express purposes of sending mail, the sendmail queue should be processed. The following should do the trick:

     # At least once a day, when the link is already up, send all the queued
     # mail to the ISP's SMTP server.  Log attempts to fetchmail log.
     05 2 * * * root /usr/sbin/sendmail -q >>/var/log/fetchmail 2>&1

If you want to run the checkmail script (above) at regular intervals, you should add something to crontab like this:

     # Every twenty minutes, we'll check to see if the link is currently up.  If
     # it is, someone is already using the link.  We might as well see if we can
     # send and receive all of the email that is waiting in the queues or
     # mailbox.  Note that the interval used here must be greater than the
     # timeout used by diald or this will force the link to be up always.
     00,20,40 * * * * root /etc/mail/checkmail >>/var/log/fetchmail 2>&1
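The 50%-more rule of thumb is easy to compute. For example, assuming a hypothetical diald inactivity timeout of 600 seconds:

```shell
# Derive the checkmail cron interval from the diald inactivity timeout,
# using the 50%-more rule of thumb.  600 seconds is just an example.
DIALD_TIMEOUT=600
CRON_INTERVAL=$(( DIALD_TIMEOUT * 3 / 2 ))
echo "run checkmail every $CRON_INTERVAL seconds or more"
```

This yields 900 seconds (15 minutes), which you would round up to the next convenient cron interval, such as the 20 minutes used above.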

Sendmail Redirector

To run the sendmail redirector (SMTPRedirect.pl), copy the Perl script into /usr/sbin/SMTPRedirect (renamed without the ".pl" extension). Change the ownership and permissions so that it looks like:

     -rwxr-xr-x 1 root root 53912 Nov 16  2009 /usr/sbin/SMTPRedirect

You may want to hack the default mail host ($MAILHOST_DEF) in this file to set it to your default mail host's name. Other than this, nothing needs to be done to sendmail startup, etc. to run the redirector; it is invoked automatically whenever mail is to be sent.

SpamAssassin

SpamAssassin is commonly used to identify spam, so that it can be given the proper handling by a mail delivery agent such as procmail or directly by the MTA as in the case of a sendmail message filter (milter).

The easiest way to install SpamAssassin is via CPAN. As root, do the following:

     perl -MCPAN -e shell
     install Mail::SpamAssassin
     quit

If you wish to install it from the distro, begin with the latest source which can be found at: http://spamassassin.apache.org/. Once you have downloaded the source tarball, unzip it in the top level directory where you wish the source/build tree to reside:

     tar -xvzf Mail-SpamAssassin-m.n.xx.tar.gz

In the INSTALL file, you can check the list of prerequisites that should be installed (mostly from CPAN) before SpamAssassin can be installed. First off, make sure that you have a version of the Perl Interpreter that is at least 5.8.2 (some of the prerequisite CPAN modules will not build with 5.8.0). Some of the SpamAssassin install notes that mention 5.6.1 are just plain wrong. Ignore them and get a version of Perl later than or equal to 5.8.2. It is known to work on 5.8.8, for example.

Sadly enough, you are on your own when it comes to checking whether you have all the CPAN modules installed. CPAN doesn't include a check-if-installed command so, short of just installing the module and seeing what happens, there's no easy answer. The CPAN FAQ has some notes on how to go about finding out but they're less than satisfying.
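One workable trick is simply to ask perl to load the module; a zero exit status means it is installed. For example (Mail::SpamAssassin is the module we actually care about here; File::Spec is just a core module used to show the success case):

```shell
# Check whether a Perl module is installed by asking perl to load it;
# exit status 0 means it is present.
module_installed()
    {
    perl -M"$1" -e 1 >/dev/null 2>&1
    }

if command -v perl >/dev/null 2>&1 ; then
    module_installed File::Spec && echo "File::Spec: installed"
    module_installed Mail::SpamAssassin || echo "Mail::SpamAssassin: missing"
fi
```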

If, somehow, you find any of the required modules missing or any of the optional modules that enable features that you may want to use, get them from CPAN and install them. Presuming you have an Internet connection that will allow CPAN to work, do:

     su
     perl -MCPAN -e shell
     install ...
     quit

Switch to the source tree where you extracted the SpamAssassin source and build the makefile that is used to build SpamAssassin. Once you've done that, run it to do the actual build:

     cd Mail-SpamAssassin-m.n.xx
     perl Makefile.PL PREFIX=/usr
     make
     su
     make install

/etc/rc.d/init.d/spamassassin:

This startup script will start SpamAssassin's spamd daemon so that requests to identify spam may be passed to it without incurring the overhead of starting up the Perl interpreter each time a message is to be identified.

     #!/bin/sh
     #
     # spamassassin - This script starts and stops the spamd daemon
     #
     # Revision History:
     # ewilde      2012Jun09  Add pid file.
     # ewilde      2009Jan18  Update to start/stop before/after sendmail.
     #
     # chkconfig: 2345 79 31
     # description: Spamd is a daemon process which uses SpamAssassin to check \
     #              email messages for SPAM.  It is normally called by spamc \
     #              from a MDA.
     #
     # processname: spamd
     # pidfile: /var/run/spamd.pid
     # config: /etc/mail/spamassassin/local.cf (among others)
     #
     #
     # Define the install path for the SpamAssassin binaries, etc.  Define the
     # program to run.
     #
     INSTALL_PATH="/usr/bin"
     SPAM_DAEMON="spamd"
     #
     # Load the function library.
     #
     if [ -f /etc/rc.d/init.d/functions ]; then
         . /etc/rc.d/init.d/functions
     fi
     #
     # Source the networking configuration.
     #
     if [ -f /etc/sysconfig/network ]; then
         . /etc/sysconfig/network
     else
         NETWORKING=no
     fi
     #
     # Source the spamd configuration or take the defaults.
     #
     if [ -f /etc/sysconfig/spamd ] ; then
         . /etc/sysconfig/spamd
     else
         SPAMD_PORT=783
         SPAMD_OPTIONS="-c -d -L"
         SPAMD_PID=/var/run/spamd.pid
     fi
     [ -z "$SPAMD_PORT" ] && SPAMD_PORT=783
     [ -z "$SPAMD_PID" ] && SPAMD_PID=/var/run/spamd.pid
     #
     # Check that networking is up.  We talk to our customers using TCP/IP.
     #
     [ "x$NETWORKING" != xyes ] && exit 0
     #
     # Big kludge-oh-mundo.  Spamd runs out of PATH.  Also, make sure we have
     # something to run.
     #
     [ -f $INSTALL_PATH/$SPAM_DAEMON ] || exit 0
     PATH=$PATH:$INSTALL_PATH
     #
     # Upon startup, start the daemon.
     #
     start()
         {
         echo -n "Starting $SPAM_DAEMON: "
         daemon $SPAM_DAEMON $SPAMD_OPTIONS -p $SPAMD_PORT -r $SPAMD_PID
         RETVAL=$?
         echo
         #
         # If startup succeeded, lock the lockfile.
         #
         [ $RETVAL = 0 ] && touch /var/lock/subsys/spamd
         }
     #
     # Upon shutdown, stop the daemon.
     #
     stop()
         {
         echo -n "Shutting down $SPAM_DAEMON: "
         killproc $SPAM_DAEMON
         RETVAL=$?
         echo
         #
         # If shutdown succeeded, unlock the lockfile.
         #
         if [ $RETVAL = 0 ]; then
             rm -f /var/lock/subsys/spamd
             rm -f $SPAMD_PID
         fi
         }
     #
     # See how we were called.
     #
     case "$1" in
         #
         # Start.
         #
         start)
             start
             ;;
         #
         # Stop.
         #
         stop)
             stop
             ;;
         #
         # Restart or reload (whatever).
         #
         restart|reload)
             stop
             sleep 3
             start
             ;;
         #
         # Conditional restart.
         #
         condrestart)
             if [ -f /var/lock/subsys/spamd ]; then
                 stop
                 start
             fi
             ;;
         #
         # Give the status of spamd.
         #
         status)
             status $SPAM_DAEMON
             ;;
         #
         # Help text.
         #
         *)
             echo $"Usage: $0 {start|stop|restart|condrestart|status}"
             exit 1
     esac
     exit 0

Don't forget to install the startup script using chkconfig:

     /sbin/chkconfig --add spamassassin
     /sbin/chkconfig spamassassin on

/etc/sysconfig/spamd:

If you are planning to use spamd with the sendmailfilter, you should create this file, since doing so will cause the sendmail startup script to invoke spamd for email filtering. The options for the spamassassin startup script (above) are also included in this file.

The permissions on this file should be:

     -rw-r--r--    1 root     root

Here is a sample of the complete file:

     #
     # Configuration for the SpamAssassin daemon.
     #
     SPAMD_PORT=2527
     SPAMD_OPTIONS="-c -d -L"

Note that RPM installations of SpamAssassin install a "spamassassin" file in /etc/sysconfig that serves the same purpose as the "spamd" file above. If you wish to use that name instead, change "spamd" to "spamassassin" in the /etc/init.d/spamassassin startup script, and possibly the /etc/init.d/sendmail startup script, shown herein.
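If you do opt for the RPM's file name, the rename amounts to a one-line sed per script. Here is a sketch, run against a temp copy so it is safe to execute anywhere; on a real system the target would be /etc/init.d/spamassassin:

```shell
# Hypothetical sketch: point the init script at /etc/sysconfig/spamassassin
# instead of /etc/sysconfig/spamd.  A temp copy stands in for the real
# /etc/init.d/spamassassin script here.
SCRIPT=$(mktemp)
echo '. /etc/sysconfig/spamd' > "$SCRIPT"
sed -i 's|/etc/sysconfig/spamd|/etc/sysconfig/spamassassin|g' "$SCRIPT"
cat "$SCRIPT"
```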

The last step, for later versions of SpamAssassin (e.g. 3.3.2), is to run the sa-update command as root:

     su
     sa-update

You will know that you forgot to do this if, upon running "/etc/init.d/spamassassin start", you receive the really informative error message that reads:

     child process [18755] exited or timed out without signaling production
         of a PID file

ClamAV

ClamAV is commonly used to scan for viruses in email, so that they can be given the proper handling by a mail delivery agent such as procmail or directly by the MTA as in the case of a sendmail message filter (milter). It is also used to scan for viruses in disk files so that infected files can be deleted or quarantined.

ClamAV is installed in the usual manner by downloading its distribution file from http://www.clamav.net/. Once you've downloaded the distribution file, unpack it like this:

     tar -xvzf clamav-0.97.4.tar.gz

The install instructions omit a step: creating the ClamAV userid and group before you run configure. Do something like this:

     su
     /usr/sbin/groupadd clamav
     /usr/sbin/useradd -c "ClamAV Virus Scanner" -g clamav -M \
       -s /sbin/nologin clamav

Next, we'll create a place for the daemon to store the logfiles, and an install directory:

     su
     mkdir /var/log/clamav/
     chown root:clamav /var/log/clamav/
     chmod ug=rwx,o=rx /var/log/clamav/
     mkdir /usr/local/clamav
     chown clamav:clamav /usr/local/clamav
     chmod ug=rwx,o=rx /usr/local/clamav

Change to the source directory, and configure and build the source:

     cd clamav-0.97.4
     ./configure --disable-clamuko --disable-milter --with-dbdir=/usr/local/clamav
     make

Then, when the build succeeds, install ClamAV:

     su
     make install

Note that the ClamAV install appears to have omitted the step of informing the linker that it needs to rebuild the linker cache so that it can see the ClamAV dynamic libraries. You can check this by running:

     su
     /sbin/ldconfig -p | grep libclam

If you don't see any results, you won't be able to run anything that links to these libraries (e.g. clamd).

The fix is to first make sure that the installed library directory is named in "/etc/ld.so.conf" or one of the files that it includes from "/etc/ld.so.conf.d". If you used the "./configure" command shown above, you should be all set: the default is to install libclam into "/usr/local/lib", and this directory is always named in the "usr-local.conf" file in "/etc/ld.so.conf.d". If you picked some off-label library directory, you need to add a file to "/etc/ld.so.conf.d" that names that directory. For example, if you chose "/usr/mystuff/lib" for the libclam library, you might create a file named "/etc/ld.so.conf.d/usr-mystuff.conf" containing the line "/usr/mystuff/lib".

Once you're ready, run the following command:

     su
     /sbin/ldconfig

This should add libclam* to the /etc/ld.so.cache file and you'll be in business. Or, you can just take the Microsoft approach and reboot your machine a half dozen times.
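The off-label case can be sketched as follows. The directory name is hypothetical, and a temp directory stands in for /etc/ld.so.conf.d so the commands are runnable without root; on a real system you would write the fragment under /etc/ld.so.conf.d and then run /sbin/ldconfig as root:

```shell
# Sketch: register a hypothetical /usr/mystuff/lib with the dynamic linker.
# A temp dir stands in for /etc/ld.so.conf.d; on a real system, write the
# fragment there and run /sbin/ldconfig as root afterwards.
CONF_DIR=$(mktemp -d)
echo "/usr/mystuff/lib" > "$CONF_DIR/usr-mystuff.conf"
cat "$CONF_DIR/usr-mystuff.conf"
```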

/etc/clam/clamav.conf:

To use ClamAV (e.g. for email scanning), you will need to run the clamd daemon. All of the parameters that control the way clamd operates are contained in its configuration file (there are virtually no command line parameters). The standard location for the config file is /etc/clamav.conf or maybe even /usr/local/etc/clamd.conf.

We, on the other hand, prefer to create a directory for ClamAV under /etc and put the config file, along with any other clam-related files there. It makes the /etc directory neater and groups all of the logically-related files together. Not only that but we don't have to go looking in some screwy place where the product's creator thought it would be cute to put things. So, we do:

     su
     mkdir /etc/clam
     chmod u=rwx,go=rx /etc/clam
     touch /etc/clam/clamav.conf
     chgrp clamav /etc/clam/clamav.conf
     chmod ug=rw,o= /etc/clam/clamav.conf

The options that you'll likely want to consider are those for: turning on logging; choosing how to communicate (e.g. via a socket); indicating where to store the PID file; setting where the virus signatures database is kept; adjusting scanning performance and tuning; picking what type of files to scan; limiting how far scanning goes. Here are some suggestions:

     #
     # Turn on logging, direct where it should go, etc.
     #
     LogTime Yes
     LogFileMaxSize 0
     LogFile /var/log/clamav/clamd.log
     #
     # Set where the PID file is kept and how we tawk, dahling.
     #
     PidFile /var/run/clamd.pid
     TCPSocket 2528
     #
     # Where the virus signatures database is squirreled away.
     #
     DatabaseDirectory /usr/local/clamav
     #
     # Performance and Tooning.
     #
     MaxConnectionQueueLength 25
     MaxThreads 50
     ReadTimeout 30
     MaxDirectoryRecursion 15
     FollowFileSymlinks Yes
     #
     # Use an alter ego.
     #
     User clamav
     #
     # Wot to scan.
     #
     AlgorithmicDetection Yes
     HeuristicScanPrecedence No
     ScanPE Yes
     ScanELF Yes
     ScanOLE2 Yes
     ScanPDF Yes
     ScanHTML Yes
     ScanMail Yes
     ScanArchive Yes
     MailFollowURLs No
     ScanPartialMessages No
     PhishingSignatures Yes
     PhishingScanURLs Yes
     #
     # How far should we try to go?
     #
     MaxScanSize 100M
     MaxFileSize 25M
     StreamMaxLength 25M
     MaxRecursion 16
     MaxFiles 10000
     #
     # Debugging stuff.
     #
     # Debug Yes
     # LeaveTemporaryFiles Yes

/etc/rc.d/init.d/clamav:

This startup script will start ClamAV's clamd daemon so that requests to check for viruses may be passed to it without incurring the overhead of loading the virus signatures database each time a message is to be scanned.

     #!/bin/sh
     #
     # clamav - This script starts and stops the clamd daemon
     #
     # Revision History:
     # ewilde      2009Jan18  Initial coding.
     #
     # chkconfig: 2345 79 31
     # description: Clamd is a daemon process which uses ClamAV to check email \
     #              messages for VIRUSES.
     #
     # processname: clamd
     # pidfile: /var/run/clamd.pid
     # config: /etc/clam/clamav.conf
     #
     #
     # Define the install path for the ClamAV binaries, etc.  Define the
     # program to run.
     #
     INSTALL_PATH="/usr/local/sbin"
     CONFIG_FILE="/etc/clam/clamav.conf"
     CLAM_DAEMON="clamd"
     #
     # Load the RedHat functions.
     #
     if [ -f /etc/redhat-release ]; then
         . /etc/rc.d/init.d/functions
     fi
     #
     # Source the networking configuration.
     #
     if [ -f /etc/sysconfig/network ]; then
         . /etc/sysconfig/network
     else
         NETWORKING=no
     fi
     #
     # Source the clamd configuration or take the defaults.
     #
     if [ -f /etc/sysconfig/clamd ] ; then
         . /etc/sysconfig/clamd
     else
         CLAMD_PORT=2528
     fi
     [ -z "$CLAMD_PORT" ] && CLAMD_PORT=2528
     #
     # Check that networking is up.  We talk to our customers using TCP/IP.
     #
     [ "x$NETWORKING" != xyes ] && exit 0
     #
     # Make sure we have something to run.
     #
     [ -f $INSTALL_PATH/$CLAM_DAEMON ] || exit 0
     #
     # Clamd is dain bramaged.  Aside from naming its config file, you can't
     # pass it any command line options.  So, the port number must be set in
     # the config file.  But, we want to set it in /etc/sysconfig/clamd.  What
     # to do?  About all we can do is check that it matches what's in the
     # config file.
     #
     ConfSocket=`grep TCPSocket $CONFIG_FILE`
     if [ "x$ConfSocket" != "xTCPSocket $CLAMD_PORT" ]; then
         echo Clamd port $CLAMD_PORT does not match the port set in $CONFIG_FILE
         exit 0
     fi
     #
     # Upon startup, start the daemon.
     #
     start()
         {
         echo -n "Starting $CLAM_DAEMON: "
         daemon $INSTALL_PATH/$CLAM_DAEMON -c $CONFIG_FILE
         RETVAL=$?
         echo
         #
         # If startup succeeded, lock the lockfile.
         #
         [ $RETVAL = 0 ] && touch /var/lock/subsys/clamd
         }
     #
     # Upon shutdown, stop the daemon.
     #
     stop()
         {
         echo -n "Shutting down $CLAM_DAEMON: "
         killproc $CLAM_DAEMON
         RETVAL=$?
         echo
         #
         # If shutdown succeeded, unlock the lockfile.
         #
         [ $RETVAL = 0 ] && rm -f /var/lock/subsys/clamd
         }
     #
     # See how we were called.
     #
     case "$1" in
         #
         # Start.
         #
         start)
             start
             ;;
         #
         # Stop.
         #
         stop)
             stop
             ;;
         #
         # Restart or reload (whatever).
         #
         restart|reload)
             stop
             start
             ;;
         #
         # Conditional restart.
         #
         condrestart)
             if [ -f /var/lock/subsys/clamd ]; then
                 stop
                 start
             fi
             ;;
         #
         # Give the status of clamd.
         #
         status)
             status $CLAM_DAEMON
             ;;
         #
         # Help text.
         #
         *)
             echo $"Usage: $0 {start|stop|restart|condrestart|status}"
             exit 1
     esac
     exit 0

/etc/sysconfig/clamd:

If you are planning to use clamd with the sendmail filter (see "Sendmail Filter or Milter", below), you should create this file; its presence causes the sendmail startup script to invoke clamd for email filtering. This file also supplies the options used by the clamav startup script (above).

The permissions on this file should be:

     -rw-r--r--    1 root     root

Here is a sample of the complete file:

     #
     # Configuration for the ClamAV daemon.
     #
     CLAMD_PORT=2528
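The clamav startup script (above) enforces the agreement between this port and the one in the config file with a simple grep. The check can be tried stand-alone, with a temp file standing in for /etc/clam/clamav.conf:

```shell
# Re-create the init script's port-consistency check against a temp config
# file (standing in for /etc/clam/clamav.conf).
CLAMD_PORT=2528
CONFIG_FILE=$(mktemp)
echo "TCPSocket 2528" > "$CONFIG_FILE"
ConfSocket=$(grep TCPSocket "$CONFIG_FILE")
if [ "x$ConfSocket" = "xTCPSocket $CLAMD_PORT" ]; then
    echo "ports match"
else
    echo "ports do not match"
fi
```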

It is very important that the virus signatures database be kept up to date, since it is used to describe all of the viruses that ClamAV detects. Not to put too fine a point on it but, if the virus signature isn't in the database, the virus isn't going to be found.

ClamAV has a program, called freshclam, that can be invoked on a regular basis to contact the mother ship and download a new virus database, if there is one. It should be run at regular intervals out of crontab (or as a daemon). It is easy to set up.

However, before we proceed, consider that virus database freshness is the most important thing in an anti-virus system. Consequently, it is critical that freshclam be able to alert you when something goes wrong.

/etc/clam/notify:

Create the following script to send event notifications to root, via email, whenever ClamAV has something important to say. Note that the indentation in front of "ENDMSG" must consist of tabs only. If your lame-butt text editor sticks spaces in there, the shell script will get a syntax error. Here is the script:

     #!/bin/sh
     # A shell script that can be used by freshclam to send messages to root
     # when a fresh copy of the virus signatures database cannot be downloaded.
     # Email the message to root so that they can see everything that's going on.
     # After all, if you're omnipotent, you need to know everything.
     HostName=`hostname`
     /bin/mail -s "Urgent message from ClamAV" root <<-ENDMSG
         ClamAV on $HostName failed to obtain a fresh copy of the virus
         signatures database.  Please investigate immediately.
         ENDMSG

Note that, in the above script, the two sets of lines between "/bin/mail -s ... <<-ENDMSG" and up to and including the line with "ENDMSG" must either not be indented at all or only indented with actual tab characters (not blanks). If they are not, the script will fail. Also, don't forget to add execute permissions to the script after you create it:

     su
     chmod ugo+x /etc/clam/notify
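The tab rule comes from the shell's "<<-" here-document operator, which strips leading tab characters (and only tabs) from the body and terminator. A quick demonstration; printf writes the tab explicitly, so none need to survive copy and paste:

```shell
# Show that "<<-" strips a leading tab from the here-document body.
# printf emits the tab explicitly so space-mangling editors can't break it.
DEMO=$(mktemp)
printf 'cat <<-EOF\n\tstripped by <<-\nEOF\n' > "$DEMO"
sh "$DEMO"
# prints "stripped by <<-" with the leading tab removed
```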

Then, it wouldn't hurt to try it out after you've got it all set up. For example:

     /etc/clam/notify

Check that root receives the message.

/etc/clam/freshclam.conf:

Freshclam is almost as brain dead in the command line parameters department as clamd (actually, it has command line parameters and the docs claim they override the config file, yet it still whines about the config file being missing and gives up, so maybe it is worse), so a config file for it is required. The standard location for the config file is /usr/local/etc/freshclam.conf.

However, since we earlier started the trend of creating a directory for ClamAV under /etc and putting the ClamAV config file and other clam-related files there, we'll stick with a winner:

     su
     touch /etc/clam/freshclam.conf
     chgrp clamav /etc/clam/freshclam.conf
     chmod ug=rw,o= /etc/clam/freshclam.conf

Here is an example of a freshclam configuration file containing the options that you should consider:

     #
     # Turn on logging and direct where it should go, etc.
     #
     LogTime Yes
     LogFileMaxSize 0
     UpdateLogFile /var/log/clamav/freshclam.log
     #
     # Where the virus signatures database is squirreled away.
     #
     DatabaseDirectory /usr/local/clamav
     #
     # Use DNS to verify virus database version.
     #
     DNSDatabaseInfo current.cvd.clamav.net
     #
     # Where to find the database mirrors.  The last one is the fallback mirror.
     #
     DatabaseMirror db.us.clamav.net
     DatabaseMirror db.ca.clamav.net
     DatabaseMirror database.clamav.net
     #
     # Performance and Tooning.
     #
     ConnectTimeout 60
     ReceiveTimeout 120
     MaxAttempts 3
     ScriptedUpdates Yes
     CompressLocalDatabase No
     #
     # Use an alter ego.
     #
     DatabaseOwner clamav
     #
     # Notify root-ski if the database update process fails.
     #
     OnErrorExecute /etc/clam/notify
     #
     # Debugging stuff.
     #
     # Debug Yes

/etc/crontab:

Now we can add a line in /etc/crontab to schedule regular updates to the ClamAV virus signatures database:

     # Check for an updated ClamAV virus signatures database a couple of times a
     # day.  Note that crontab does not support backslash line continuation, so
     # the command must be given on a single line.
     40 6,18 * * * root /usr/local/bin/freshclam --config-file=/etc/clam/freshclam.conf >/dev/null 2>&1

/etc/logrotate.d/clamav:

Before the logfiles fill up all the available disk space, you should add a config file to rotate them to the logrotate config directory /etc/logrotate.d:

     /var/log/clamav/*.log {
         missingok
         notifempty
     }

Now, before the ClamAV daemon is started, you should run freshclam once to populate the databases:

     su
     /usr/local/bin/freshclam --config-file=/etc/clam/freshclam.conf

Also, if you're running logwatch, you may experience a problem whereby it doesn't process the clam-update logfiles properly and always generates a message that says:

     The ClamAV update process (freshclam daemon) was not running!
     If you no longer wish to run freshclam, deleting the freshclam.log
     file will suppress this error message.

This problem is caused by a timestamp and arrow that were added to the beginning of all clam-update log lines under more recent versions of ClamAV. Apparently, the logwatch scanner for ClamAV hasn't kept up and it does not know about the timestamps. If you get the latest version of logwatch and the problem still isn't fixed, you can try applying this patch:

/usr/share/logwatch/scripts/services/clam-update:

     --- clam-update-7.3.6 2010-09-09 08:37:51.000000000 -0400
     +++ clam-update 2010-09-09 08:16:02.000000000 -0400
     @@ -87,6 +87,10 @@
         # Freshclam ends log messages with a newline.  If using the LogSyslog option, this is
         # turned into a space.  So we remove a space from every line, if it exists.
         $ThisLine =~ s/ $//;
     +   # Later versions of Freshclam prepend each line with a timestamp (a very
     +   # good idea) and an arrow.  This messes up scanning (below).
     +   if ($ThisLine =~ /^\s*\w{3} \w{3} [\d ]\d ..:..:.. \d{4} -> /)
     +      { $ThisLine = substr($ThisLine, length($&)); }
         if (
             # separator of 38 dashes
             ($ThisLine =~ /^\-{38}$/) or

Sendmail Filter or Milter

To filter email messages before they are delivered by sendmail (to remove viruses, etc.), start up the mail filter along with sendmail. The mail filter must listen on the same port that sendmail's milter configuration specifies.

/etc/rc.d/init.d/sendmail:

Add the following lines at the appropriate spots:

     # Start the mail filter
     echo -n $"Starting sendmailfilter: "
     /usr/sbin/sendmailfilter -h -p inet:2526@localhost >/dev/null 2>&1 &
     FILTVAL=$?
     [ $FILTVAL -eq 0 ] && success "sendmailfilter startup" \
         || failure "sendmailfilter startup"
     echo
          .
          .
          .
     echo -n $"Shutting down sendmailfilter: "
     killproc sendmailfilter
     echo

Compile sendmail with the -DMILTER flag set. If it was already compiled without it, add the following line to sendmail/devtools/Site/site.config.m4:

     APPENDDEF(`conf_sendmail_ENVDEF', `-DMILTER')dnl

Then, delete the entire sendmail object directory named for your operating system (for example: sendmail/obj.Linux.2.2.16-22.i586/sendmail) and rebuild using "sh Build". Don't forget to install the new sendmail by issuing a "sh Build install" from the sendmail directory.

Switch to the libmilter directory and build "libmilter.a" (use "sh Build"). This is required before the sendmail filter can be built.

Configure the makefile to build the sendmail filter:

     ./configure --with-sendmail=/sendmail/source/dir

Compile and link the sendmail filter by typing "make".

Note that the sendmail guys have messed up the definition of "bool" in the milter header files on some of the later versions of sendmail. The definition in the mfapi.h file in the milter include directory collides with the definition in the gen.h file in the sendmail directory. The solution is to edit the Makefile and add "-DSM_CONF_STDBOOL_H" to the compile line:

     gcc -c $< $(CFLAGS) -pthread -O2 -DSM_CONF_STDBOOL_H -DNEWDB ...

When it compiles successfully, become root and install the filter:

     su
     make install

Hack the mc file for "sendmail.cf" and add the following milter lines (we add them near the end of the "sendmail.mc" file, just before the mailer definitions):

     define(`FFRMILTER', `1')dnl
     INPUT_MAIL_FILTER(`filter1',`S=inet:2526@localhost, F=R, \
                       T=S:10s;R:60s;E:5m')dnl
     define(`confINPUT_MAIL_FILTERS',`filter1')dnl

Rebuild and install the sendmail.cf file:

     /usr/bin/m4 sendmail.mc >sendmail.cf

/etc/logrotate.d/sendmailfilter:

If the sendmail filter writes logfiles, you should add a config file to rotate them to the logrotate config directory /etc/logrotate.d, before the logfiles fill up all the available disk space:

     /var/log/sendmailfilter {
         missingok
         notifempty
     }

Fetchmail

Fetchmail can be run at regular intervals on a system that connects through diald or when you just don't want sendmail to accept mail from the outside world (it's more secure that way). Typically, one would edit crontab to run fetchmail.

Note that version 8 of RedHat comes with fetchmail 5.9.0 (or something like that) pre-installed. Unfortunately, that version introduces a new, annoying behaviour: fetchmail will now try to fall back on procmail for mail delivery if it cannot connect to port 25 on the local machine. This effectively bypasses all virus/spam filtering if sendmail isn't running -- not a desirable outcome.

To fix the problem or if you just want to run the latest version of fetchmail, obtain the latest fetchmail distribution from:

     http://fetchmail.berlios.de/

Build fetchmail, per the instructions in the INSTALL file. Make sure that the "--enable-fallback=no" option is used on the "./configure" command:

     ./configure --enable-fallback=no
     make

Test the build by entering the command:

     ./fetchmail -V

If the results do not say "Fallback MDA: (none)", you have not succeeded in getting rid of the fallback mailer. If they do say "Fallback MDA: (none)", you can install fetchmail in its usual places:

     su
     make install
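The "-V" check above is easy to script so that a build with the fallback still compiled in gets caught automatically. Here it is exercised against canned output (the version string is illustrative only; on a real build you would pipe "./fetchmail -V" in instead):

```shell
# Grep the (canned, illustrative) "fetchmail -V" output for the fallback
# line.  On a real build, substitute the actual output of "./fetchmail -V".
V_OUTPUT="This is fetchmail release 6.3.26
Fallback MDA: (none)"
if echo "$V_OUTPUT" | grep -q "Fallback MDA: (none)"; then
    echo "fallback disabled; safe to install"
else
    echo "fallback still enabled; rebuild with --enable-fallback=no"
fi
```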

/etc/mail/fetchmail-poll:

A script that can be run at regular intervals (e.g. hourly) by cron to poll an ISP's server and see if there's any mail to deliver. This script is basically just a front end wrapped around fetchmail-config (see below), which is set up to look for fetchmail config files in all the home directories of the users defined on the local machine and fetch mail for any of them that has a config file:

     #!/bin/sh
     #
     # Script to fetch mail from the ISP's POP server and pass it through
     # sendmail on the local machine.
     #
     # Should be run at regular intervals from cron.  It will find all of the
     # .fetchmailrc files in the tree under /home and fetch the mail for every
     # user that has one.
     #
     #
     # We may be using diald.  By pinging the POP server, we get diald to dial
     # up the PPP link.  The ping may fail but it brings up the link anyway.
     #
     ping -c1 -w120 mail.mydomain.com
     #
     # Find all of the .fetchmailrc files belonging to the users.  Fetch their
     # mail and pass it through sendmail.
     #
     # A typical .fetchmailrc file should look as follows:
     #
     #      poll mail.popserver.com proto pop3
     #        user joeblow with password fromkokomo is jblow here
     #
     /etc/mail/fetchmail-config --primary=192.168.1.1 \
         --secondary=pop.mydomain.com,pop.myotherdomain.com \
         /home .fetchmailrc | \
         /usr/bin/fetchmail -a -f - -l 25000000 -t 600 -v
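Incidentally, when fetchmail reads an rc file directly (rather than from stdin, as the pipeline above does), it insists that the file not be readable by other users, so it doesn't hurt to lock down each user's copy anyway. A sketch against a temp file, with sample contents mirroring the comment in the script:

```shell
# Lock down a .fetchmailrc so only its owner can read it.  A temp file is
# used here; for a real user the target is e.g. /home/jblow/.fetchmailrc.
RC=$(mktemp)
cat > "$RC" <<'EOF'
poll mail.popserver.com proto pop3
  user joeblow with password fromkokomo is jblow here
EOF
chmod 600 "$RC"
stat -c '%a' "$RC"
# prints "600"
```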

/etc/mail/fetchmail-config:

A script that can be run against a directory tree to look for individual user config files and consolidate them into a single configuration file that can be piped directly to fetchmail to cause it to fetch mail for all of the users. This script is usually invoked out of cron by fetchmail-poll (see above).

     #!/usr/bin/perl
     #
     # Copyright (c) 2007 Eric Wilde
     #
     # This file is used by fetchmail to set up config files.
     #
     #
     # Modifications
     # -------------
     #
     # Date         Programmer     Description
     #
     # 2009 Feb 22  E.Wilde        Fix bogus close of ping handle when used with
     #                             multiple primaries.
     # 2007 Aug 10  E.Wilde        Initial coding.
     #
     #
     # Usage
     # -----
     #
     # fetchmail-config [--Primary=pri1[,pri2,...]] [--Secondary=sec1[,sec2,...]]
     #                  [--Debug] homedir configfile
     #
     # Primary                     The name or names (comma separated) of the
     #                             primary server(s) that are to be used first, if
     #                             they all exist (i.e. are reachable by ping).
     #                             If this option is omitted, no check for primary
     #                             or secondary servers will be done.
     #
     #                             Note that you must be root to use this option
     #                             and you must also specify the Secondary option.
     #
     #                             Also note that, if any of the primary servers
     #                             resolves to the same IP address as the host on
     #                             which fetchmail-config is running, the switch
     #                             is made to the secondary servers.  This allows
     #                             the same fetchmail-config command to be used on
     #                             both the main and redundant machines, if need
     #                             be.
     #
     # Secondary                   The name or names (comma separated) of the
     #                             secondary server(s) that are to be used only if
     #                             one or more of the primary servers does not
     #                             exist (i.e. aren't reachable by ping).  If this
     #                             option is omitted, no check for primary or
     #                             secondary servers will be done.
     #
     # Debug                       Turn on debugging.  Copious information is
     #                             dumped to stdout (you may want to redirect via
     #                             "perl fetchmail-config ... >file").
     #
     # homedir                     The name of the home directory that is to be
     #                             searched for fetchmail configuration files.
     #                             Only the first level subdirectories below this
     #                             directory will be searched.  This name is
     #                             required.  Typically, one would use something
     #                             like "/home".
     #
     # configfile                  The name of the fetchmail configuration files
     #                             that are to be searched for.  Each subdirectory
     #                             below the home directory given is searched for
     #                             a file that matches this name.  If found, it is
     #                             appended to the fetchmail config file being
     #                             constructed.  Typically, one would use a name
     #                             like ".fetchmailrc".
     #
     #
     # Description
     # -----------
     #
     # This program is run against all users defined in a system to find
     # individual fetchmail configuration files.  It consolidates all of the
     # config files together into a single file which can then be piped through
     # fetchmail.  It also does selection of mail server addresses to allow for
     # fetching of mail on redundant systems.
     #
     # To use it for straight fetching of mail, one might have the following
     # config files:
     #
     #   /home/user1/.fetchmailrc
     #     poll pop.mydomain.com proto pop3
     #       user user1@mydomain.com with password mysecret is user1 here
     #     poll pop.myotherdomain.com proto pop3
     #       user user1@myotherdomain.com with password mysecrettoo is user1 here
     #
     #   /home/user2/.fetchmailrc
     #     poll pop.mydomain.com proto pop3
     #       user user2@mydomain.com with password yoursecret is user2 here
     #
     # This program would be run as follows:
     #
     #   fetchmail-config /home .fetchmailrc
     #
     # To use it for fetching of mail from a main source and a fallback source on
     # a redundant system, one might have the following config files:
     #
     #   /home/user1/.fetchmailrc
     #     poll pop.mydomain.com proto pop3
     #       user user1@mydomain.com with password mysecret is user1 here
     #     poll pop.myotherdomain.com proto pop3
     #       user user1@myotherdomain.com with password mysecrettoo is user1 here
     #     poll thebigguy proto pop3
     #       user user1 with password redundoh is user1 here
     #
     #   /home/user2/.fetchmailrc
     #     poll pop.mydomain.com proto pop3
     #       user user2@mydomain.com with password yoursecret is user2 here
     #     poll thebigguy proto pop3
     #       user user2 with password redundy is user2 here
     #
     # This program would be run as follows:
     #
     #   fetchmail-config /home .fetchmailrc --primary=thebigguy
     #     --secondary=pop.mydomain.com,pop.myotherdomain.com
     #
     # To use it for fetching of mail on the main system with a fallback to the
     # redundant source (e.g. to pick up the mail the system missed while down),
     # one might have the following config files:
     #
     #   /home/user1/.fetchmailrc
     #     poll pop.mydomain.com proto pop3
     #       user user1@mydomain.com with password mysecret is user1 here
     #     poll pop.myotherdomain.com proto pop3
     #       user user1@myotherdomain.com with password mysecrettoo is user1 here
     #     poll elredundoh proto pop3
     #       user user1 with password redundoh is user1 here
     #
     #   /home/user2/.fetchmailrc
     #     poll pop.mydomain.com proto pop3
     #       user user2@mydomain.com with password yoursecret is user2 here
     #     poll elredundoh proto pop3
     #       user user2 with password redundy is user2 here
     #
     # This program would be run as follows:
     #
     #   fetchmail-config /home .fetchmailrc
     #     --primary=pop.mydomain.com,pop.myotherdomain.com
     #     --secondary=elredundoh
     #
     # Note that this program is not as robust as fetchmail when it comes to
     # processing the config file.  It expects the config file lines to look as
     # shown above, viz., the first line begins with "poll", followed by the
     # system to poll, followed by the protocol to use.  Subsequent lines contain
     # the user information, etc., and are copied as is, up to the next line
     # beginning with "poll".
     #
     #
     # Returns
     # -------
     #
     # Fatal errors will cause this script to bail out via the Perl die function.
     # A message (hopefully explanatory) will be generated that describes the error.
     # The return code will be set to one of the allowable values for errno (see
     # the operating system error list in sys/errno.h).
     #
     # Successful completion will see the generated fetchmail config file written
     # to stdout, presumably from whence it may be piped directly to fetchmail.
     #
     use FileHandle;
     use File::Find;
     use Getopt::Long;
     use Net::Ping;
     use Sys::Hostname;
     #
     # Work areas.
     #
     my %Servers = ();
     my ($PollReq, $PollServ, $PollProto);
     my ($PriServ, $PriPing);
     my $PriAlive = 1;
     my ($ServLocal, $ServAddr);
     my (@ServList, $ServName);
     #
     # Config information.
     #
     my $HomeDir = "";                       # Home directory
     my $HomeDirLen = 0;
     my $ConfigFile = "";                    # Config file name
     my $Primary = "";                       # List of primary servers
     my $Secondary = "";                     # List of secondary servers
     #
     # Debugging information.
     #
     $Debugging = 0;                         # Set on for debugging
     #
     # Process the command line options.  If this fails, give a summary of the
     # program's usage.
     #
     ProcessOptions() || UsageInfo();
     #
     # Get the home directory and config file, if any.
     #
     $HomeDir = shift;
     $ConfigFile = shift;
     die "Home directory and/or configuration file name not specified"
       if (!defined($HomeDir) || ($HomeDir eq "")
         || !defined($ConfigFile) || ($ConfigFile eq ""));
     $HomeDir =~ s/\/$//; $HomeDirLen = length($HomeDir) + 1;
     #
     # See if the user is doing the primary/secondary thing.
     #
     die "Primary and secondary servers not both specified"
       if ((($Primary ne "") && ($Secondary eq ""))
         || (($Primary eq "") && ($Secondary ne "")));
     #
     # Check if the primary servers are alive.
     #
     if ($Primary ne "")
       {
       print("Primary servers are ".$Primary."\n" .
         "Secondary servers are ".$Secondary."\n") if ($Debugging);
     $PriPing = Net::Ping->new(icmp, 30);
     ($ServLocal) = (gethostbyname(hostname()))[4];
     $ServLocal = join(".", unpack("C4", $ServLocal));
     foreach $PriServ (split(/,/, $Primary))
       {
       #
       # See if this primary server is the one we're running on.  If so, force
       # the switch to the secondary.
       #
       ($ServAddr) = (gethostbyname($PriServ))[4];
       $ServAddr = join(".", unpack("C4", $ServAddr));
      if ($ServAddr eq $ServLocal)
        {
        print("Primary server ".$PriServ." is the local server\n")
          if ($Debugging);
        $PriAlive = 0;
        }
      #
      # Otherwise, see if we can ping the primary.
      #
      elsif (!$PriPing->ping($PriServ))
        {
        print("Primary server ".$PriServ." at ".$ServAddr." " .
          "appears to be dead\n") if ($Debugging);
        $PriAlive = 0;
        }
      }
     $PriPing->close();
     #
      # If the primaries are all up, omit the secondary servers from the
      # config; otherwise, omit the primaries.
     #
     if ($PriAlive) { @ServList = split(/,/, $Secondary); }
     else { @ServList = split(/,/, $Primary); }
     if ($Debugging)
       {
       print("Omitting servers\n");
       foreach $ServName (@ServList) { print("  ".$ServName."\n"); }
       }
     }

     #
     # Find all the files that match what we're looking for.
     #
     print("Looking in ".$HomeDir." for ".$ConfigFile."\n") if ($Debugging);
     find(\&ReadConfig, $HomeDir);
     #
     # See if we found anything.  If not, it's time to bail out.
     #
     die "No configuration files named ".$ConfigFile." found in ".$HomeDir."."
       if (!keys(%Servers));
     #
     # Emit all of the poll requests in a single config file.
     #
     foreach $PollReq (keys(%Servers))
       {
       ($PollServ, $PollProto) = split(/\|/, $PollReq);
       #
       # If we're checking primary/secondary do it now.
       #
       if ($Primary ne "")
         {
         $PriAlive = 0;
      foreach $ServName (@ServList)
        {
        if ($ServName eq $PollServ) { $PriAlive = 1; last; }
        }
      next if ($PriAlive);
      }

       #
       # Emit this poll request.
       #
       print("poll ".$PollServ." proto ".$PollProto."\n".$Servers{$PollReq});
       }
#
# We're all done.
#
exit(0);

     ############################################################################
     sub ReadConfig
     #
     # $File::Find::dir                      Contains the current directory name.
     #
     # $_                                    Contains the current filename within
     #                                       the directory.
     #
     # $File::Find::name                     Contains the concatenation
     #                                       "$File::Find::dir/$_".
     #
     # $File::Find::prune                    Can be set to true in this routine
     #                                       in order to prune the tree; that is,
     #                                       find() will not descend into any
     #                                       directory when $File::Find::prune is
     #                                       set.
     #
     # This routine is called once for each file that is found in the directory
     # being searched.  It looks for config files that match the name given and
     # processes them, if found.  Only the first level of subdirectories
     # under the top directory is searched.
     {
     my ($UpperDir, $ConfigHand, $Config, $ConfigLine);
     my $PollServer = "";
     #
     # See if we can prune this subdirectory (if it's not the first level).
     #
     if (-d)
       {
       $UpperDir = substr($File::Find::name, $HomeDirLen);
     if ($UpperDir =~ /\//)
       {
       print("Pruning ".$File::Find::name."\n") if ($Debugging);
      $File::Find::prune = 1; return;
      }

       }
     #
     # See if the file name matches what we're looking for.
     #
     if ($_ ne $ConfigFile) { return; }
     #
     # Open up the config file to process it.
     #
     print("Found ".$File::Find::name."\n") if ($Debugging);

     $ConfigHand = new FileHandle;
     if (!$ConfigHand->open("<".$File::Find::name))
       {
       print("Failed to open config file ".$File::Find::name."\n")
         if ($Debugging);
     return;
     }
     binmode($ConfigHand);
     #
     # Read the config file into a variable so that we can work on it.
     #
     if ($ConfigHand->read($Config, 100000) <= 0)
       {
       print("Couldn't read config file ".$File::Find::name."\n") if ($Debugging);
     return;
     }
     $ConfigHand->close();
     #
     # Process all of the poll requests.  They look like this:
     #
     #      poll pop.mydomain.com proto pop3
     #        user user1@mydomain.com with password mysecret is user1 here
     #
     foreach $ConfigLine (split(/\n/, $Config))
       {
        next if ($ConfigLine =~ /^\s*$/);    # Skip blank lines
       #
       # Is this a new poll?
       #
        if ($ConfigLine =~ /^\s*poll\s+([^\s]+)\s+proto[^\s]*\s+([^\s]+)/i)
         {
         print("  Poll = ".$1.", proto = ".$2."\n") if ($Debugging);
      $PollServer = $1."|".$2;
      }

       #
       # Otherwise, add this line to the existing poll.
       #
       else
         {
         if (exists($Servers{$PollServer}))
           { $Servers{$PollServer} .= $ConfigLine."\n"; }
         else { $Servers{$PollServer} = $ConfigLine."\n"; }
         }
       }

     return;
     }
     ############################################################################
     sub ProcessOptions
     #
     # Process command line options and set the appropriate variables.
     #
     {
     my ($Result);
     #
     # Get options using standard option processor.
     #
     $Result = &GetOptions(
         "Debug", \$Debugging,               # Debugging
         "Primary=s", \$Primary,             # Primary servers list
         "Secondary=s", \$Secondary);        # Secondary servers list
     #
     # Return success if options gotten OK.
     #
     return ($Result);
     }
     ############################################################################
     sub UsageInfo
     #
     # Routine to display program usage information.  Called whenever the command
     # line parameters don't make sense.
     #
     {
     #
     # Exit after giving help.
     #
     print <<USAGE ;
     Usage: $0
          [--Primary=pri1[,pri2,...]] [--Secondary=sec1[,sec2,...]] [--Debug]
          homedir configfile
     Primary        List of primary servers to try first.
     Secondary      List of secondary servers to try if no primary servers.
     Debug          Turn on debugging.
     homedir        Home directory where config files are found.
     configfile     Name pattern to use for looking for config files.
     USAGE
     exit 0;
     }

/home/*/.fetchmailrc:

Config file for each user whose mail is to be fetched by fetchmail-poll. Placing this file in the user's home directory will cause automatic fetching of their mail:

     poll mail.myisp.com proto pop3
       user joeblow with password fromkokomo is joe here

If you wish to run redundant systems and you want inbound mail to be delivered to both of them, fetchmail allows you to do this. Set up a dummy account, used only for the redundant system, for each user who needs redundant delivery:

     /usr/sbin/useradd -c "Redundant mirror mail account" -m -s /sbin/nologin
          joemirror
     passwd joemirror

Delete all "." files in the new home directory:

     rm -f /home/joemirror/.ba /home/joemirror/.em /home/joemirror/.gt*

/etc/mail/aliases:

Add an alias to this file that can be used by fetchmail to deliver the mail to both locations:

     joe-drop: joe joemirror

/home/*/.fetchmailrc:

Now, instead of using the user's mail address directly in the local fetchmail config, do something like this:

     poll mail.myisp.com proto pop3
       user joeblow with password fromkokomo is joe-drop here

Presumably, on the redundant system you can then set up fetchmail to poll from the mirrored mail box and deliver the messages locally to the users on the redundant system.
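On the redundant system, the fetchmail config for each user might then look something like this (the host name primary.mydom.com and the password are hypothetical placeholders; the idea is to poll the mirror mailbox on the primary system and deliver to the real user locally):

```
poll primary.mydom.com proto pop3
  user joemirror with password mirrorpass is joe here
```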

SquirrelMail

Grab the latest source from www.squirrelmail.org. Stuff the tar file in a directory and untar it:

     tar -xvzf squirrelmail-1.4.10a.tar.gz

Run the config script to set SquirrelMail up with local options:

     cd squirrelmail-1.4.10a
     config/conf.pl

You might want to change the organization name, provider name and URL (under "Organization Preferences"), domain name (under "Server Settings"), and the data and attachment directories (under "General Options"). The data directory can be changed to a relative directory so that it is relative to wherever you install SquirrelMail (e.g. "data/"). The attachment directory should be changed to "/tmp/" or "/var/spool/squirrelmail/attach/".

If you are using one of the well-known IMAP servers (e.g. Dovecot), you should select the appropriate presets from the "Set pre-defined settings for specific IMAP servers" item and then select the IMAP server you'll be using (e.g. "Dovecot"). This can save you a world of hurt later on.

Note that you may have to actually edit config/config.php to set some of the values to what you really want. For example, later versions of conf.pl seem to want to put "config/" on the front of "data/", erroneously giving "config/data/" instead of what you specified. Also, in earlier versions of SquirrelMail, it used to be sufficient to change provider name to get your organization's name to show up on the login page but now it appears that you must change both organization name and provider name.
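For reference, the hand-edited values in config/config.php look something like this (the values shown are examples for an imagined site, not defaults):

```
$org_name       = "My Organization";
$provider_name  = "My Organization";
$provider_uri   = "http://www.mydom.com/";
$domain         = "mydom.com";
$data_dir       = "data/";
$attachment_dir = "/var/spool/squirrelmail/attach/";
```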

As super user, copy the untarred/set-up directory to the install directory. For example:

     mkdir /usr/share/squirrelmail
     cp -r * /usr/share/squirrelmail

Set the permissions on the data directory so that it is accessible by the Web servitron:

     chgrp apache /usr/share/squirrelmail/data
     chmod g+w /usr/share/squirrelmail/data

If need be (i.e. if you didn't choose "/tmp"), set the permissions on the attach directory so the Web servitron can add to it but not see it:

     mkdir /var/spool/squirrelmail
     mkdir /var/spool/squirrelmail/attach
     chgrp apache /var/spool/squirrelmail/attach
     chmod g=wx,o= /var/spool/squirrelmail/attach

Possibly, set up Apache to get to SquirrelMail from a special port.

/etc/httpd/conf/httpd.conf:

Add a special port for SquirrelMail to the Apache config file. This port will allow users to log on directly to SquirrelMail, either through the port itself or by aiming DNS at the special port when the domain is resolved. For example:

     webmail.mydom.com  -->  www.mydom.com:8580/

Here is a snippet from an Apache config file that defines the SquirrelMail port:

     ##
     ## SquirrelMail Virtual Host Context
     ##
     Listen 8580
     <VirtualHost _default_:8580>
     #
     #  Document root directory for SquirrelMail html.
     #
     DocumentRoot "/usr/share/squirrelmail"
     <Directory "/usr/share/squirrelmail">
         Options +Includes
     </Directory>
     #
     #  Directories defined in the main server that we don't want people to see
     #  under this port.
     #
     Alias /manual "/usr/share/squirrelmail/limbo"
     Alias /doc "/usr/share/squirrelmail/limbo"
     </VirtualHost>

To run SquirrelMail, you will need an IMAP server to allow it to read and manipulate the mailboxes. Many installations use the Cyrus IMAP Server for this job. On later versions of Linux, it even comes pre-installed or can be automatically installed by the package manager, thereby obviating the need to install it by hand. We include notes on how to install Cyrus, just in case you haven't had your daily dose of goat fu*king today and wish to get it now.

But, before you begin, bear in mind a couple of things. Cyrus is an IMAP-only solution. It has its own mailbox structure and it cannot read the standard mbox format used on Linux. You will have to convince your MTA to send it email directly so that it can store the messages itself, in its mailbox directories. Furthermore, once you go through all of the security bullshit necessary to get it working, you will have spent a couple of days dicking around. Installing and configuring Cyrus is no walk in the park (if you still want to give it a shot, see the Cyrus IMAP section below).

For this reason, many people use Dovecot or the UW-IMAP server. Dovecot is frequently mentioned as being easy to install. It too usually comes pre-installed or available through the package manager on modern Linuxi. Plus, it can read and index mbox-format mailboxes, apparently with great success. Is it any wonder it is a popular choice for newer systems?

The UW-IMAP server can be found pre-installed on older systems (such as RedHat 8), which are now becoming past history. It was a good product and we miss it. It is still available and can still be installed but most people, as we said, use Dovecot now (which would you rather have: something that works, or a goat fu*k?).

If your system's package manager doesn't allow you to install the package on the system, here are the steps to set up Dovecot from source (which can be downloaded from www.dovecot.org/download.html):

     tar -xvzf dovecot-1.1.11.tar.gz

Follow the usual steps to build Dovecot:

     cd dovecot-1.1.11
     ./configure --prefix=/usr --sysconfdir=/etc
     make

Install Dovecot on your system:

     su
     make install

If you'll be using sieve, get the plugin source from the Dovecot site and build it:

     tar -xvzf dovecot-sieve-1.1.6.tar.gz

Follow the same series of steps to build the sieve plugin:

     cd dovecot-sieve-1.1.6
     ./configure --prefix=/usr --sysconfdir=/etc
     make

Then, install the sieve on your system:

     su
     make install

/etc/dovecot.conf, /etc/dovecot/dovecot.conf or /usr/local/etc/dovecot.conf:

If you used "--sysconfdir=/etc", above, the configuration file for Dovecot is found in /etc/dovecot.conf. If you installed Dovecot with the system's package manager, the configuration file is probably /etc/dovecot/dovecot.conf. Otherwise, it is likely to be /usr/local/etc/dovecot.conf. You may need to begin by copying it from the example file, for instance /usr/local/etc/dovecot-example.conf. Edit the file to look something like this:

     # Working directory.
     base_dir = /var/run/dovecot/            # Restating the obvious
     # For the promiscuous at heart, with no secure protocols.
     protocols = imap pop3
     disable_plaintext_auth = no
     ssl_disable = yes
     # Or, for the security-minded, with regular + secure protocols.
     protocols = imap imaps pop3 pop3s
     disable_plaintext_auth = yes            
     ssl_disable = no
     # Logging where it's supposed to be (don't forget logrotate).
     log_path = /var/log/dovecot
     # Login handling.
     login_dir = /var/run/dovecot/login      # More restating the obvious
     login_chroot = yes
     login_user = dovecot
     # To use Mbox format, set the mail location as follows:
     mail_location = mbox:~/mail:INBOX=/var/mail/%u
     # If you're into debugging (who isn't):
     mail_debug = yes
     auth_debug = no                         # Unless you really need it
     # Authorization methods.
     auth default {
       user = root                           # Required for PAM or shadow passwd
     }

The defaults should work for the rest of the options.

/etc/rc.d/init.d/dovecot:

You will need a startup script to start Dovecot when your system boots up. Typically, the package manager will install one but, if it doesn't, here's an example of one you can add (cribbed from CentOS) as /etc/rc.d/init.d/dovecot:

     #!/bin/bash
     #
     #        /etc/rc.d/init.d/dovecot
     #
     # Starts the dovecot daemon
     #
     # chkconfig: - 65 35
     # description: Dovecot Imap Server
     # processname: dovecot
     # Source function library.
     . /etc/init.d/functions
     test -x /usr/sbin/dovecot || exit 0
     RETVAL=0
     prog="Dovecot Imap"
     start() {
         echo -n $"Starting $prog: "
         daemon /usr/sbin/dovecot
         RETVAL=$?
         [ $RETVAL -eq 0 ] && touch /var/lock/subsys/dovecot
         echo
     }
     stop() {
         echo -n $"Stopping $prog: "
         killproc /usr/sbin/dovecot
         RETVAL=$?
         [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/dovecot
         echo
     }
     #
     # See how we were called.
     #
     case "$1" in
       start)
         start
         ;;
       stop)
         stop
         ;;
       reload|restart)
         stop
         start
         RETVAL=$?
         ;;
       condrestart)
         if [ -f /var/lock/subsys/dovecot ]; then
             stop
             start
         fi
         ;;
       status)
         status /usr/sbin/dovecot
         RETVAL=$?
         ;;
       *)
         echo $"Usage: $0 {condrestart|start|stop|restart|reload|status}"
         exit 1
     esac
     exit $RETVAL

You will need to add this script to the startup sequence, for example like this:

     su
     chkconfig --add dovecot
     chkconfig dovecot on

Either start the service manually (from the Services panel or from the command line) or reboot the machine to make sure that everything comes up OK at boot time.

To verify the configuration of a running dovecot server, try:

     /usr/sbin/dovecot -n

You can check that Dovecot is doing IMAP and POP3 properly by telnetting to the respective service ports (in this case IMAP):

     telnet localhost 143

This should produce output something like the following (the commands "login", "select" and "logout", including the periods, were typed by the user):

     Trying 127.0.0.1...
     Connected to localhost.
     Escape character is '^]'.
     * OK Dovecot ready.
     . login joeuser SecretPass
     . OK Logged in.
     . select INBOX
     * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
     * OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] ...
     * 260 EXISTS
     * 0 RECENT
     * OK [UNSEEN 1] First unseen.
     * OK [UIDVALIDITY 1234723003] UIDs valid
     * OK [UIDNEXT 261] Predicted next UID
     . OK [READ-WRITE] Select completed.
     . logout
     * BYE Logging out
     . OK Logout completed.

To test POP3:

     telnet localhost 110

The output should look something like this (the commands "USER", "PASS", "STAT", and "QUIT" were typed by the user):

     Trying 127.0.0.1...
     Connected to localhost.
     Escape character is '^]'.
     +OK Dovecot ready.
     USER joeuser
     +OK
     PASS SecretPass
     +OK Logged in.
     STAT
     +OK 260 10682624
     QUIT
     +OK Logging out.

/etc/logrotate.d/dovecot:

To prevent Dovecot's logfiles from filling up all of the available disk space, you should add a config file to rotate them to the logrotate config directory /etc/logrotate.d:

     /var/log/dovecot {
         missingok
         notifempty
     }

Once you have everything up and running, if you want to test the SquirrelMail configuration, try this URL:

     http://your-squirrelmail-host/src/configtest.php

If you are paranoid about entering passwords for a real account via a Web UI (as you should be), create a dummy account, that is used only for SquirrelMail, for all SquirrelMail users:

     /usr/sbin/useradd -c "SquirrelMail Web mail account" -m -s /sbin/nologin
          xymail
     passwd xymail

Delete all "." files in the new home directory:

     rm -f /home/xymail/.ba /home/xymail/.em /home/xymail/.gt*

Add a line in /etc/crontab to copy the user's regular email file to the SquirrelMail account.

/etc/crontab:

Schedule the copy to the user's SquirrelMail spool directory as needed:

     # Mirror local mail accounts so that Webmail may securely access mail
     # without revealing actual logon accounts.
     10,40 * * * * root /bin/cp --reply=yes /var/spool/mail/xyuser \
                        /var/spool/mail/xymail >/dev/null 2>&1

Finally, if you have sent mail folders that were in use under another IMAP server (e.g. UW-IMAP), you can try copying them to the Dovecot directories. In general, you want to do something like this:

     cp /home/joeuser/INBOX.Sent /home/joeuser/mail/Sent

When you try opening the sent folder with the Squirrel, it will whinge about not being able to open it. Take a look in the Dovecot log, where you'll see something like:

     dovecot: Feb 15 14:32:50 Error: IMAP(joeuser): UIDVALIDITY changed
       (1234726067 -> 1141943958) in mbox file /home/joeuser/mail/Sent

Now, open up the "Sent" file with your trusty text editor. You'll see something like this:

     From MAILER-DAEMON Wed Sep 17 02:00:25 2008
     Date: 17 Sep 2008 02:00:25 -0400
     From: Mail System Internal Data <MAILER-DAEMON@your-squirrelmail-host>
     Subject: DON'T DELETE THIS MESSAGE -- FOLDER INTERNAL DATA
     Message-ID: <1221631225@your-squirrelmail-host>
     X-IMAP: 1141943958 0000000131
     Status: RO

Change the "X-IMAP" line to match the first number in the error message, like this:

     X-IMAP: 1234726067 0000000131

Save the file from the text editor and try opening the folder again, with the Squirrel. You should be in business.
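If you have a lot of folders to fix, the same edit can be done from the command line instead of a text editor. A minimal sketch (the function name is ours; the new UIDVALIDITY is the first number from the Dovecot error message):

```shell
# Hypothetical helper: rewrite the UIDVALIDITY half of an mbox folder's
# X-IMAP line, leaving the second number alone.
# Usage: fix_uidvalidity <mbox-file> <new-uidvalidity>
fix_uidvalidity() {
    sed -i "s/^X-IMAP: [0-9]* /X-IMAP: $2 /" "$1"
}
```

For the example above, that would be "fix_uidvalidity /home/joeuser/mail/Sent 1234726067". Keep a backup copy of the folder first, in case the edit goes sideways.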

Cyrus IMAP Server

If you're sure you really want to do this, download the latest Cyrus IMAP Server source from http://cyrusimap.web.cmu.edu/imapd/ or, more specifically, ftp://ftp.andrew.cmu.edu/pub/cyrus/. Once you have downloaded the source tarball, unzip it in the top level directory where you wish the source/build tree to reside.

     tar -xvzf cyrus-imapd-2.3.9.tar.gz
     cd cyrus-imapd-2.3.9

If you haven't already done so or are not reusing pre-existing ones, create a userid and group for cyrus:

     su
     /usr/sbin/groupadd -g 12 -r mail
     /usr/sbin/useradd -c "Cyrus IMAP Server" -d /var/lib/imap -g mail -M -n -r \
       -s /bin/bash -u 76 cyrus
     passwd cyrus

By setting the password for the cyrus userid, you will be able to log in as that user to do administration for cyrus, since the admin will be set to "cyrus" in the config file (below). If you don't care about this feature, you can use "/sbin/nologin" for the shell when you create the user and skip setting their password. But, trust us, you're probably going to need it so it's best to set the userid up with a login shell and password.

Build cyrus, per the instructions in the INSTALL file. If you used different names than the ones suggested above, you will want to choose the user and group that Cyrus will run under by setting --with-cyrus-user=user and --with-cyrus-group=group. By default, the user chosen is "cyrus" and the group chosen is "mail":

     ./configure
     make depend
     make all CFLAGS=-O

As super user, install the software:

     su
     make install

Create the file "/etc/imapd.conf". Here is a sample "imapd.conf":

     configdirectory: /var/lib/imap
     partition-default: /var/spool/imap
     admins: cyrus
     sievedir: /var/lib/imap/sieve
     sendmail: /usr/sbin/sendmail
     hashimapspool: true
     sasl_pwcheck_method: saslauthd
     sasl_mech_list: PLAIN
     tls_cert_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem
     tls_key_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem
     tls_ca_file: /etc/pki/tls/certs/ca-bundle.crt

The last three lines are common in the pre-installed versions of Cyrus, because the OS winkies thought you'd like one more layer of bogus, annoying security to fu*k with instead of just setting something up that works. So, they set Cyrus up to talk to the SASL daemon using TLS. Earlier versions of the config file don't have these three lines.

Create all of the directories needed to run Cyrus:

     su
     mkdir /var/lib
     mkdir /var/lib/imap
     chown cyrus /var/lib/imap
     chgrp mail /var/lib/imap
     chmod u=rwx,g=rx,o= /var/lib/imap
     mkdir /var/lib/imap/sieve
     chown cyrus /var/lib/imap/sieve
     chgrp mail /var/lib/imap/sieve
     chmod u=rwx,go= /var/lib/imap/sieve
     mkdir /var/spool/imap
     chown cyrus /var/spool/imap
     chgrp mail /var/spool/imap
     chmod u=rwx,go= /var/spool/imap

Now, use the tool "tools/mkimap" to create the rest of the subdirectories below the directories you just created:

     su cyrus
     tools/mkimap

Next, make sure you remove any old imap, imaps, pop3, pop3s, kpop, lmtp and sieve lines from /etc/[x]inetd.conf (the Cyrus daemon will be taking over these functions). Once this is done, [x]inetd needs to be restarted for the changes to take effect.
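On systems that use xinetd, a quick way to find the offenders is to grep the per-service config directory. A hedged sketch (XINETD_DIR is parameterized so the check can be aimed elsewhere; the default is the usual location):

```shell
# List xinetd service files that still claim the ports the Cyrus daemon
# is about to take over.  Prints nothing if there are no conflicts.
XINETD_DIR=${XINETD_DIR:-/etc/xinetd.d}
grep -l -E 'imap|pop3|kpop|lmtp|sieve' "$XINETD_DIR"/* 2>/dev/null || true
```

Disable or delete any files it lists (and any matching lines in /etc/inetd.conf on older systems), then restart [x]inetd, for example with "service xinetd restart".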

Read and follow the install notes as to how to set Cyrus up to respond to IMAP and POP requests, and how to have your MTA deliver email to Cyrus using LMTP.

/etc/rc.d/init.d/cyrus-imapd:

You will need a startup script to start Cyrus when your system boots up. The following script (for RedHat) can be added as /etc/rc.d/init.d/cyrus-imapd:

     #!/bin/sh
     #
     # chkconfig: - 65 35
     # description: The Cyrus IMAPD master serves as a master process for the \
     #              Cyrus IMAP and POP servers.
     # config: /etc/cyrus.conf
     # config: /etc/imapd.conf
     # pidfile: /var/run/cyrus-master.pid
     # author:     Simon Matter, Invoca Systems <simon.matter@invoca.ch>
     # version:    2005111100
     # changed:    Add quickstart/stop option to init script to bypass db
     #             import/export
     # Source function library
     if [ -f /etc/init.d/functions ]; then
       . /etc/init.d/functions
     elif [ -f /etc/rc.d/init.d/functions ]; then
       . /etc/rc.d/init.d/functions
     else
       exit 0
     fi
     # Source networking configuration.
     . /etc/sysconfig/network
     # Check that networking is up.
     [ ${NETWORKING} = "no" ] && exit 0
     # check if the config files are present
     [ -f /etc/cyrus.conf ] || exit 0
     [ -f /etc/imapd.conf ] || exit 0
     # This is our service name
     BASENAME=$(basename $0)
     if [ -L $0 ]; then
       BASENAME=$(find $0 -name $BASENAME -printf %l)
       BASENAME=$(basename $BASENAME)
     fi
     # Source service configuration.
     if [ -f /etc/sysconfig/$BASENAME ]; then
       . /etc/sysconfig/$BASENAME
     else
       echo "$BASENAME: configfile /etc/sysconfig/$BASENAME does NOT exist !"
       exit 1
     fi
     # get_config [config default]
     # extracts config option from config file
     get_config() {
       if config=$(grep "^$1" /etc/imapd.conf); then
         echo $config | cut -d: -f2
       else
         echo $2
       fi
     }
     # where to find files and directories
     CONFIGDIRECTORY=$(get_config configdirectory /var/lib/imap)
     CYRUSMASTER=/usr/lib/cyrus-imapd/cyrus-master
     CYRUS_PROC_NAME=$(basename $CYRUSMASTER)
     ALWAYS_CONVERT=1
     # fallback to su if runuser not available
     if [ -x /sbin/runuser ]; then
       RUNUSER=runuser
     else
       RUNUSER=su
     fi
     RETVAL=0
     RETVAL2=1
     QUICK=0
     start() {
       if [ $(/sbin/pidof -s $CYRUSMASTER) ]; then
         echo -n $"$BASENAME already running."
         false
         echo
       else
         if [ $QUICK -eq 0 ]; then
           echo -n $"Importing $BASENAME databases: "
           cd $CONFIGDIRECTORY
           $RUNUSER - cyrus -c "umask 166 ; \
             /usr/lib/cyrus-imapd/cvt_cyrusdb_all \
               > ${CONFIGDIRECTORY}/rpm/db_import.log 2>&1" </dev/null
           RETVAL=$?
           if [ $RETVAL -eq 0 ]; then
             success $"$BASENAME importing databases"
           else
             failure $"$BASENAME error importing databases, check \
               ${CONFIGDIRECTORY}/rpm/db_import.log"
           fi
           echo
         fi
         if [ $RETVAL -eq 0 ]; then
           echo -n $"Starting $BASENAME: "
           daemon $CYRUSMASTER -d $CYRUSOPTIONS
           RETVAL2=$?
           echo
         fi
       fi
       [ $RETVAL -eq 0 -a $RETVAL2 -eq 0 ] && touch /var/lock/subsys/$BASENAME
       return $RETVAL
     }
     stop() {
       echo -n $"Shutting down $BASENAME: "
       killproc $CYRUSMASTER
       RETVAL=$?
       if [ $QUICK -eq 0 ]; then
         if [ ! $(/sbin/pidof -s $CYRUSMASTER) ]; then
           echo
           echo -n $"Exporting $BASENAME databases: "
           cd $CONFIGDIRECTORY
           $RUNUSER - cyrus -c "umask 166 ; \
             /usr/lib/cyrus-imapd/cvt_cyrusdb_all export \
               > ${CONFIGDIRECTORY}/rpm/db_export.log 2>&1" </dev/null
           RETVAL2=$?
           if [ $RETVAL2 -eq 0 ]; then
             success $"$BASENAME exporting databases"
           else
             failure $"$BASENAME error exporting databases, check \
               ${CONFIGDIRECTORY}/rpm/db_export.log"
           fi
         fi
       fi
       echo
       [ $RETVAL -eq 0 -a $RETVAL2 -eq 0 ] && rm -f /var/lock/subsys/$BASENAME
       return $RETVAL
     }
     restart() {
       stop
       start
     }
     reload() {
       echo -n $"Reloading cyrus.conf file: "
       killproc $CYRUSMASTER -HUP
       RETVAL=$?
       echo
       return $RETVAL
     }
     condrestart() {
       [ -e /var/lock/subsys/$BASENAME ] && restart || :
     }
     rhstatus() {
       status $CYRUSMASTER
       RETVAL=$?
       return $RETVAL
     }
     case "$1" in
       start)
         start
         ;;
       stop)
         stop
         ;;
       restart)
         restart
         ;;
       reload)
         reload
         ;;
       condrestart)
         condrestart
         ;;
       status)
         rhstatus
         ;;
       quickstart)
         QUICK=1
         start
         ;;
       quickstop)
         QUICK=1
         stop
         ;;
       *)
         echo $"Usage: $BASENAME {start|stop|restart|reload|condrestart\
           |status|quickstart|quickstop}"
         RETVAL=1
     esac
     exit $RETVAL

/etc/rc.d/init.d/saslauthd:

You will need a startup script to start the SASL authorization daemon, used by Cyrus to authorize mail users at login time, when your system boots up. The following script (for RedHat) can be added as /etc/rc.d/init.d/saslauthd:

     #! /bin/bash
     #
     # saslauthd      Start/Stop the SASL authentication daemon.
     #
     # chkconfig: - 95 05
     # description: saslauthd is a server process which handles plaintext \
     #              authentication requests on behalf of the cyrus-sasl library.
     # processname: saslauthd
     # Source function library.
     . /etc/init.d/functions
     # Source our configuration file for these variables.
     SOCKETDIR=/var/run/saslauthd
     MECH=shadow
     FLAGS=
     if [ -f /etc/sysconfig/saslauthd ] ; then
         . /etc/sysconfig/saslauthd
     fi
     RETVAL=0
     # Set up some common variables before we launch into what might be
     # considered boilerplate by now.
     prog=saslauthd
     path=/usr/sbin/saslauthd
     # Ugh. Switch to a specific copy of saslauthd if there's one with $MECH
     # in its name, in case it wasn't included in the base cyrus-sasl package
     # because it would have dragged in too many undesirable dependencies.
     if test -x ${path}.${MECH} ; then
         path=/usr/sbin/saslauthd.$MECH
     fi
     start() {
         echo -n $"Starting $prog: "
         daemon $path -m $SOCKETDIR -a $MECH $FLAGS
         RETVAL=$?
         echo
         [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
         return $RETVAL
     }
     stop() {
         echo -n $"Stopping $prog: "
         killproc $path
         RETVAL=$?
         echo
         [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
         return $RETVAL
     }
     restart() {
           stop
         start
     }
     case "$1" in
       start)
           start
         ;;
       stop)
           stop
         ;;
       restart)
           restart
         ;;
       status)
         status $path
         ;;
       condrestart)
           [ -f /var/lock/subsys/$prog ] && restart || :
         ;;
       *)
         echo $"Usage: $0 {start|stop|status|reload|restart|condrestart}"
         exit 1
     esac
     exit $?

Install both scripts and turn them on:

     su
     chkconfig --add cyrus-imapd
     chkconfig cyrus-imapd on
     chkconfig --add saslauthd
     chkconfig saslauthd on

Once you've installed the scripts, start the daemons:

     su
     /etc/init.d/saslauthd start
     /etc/init.d/cyrus-imapd start

Now, before you can use Cyrus for IMAP, you must create mailboxes. Good thing you set the cyrus password, because you can now log on as the administrator:

     cyradm --user cyrus --auth login localhost

You'll be prompted for the cyrus user's password. Enter it and continue with building mailboxes. For each user, do the following:

     cm user.joeuser
     lam user.joeuser

You should see something like this:

     joeuser lrswipkxtecda

The string of letters following the user's name lists that user's permissions. A nice, long string (like the one shown) is good. The 'c' (create) right, in particular, is required if SquirrelMail is to work. If things are not as they should be, try "man cyradm" for a list of commands, such as "sam", that may be useful.
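For instance, if a user's ACL turns out to be missing rights, you can set the full string from within cyradm (the mailbox name and rights string here are illustrative; see "help sam" in your version for the exact syntax):

```
sam user.joeuser joeuser lrswipkxtecda
lam user.joeuser
```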

Incidentally, should you ever wish to delete a mailbox, you can't do it. That's right. The cyrus user doesn't have permission to delete a mailbox (what a crock). But, it can give itself the permission (see, I told you it was a crock). So, deletion becomes a two-step process:

     sam user.joeuser cyrus x
     dm user.joeuser

After you've set up a new mailbox, you can test that the user at least has permissions to log in by doing this:

     imtest -m login -p imap -a joeuser localhost
     . logout

You should see a message about them being authenticated, etc.

Atomic Clock/NTP

The atomic clock program can be run periodically by cron to synchronize your system's clock with one of the net-based atomic clocks. Typically, one would edit crontab to poll the atomic clock once a day.
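For example, a root crontab entry along these lines would do the daily poll (the 4:00 AM time and the install path are assumptions; adjust to taste):

```
# min hour dom mon dow  command
0     4    *   *   *    /usr/bin/atomclock-poll
```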

/usr/bin/atomclock-poll:

A script that can be run at regular intervals (e.g. daily) by cron to poll a net-based atomic clock and synchronize the local machine's clock.

     #!/bin/sh
     #
     # Script to synchronize the server's clock with the atomic clock at CMU.
     #
     # Should be run at regular intervals from cron.  It will connect to the
     # atomic clock at CMU and set the system time.  If this works, it will then
     # set the hardware clock so that reboots in between will get the correct
     # time.
     #
     #
     # We may be using diald.  By pinging the time server, we get diald to dial up
     # the PPP link.  The ping may fail but it brings up the link
     #
     ping -c1 -w120 clock-1.cs.cmu.edu
     #
     # Get the correct time from the clock server.  If that works, the hardware
     # clock is set.
     #
     if rdate -p -s clock-1.cs.cmu.edu; then
         /sbin/hwclock --systohc
     fi

If you run the atomic clock synchronization script on a Samba server (see below) and would like all of your NT-based networked machines to synchronize their clocks with the one on the Samba server and thus the atomic clock, create a batch file (in some common directory) and hook it in as follows:

synchclock.bat:

     NET TIME \\samba-server /SET /YES

Create a shortcut to synchclock.bat in:

     \winnt\Profiles\All Users\Start Menu\Programs\Startup

This will run synchclock at startup on each machine where it is installed.

Note that you may need to map a drive and supply a username/password before running the "NET TIME" command; if the username/password on the Samba server differs from what the NT machine remembers, the command will fail. Mapping the drive kludges around this.

If, instead of the atomic clock program, you'd like to run NTP, obtain the latest version of the source from http://www.ntp.org/downloads.html.

Build the source as follows:

     ./configure --prefix=/usr --enable-linuxcaps
     make

Note that it is important, if you wish to run NTP as some other user besides root, that you include "--enable-linuxcaps". You will need to have a version of libcap available and your kernel will have to be compiled with CONFIG_SECURITY_CAPABILITIES. If you cannot get this option to build properly, you cannot use "-u ntp" in the startup options (see /etc/sysconfig/ntpd below).
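As a rough pre-flight check before configuring with "--enable-linuxcaps", something like the following sketch can be used to see whether libcap and kernel capability support appear to be present (the /boot/config-* path is typical but not guaranteed on all distributions):

```shell
#!/bin/sh
# Sketch: report whether the prerequisites for "--enable-linuxcaps" look
# satisfied.  Each check echoes "yes" or "no".
have_libcap() {
    if ldconfig -p 2>/dev/null | grep -q libcap; then echo yes; else echo no; fi
}
have_kernel_caps() {
    if grep -q CONFIG_SECURITY_CAPABILITIES "/boot/config-`uname -r`" \
        2>/dev/null; then echo yes; else echo no; fi
}
echo "libcap installed:       `have_libcap`"
echo "kernel capability bits: `have_kernel_caps`"
```

A "no" on either check means "-u ntp" in the startup options will likely not work, as noted above.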

If you have an old version of NTP installed by an OS RPM, do the following:

     rm -f /usr/sbin/ntp*
     rm -f /etc/ntp/ntpservers

Install the new build as root:

     su
     make install

/etc/rc.d/init.d/ntpd:

If you use the pre-existing OS ntpd script but build NTP yourself with /usr/bin as the install directory, you'll need to change /usr/sbin to /usr/bin in the ntpd script.
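One way to make that change in place, keeping a backup of the original (a sketch; the path /etc/rc.d/init.d/ntpd is the usual RedHat location, but verify it on your system):

```shell
#!/bin/sh
# Sketch: point an init script at /usr/bin/ntpd instead of /usr/sbin/ntpd.
# A ".bak" copy of the original is kept alongside it.
fix_ntpd_path() {
    sed -i.bak 's|/usr/sbin/ntpd|/usr/bin/ntpd|g' "$1"
}
# Typical invocation (uncomment after checking the path):
# fix_ntpd_path /etc/rc.d/init.d/ntpd
```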

Run the usual chkconfig:

     chkconfig --add ntpd
     chkconfig ntpd on

/etc/sysconfig/ntpd:

If you are using a later version of NTP, you may have to modify the default RedHat file. Earlier versions of this file included:

     OPTIONS="-U ntp"

With the later versions of NTP, the "-U" option's meaning has changed. To get the same effect, change the OPTIONS line to read:

     OPTIONS="-u ntp"

The latest RedHat/CentOS RPM installations have the following sysconfig options settings:

     OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"

This is a good example to follow. It runs NTP under the ntp user and group, sets the PID file to "/var/run/ntpd.pid", and allows a large clock slew to be corrected, one time only, at startup. This latter option is useful if the system has been down for a long time and the clock hasn't been keeping up. It allows NTP to correct the problem instead of just bailing out.

If you do choose either of the "-u" options, you'll have to add an ntp user and group. We do so like this, if they don't already exist:

     su
     /usr/sbin/groupadd -g 38 -r ntp
     /usr/sbin/useradd -c "Network Time Protocol" -g ntp -M -N -r \
       -s /sbin/nologin -u 38 ntp

/etc/ntp.conf (probably):
/etc/ntp/ntp.conf (possibly):

Hack this file to describe the synchronization server to be used by this machine. For the local master, the synchronization server should be some machine, out in networkland, that is close by and can supply time synchronization information to your machine.

Look at the following lists of public NTP servers. Usually, you will want to pick a stratum 2 server (which means that your local master will run at stratum 3 and all machines synchronizing to it will run at stratum 4). However, if you are running a machine that allows public access to other NTP servers, you may be able to run at stratum 2 and synchronize to a stratum 1 machine.

     http://support.ntp.org/bin/view/Servers/StratumOneTimeServers
     http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers

Here is a sample NTP config file for a local master server at stratum 3. Note that the "restrict" line for the reference server must use an IP address, not a host name, since DNS lookups aren't allowed (it's a security hole):

     # Prohibit general access to this service.
     restrict default ignore
     # Permit all access over the loopback interface.  This could
     # be tightened as well, but to do so would affect some of
     # the administrative functions.
     restrict 127.0.0.1
     # -- CLIENT NETWORK -------
     # Permit systems on this network to synchronize with this
     # time service.  Do not permit those systems to modify the
     # configuration of this service.  Also, do not use those
     # systems as peers for synchronization.
     #
     restrict 192.168.1.0 mask 255.255.255.0 notrust nomodify notrap
     # --- OUR TIMESERVERS -----
     # Use this or remove the default restrict line at the head of this file.
     # Permit time synchronization with our time source, but do not
     # permit the source to query or modify the service on this system.
     #
     # The restrict line only allows access to our server by the servers listed.
     # This must be by IP address, since it is a big security hole to allow a DNS
     # lookup for a trusted server (or so it would seem).  So, you must look up
     # the IP address of all the servers you'll use and, if any of them changes,
     # you're screwed.  C'est la vie.
     #
     # The servers to synchronize with, on the other hand, can be given via
     # symbolic names that are resolved via DNS.  Fat lot of good it will do you
     # but at least you'll remember what they're called so that you can look them
     # up again when they silently stop working.
     #
     # restrict mytrustedtimeserverip mask 255.255.255.255 nomodify notrap noquery
     # server mytrustedtimeserverip
     restrict 209.51.161.238 mask 255.255.255.255 nomodify notrap noquery
     restrict 66.92.68.11 mask 255.255.255.255 nomodify notrap noquery
     restrict 132.236.56.250 mask 255.255.255.255 nomodify notrap noquery
     server clock.nyc.he.net minpoll 6 maxpoll 16
                                             # Stratum 1, primary (14 hop)
     server time.keneli.org minpoll 6 maxpoll 16
                                             # Stratum 1, primary (10+ hop)
     server cudns.cit.cornell.edu minpoll 6 maxpoll 16
                                             # Stratum 2, backup (16 hop)
     # --- NTP MULTICASTCLIENT ---
     #
     #multicastclient            # listen on default 224.0.1.1
     # restrict 224.0.1.1 mask 255.255.255.255 notrust nomodify notrap
     # restrict 192.168.1.0 mask 255.255.255.0 notrust nomodify notrap
     # --- GENERAL CONFIGURATION ---
     #
     # Undisciplined Local Clock. This is a fake driver intended for backup
     # and when no outside source of synchronized time is available. The
     # default stratum is usually 3, but in this case we elect to use stratum
     # 10. Since the server line does not have the prefer keyword, this driver
     # is never used for synchronization, unless no other
     # synchronization source is available. In case the local host is
     # controlled by some external source, such as an external oscillator or
     # another protocol, the prefer keyword would cause the local host to
     # disregard all other synchronization sources, unless the kernel
     # modifications are in use and declare an unsynchronized condition.
     #
     server    127.127.1.0    # local clock
     fudge     127.127.1.0 stratum 10
     #
     # Drift file.  Put this in a directory which the daemon can write to.
     # No symbolic links allowed, either, since the daemon updates the file
     # by creating a temporary in the same directory and then rename()'ing
     # it to the file.
     #
     driftfile /etc/ntp/drift
     #broadcastdelay    0.008
     #
     # Authentication delay.  If you use, or plan to use someday, the
     # authentication facility you should make the programs in the auth_stuff
     # directory and figure out what this number should be on your machine.
     #
     #authenticate yes
     authenticate no
     #
     # Keys file.  If you want to diddle your server at run time, make a
     # keys file (mode 600 for sure) and define the key number to be
     # used for making requests.
     #
     # PLEASE DO NOT USE THE DEFAULT VALUES HERE. Pick your own, or remote
     # systems might be able to reset your clock at will. Note also that
     # ntpd is started with a -A flag, disabling authentication, that
     # will have to be removed as well.
     #
     #keys        /etc/ntp/keys

Using this config file, NTP will automatically synchronize with the publicly available atomic clock(s) and then serve time signals to the local network.

Only one master server on the local network need run an NTP daemon that synchronizes with the outside world. All of the other machines can use ntpdate to synchronize with the local NTP server, thereby reducing network traffic. Alternately, the local machines can run ntpd at a higher stratum than the local master server and synchronize with it that way.
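A client-side crontab entry for the ntpdate approach might look like this (the address of the local master, 192.168.1.1, and the hourly schedule are assumptions):

```
# min hour dom mon dow  command
0     *    *   *   *    /usr/sbin/ntpdate -u 192.168.1.1 >/dev/null 2>&1
```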

For a local server at stratum 4, replace the stratum 1/2 server (e.g. clock.nyc.he.net) with the local stratum 3 server (e.g. 192.168.1.1) in the above file. For example:

     restrict 192.168.1.1 mask 255.255.255.255 nomodify notrap noquery
     server 192.168.1.1 minpoll 6 maxpoll 16

If you wish to run NTP as a local master server but don't wish to enable all of the packet traffic that is generated by NTP, you can allow NTP to run off the local machine's system clock and then periodically set the clock via a cron job (see the script for setting the clock, below). Here is the NTP config file for the free-running NTP:

     #
     # Prohibit general access to this service.
     #
     restrict default ignore
     #
     # Permit all access over the loopback interface.  This could
     # be tightened as well, but to do so would affect some of
     # the administrative functions.
     #
     restrict 127.0.0.1
     # -- CLIENT NETWORK -------
     #
     # Permit systems on this network to synchronize with this
     # time service.  Do not permit those systems to modify the
     # configuration of this service.  Also, do not use those
     # systems as peers for synchronization.
     #
     restrict 192.168.1.0 mask 255.255.255.0 notrust nomodify notrap
     # --- OUR TIMESERVERS -----
     #
     # We don't have any timeservers.  NTP will run as a service based on the
     # local clock.  I know.  This is bad news.  However, this machine has only a
     # dialup connection to the Internet, managed by diald.  We don't want diald
     # to bring up the connection, every time NTP takes it into its head to poll
     # one of its time servers.
     #
     # By running off the local clock, we can run "ntpd -q" via a cron job to set
     # the local clock while still serving as an NTP server for the local net.
     #
     # restrict mytrustedtimeserverip mask 255.255.255.255 nomodify notrap noquery
     # server mytrustedtimeserverip
     # --- NTP MULTICASTCLIENT ---
     #
     # multicastclient            # listen on default 224.0.1.1
     # restrict 224.0.1.1 mask 255.255.255.255 notrust nomodify notrap
     # restrict 192.168.1.0 mask 255.255.255.0 notrust nomodify notrap
     # --- general configuration ---
     #
     # Undisciplined Local Clock. This is a fake driver that will run off the
     # local clock when no outside source of synchronized time is available.  The
     # default stratum is usually 3, but in this case we elect to use stratum
     # 10. Since the server line does not have the prefer keyword, this driver
     # is never used for synchronization, unless no other synchronization
     # source is available.
     #
     server    127.127.1.0    # local clock
     fudge     127.127.1.0 stratum 10
     #
     # Drift file.  Put this in a directory which the daemon can write to.
     # No symbolic links allowed, either, since the daemon updates the file
     # by creating a temporary in the same directory and then rename()'ing
     # it to the file.
     #
     driftfile /etc/ntp/drift
     #broadcastdelay    0.008
     #
     # Authentication delay.  If you use, or plan to use someday, the
     # authentication facility you should make the programs in the auth_stuff
     # directory and figure out what this number should be on your machine.
     #
     #authenticate yes
     authenticate no
     #
     # Keys file.  If you want to diddle your server at run time, make a
     # keys file (mode 600 for sure) and define the key number to be
     # used for making requests.
     #
     # PLEASE DO NOT USE THE DEFAULT VALUES HERE. Pick your own, or remote
     # systems might be able to reset your clock at will. Note also that
     # ntpd is started with a -A flag, disabling authentication, that
     # will have to be removed as well.
     #
     #keys        /etc/ntp/keys
     #
     # Indicate that NTP should not try to set the local clock.
     #
     disable ntp

/etc/ntp/step-tickers:

This file is supposed to be used by init to quickly synchronize the local server with its time source, if the local clock is grossly out of whack. The NTP documentation claims that ntpdate, which is used to perform this synchronization, is about to become obsolete. Maybe so, but ntpd fails to run if the clock is way out of line, so this file should probably be set up anyway.

Put the same server in this file that you used in ntp.conf or delete the file altogether, if you're willing to let ntpd do all the work.

Better yet, in case your NTP synchronization server is occasionally down, put a list of servers in this file so that NTP can synchronize the local clock to something upon startup. For example:

     clock.nyc.he.net
     time.keneli.org
     cudns.cit.cornell.edu
     tick.mit.edu
     bonehed.lcs.mit.edu

/etc/ntp/ntpcheck:

Since the "restrict" line for the reference server, in the NTP config file, must use an IP address, not a host name (DNS lookups aren't allowed because it's a security hole), it is a good idea to check that the server name resolves to the expected IP address. NTP is not very good about complaining about things that don't work. It just fails and falls back on the local clock (if the config file above is used).

This shell script can be scheduled as a cron job to check the NTP server's IP address. It will send an email message to root, if the IP address changes. While we're at it, we also check to see that each NTP server is up and running and that our local NTP server has at least one stratum 1 or 2 server that it can synchronize to.

Note that the indentation in front of "ENDMSG" must consist of tabs only. If your lame-butt text editor sticks spaces in there, the shell script will get a syntax error. Here it is:

     #!/bin/sh
     #
     # Check that the NTP reference servers still have the same IP addresses as
     # expected.  The reason we do this is because the NTP config file won't allow
     # symbolic machine names to be used for the restrict parameter (for security
     # reasons).  Exact IP addresses are used instead so, if the IP address
     # changes, NTP can fail silently.
     #
     # If the addresses are all mapped OK, ping them to see if they are up.
     #
     # If that works, see if there's a stratum 1 or 2 server in our list of
     # reference servers (this will tell us if NTP is actually working).
     #
     # If any IP doesn't match what is expected, a server is down or there is no
     # stratum 1 or 2 reference server, send email to root.
     #
     # Note that this script may be passed an argument of "debug" to cause
     # debugging information to be dumped to stdout.  Normally, it runs silently
     # and sends email to root if an error occurs.
     #
     ############################################################################
     #
     # The list of NTP servers and their IP addresses (see /etc/ntp.conf).
     #
     NTPServs=("clock.nyc.he.net"   "time.keneli.org"   "cudns.cit.cornell.edu")
     NTPAddrs=( "209.51.161.238"      "66.92.68.11"        "132.236.56.250"    )
     #
     # Debugging flag.
     #
     DebugFl="no"
     ############################################################################
     IsRunning()
     {
     #
     # We used to ping the NTP server to see if we could get a response.  But,
     # some NTP servers block ping, while still allowing NTP requests.  Not
     # only that but we're exhibiting a modicum of Three Mile Island syndrome,
     # viz. testing something that supposedly represents the thing that we want
     # to test (i.e. the valve actuator) when what we really want to test is
     # the thing itself (i.e. the valve).
     #
     # So, we now use ntpdate in query-only mode to test that the NTP server
     # is actually responding to NTP requests (i.e. we're now testing whether
     # the valve is open or not).
     #
     if ( ntpdate -q $1 2>&1 | grep -q "no server" ); then
         return 1  #Failure
     else
         return 0  #Success
     fi
     }
     ############################################################################
     #
     # Check if an NTP server's DNS entry maps its expected IP address.
     #
     IsMappedOK()
     {
     #
     # Get the IP address that the server name maps to.  See if it is mapped OK.
     # We need to check this because the ntp.conf file must use real IP
     # addresses, not DNS addresses.  However, some time servers are subject to
     # remapping, so we need to check that they are where we think they are.
     #
     # Note that piping the result of host through sed twice handles the case
     # where the output of host looks like:
     #
     #      time.keneli.org is an alias for serenity.keneli.org.
     #      serenity.keneli.org has address 66.92.68.11
     #
     # Often, the DNS address for the time server is aliased and can change so
     # we want to be able to look up the alias herein to ensure that the actual
     # time server is at the address we expected, not what its aliased to.
     #
     # Also, we need to trim the address that is returned because the host
     # command can return two addresses, an IPV4 address and an IPV6 address.
     # This looks like:
     #
     #      clock.nyc.he.net has address 209.51.161.238
     #      clock.nyc.he.net has IPv6 address 2001:470:0:2c8::2
     #
     # So far, we only care about IPV4 addresses hence the reason for trimming
     # off any IPV6 part.
     #
     ServMapped=`host $1 | sed "s/^. \([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\
         \{1,3\}\.[0-9]\{1,3\}\)/\1/"`
     ServMapped=`echo $ServMapped | sed "s/^. \([0-9]\{1,3\}\.[0-9]\
         \{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\)/\1/"`
     ServMapped=${ServMapped%% *}
     if test x"$3" == x"yes"; then
         echo Server $1, expected $2, got $ServMapped
     fi
     if test x"$ServMapped" == x"$2"; then
         return 0  #Success
     else
         return 1  #Failure
     fi
     }
     ############################################################################
     #
     # See if we're debugging.
     #
     if test x"$1" == x"debug"; then
         DebugFl="yes"
     fi
     #
     # Loop through all the NTP servers and check them all.
     #
     Curr=0
     ServFail=0
     FailMsg=""
     for ServName in ${NTPServs[*]}; do
         #
         # Check if the server is mapped OK.
         #
         # We need to do this because the NTP config file uses real IP
         # addresses instead of host names.  This leaves DNS out of the NTP
         # equation (which is good) but it means that, if the DNS mapping
         # for one of our hosts has changed from what we expect, we're in
         # trouble, since NTP will just stop working.
         #
         if (! IsMappedOK $ServName ${NTPAddrs[$Curr]} $DebugFl); then
             let ServFail+=1
             FailMsg="${FailMsg}Server $ServName is not mapped to expected \
                 address ${NTPAddrs[$Curr]}"$'\n'
         else
             #
             # If it is mapped OK, check to see if it is running.
             #
             if (! IsRunning ${NTPAddrs[$Curr]} $DebugFl); then
                 let ServFail+=1
                 FailMsg="${FailMsg}Server $ServName appears to be down"$'\n'
             fi
         fi
         let Curr+=1
     done
     #
     # Let's see what ntpq thinks of the situation.  We're looking for
     # either a stratum 1 or stratum 2 server.
     #
     NTPStatus=`/usr/sbin/ntpq -p`
     if (! sh -c "echo \"$NTPStatus\" | grep \"1 u\"" >/dev/null 2>&1) \
         && (! sh -c "echo \"$NTPStatus\" | grep \"2 u\"" >/dev/null 2>&1); then
         if test $DebugFl == "yes"; then
             /usr/sbin/ntpq -p
         fi
         let ServFail+=1
         FailMsg="${FailMsg}Couldn't find a stratum 1 or 2 server on the \
             NTP servers list"$'\n'
     fi
     #
     # See how everything went.
     #
     if ( test $ServFail -gt 0 ); then
         if test $DebugFl == "yes"; then
             echo $FailMsg
         else
             /bin/mail -s "Problem with NTP" root <<-ENDMSG
                 There seems to be a problem with the NTP service.  Here's a
                 synopsis:
                 $FailMsg
                 Perhaps you should check /etc/ntp.conf and/or
                 try /usr/sbin/ntpq -p
                 ENDMSG
         fi
     fi

Note that the "here file" delimited by "ENDMSG" must have tabs on each of its lines, something to pay attention to if you're cutting and pasting from this document (the lines will have spaces).

/etc/ntp/ntpset:

If you are running a free-running NTP server, this script can be run by cron at periodic intervals to set the server's local clock, which NTP can then use as its reference source.

     #!/bin/sh
     #
     # The NTP server on this machine is free-wheeling, based on the system clock.
     # Periodically, cron will run this script to synchronize the system clock to
     # an outside time source but only when we want to (i.e. in the middle of the
     # night when the link is automatically brought up).  This allows us to
     # maintain a fairly accurate time reference server for the local network but
     # not require that the Internet link be up all the time.
     #
     # First, ping the reference server to see if it is up.
     #
     # If that works, try to set the local clock from that server.  If it fails or
     # the ping fails, count it as one failure and move on.  If it works, exit the
     # script with the time set.
     #
     # If the loop ends and there are as many errors as we had reference servers,
     # send an email message to root telling them about the problem.
     #
     # Note that this script may be passed an argument of "debug" to cause
     # debugging information to be dumped to stdout.  Normally, it runs silently
     # and sends email to root if an error occurs.
     #
     ############################################################################
     #
     # The list of NTP servers to use for setting the clock.
     #
     NTPServs=("clock.nyc.he.net" "bonehed.lcs.mit.edu" "ntp0.cornell.edu")
     #
     # Debugging flag.
     #
     DebugFl="no"
     ############################################################################
     #
     # Check whether an NTP server is up and connected.
     #
     IsConnected()
     {
     #
     # Ping the NTP server and see if we get a response.  If so, it's up.
     #
     if ping -c1 -w120 $1 &> /dev/null; then
         return 0  #Success
     else
         if test $2 == "yes"; then
             ping -c1 -w120 $1
         fi
         return 1  #Failure
     fi
     }
     ############################################################################
     #
     # See if we're debugging.
     #
     if test x"$1" == x"debug"; then
         DebugFl="yes"
     fi
     #
     # Loop through all the NTP servers and try to set the time.
     #
     Curr=0
     ServFail=0
     FailMsg=""
     for ServName in ${NTPServs[*]}; do
         let Curr+=1
         #
         # See if we can ping the server.  Then try to set the time.
         #
         if (! IsConnected $ServName $DebugFl); then
             let ServFail+=1
             FailMsg="${FailMsg}Server $ServName appears to be down"$'\n'
         else
             if (/usr/bin/ntpdate -u $ServName >/dev/null 2>&1); then
                 break
             else
                 let ServFail+=1
                 FailMsg="${FailMsg}Couldn't set time from $ServName"$'\n'
             fi
         fi
     done
     #
     # See how everything went.
     #
     if ( test $ServFail -ge $Curr ); then
         if test $DebugFl == "yes"; then
             echo $FailMsg
         else
             /bin/mail -s "Problem with NTP" root <<-ENDMSG
                 There seems to be a problem with setting the clock for the
                 NTP service.  Here's a synopsis:
                 $FailMsg
                 Perhaps you should run /etc/ntp/ntpset debug
                 ENDMSG
         fi
     fi

Note that the "here file" delimited by "ENDMSG" must have tabs on each of its lines, something to pay attention to if you're cutting and pasting from this document (the lines will have spaces).

/etc/ppp/ip-up or ip-up.local:

You'll need to hack this script if you use PPP or some other non-permanent link. NTP will need to be restarted upon IP address changes. Add the following lines:

     #
     # Restart the NTP daemon in case the lease on our IP address has expired.
     # NTP needs to receive packets sent to the address it registered with the
     # stratum 2 NTP server.  If the IP address changes, this won't happen and
     # NTP will cease to work (silently, of course).
     #
     /etc/rc.d/init.d/ntpd restart

Time Zone Files

Time zones are defined by a set of source files that are compiled, by a time zone compiler called zic, into binary files stored in the subdirectory tree under /usr/share/zoneinfo. Under most Linux systems, /etc/localtime is a symlink to the appropriate file in the zoneinfo tree, for example:

     lrwxrwxrwx  1  root  root  32  Mar 11 12:35  /etc/localtime
                                    -> /usr/share/zoneinfo/America/New_York

However, under some RedHat systems, /etc/localtime appears to be a copy of the appropriate file in the zoneinfo tree so you may have to act accordingly, if you update the zoneinfo files.
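A quick way to tell which case applies on a given system is sketched below (pass the path to check, normally /etc/localtime):

```shell
#!/bin/sh
# Report whether the given localtime file is a symlink into the zoneinfo
# tree or a standalone copy that must be refreshed by hand after updates.
localtime_kind() {
    if [ -L "$1" ]; then
        target=`readlink "$1"`
        echo "symlink -> $target"
    else
        echo "copy (re-copy from /usr/share/zoneinfo after updates)"
    fi
}
localtime_kind /etc/localtime
```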

To see if your zoneinfo file is correct, try the following command as root:

     /usr/sbin/zdump -v /etc/localtime | grep yyyy

where yyyy is the year that you'd like to check for correctness. You should see something like this:

     Sun Apr  1 06:59:59 2007 UTC = Sun Apr  1 01:59:59 2007 EST isdst=0 \
       gmtoff=-18000
     Sun Apr  1 07:00:00 2007 UTC = Sun Apr  1 03:00:00 2007 EDT isdst=1 \
       gmtoff=-14400
     Sun Oct 28 05:59:59 2007 UTC = Sun Oct 28 01:59:59 2007 EDT isdst=1 \
       gmtoff=-14400
     Sun Oct 28 06:00:00 2007 UTC = Sun Oct 28 01:00:00 2007 EST isdst=0 \
       gmtoff=-18000

which gives the end time for daylight wasting time, the start time for daylight saving time, the end time for daylight saving time, and the start time for daylight wasting time, respectively.

If the start/end times for DST have been changed by local fiat or act of the Congress Critters and your times are not correct, or if there are any other errors in the file, you can update it as follows.

A description of the timezone database can be found at this link:

     http://www.twinsun.com/tz/tz-link.htm

The source and data for the timezone files can be downloaded from:

     ftp://elsie.nci.nih.gov/pub/

You should make a directory where the timezone database source and data can be downloaded and then change to that directory. For example:

     mkdir /rpm/Timezone
     cd /rpm/Timezone

The following command should do the trick for downloading the files:

     wget 'ftp://elsie.nci.nih.gov/pub/tz*.tar.gz'

Look in the download directory for the latest gzip files and extract them (if there are already files from a previous download, you might want to get rid of everything but the previous and current gzip files):

     tar -xvzf tzcodeyyyyc.tar.gz
     tar -xvzf tzdatayyyyc.tar.gz

If you wish to build the entire timezone database and all of the C-library functions from scratch, consult the README file and then build using the Makefile. Warning: the path names for the install directories do not match RedHat's idea of where things should go, so you'll have to change them if you're doing this on a RedHat system.

To just update the system's zoneinfo database, as superuser, do the following:

     /usr/sbin/zic africa
     /usr/sbin/zic antarctica
     /usr/sbin/zic asia
     /usr/sbin/zic australasia
     /usr/sbin/zic europe
     /usr/sbin/zic northamerica
     /usr/sbin/zic southamerica
     /usr/sbin/zic pacificnew
     /usr/sbin/zic etcetera
     /usr/sbin/zic factory
     /usr/sbin/zic backward

This will build and update everything. If you want, you need only do the parts you care about, but updating the whole database is not a bad idea.

Once you've done that, if you're on a RedHat system that has copied the zoneinfo file to /etc/localtime, you should copy the appropriate timezone file to /etc/localtime, for example:

     cp /usr/share/zoneinfo/America/New_York /etc/localtime

Network UPS Tools (NUT)

The quick command to set the UPS battery date, after changing batteries is (note that /usr/local/ups/bin may just be /bin on some installations):

     /usr/local/ups/bin/upsrw -s battery.date=mm/dd/yy -u username -p password \
       upsname@localhost

This presupposes that the username and password are set in upsd.users and that "actions = set" is specified for the user. Also, upsd and the appropriate UPS driver must be up and running for the UPS in question.
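For reference, a minimal upsd.users entry permitting this might look as follows (all names here are placeholders; the file usually lives under /usr/local/ups/etc or /etc/nut, depending on how NUT was installed):

```
[username]
    password = password
    actions = set
    instcmds = all
```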

If you are running a version of NUT that does not set the battery.date variable correctly for an APC SmartUPS, you can do it manually.

First you must make sure that you have a copy of "screen" installed on the system in question (for RedHat/CentOS use "yum install screen"; for Debian/Ubuntu use "apt-get install screen"). You must also make sure that the UPS driver is not running. Then, assuming that you have the UPS connected to /dev/ttyS0, do the following (note that there are no carriage returns used for any of the commands sent to the UPS):

     su
     screen /dev/ttyS0 2400,cs8
     Y
     (UPS echoes "SM")
     x
     (UPS echoes current date)
     -
     (UPS echoes nothing)
     mm/dd/yy
     (after about 5 seconds, UPS echoes "|", indicating EEPROM is changed)
     R
     (UPS echoes "BYE")
     <Ctrl-a>\
     (answer yes to exit screen)

Similarly, if you are running a version of NUT that does not set the UPS identifier correctly for the APC SmartUPS, you can set an up-to-eight-character identifier manually, using "screen" and assuming that you have the UPS connected to /dev/ttyS0, as follows:

     su
     screen /dev/ttyS0 2400,cs8
     Y
     (UPS echoes "SM")
     c
     (UPS echoes current UPS ID, probably UPS_IDEN for a new UPS)
     -
     (UPS echoes nothing, possibly not even the "-", i.e. really nothing)
     upsname<Enter>
     (after about 5 seconds, UPS echoes "|", indicating EEPROM is changed;
      some versions echo "OK", instead)
     (on some UPS, you may have to press <Enter> twice)
     c
     (UPS echoes the new UPS ID)
     R
     (UPS echoes "BYE")
     <Ctrl-a>\
     (answer yes to exit screen)

Also, the NUT drivers may not return the UPS serial number so that upsrw can display it. If that's the case, and you need to find the serial number of an installed APC UPS (e.g. one whose back panel is in a hard to read spot), you can do so in the same fashion as described above for setting the date. Again, assuming that you have the UPS connected to /dev/ttyS0, do the following (note that there are no carriage returns used for any of the commands sent to the UPS):

     su
     screen /dev/ttyS0 2400,cs8
     Y
     (UPS echoes "SM")
     n
     (UPS echoes the serial number)
     R
     (UPS echoes "BYE")
     <Ctrl-a>\
     (answer yes to exit screen)

A full description of the APC SmartUPS protocol can be found at:

     http://www.apcupsd.com/manual/manual.html

If you want to get rid of the "Change the battery" message and the red light on the UPS panel, you can force the battery test to be rerun immediately. However, before you do this, wait for three or four hours after the new battery is installed to give the UPS time to charge it first. Then, run the battery test, which will reset everything, if the new battery is OK. To do this, use:

     /usr/local/ups/bin/upscmd -u username -p password upsname@localhost \
       test.battery.start

The battery test should run (you'll see all of the messages it usually generates) for a few seconds and all will be well.

The quick list of steps to take when setting up a new UPS on an existing platform is as follows.

Change the group ownership of the UPS port(s) to the ups user:

     chgrp ups /dev/ttyS2

If you have one of the later versions of Linux that use udev for dynamically creating the "/dev" device space, you will also need to hack the udev rules for the tty device. These can be found in the /etc/udev/rules.d directory, typically in the file /etc/udev/rules.d/50-udev.rules. Add something to the effect of:

     # Special case for the UPS devices.
     KERNEL=="ttyS2",                NAME="%k", GROUP="ups", MODE="0660",
                                     OPTIONS="last_rule"

Add these lines before the general case rules for tty devices.

Alternately, later installs of NUT add a set of rules for the USB drivers in a file named 52-nut-usbups.rules. You could add the following to that file instead of hacking the existing 50-udev.rules:

     # Special case for the UPS devices
     KERNEL=="ttyS1", GROUP="nut", MODE="0660"

Some serial ports will come up with the correct baud rate and interrupt address set automagically. After you've made the changes to the new serial port, as described above, to make it accessible to the UPS userid, you should check that the serial port is configured correctly (reboot the machine first, if you had to change the rules in /etc/udev). To do this use:

     su
     setserial /dev/ttySx

If the system figures things out and sets the serial ports correctly, all on its own, go with its settings because this will cause a lot less grief, in the long run. However, if the type of UART, port address and interrupt vector, along with the baud rate, aren't set correctly (usually looking at /dev/ttyS0 will give some clues), you should add some lines to /etc/rc.serial to set the serial port up at boot time. If the file already exists, you can edit it; if it doesn't, you can just create it with your text editor.

To determine what interrupt numbers and port addresses to use, you should list the PCI devices like this:

     /sbin/lspci -v

For each serial port, figure out from the information given by lspci what values to use and then add something like this to /etc/rc.serial:

     setserial /dev/ttyS2 port 0xdf88 uart 16550A irq 225 baud_base 115200

When you're done, the permissions on /etc/rc.serial should be:

     -rw-r--r--    1 root     root          138 Aug  6  2005 /etc/rc.serial

Add your new UPS to the file /etc/ups/ups.conf. For an APC SmartUPS, the following config information applies:

     [bevatron]
          driver = apcsmart
          port = /dev/ttyS2

Begin monitoring your new UPS by adding it to /etc/ups/upsmon.conf:

     MONITOR bevatron@localhost 0 lmmonitor ItsASecret master

If you like to be able to monitor the UPS from the Web, add its information to /etc/ups/hosts.conf:

     MONITOR bevatron@localhost "stargate UPS"
     LOGFILE /var/log/upslog.bevatron

If you monitor the UPS via a Web server on another machine, add its information to the /etc/ups/hosts.conf file there, too.

Add your new UPS to the list of UPS boxes that the startup script must start, in /etc/sysconfig/upsd:

     UPS_BOXES="fusion-reactor bevatron"

If the logs from this UPS are pushed to another machine for saving or for other reasons, make sure the following is in /etc/crontab:

     05,20,35,50 * * * * root /etc/ups/ftplogs

Pick some suitable times that won't collide with other machines sending their UPS files to the central server (if you care). Also, if the receiving server rotates the copied logs, you should time the push to be a few minutes after that rotation, so that the receiving server gets a good, rotated copy of the last log before the new log is sent over top of it. (Rotation on the receiving server isn't strictly necessary: the sending server rotates its own logs and each copied log replaces the one on the receiving server, which is enough to guarantee that the log never grows too big.) Typically, logrotate runs out of /etc/cron.daily, which is usually run at 04:02 each day. Consequently, the value of five minutes past the hour (chosen above) is a good choice.
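For example, three machines pushing to the same central server might stagger their /etc/crontab entries like this (a hypothetical schedule; each machine takes a different offset within the quarter hour):

```
# machine "alpha"
05,20,35,50 * * * * root /etc/ups/ftplogs
# machine "beta"
08,23,38,53 * * * * root /etc/ups/ftplogs
# machine "gamma"
11,26,41,56 * * * * root /etc/ups/ftplogs
```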

On the central server, make sure that the "ups" user exists and that its password is "ItsASecret". Create empty logfiles in /var/log, named after the new UPS and change their ownership to "ups":

     touch /var/log/upslog.bevatron /var/log/upslog.bevatron.1
     chown root:ups /var/log/upslog.bevatron
     chmod g+w /var/log/upslog.bevatron

If you want the UPS to be shown in the graph of UPS activity, add its name and full load power to /etc/ups/upsplot.pl:

     my @UPSNames = (                        # Names of UPS boxes                    
         "Bevatron",
              .
              .
              .
     my @UPSWatts = (                        # Full load wattages of UPS boxes       
         466,
              .
              .
              .

If you'd like the UPS to test the battery at periodic intervals (see the notes below about battery life), add the following to /etc/crontab:

     # Every two months, have the UPS check its battery.
     00 10 4 1,3,5,7,9,11 * root /etc/ups/batterytest bevatron

Once again, if you care, schedule the times for the battery test on days when the other UPS are not also testing their batteries.

Note that the UPS will usually come with automatic self-test turned on and set to every 14 days. This means that the UPS will run the battery test itself every 14 days, exactly 1209600 seconds after it is turned on. How convenient is that? Not to mention how hard is that on the batteries (see the notes below about battery life). If you'd rather the test was done when you decide and under your control, you should turn off this dubious feature like so:

     /usr/local/ups/bin/upsrw -s ups.test.interval=0 -u username -p password \
       upsname@localhost

To take care of the logfiles, add the new UPS' logfile to the logrotate config file (either /etc/logrotate.conf or /etc/logrotate.d/ups):

     /var/log/upslog.bevatron {
         notifempty
         missingok
         create 0664 root ups
         copytruncate
         postrotate
             echo HEADER: Bevatron >/var/log/upslog.bevatron
             /etc/ups/ftplogs bevatron.1
         endscript
     }

Finally, not all UPS come with their control parameters set correctly (especially if you've bought a refurbished one) so you may want to check that they are what you expect them to be:

     /usr/local/ups/bin/upsc upsname@localhost

To find out which parameters you can set, run upsrw with no options:

     /usr/local/ups/bin/upsrw upsname@localhost

If any of the settable parameters are set to values that you don't like, you can change them with upsrw, like this:

     /usr/local/ups/bin/upsrw -s ups.test.interval=0 -u username \
       -p password upsname@localhost

For a typical APC SmartUPS, you may wish to set the following:

     battery.alarm.threshold     L         (the SmartUPS 750 doesn't support
                                           "L".  The default is "0", which
                                           alarms when on battery.  You might
                                           want "N", which produces no alarms)
     battery.charge.restart      00
     battery.date                [batdate]
     battery.runtime.low         120
     input.sensitivity           H
     input.transfer.high         132       (for SmartUPS 750, use 133)
     input.transfer.low          103
     output.voltage.nominal      115       (only necessary on 240V units)
     ups.delay.shutdown          020       (for SmartUPS 750, 090 is the min)
     ups.delay.start             000
     ups.id                      [upsname]
     ups.test.interval           0         (see notes on periodic test, above)
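If several of these parameters need changing, a short loop saves retyping the credentials for each one. The sketch below just prints the upsrw commands (using the same placeholder username, password and upsname as above); drop the echo to apply them for real:

```shell
# One upsrw invocation per parameter=value pair; drop the echo (and
# substitute real credentials) to actually apply the settings.
SETTINGS="input.sensitivity=H battery.runtime.low=120 ups.test.interval=0"
for setting in $SETTINGS; do
    echo "/usr/local/ups/bin/upsrw -s $setting" \
         "-u username -p password upsname@localhost"
done
```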

To do a complete install, start by obtaining the latest release of NUT from http://www.networkupstools.org/. Unzip and untar the distro to a directory of your choice:

     tar -xvzf nut-2.6.5.tar.gz

Create a new user, under which the UPS tools can run (otherwise they run as "nobody"). For example:

     su
     /usr/sbin/useradd -c "UPS Monitoring Tools" -M -s /sbin/nologin ups

If you are using RedHat or one of its derivatives (e.g. CentOS), the useradd command will also create a group with the same name as the UPS user, by default. If this isn't the case, you can create the group like this:

     su
     /usr/sbin/groupadd ups

Later on in these notes, we'll describe how to protect the config files that contain secret passwords, etc. For now, if there are any users who should have access to the UPS files (such as apache, which may need access to display UPS statistics on a Web page), you should add them to the ups group in /etc/group, with your favorite editor.

Incidentally, if you have to create the group directly (as opposed to indirectly, when the userid is created -- e.g. on SuSE), you may have to modify the new userid so that its primary group is the new group. If this isn't done, setting group permissions on the various config files and executables (as described below) will have no effect. Do the following:

     su
     /usr/sbin/usermod -g ups ups

If this will be the centralized server where all log files from other servers will be copied, you should set a password on the ups userid so that FTP from the other servers will work:

     su
     passwd ups
     (enter "ItsASecret" twice, when prompted)

Also note that, although no shell logins are allowed (by using /sbin/nologin as the userid's shell), FTP will be doing logins (hence the need for a password). And, FTP will gratuitously try to CD to the userid's home directory when it does the login so a home directory must be provided and be set in /etc/passwd. The permissions on that directory must be set so that the ups userid can access it. If this directory is not created by useradd (i.e. because you picked the -M flag, to avoid copying all the crap from the new user template), do the following:

     su
     mkdir /home/ups
     chown ups:ups /home/ups
     chmod u=rwx,go= /home/ups

Add a directory where the UPS tools can store state information and make the new user the owner:

     mkdir /var/state  (if necessary)
     mkdir /var/state/ups
     chown ups:ups /var/state/ups
     chmod u=rwx,g=rx,o= /var/state/ups

Change the directory to the distribution directory and configure the build scripts:

     cd nut-2.6.5
     ./configure --with-user=ups --with-group=ups --sysconfdir=/etc/ups \
                 --mandir=/usr/share/man --with-cgi

Note that, if you want to monitor an SNMP UPS over the network, you must first make sure that the net-snmp development package (www.net-snmp.org) is installed (either via your system's package manager, probably by picking the net-snmp-devel package, or by downloading and installing the library from the Web site).

You can check that net-snmp is properly installed and available for NUT to use in building the SNMP drivers by issuing this command:

     net-snmp-config --version --base-cflags --libs

If the library is properly installed, you'll see something like this:

     5.3.0.1
     -DINET6 -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g \
       -fno-strict-aliasing -fstack-protector-all -Dlinux \
       -I/usr/include/rpm -I/usr/include
     -L/usr/lib64 -lnetsnmp -lcrypto -lm -L/usr/lib -lwrap

Once you've installed the net-snmp library, go back to the configure step and add this option to the "./configure" line:

     ./configure ... --with-snmp ...

Note that, even if you do see apparently proper results from net-snmp-config, the configure step may fail, claiming that the library doesn't exist. This is because the code in the configure step that checks for the presence of the "init_snmp" entry point fails to compile properly, most likely due to a problem with the link libraries returned by net-snmp-config.

Even if the net-snmp library is properly installed, the requirement for other libraries (e.g. "-lwrap") in the link library parameters returned by net-snmp-config can cause the compile to fail and configure to assume that the library isn't present (even though it is).

If you experience this problem, "-lwrap" is most likely the cause. The net-snmp-config command returned a lib string that included "-lwrap" but libwrap has not been installed. You can test this fact with the following command:

     su
     find / -name libwrap\*

If the find doesn't locate any libwrap modules, that's the problem. You can get rid of libwrap (it doesn't seem to be required) by rerunning "./configure" without the "--with-snmp" parameter and, instead, repeating all of the parameters returned by net-snmp-config, minus "-L/usr/lib -lwrap", using the "--with-snmp-libs" configure option, something like this:

     ./configure ... --with-snmp-libs="-L/usr/lib64 -lnetsnmp -lcrypto -lm"

Apply the patches for local source code changes or copy the local source code, if any, to the install directory before building. Currently, there are hacks to upsplot.pl, upsstats.c and upsmon.c (including upsmon.h). Put upsplot.pl in /etc/ups, after the "make install" is run (see below). Put the hacked ".c" and ".h" files in ./clients before doing the build.

Compile the source from the build directory, according to the INSTALL file and then install the build. Basically, you do the following, in the top level directory (but read the next two paragraphs first):

     make
     su
     make install

This should put most things in:

     /etc/ups
     /usr/local/ups/bin
     /usr/local/ups/sbin
     /usr/share/man

If you wish to use the upsimage.cgi, upsset.cgi and/or upsstats.cgi modules anywhere, perhaps in your UPS Status Web pages, make sure that you either install the dynamic link library modules libupsclient.so* somewhere in your load library tree (possibly /usr/local/lib), run ldconfig and then relink the .cgi modules, or use the statically-linked .cgi modules.

If you copy the dynamically-linked .cgi modules (from the nut-2.6.5/clients directory) to your Web site tree (e.g. your cgi-bin directory), you will likely get an error that reads something like this:

     /usr/bin/ld: cannot open output file /var/www/MySite/cgi-bin/\
       .libs/2764-lt-upsstats.cgi: No such file or directory

This happens because the loader tries to find the libupsclient.so module, which was linked to upsstats.cgi, in the .libs directory relative to the clients directory.

A good clue that you've gotten the dynamically-linked .cgi modules is their size. The dynamically-linked modules are 8K or 9K in size. Once again, we repeat that these modules won't work unless they are run from the exact directory in which they were built, the .libs directory is copied to the Web tree in the same relative location as they are to the .cgi modules in the build tree, or the dynamic link library libupsclient.so is copied to a public dynamic link directory and ldconfig is run before the .cgi modules are built.
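A quick way to pick out the statically-linked builds is to let find filter on size. The sketch below fabricates a 100K "static" module and a 9K "dynamic" stub in a temporary directory, so it can be run anywhere; on a real build tree, point find at nut-2.6.5/clients/.libs instead:

```shell
# Fabricate one large and one small .cgi file in a temp directory.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/upsstats.cgi" bs=1024 count=100 2>/dev/null
dd if=/dev/zero of="$tmp/lt-upsstats.cgi" bs=1024 count=9 2>/dev/null

# Anything bigger than 50K here is a statically-linked module.
static=$(find "$tmp" -name '*.cgi' -size +50k)
basename "$static"

rm -rf "$tmp"
```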

We much prefer to use the statically-linked .cgi modules, mainly because using them is much less of a pain in the butt and the increase in size (another 75K) is essentially chump change for the three modules in question. These can be fround in the nut-2.6.5/client/.libs directory itself and are identified by their size (e.g. 95K to 105K). You can copy these three modules anywhere and they will run just fine. For example:

     cp nut-2.6.5/clients/.libs/*.cgi /var/www/MySite/cgi-bin

Incidentally, the upsmon command can be used to force a shutdown of all of the master UPSs connected to the system. As installed by the "make install" command, above, it is given permissions that allow anyone to execute it. In other words, any user could theoretically shut off all of the UPSs. Probably not what you intend. So, you might want to do this:

     su
     chgrp ups /usr/local/ups/sbin/upsmon
     chmod o= /usr/local/ups/sbin/upsmon

If you receive errors about "gd" during configure, go get it and install it before continuing. On some systems, you can do:

     su
     yum install gd gd-devel php-gd

Otherwise, you can get the source here:

     http://www.libgd.org/

In the event that you need libpng or zlib in order to compile gd, they can be found at these URLs:

     http://www.libpng.org/pub/png/pngcode.html
     http://www.gzip.org/zlib/

Also, if you're going to be using upsplot.pl, you will need to make sure that GD.pm is available. If not, you should install it from CPAN:

     perl -MCPAN -e shell
     install GD

After the build and install are successful, if you are using a serially-connected UPS, change the group ownership of the UPS serial port(s) to the ups user:

     chgrp ups /dev/ttyS0

If you have one of the versions of Linux that use udev for dynamically creating the "/dev" device space, you will, instead, need to hack the udev rules for the tty device. On earlier versions of Linux (e.g. RHEL 5.x/CentOS 5.x), these can be found in the /etc/udev/rules.d directory, typically in the file /etc/udev/rules.d/50-udev.rules. For this file, add something to the effect of:

     # Special case for the UPS devices.
     KERNEL=="ttyS2",                NAME="%k", GROUP="ups", MODE="0660",
                                     OPTIONS="last_rule"

Add these lines before the general case rules for the tty devices.

Alternately, on later versions of Linux (e.g. RHEL 6.x/CentOS 6.x or Ubuntu), the best place to add your udev rules for NUT is in a separate rules file that you make up just for this purpose (that way, subsequent releases of NUT or the OS won't monkey with your rules for the serial port and you also won't be concerned by the fact that there isn't even a 50-udev.rules file in the first place). We suggest calling the file "52-nut-ttyups.rules". It should be created in the /etc/udev/rules.d directory. You should add something like the following to that file, depending on which serial port you need to use:

     # udev rules for NUT serial drivers
     # Special case for the UPS devices
     KERNEL=="ttyS1", GROUP="ups", MODE="0660"

If you will be managing your UPS via SNMP, now is the time to install the network management card into the UPS, if required, and get the UPS set up. The default setup, according to the manufacturer's instructions, should be OK. However, you will need to take the necessary steps to assign a static IP address to the UPS/Network Card, since many network management interfaces come with DHCP as their default.

/etc/ups/ups.conf:

Copy the file ups.conf.sample and hack it to set up the machine/UPS configuration, per the instructions in the INSTALL file. For an APC SmartUPS, the following config information applies:

     [fusion-reactor]
          driver = apcsmart
          port = /dev/ttyS0

For an APC UPS with a Network Management Card (e.g. AP9606, AP9617, AP9618, AP9619) installed, this configuration should work:

     [fusion-reactor]
          driver = snmp-ups
          port = 192.168.1.20
          mibs = apcc
          community = private
          snmp_version = v1
          pollfreq = 10          (or whatever number of seconds you prefer)

/etc/ups/upsd.conf:

On older versions of NUT, copy the file upsd.conf.sample. It is probably OK as is but, if you'd like to be able to monitor this machine's UPS from your network, add something like:

     ACL homeworld 192.168.1.0/24
     ACCESS grant monitor homeworld

Or, if you don't want to be messing about and you want to be able to do anything from the local network (e.g. run battery tests), try replacing the existing rules with:

     ACL all 0.0.0.0/0
     ACL localhost 127.0.0.1/32
     ACL homeworld 192.168.1.0/24
     ACCEPT homeworld
     ACCEPT localhost
     REJECT all

On newer versions of NUT (e.g. version 2.4 and up), copy the upsd.conf.sample file and replace the LISTEN directives with:

     LISTEN 127.0.0.1
     LISTEN 192.168.1.123 3493

You should use the actual IP address of the machine where NUT is being installed in place of 192.168.1.123 (above). If the machine has two or more NICs and you want NUT to listen on all of them, add additional LISTEN directives for the IP address of each of them.

Note that, if you have an older config file and are upgrading to a newer version of NUT, you will need to remove all of the ACL, ACCEPT and REJECT rules and simply use LISTEN. Apparently, the designers of NUT have decided that it should get out of the security business and leave everything to the firewall. Consequently, the ACL, ACCEPT, etc. rules are no longer accepted and LISTEN is simply used to tell NUT which port to listen on.

After you've made all of your changes, the permissions on the file need to be altered, since the device driver will whinge about them if they aren't correct. Set the permissions as follows:

     chmod g=r,o= /etc/ups/upsd.conf

/etc/ups/upsd.users:

Copy the file upsd.users.sample. Since this file contains the secret UPS password, it needs to be protected from prying eyes as follows:

     chgrp ups /etc/ups/upsd.users
     chmod g=r,o= /etc/ups/upsd.users

Add a user that will allow upsmon on the local host to shut down the machine, if the power goes south, as well as do periodic battery tests:

     [lmmonitor]
         password = ItsASecret
         allowfrom = localhost               (deprecated in later versions)
         instcmds = shutdown.return
         instcmds = test.battery.start
         actions = set
         upsmon master

/etc/ups/upsmon.conf:

Copy the file upsmon.conf.sample. Change the group ownership to the ups user, to keep the file safe from the riff raff, and give it group permissions:

     chgrp ups /etc/ups/upsmon.conf
     chmod g=r,o= /etc/ups/upsmon.conf

Make the following changes to the file to begin monitoring the UPS:

     NOTIFYCMD /etc/ups/notify
     RUN_AS_USER ups
     MONITOR fusion-reactor@localhost 1 lmmonitor ItsASecret master
     NOTIFYFLAG COMMBAD  SYSLOG+EXEC
     NOTIFYFLAG COMMOK   SYSLOG+EXEC
     NOTIFYFLAG FSD      SYSLOG+EXEC
     NOTIFYFLAG LOWBATT  SYSLOG+EXEC
     NOTIFYFLAG NOCOMM   SYSLOG+EXEC
     NOTIFYFLAG ONBATT   SYSLOG+WALL+EXEC
     NOTIFYFLAG ONLINE   SYSLOG+WALL+EXEC
     NOTIFYFLAG REPLBATT SYSLOG+EXEC
     NOTIFYFLAG SHUTDOWN SYSLOG+WALL+EXEC
     NOTIFYFLAG OVERTEMP SYSLOG+EXEC

If you want to monitor UPS temperature and you've applied the proper hacks (or the temperature hacks are included in your version of the source), add the following:

     # --------------------------------------------------------------------------
     # UPSOVERTEMP - Temperature (in Celsius) which is too high for operation
     #
     # upsmon will check all UPS that return temperature information against this
     # value.  If the UPS temperature exceeds this value, an OVERTEMP notification
     # will be generated.
     #
     # Note that certain UPS are renowned for cooking and even burning up
     # batteries (some reports of spectacular battery fires have been
     # received).  From actual observed log data, it appears that prior to
     # burning up the batteries, the UPS internal temperature rises
     # significantly.  Hence, monitoring the UPS temperature can be a
     # valuable tool for detecting battery cooking, before the UPS burns
     # the place down (the UPS is supposed to solve problems, not cause
     # them, after all).
     #
     # Once again, typical observed internal temperatures are in the 40 to 50
     # degree Celsius range.  Observed temperatures of 80 degrees Celsius prior
     # to an actual battery failure are indicative of impending failure.  Thus,
     # to be safe, the UPSOVERTEMP value should be set in the 60-70 degree
     # range.
     UPSOVERTEMP 60.0
     # --------------------------------------------------------------------------
     # OTWARNTIME - Time (in seconds) between each over temperature warning
     #
     # upsmon will check all UPS that return temperature information against the
     # UPSOVERTEMP value, above.  If the UPS temperature exceeds the UPSOVERTEMP
     # value, an OVERTEMP notification will be generated.  In addition, a
     # notification will be generated every OTWARNTIME seconds until the over
     # temperature situation is remedied.
     #
     # Since a UPS over temperature situation is a critical one (the batteries
     # could be destroyed, as could the UPS), an OVERTEMP notification will be
     # generated at regular intervals, as specified by OTWARNTIME.  The default
     # is every 3600 seconds.
     OTWARNTIME 3600

/etc/sysconfig/upsd:

If you use any of the following scripts, you will need to either create or update /etc/sysconfig/upsd. This file contains the options that tell the UPS scripts which UPS boxes to monitor, what to do about logging, etc. Here is an example:

     #
     # Configuration for the NUT UPS monitoring and power down daemons.
     #
     # The list of UPS boxes to be controlled are set by UPS_BOXES, for example:
     #
     #      "My_UPS"
     #      "UPS-1 UPS-2"
     #
     # For clustering support, the list of UPS boxes on the primary server is set
     # by UPS_BOXES_PRIMARY and the list of UPS boxes on the secondary server is
     # set by UPS_BOXES_SECONDARY.
     #
     # You may define all three parameters in this config file and the startup
     # script will choose the appropriate one, based on whether clustering is
     # enabled and, if so, which role the server is playing.
     #
     UPS_BOXES=""
     UPS_BOXES_PRIMARY="fusion-reactor"
     UPS_BOXES_SECONDARY="nuke-plant"
     #
     # If you would like to start logging of UPS statistics at regular intervals
     # to a log file too, define the location of the log file and the logging
     # interval.  Otherwise, if the LOG_PATH is set to an empty string, the
     # logging daemon isn't started.
     #
     # A separate file is created for each UPS that is monitored.  The name of the
     # file is created by appending the UPS name to the LOG_PATH value, separated
     # by a period.  For example:
     #
     #      LOG_PATH="/var/log/upslog"
     #      UPS_BOXES="UPS1 UPS2"
     #
     #      Log files = /var/log/upslog.UPS1, /var/log/upslog.UPS2
     #
     # The log interval is given in seconds by LOG_INTERVAL.
     #
     # The group that will be used to create log files is set by UPS_GROUP.  You
     # may find it useful to have your logfiles set to a different group and
     # given read/write permissions so that they can be accessed by UPS users.
     # Otherwise, root permissions are used.
     #
     LOG_PATH="/var/log/upslog"
     LOG_INTERVAL=30
     UPS_GROUP="ups"
     #
     # For secondary and/or standalone servers, if you'd like the UPS logs to be
     # copied to the primary server for consolidation purposes, define the
     # server's name here and the logging directory where they will be copied to.
     #
     # PRI_SERVER="stargate"
     # PRI_LOG_PATH="/var/log/upslog"

While we're at it, we'd like the logfiles to be created with the correct permissions, so let's create an empty one for each UPS before we proceed any further. Logrotate will take care of this when it rotates the log files but let's start with:

     su
     touch /var/log/upslog.fusion-reactor
     touch /var/log/upslog.nuke-plant
     chgrp ups /var/log/upslog.*
     chmod g+w /var/log/upslog.*
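With more than a couple of UPS boxes, a loop driven by the same UPS_BOXES list used in /etc/sysconfig/upsd keeps the two in step. This sketch writes into a temporary directory so it is safe to try; on a real system, point LOGDIR at /var/log, run it as root and add the chgrp/chmod from above:

```shell
# Create one empty log file per UPS named in UPS_BOXES.  LOGDIR is a
# temp directory here; use /var/log (as root) on a real system.
UPS_BOXES="fusion-reactor nuke-plant"
LOGDIR=$(mktemp -d)
for box in $UPS_BOXES; do
    touch "$LOGDIR/upslog.$box"
done
ls "$LOGDIR"
```

Remove the temporary directory when you're done experimenting.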

/etc/ups/batterytest:

Set up a script that will test a UPS' battery at regular intervals. This script can be scheduled from crontab:

     #! /bin/sh
     #
     # batterytest - Script to test the battery of a UPS at regular intervals.
     #
     # This script is used by cron to periodically test the battery of a UPS.
     #
     #
     # Define the install path for the UPS binaries.
     #
     INSTALL_PATH="/usr/local/ups"
     #
     # Source the UPS configuration.
     #
     if [ -f /etc/sysconfig/upsd ] ; then
        . /etc/sysconfig/upsd
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     if [ x"$SERVERROLE" = x ] ; then
         SERVERROLE=Standalone
     fi
     #
     # Determine which UPS boxes we can test.
     #
     if [ ${SERVERROLE} = "Primary" ] ; then
         UPS_BOXES=$UPS_BOXES_PRIMARY
     else
         if [ ${SERVERROLE} = "Secondary" ] ; then
             UPS_BOXES=$UPS_BOXES_SECONDARY
         fi
     fi
     #
     # For all of the UPS on this system, see which one we're testing.
     #
     for UPSBox in $UPS_BOXES; do
         if [ x"$UPSBox" = x"$1" ] ; then
             $INSTALL_PATH/bin/upscmd -u lmmonitor -p ItsASecret \
                 ${1}@localhost test.battery.start >/dev/null 2>&1
         fi
     done

Change the permissions of the script, after you create it, to add execute permission and to hide it from regular users, since it contains the secret UPS password:

     su
     chgrp ups /etc/ups/batterytest
     chmod ug+x,o= /etc/ups/batterytest

/etc/ups/notify:

Create the following script to send event notifications to root, via email, whenever the UPS has something important to say. Also, if the event scheduling configuration file upssched.conf is defined, the upssched program will be invoked to start/stop the timers and execute the event handlers described in upssched.conf.

Note that any indentation in front of "ENDMSG" must consist only of tabs. If your lame-butt text editor sticks spaces at the front of any lines between "<<-ENDMSG" and "ENDMSG", the shell script will get a syntax error. Here is the script:

     #!/bin/sh
     #
     # A shell script that can be used by the UPS power monitor to send messages
     # to root when UPS events occur.  This script can also be used to
     # start/stop the timers and execute the event handlers described in the
     # upssched.conf config file.
     #
     #
     # Define the install path for the UPS binaries, etc.
     #
     INSTALL_PATH="/usr/local/ups"
     CONFIG_DIR="/etc/ups"
     #
     # Source the UPS configuration.
     #
     if [ -f /etc/sysconfig/upsd ] ; then
        . /etc/sysconfig/upsd
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     if [ x"$SERVERROLE" == x ] ; then
         SERVERROLE=Standalone
     fi
     #
     # Determine which UPS boxes we are logging to.
     #
     if [ ${SERVERROLE} = "Primary" ] ; then
         UPS_BOXES=$UPS_BOXES_PRIMARY
     else
         if [ ${SERVERROLE} = "Secondary" ] ; then
             UPS_BOXES=$UPS_BOXES_SECONDARY
         fi
     fi
     #
     # Write the event to the log, if there is one.  Also, start/stop any
     # associated timers and execute any event handlers that are defined in
     # upssched.conf.
     #
     if [ x"$LOG_PATH" != x ] ; then
      timestamp=`date "+%Y/%m/%d %H:%M:%S"`
      for LogBox in $UPS_BOXES; do
          echo $UPSNAME | grep -q $LogBox
          matval=$?
          if [ $matval = 0 ] ; then
              echo $timestamp EVENT: $1 >> ${LOG_PATH}.$LogBox
              #
              # If there's a schedule config file, run the UPS scheduler.
              # The scheduler uses the $UPSNAME and $NOTIFYTYPE environment
              # variables to figure out what to do with this event, based on
              # the configuration in upssched.conf.
              #
              if [ -f ${CONFIG_DIR}/upssched.conf ] ; then
                  ${INSTALL_PATH}/sbin/upssched
              fi
          fi
      done

fi
#
# Some events we only log.
#
if ([ x"$NOTIFYTYPE" != xONBATT ]) && ([ x"$NOTIFYTYPE" != xONLINE ]) \

      && ([ x"$NOTIFYTYPE" != xREPLBATT ]) \
      && ([ x"$NOTIFYTYPE" != xSHUTDOWN ]) \
      && ([ x"$NOTIFYTYPE" != xOVERTEMP ]) ; then
      exit 0
     fi
     #
     # Email the message to root so that they can see everything that's going on.
     # After all, if you're omnipotent, you need to know everything.
     #
     HostName=`hostname`
     if [ x"$NOTIFYTYPE" != xSHUTDOWN ] ; then
      /bin/mail -s "Message from the UPS" root <<-ENDMSG
          The power monitor on $HostName has generated the following message:
          $1
          ENDMSG
     else
      /bin/mail -s "Urgent message from the UPS" root <<-ENDMSG
          The power monitor on $HostName has generated the following message:
          $1
          ENDMSG
     fi

Note that, in the above script, the lines between each "/bin/mail -s ... <<-ENDMSG" and its closing "ENDMSG" (inclusive) must either not be indented at all or be indented only with actual tab characters (not blanks). If they are not, the script will fail.
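To see why, recall that the "-" in "<<-ENDMSG" tells the shell to strip leading tab characters (and only tabs) from each line of the here-document, including the terminator line itself. Here is a minimal standalone demonstration (not part of the notify script; the body and terminator must begin with real tabs):

```shell
#! /bin/sh
# The "-" in "<<-" makes the shell strip leading TABs from every line of
# the here-document, including the "EOF" terminator itself.  Leading spaces
# are NOT stripped, so a space-indented terminator would never be found.
cat <<-EOF
	This line was indented with a tab, which the shell removed.
	EOF
```

Run with a POSIX shell, this prints the line flush left. Change the tab to spaces and, instead, most shells will warn that the here-document was delimited by end-of-file.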

Also, don't forget to add execute permissions to the script after you create it:

     su
     chmod ugo+x /etc/ups/notify

But, be warned that, if a upssched.conf file exists, the notify script will run upssched. If that config file defines an event handler that shuts down the system or the UPS, and the permissions above are applied to the script, any user could run the script to shut down your system. It is better to use the following permissions:

     su
     chgrp ups /etc/ups/notify
     chmod ug+x,o= /etc/ups/notify

Once you've set up the notify script and set whatever permissions you choose, it wouldn't hurt to try it out. For example:

     UPSNAME=bevatron; export UPSNAME
     NOTIFYTYPE=ONBATT; export NOTIFYTYPE
     /etc/ups/notify "This is a test"

Check that root receives the message and that the message gets logged to bevatron's log file.

If you'd like to schedule or directly execute other actions, in conjunction with any of the events associated with a UPS, the above notify script checks for the existence of the upssched.conf file in the UPS configuration directory (typically /etc/ups). If this config file exists, the notify script assumes that you've defined some actions therein and that they need to be executed by the upssched program.

The primary reason for creating a upssched.conf file is to implement a set of pessimistic shutdown rules. These rules shut the system down shortly after the UPS goes on battery, well before the battery is fully depleted, thereby keeping a reserve of battery capacity that can ensure an orderly shutdown a second time around, should another power failure occur during startup. The pessimism stems from the belief that power failures will be frequent and closely spaced, which makes startup after a power failure a dangerous time. The batteries will already be depleted from the first failure. If a second failure happens during the subsequent restart, the batteries could give out before the system has a chance to execute an orderly shutdown, causing a loss of data, etc. Shutting down early, while there's still plenty of juice left in the batteries, is a hedge against this situation.

/etc/ups/upssched.conf:

You can optionally create the upssched.conf to define actions to be taken when certain UPS events occur. If this file exists, the notify script (above) will invoke upssched to schedule and/or execute the event handlers defined therein.

If you do use upssched, it needs to create several files used for locking and synchronization. Typically, files of this sort are created in an application-specific subdirectory off of /var/lib. So, we'll create such a directory that upssched can use:

     su
     mkdir /var/lib/ups
     chown ups:ups /var/lib/ups
     chmod ug=rwx,o= /var/lib/ups

Now, copy the sample upssched.conf file and then hack it with your favorite text editor. You need to set the following options:

     CMDSCRIPT /etc/ups/upssched-cmd
     PIPEFN /var/lib/ups/upssched.pipe
     LOCKFN /var/lib/ups/upssched.lock

The rest of the file specifies which events should be handled by which event handlers. Event handling is defined by "AT" lines, each of which names an event type and a UPS; when an incoming event matches both, the action given on that line is taken.

Possible actions include: starting a timer which will eventually fire the event handler if and when the timer runs down; stopping a running timer; executing the event handler immediately.
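For reference, the three kinds of "AT" lines used below have this general form (the upsname field can be a specific UPS or "*" to match any UPS):

```
AT <notifytype> <upsname> START-TIMER <timername> <seconds>
AT <notifytype> <upsname> CANCEL-TIMER <timername>
AT <notifytype> <upsname> EXECUTE <command>
```

The timer name or command name is what gets passed as the first argument to the CMDSCRIPT when the timer fires or the command is executed.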

Naturally, you can schedule or execute whatever event handler you want, in conjunction with each and every UPS event. By way of example, though, our goal herein is to implement a pessimistic shutdown policy. So, here goes. Add the following event handlers:

     # As soon as the UPS goes on battery, give the users a message.
     AT ONBATT * EXECUTE UPSOnBattery
     # If the UPS stays on battery for 60 seconds, give the users a second
     # message.
     AT ONBATT * START-TIMER UPSOnBattery60 60
     # If the UPS stays on battery for 120 seconds, shut the system down.
     AT ONBATT * START-TIMER UPSOnBattery120 120
     # If the power returns, cancel all of the timers.
     AT ONLINE * CANCEL-TIMER UPSOnBattery60
     AT ONLINE * CANCEL-TIMER UPSOnBattery120
     # If the power returns, give the users a message that, "He's baaaaak!"
     AT ONLINE * EXECUTE UPSOnLinePower

/etc/ups/upssched-cmd:

This script is executed by upssched whenever a timer counts down or an event handler is directly executed. The script is passed the name of the timer or of the directly-executed command so that it may act accordingly. Typically, the script uses a case statement to select the appropriate block of code, based on the timer/event being handled.

Since, in the above configuration file, we defined three events (one directly-executed and two timed) for power failure and one event for power restoration, our command script must handle them. We'll use them to implement the pessimistic shutdown policy.

Begin by copying the sample script from the clients subdirectory of the build tree:

     su
     cp .../clients/upssched-cmd /etc/ups
     chgrp ups /etc/ups/upssched-cmd
     chmod ug+x,o= /etc/ups/upssched-cmd

Note especially that we removed execute permission for "other" from the script. If we didn't do this, a malicious user could run the script to shut down your system. Also, it is possible that the script may contain instant commands, which require a username/password, so making it non-readable too is a good idea.

Edit the script to add handlers for all of the events. Here is the full script that we use:

     #! /bin/sh
     #
     # This script is called by upssched, via the CMDSCRIPT directive.  Its
     # purpose is to handle UPS events that are dispatched by upssched.
     #
     # In conjunction with the events defined in upssched.conf, this script
     # implements a pessimistic shutdown policy to gracefully shut down the
     # attached system after the UPS has been on battery for only two minutes.
     # This allows the UPS to retain enough charge in the batteries to allow
     # a second orderly shutdown, should a power failure occur as soon as the
     # system restarts.
     #
     # The first argument passed to this script is the name of the timer that
     # is being fired or the name of the immediately-executed event that is
     # being signalled.
     #
     #
     # UPS connection type.
     #
     # If the UPS is directly connected to the system (i.e. via a serial or USB
     # cable), set the connection type to "Direct".  If the UPS is
     # network-connected, via SNMP, set the connection type to "SNMP".
     #
     UPSConnection="SNMP"
     if [ x"$UPSConnection" == x ]; then
         UPSConnection="Direct"
     fi
     if [ "$UPSConnection" != SNMP ]; then
         UPSConnection="Direct"
     fi
     #
     # UPS shutdown command.  If this command is defined, it is sent to the UPS
     # before the system shutdown command (below).
     #
     # If the UPS is directly connected to the system (i.e. via a serial or USB
     # cable), there is probably no need for this command (although you could
     # use it to set the "battery.charge.restart" or "ups.delay.shutdown"
     # variables via the upsrw command, as opposed to just setting them once,
     # ahead of time).
     #
     # On the other hand, if the UPS is network-connected, via SNMP, this
     # command is needed to cause the UPS to shut down, with a delay.  This is
     # because the SNMP-connected UPS drivers do not support shutdown commands
     # (with good reason).  Consequently, the UPS must be sent an instant
     # command that shuts it off, while the system is still up and the network
     # connected.  The delay is necessary so that the system will have time
     # (hopefully enough) to be able to shut down gracefully, before the UPS
     # actually powers off.
     #
     # If you have a network-connected UPS, you may want to do some timing
     # tests to ensure that the shutdown will be given enough run time, by the
     # UPS, before it powers off the load.  There is no real harm in giving
     # the UPS a little extra time, since the system will hopefully have shut
     # itself off and the battery drain will be minimal.  Better to be safe
     # than sorry.
     #
     if [ "$UPSConnection" != SNMP ]; then
         UPSCmd=""
     else
         UPSCmd="/usr/local/ups/sbin/upscmd -u lmmonitor -p ItsASecret"
         InstCmd="shutdown.return"
     fi
     #
     # System shutdown command.
     #
     # If the UPS is directly connected to the system (i.e. via a serial or USB
     # cable), you can probably have the UPS shut the system down (i.e. via
     # "/usr/local/ups/sbin/upsmon -c fsd").
     #
     # If the UPS is network-connected, via SNMP, there is no UPS shutdown
     # command.  Consequently, you'll probably want to shut the system down
     # directly (i.e. via "/sbin/shutdown -h now").
     #
     if [ "$UPSConnection" != SNMP ]; then
         ShutdownCmd="/usr/local/ups/sbin/upsmon -c fsd"
     else
         ShutdownCmd="/sbin/shutdown -h now"
     fi
     #
     # Execute the appropriate event handling code.
     #
     case $1 in
         #
         # The UPS just went on battery.
         #
         UPSOnBattery)
             echo Save your work. System has lost power and may be shut down \
                     in 2 minutes. | \
                 wall
             ;;
         #
         # The UPS has been on battery for 60 seconds.
         #
         UPSOnBattery60)
             echo Save your work now! System has had no power for 1 minute. \
                     Shutdown is imminent. | \
                 wall
             ;;
         #
         # The UPS has been on battery for 120 seconds.  Time to shut 'er down.
         #
         UPSOnBattery120)
             #
             # If we need to shut down the UPS separately, do that now.
             #
             if [ x"$UPSCmd" != x ]; then
                 $UPSCmd $UPSNAME "\"$InstCmd\""
             fi
             #
             # Now, it's time to shut down the system.
             #
             $ShutdownCmd
             ;;
         #
         # He's baaaaak!
         #
         UPSOnLinePower)
              echo System power has returned. Shutdown has been aborted. Carry \
                      on as normal. | \
                 wall
             ;;
         #
         # Uh oooh!  Somebody made a boo boo.
         #
         *)
             logger -t upssched-cmd "Unrecognized command: $1"
             ;;
     esac

If your script uses one of the delayed shutdown commands (e.g. shutdown.return), you will need to set up the shutdown delay of the UPS. As noted in the upssched-cmd script (above), you should do some timing tests to determine how long the system takes to shut down. Give yourself a margin for safety and set that value into the UPS like this:

     /usr/local/ups/bin/upsrw -s ups.delay.shutdown=90 \
          -u lmmonitor -p ItsASecret bevatron

If you wish the UPS to delay startup after the power returns, you have two choices. You can delay startup for a fixed interval like this:

     /usr/local/ups/bin/upsrw -s ups.delay.start=600 \
          -u lmmonitor -p ItsASecret bevatron

Alternatively, you can delay startup until a sufficient amount of battery charge (in percent) has been reached. Try something like this:

     /usr/local/ups/bin/upsrw -s battery.charge.restart=50 \
          -u lmmonitor -p ItsASecret bevatron

You can find out whether your UPS supports any or all of these variables by listing the writable variables that it supports, like this:

     /usr/local/ups/bin/upsrw bevatron
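If the variables show up in the list, you can also read one back individually with upsc, to confirm that a setting took. For example (using the bevatron UPS from above, assuming its driver and upsd are running on localhost):

```
/usr/local/ups/bin/upsc bevatron@localhost ups.delay.shutdown
```

This should print the value (e.g. 90) that you set with upsrw.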

/etc/ups/hosts.conf:

If you chose the "--with-cgi" option when you built NUT, you can define which UPS boxes can be monitored from the Web. Copy the file hosts.conf.sample to hosts.conf. Add all of the UPS boxes that you want to be able to monitor from the Web. Also add the name of the logfile, if logged events will be displayed:

     MONITOR fusion-reactor@localhost "stargate UPS"
     LOGFILE /var/log/upslog.fusion-reactor

/etc/ups/ftplogs:

If you'd like to build a consolidated graph of all your UPS activity and have a centralized server where logfiles can be copied for this purpose, you should create the file /etc/ups/ftplogs:

     #! /bin/sh
     #
     # ftplogs - Script to ftp the UPS logs to the primary server at regular
     #           intervals.
     #
     # This script is used both by cron, to send the current UPS logs to the
     # primary server, every 15 minutes, and by logrotate, to send the freshly
     # rotated UPS logs to the primary server whenever UPS logs are rotated.
     #
     #
     # Source the UPS configuration.
     #
     if [ -f /etc/sysconfig/upsd ] ; then
        . /etc/sysconfig/upsd
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     if [ x"$SERVERROLE" == x ] ; then
         SERVERROLE=Standalone
     fi
     #
     # Determine which UPS boxes we are logging to.  If we aren't the secondary
     # or a standalone system, there are no logfiles to FTP.
     #
     if [ x"$SERVERROLE" == xSecondary ] ; then
         UPS_BOXES=$UPS_BOXES_SECONDARY
     else
         if [ x"$SERVERROLE" != xStandalone ] ; then
             exit 0
         fi
     fi
     #
     # If there's no primary server defined, we're all done.
     #
     if [ x"$PRI_SERVER" == x ] ; then
         exit 0
     fi
     #
     # If we were passed a logfile name, send it to the primary server directly.
     #
     if test x"$1" != x; then
         echo -e "user ups ItsASecret\\nbin\\nput ${LOG_PATH}.$1 \
             ${PRI_LOG_PATH}.$1" | ftp -n $PRI_SERVER
     #
     # For all of the UPS on this system, send the logs to the primary.
     #
     else
         for LogBox in $UPS_BOXES; do
             echo -e "user ups ItsASecret\\nbin\\nput ${LOG_PATH}.$LogBox \
                 ${PRI_LOG_PATH}.$LogBox" | ftp -n $PRI_SERVER
         done
     fi

In the FTP commands in the script, you should pick a user that is valid on the primary server and use the right password for that user. Once you've saved the script, don't forget to add execute permissions and to hide it from regular users, since it contains that password in plain text:

     su
     chgrp ups /etc/ups/ftplogs
     chmod ug+x,o= /etc/ups/ftplogs

Next, on the primary server, you should create a couple of empty log files with the correct permissions so that FTP can overwrite them without getting a permission-denied error:

     su
     touch /var/log/upslog.bevatron /var/log/upslog.bevatron.1
     chown root:ups /var/log/upslog.bevatron
     chmod g+w /var/log/upslog.bevatron

Then, it wouldn't hurt to try out the script, now that you've got it all set up. For example:

     /etc/ups/ftplogs

Check on the primary server that the log files get copied over.

Now, you can add the script to the cron table on the secondary server so that it runs at regular intervals (e.g. every 15 minutes), as shown in the next section.

/etc/crontab:

To schedule all of the UPS-related activities, add the following to your crontab:

     # Push the UPS logs over to the primary server every 15 minutes.
     05,20,35,50 * * * * root /etc/ups/ftplogs
     # Every two months, have the UPS check its battery.
     00 10 3 1,3,5,7,9,11 * root /etc/ups/batterytest fusion-reactor
     00 10 4 2,4,6,8,10,12 * root /etc/ups/batterytest bevatron

You should probably schedule the times for each UPS' battery test on days when the other UPS are not also testing their batteries so that, if there are any bad batteries, all of your machines won't be down at once.

Furthermore, you should consider the effect that battery testing will have on battery life. Each time the battery is tested, it experiences a fairly deep discharge cycle. Most of the cheaper UPS replacement batteries sold by replacement battery vendors, such as the UB1280 and UB12120, come from China and do not have long lifetimes. They are warrantied for a year and, if you discharge them every two weeks, you'll be lucky to have them last much longer than that. We don't see a compelling reason for testing batteries every two weeks. Instead, we test them every two months. Mind you, if you are running a computer center and you want high reliability, you should probably buy better quality batteries, in which case you can test your UPS more often.

Note that most UPS will come with automatic self-test turned on and set to every 14 days (your UPS vendor sells batteries too). This means that the UPS will run its own battery test every 14 days, starting exactly 1209600 seconds after it is turned on. Isn't that convenient, not to mention good for the batteries? If you'd rather the test be run when you decide, under the control of your crontab, you should turn off this dubious feature like so:

     /usr/local/ups/bin/upsrw -s ups.test.interval=0 -u username -p password \
       upsname@localhost

/etc/logrotate.conf, /etc/logrotate.d/ups:

Add the new UPS' logfile to either the global logrotate config file (/etc/logrotate.conf) or the specific UPS config file (/etc/logrotate.d/ups):

     /var/log/upslog.bevatron {
         notifempty
         missingok
         create 0664 root ups
         copytruncate
         postrotate
             echo HEADER: Bevatron >/var/log/upslog.bevatron
             /etc/ups/ftplogs bevatron.1
         endscript
     }

/etc/rc.d/init.d/upsd:

Install the following script using "chkconfig --add upsd" and "chkconfig upsd on". Don't forget to set its permissions to "ugo+x":

     #! /bin/sh
     #
     # upsd - Script to start/stop the NUT UPS monitoring and power down daemons.
     #
     # chkconfig: 2345 19 81
     # description: Uninterruptable Power Supply monitoring and power down daemons
     #
     # pidfile: /var/lock/subsys/upsd
     # pidfile: /var/lock/subsys/upsmon
     # pidfile: /var/lock/subsys/upsdrivers.upsname
     # pidfile: /var/lock/subsys/upslog.upsname
     # config:  /etc/ups/*
     #
     #
     # Define the install path for the UPS binaries, etc.
     #
     INSTALL_PATH="/usr/local/ups"
     PID_UPSDAEMON="/var/lock/subsys/upsd"
     PID_UPSMONITOR="/var/lock/subsys/upsmon"
     PID_UPSDRIVER="/var/lock/subsys/upsdrivers"
     PID_UPSLOG="/var/lock/subsys/upslog"
     LOCK_UPSDOWN="/var/lock/subsys/upsdown"
     #
     # Load the RedHat functions.
     #
     if [ -f /etc/redhat-release ] ; then
         . /etc/rc.d/init.d/functions
     fi
     #
     # Source the UPS configuration.
     #
     if [ -f /etc/sysconfig/upsd ] ; then
        . /etc/sysconfig/upsd
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     if [ x"$SERVERROLE" == x ] ; then
         SERVERROLE=Standalone
     fi
     #
     # Determine which UPS boxes we are starting up.
     #
     if [ ${SERVERROLE} = "Primary" ] ; then
         UPS_BOXES=$UPS_BOXES_PRIMARY
     else
         if [ ${SERVERROLE} = "Secondary" ] ; then
             UPS_BOXES=$UPS_BOXES_SECONDARY
         fi
     fi
     #
     # Upon startup, start all the UPS drivers that are configured in
     # /etc/ups/ups.conf (or wherever you put it).
     #
     # Next, start the UPS daemon to begin collecting information about all of
     # the connected UPS'.
     #
     # Then, start the power monitoring daemon to watch for low power conditions
     # and shut down the machine, if found.
     #
     start()
         {
         #
         # Remove the shutdown lock, if we're starting up.
         #
         if [ -f $LOCK_UPSDOWN ] ; then
             rm -f $LOCK_UPSDOWN
         fi
         #
         # Start a driver for each of our UPS.
         #
         for UPSBox in $UPS_BOXES; do
          echo -n "NUT starting UPS driver for ${UPSBox}: "
          $INSTALL_PATH/bin/upsdrvctl start ${UPSBox} >/dev/null 2>&1
          startval=$?
          if [ $startval = 0 ] ; then
              touch ${PID_UPSDRIVER}.${UPSBox}
              echo_success
          else
              echo_failure
          fi
          echo
      done
      #
      # Start the UPS daemon to process service requests.
      #
      echo -n "NUT starting UPS daemon: "
      $INSTALL_PATH/sbin/upsd >/dev/null 2>&1
      startval=$?
      if [ $startval = 0 ] ; then
          touch ${PID_UPSDAEMON}
          echo_success
          echo
      else
          echo_failure
          echo
          return $startval
      fi
      #
      # Start power monitoring.
      #
      echo -n "NUT starting power monitor: "
      $INSTALL_PATH/sbin/upsmon >/dev/null 2>&1
      startval=$?
      if [ $startval = 0 ] ; then
          touch ${PID_UPSMONITOR}
          echo_success
          echo
      else
          echo_failure
          echo
          return $startval
      fi
      #
      # If the user has given us a logging directory, start up logging.
      #
      if [ x"${LOG_PATH}" != x ] ; then
          echo -n "NUT starting UPS log: "
          timestamp=`date "+%Y/%m/%d %H:%M:%S"`
          #
          # Start up logging for each UPS box.
          #
          # Note that, above, we could have failed to start one or more UPS
          # and ignored the failure.  Here, since it is the last step, we
          # abort upon the failure of any log.
          #
          if [ x"$LOG_INTERVAL" == x ] ; then
              LOG_INTERVAL=30
          fi
          for UPSBox in $UPS_BOXES; do
              if [ ! -f ${LOG_PATH}.${UPSBox} ] ; then
                  echo HEADER: ${UPSBox} > ${LOG_PATH}.${UPSBox}
              fi
              echo $timestamp EVENT: Starting UPS logging \
                  >> ${LOG_PATH}.${UPSBox}
              if [ x"$UPS_GROUP" != x ] ; then
                  chgrp $UPS_GROUP ${LOG_PATH}.${UPSBox}
                  chmod g=rw ${LOG_PATH}.${UPSBox}
              fi
              $INSTALL_PATH/bin/upslog ${UPSBox}@localhost \
                  ${LOG_PATH}.${UPSBox} $LOG_INTERVAL \
                  "%TIME @Y/@m/@d @H:@M:@S% %VAR input.voltage% \
                      %VAR output.voltage% %VAR input.frequency% \
                      %VAR battery.charge% %VAR ups.load% [%VAR ups.status%] \
                      %VAR ups.temperature%" \
                  >/dev/null 2>&1
              startval=$?
              if [ $startval = 0 ] ; then
                  touch ${PID_UPSLOG}.${UPSBox}
              else
                  echo_failure
                  echo
                  return $startval
              fi
          done
          echo_success
          echo
      fi
      #
      # We're all done with startup.
      #
      return $startval
      }

     #
     # Upon shutdown, do all of the startups in reverse: logging; power
     # monitor; UPS daemon; UPS drivers.
     #
     stop()
         {
      #
      # Shutdown logging for any log that we started.
      #
       echo -n "NUT stopping UPS log: "
      timestamp=`date "+%Y/%m/%d %H:%M:%S"`
      for UPSBox in $UPS_BOXES; do
          if [ -f ${PID_UPSLOG}.${UPSBox} ] ; then
               LogPID=`ps -eo pid,args | grep upslog | grep ${UPSBox} | \
                   grep -v grep | sed -n 's/^ *\([0-9][0-9]*\).*/\1/p'`
              if [ x"$LogPID" != x ] ; then
                  kill -9 $LogPID
                  stopval=$?
                  echo $timestamp EVENT: Stopping UPS logging \
                      >> ${LOG_PATH}.${UPSBox}
                  [ $stopval = 0 ] && rm -f ${PID_UPSLOG}.${UPSBox}
              fi
          fi
      done
      echo_success
      echo
      #
      # Stop power monitoring.
      #
      if [ -f ${PID_UPSMONITOR} ] ; then
           echo -n "NUT stopping power monitor: "
          $INSTALL_PATH/sbin/upsmon -c stop >/dev/null 2>&1
          stopval=$?
          if [ $stopval = 0 ] ; then
              rm -f ${PID_UPSMONITOR}
              echo_success
          else
              echo_failure
          fi
          echo
      fi
      #
      # Stop the UPS daemon.
      #
      if [ -f ${PID_UPSDAEMON} ] ; then
           echo -n "NUT stopping UPS daemon: "
          $INSTALL_PATH/sbin/upsd -c stop >/dev/null 2>&1
          stopval=$?
          if [ $stopval = 0 ] ; then
              rm -f ${PID_UPSDAEMON}
              echo_success
          else
              echo_failure
          fi
          echo
      fi
      #
      # Stop all of our UPS drivers.
      #
      for UPSBox in $UPS_BOXES; do
           echo -n "NUT stopping UPS driver for ${UPSBox}: "
          $INSTALL_PATH/bin/upsdrvctl stop ${UPSBox} >/dev/null 2>&1
          stopval=$?
          if [ $stopval = 0 ] ; then
              rm -f ${PID_UPSDRIVER}.${UPSBox}
              echo_success
          else
              echo_failure
          fi
          echo
      done
      return 0
      }

     #
     # See how we were called.
     #
     case "$1" in
      #
      # Start.
      #
      start)
          start
          RETVAL=$?
          ;;
      #
      # Stop.
      #
      stop)
          stop
          RETVAL=$?
          ;;
      #
      # Restart or reload (whatever).
      #
      restart|reload)
          stop
          start
          RETVAL=$?
          ;;
      #
      # Conditional restart.
      #
      condrestart)
           if ls ${PID_UPSDRIVER}.* >/dev/null 2>&1 ; then
              stop
              start
              RETVAL=$?
          fi
          ;;
      #
      # Give the status of all of the NUT components that are running.
      #
      status)
          #
          # If none of the UPS are running, assume that NUT is down.
          #
           if ! ls ${PID_UPSDRIVER}.* >/dev/null 2>&1 ; then
              echo NUT is down
              exit 1
          fi
          #
          # Let's check each UPS.
          #
          for UPSBox in $UPS_BOXES; do
              $INSTALL_PATH/bin/upsc ${UPSBox} output.voltage >/dev/null 2>&1
              RETVAL=$?
              if [ $RETVAL -eq 0 ] ; then
                  echo UPS ${UPSBox} is functioning normally
              else
                  echo UPS ${UPSBox} is down
              fi
          done
          #
          # Check the UPS daemon.
          #
          status upsd
          #
          # Check the power monitor.
          #
          status upsmon
          #
          # If we're logging then let's see what we're logging.
          #
          if [ x"${LOG_PATH}" != x ] ; then
              status upslog
              RETVAL=$?
              if [ $RETVAL -eq 0 ] ; then
                  for UPSBox in $UPS_BOXES; do
                      if [ -f ${PID_UPSLOG}.${UPSBox} ] ; then
                          echo UPS ${UPSBox} is being logged to ${LOG_PATH}.${UPSBox}
                      fi
                  done
              fi
          fi
          ;;
      #
      # Help text.
      #
       *)
           echo $"Usage: $0 {start|stop|restart|condrestart|status}"
           exit 1
           ;;
     esac
     exit $RETVAL

If you are running on SuSE, the UPS startup script needs to be slightly different. Here's the modified script:

     #! /bin/sh
     #
     # Author: Eric Wilde
     #
     # /etc/init.d/upsd
     #   and its symbolic link
     # /usr/sbin/rcupsd
     #
     ### BEGIN INIT INFO
     # Provides:       upsd
     # Required-Start: $network $syslog
     # Required-Stop:
     # Default-Start:  2 3 5
     # Default-Stop:   0 1 6
     # Description:    Start/stop the NUT UPS monitoring and power down daemons.
     ### END INIT INFO
     #
     # Return values acc. to LSB for all commands but status:
     # 0 - success
     # 1 - generic or unspecified error
     # 2 - invalid or excess argument(s)
     # 3 - unimplemented feature (e.g. "reload")
     # 4 - insufficient privilege
     # 5 - program is not installed
     # 6 - program is not configured
     # 7 - program is not running
     #
     ############################################################################
     #
     # Define the install path for the UPS binaries, etc.
     #
     INSTALL_PATH="/usr/local/ups"
     CONFIG_DIR="/etc/ups"
     PID_UPSDAEMON="/var/lock/subsys/upsd"
     PID_UPSMONITOR="/var/lock/subsys/upsmon"
     PID_UPSDRIVER="/var/lock/subsys/upsdrivers"
     PID_UPSLOG="/var/lock/subsys/upslog"
     LOCK_UPSDOWN="/var/lock/subsys/upsdown"
     #
     # First reset status of this service.
     #
     . /etc/rc.status
     rc_reset
     #
     # Check for missing binary.
     #
     if [ ! -x ${INSTALL_PATH}/bin/upsdrvctl ]; then
         echo -n "NUT daemon, ${INSTALL_PATH}/bin/upsdrvctl is not installed. "
         rc_status -s
         exit 5
     fi
     #
     # Check that the configuration directory exists.
     #
     if [ ! -d ${CONFIG_DIR} ]; then
         echo -n "NUT configuration directory ${CONFIG_DIR} does not exist. "
         rc_status -s
         exit 6
     fi
     #
     # Source the UPS configuration.
     #
     if [ -f /etc/sysconfig/upsd ]; then
         . /etc/sysconfig/upsd
     fi
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     if [ x"$SERVERROLE" == x ]; then
         SERVERROLE=Standalone
     fi
     #
     # Determine which UPS boxes we are starting up.
     #
     if [ ${SERVERROLE} = "Primary" ] ; then
         UPS_BOXES=$UPS_BOXES_PRIMARY
     else
         if [ ${SERVERROLE} = "Secondary" ] ; then
             UPS_BOXES=$UPS_BOXES_SECONDARY
         fi
     fi
     #
     # Upon startup, start all the UPS drivers that are configured in
     # /etc/ups/ups.conf (or wherever you put it).
     #
     # Next, start the UPS daemon to begin collecting information about all of
     # the connected UPS'.
     #
     # Then, start the power monitoring daemon to watch for low power conditions
     # and shut down the machine, if found.
     #
     start()
         {
         #
         # Remove the shutdown lock, if we're starting up.
         #
         if [ -f $LOCK_UPSDOWN ] ; then
             rm -f $LOCK_UPSDOWN
         fi
         #
         # Start a driver for each of our UPS.
         #
          for UPSBox in $UPS_BOXES; do
              echo -n "NUT starting UPS ${UPSBox} "
              if [ -f ${PID_UPSDRIVER}.${UPSBox} ]; then
                  echo -n "- Warning: driver already running. "
              fi
              ${INSTALL_PATH}/bin/upsdrvctl start ${UPSBox} >/dev/null 2>&1
              startval=$?
              rc_status -v
              if [ $startval = 0 ] ; then
                  touch ${PID_UPSDRIVER}.${UPSBox}
              fi
          done
          #
          # Start the UPS daemon to process service requests.
          #
          echo -n "NUT starting UPS daemon "
          if [ -f ${PID_UPSDAEMON} ]; then
              echo -n "- Warning: daemon already running. "
          fi
          ${INSTALL_PATH}/sbin/upsd >/dev/null 2>&1
          startval=$?
          rc_status -v
          if [ $startval = 0 ] ; then
              touch ${PID_UPSDAEMON}
          else
              return $startval
          fi
          #
          # Start power monitoring.
          #
          echo -n "NUT starting power monitor "
          if [ -f ${PID_UPSMONITOR} ]; then
              echo -n "- Warning: monitor already running. "
          fi
          ${INSTALL_PATH}/sbin/upsmon >/dev/null 2>&1
          startval=$?
          rc_status -v
          if [ $startval = 0 ] ; then
              touch ${PID_UPSMONITOR}
          else
              return $startval
          fi
          #
          # If the user has given us a logging directory, start up logging.
          #
          if [ x"${LOG_PATH}" != x ]; then
              echo -n "NUT starting UPS log "
              timestamp=`date "+%Y/%m/%d %H:%M:%S"`
              #
              # Start up logging for each UPS box.
              #
              # Note that, above, we could have failed to start one or more UPS
              # and ignored the failure.  Here, since it is the last step, we
              # abort upon the failure of any log.
              #
              if [ x"$LOG_INTERVAL" == x ]; then
                  LOG_INTERVAL=30
              fi
              for UPSBox in $UPS_BOXES; do
                  if [ ! -f ${LOG_PATH}.${UPSBox} ]; then
                      echo HEADER: ${UPSBox} > ${LOG_PATH}.${UPSBox}
                  fi
                  echo $timestamp EVENT: Starting UPS logging \
                      >> ${LOG_PATH}.${UPSBox}
                  if [ x"$UPS_GROUP" != x ]; then
                      chgrp $UPS_GROUP ${LOG_PATH}.${UPSBox}
                      chmod g=rw ${LOG_PATH}.${UPSBox}
                  fi
                  if [ -f ${PID_UPSLOG}.${UPSBox} ]; then
                      echo -n "- Warning: logging already started. "
                  fi
                  ${INSTALL_PATH}/bin/upslog \
                      ${UPSBox}@localhost ${LOG_PATH}.${UPSBox} \
                      $LOG_INTERVAL \
                      "%TIME @Y/@m/@d @H:@M:@S% %VAR input.voltage% %VAR output.voltage% %VAR input.frequency% %VAR battery.charge% %VAR ups.load% [%VAR ups.status%] %VAR ups.temperature%" \
                      >/dev/null 2>&1
                  startval=$?
                  rc_status -v
                  if [ $startval = 0 ] ; then
                      touch ${PID_UPSLOG}.${UPSBox}
                  else
                      return $startval
                  fi
              done
          fi
          #
          # We're all done with startup.
          #
          return $startval
          }

#
# Upon shutdown, do all of the startups in reverse: logging; power monitor;
# UPS daemon; UPS drivers.
#
stop()
          {
          #
          # Shutdown logging for any log that we started.
          #
          echo -n "NUT stopping UPS log "
          timestamp=`date "+%Y/%m/%d %H:%M:%S"`
          for UPSBox in $UPS_BOXES; do
              if [ -f ${PID_UPSLOG}.${UPSBox} ]; then
                  LogPID=`ps -eo pid,args | grep '[u]pslog' | grep ${UPSBox} | \
                      sed -n 's/^ *\([0-9]*\).*/\1/p'`
                  if [ x"$LogPID" != x ]; then
                      kill -9 $LogPID
                      stopval=$?
                      echo $timestamp EVENT: Stopping UPS logging \
                          >> ${LOG_PATH}.${UPSBox}
                      [ $stopval = 0 ] && rm -f ${PID_UPSLOG}.${UPSBox}
                  fi
              fi
          done
          rc_status -v
          #
          # Stop power monitoring.
          #
          if [ -f ${PID_UPSMONITOR} ]; then
              echo -n "NUT stopping power monitor "
              ${INSTALL_PATH}/sbin/upsmon -c stop >/dev/null 2>&1
              stopval=$?
              rc_status -v
              if [ $stopval = 0 ] ; then
                  rm -f ${PID_UPSMONITOR}
              fi
          fi
          #
          # Stop the UPS daemon.
          #
          if [ -f ${PID_UPSDAEMON} ]; then
              echo -n "NUT stopping UPS daemon "
              ${INSTALL_PATH}/sbin/upsd -c stop >/dev/null 2>&1
              stopval=$?
              rc_status -v
              if [ $stopval = 0 ] ; then
                  rm -f ${PID_UPSDAEMON}
              fi
          fi
          #
          # Stop all of our UPS drivers.
          #
          for UPSBox in $UPS_BOXES; do
              echo -n "NUT stopping UPS ${UPSBox} "
              ${INSTALL_PATH}/bin/upsdrvctl stop ${UPSBox} >/dev/null 2>&1
              stopval=$?
              rc_status -v
              if [ $stopval = 0 ] ; then
                  rm -f ${PID_UPSDRIVER}.${UPSBox}
              fi
          done
          return 0
          }

#
# See how we were called.
#
case "$1" in

      #
      # Start.
      #
      start)
          start
          RETVAL=$?
          if [ $RETVAL != 0 ]; then
              RETVAL=1
          fi
          ;;
      #
      # Stop.
      #
      stop)
          stop
          RETVAL=$?
          ;;
      #
      # Restart or reload (whatever).
      #
      restart|reload)
          stop
          start
          RETVAL=$?
          if [ $RETVAL != 0 ]; then
              RETVAL=1
          fi
          ;;
      #
      # Conditional restart.
      #
      condrestart)
          if ls ${PID_UPSDRIVER}.* >/dev/null 2>&1 ; then
              stop
              start
              RETVAL=$?
              if [ $RETVAL != 0 ]; then
                  RETVAL=1
              fi
          fi
          ;;
      #
      # Give the status of all of the NUT components that are running.
      #
      status)
          #
          # If none of the UPS are running, assume that NUT is down.
          #
          if ! ls ${PID_UPSDRIVER}.* >/dev/null 2>&1 ; then
              echo NUT is down
              exit 1
          fi
          #
          # Let's check each UPS.
          #
          for UPSBox in $UPS_BOXES; do
              ${INSTALL_PATH}/bin/upsc ${UPSBox} output.voltage >/dev/null 2>&1
              RETVAL=$?
              if [ $RETVAL -eq 0 ]; then
                  echo UPS ${UPSBox} is functioning normally
              else
                  echo UPS ${UPSBox} is down
              fi
          done
          #
          # Check the UPS daemon.
          #
          status upsd
          #
          # Check the power monitor.
          #
          status upsmon
          #
          # If we're logging then let's see what we're logging.
          #
          if [ x"${LOG_PATH}" != x ]; then
              status upslog
              RETVAL=$?
              if [ $RETVAL -eq 0 ]; then
                  for UPSBox in $UPS_BOXES; do
                      if [ -f ${PID_UPSLOG}.${UPSBox} ]; then
                          echo UPS ${UPSBox} is being logged to ${LOG_PATH}.${UPSBox}
                      fi
                  done
              fi
          fi
          ;;
      #
      # Help text.
      #
      *)
          echo $"Usage: $0 {start|stop|restart|condrestart|status}"
          exit 1

esac

     exit $RETVAL

After you create the startup script, give it execute permissions, register it as a service and add the rcupsd symlink that SuSE expects:

     su
     chmod ugo+x /etc/init.d/upsd
     insserv upsd
     ln -s /etc/init.d/upsd /usr/sbin/rcupsd
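
To verify that the service registered correctly, you can ask chkconfig and then exercise the symlink itself; "rcupsd status" simply runs the script's status case, so it should report on the drivers, daemon and monitor (the service name upsd is assumed to match the script above):

```shell
# Quick sanity check after installation.
chkconfig upsd          # shows whether the upsd service is enabled
/usr/sbin/rcupsd status
```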

/etc/rc.d/init.d/upsdown:

If you use upssched.conf, upssched and upssched-cmd to shut down your UPS, you might not need this script. You most likely will need it if you just let upsmon shut down your system in its default mode, and you probably will need it if your upssched-cmd script uses upsmon to shut down the UPS, for example like this:

     /usr/local/ups/sbin/upsmon -c fsd

Basically, this script waits for the system to go through all of the steps in an orderly shutdown and then, when it gets to the last step (runlevel 0), it powers off the UPS.

However, a better scheme is undoubtedly to use the "shutdown.return" instant command, like this:

     /usr/local/ups/sbin/upscmd -u lmmonitor -p ItsASecret bevatron \
         "shutdown.return"

This is what the enclosed upssched-cmd does. This gives the system a chance to actually power itself off before the UPS shuts down. And, if the line power comes back, the UPS will restart itself and hopefully bring your system back up, which doesn't happen if you use this upsdown script.
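
As a rough sketch of that approach, an upssched-cmd handler that issues "shutdown.return" when upssched signals that the shutdown timer has expired might look like the following. The timer token names (upsonbatt, upsshutdown), the lmmonitor user, the password and the UPS name bevatron are all assumptions; match them to your own upssched.conf and upsd.users:

```shell
#! /bin/sh
#
# upssched-cmd - sketch of a handler that powers the UPS off, with return,
#                once the shutdown timer started by upssched has expired.
#
INSTALL_PATH="/usr/local/ups"
case "$1" in
    upsonbatt)
        logger -t upssched-cmd "UPS on battery, shutdown timer started"
        ;;
    upsshutdown)
        # Tell the UPS to power off and come back when line power returns.
        $INSTALL_PATH/bin/upscmd -u lmmonitor -p ItsASecret bevatron \
            "shutdown.return"
        ;;
    *)
        logger -t upssched-cmd "Unrecognized command: $1"
        ;;
esac
```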

Also, many UPS models have a hardware setting that prevents the UPS from powering up until a specified time has elapsed or, better yet, a specified battery charge has been reached. If you have one of those UPS models (e.g. APC SmartUPS), you won't need to use this script to delay startup so, if you do need it for shutdown, leave the START_SLEEP value empty.

To facilitate physical shutdown of the UPS power when the system is shut down, install the following script using "chkconfig --add upsdown" and "chkconfig upsdown on". Don't forget to set its permissions to "ugo+x":

     #! /bin/sh
     #
     # upsdown - Script to shut down the UPS upon system shutdown due to
     #           power failure.
     #
     # Revision History:
     # ewilde      2003Apr06  Initial coding.
     #
     # chkconfig: 12345 01 99
     # description: Shut down Uninterruptible Power Supply on power fail shutdown
     #
     # This script runs on startup, right after new microcode has been installed
     # on the machine but before any file system activity.  It can sleep for a
     # while until the UPS batteries have a chance to recharge to a sufficient
     # level to ensure an orderly shutdown, should power fail again.  If you wish
     # to do this, set the START_SLEEP value to a valid "sleep(1)" value (e.g.
     # "5m") and make sure POWERDOWNFLAG points to the same file as found in the
     # upsmon.conf file.
     #
     # It also runs on shutdown, as the system cleans up after entering runlevel
     # 0 (shutdown) or 6 (reboot).  If the UPS POWERDOWNFLAG is set, it physically
     # powers off the UPS.  This should only happen in runlevel 0.
     #
     # Define the install path for the UPS binaries, etc.
     #
     INSTALL_PATH="/usr/local/ups"
     POWERDOWNFLAG="/etc/killpower"
     START_SLEEP=""
     if [ -f /etc/redhat-release ] ; then
         . /etc/rc.d/init.d/functions
     fi
     # See how we were called.
     case "$1" in
      # Start.
      start)
          if [ -f $POWERDOWNFLAG ] ; then
              echo -n "NUT waiting for UPS batteries to recharge: "
              if [ x"$START_SLEEP" != x ] ; then
                sleep $START_SLEEP
              fi
              echo_success
              echo
          fi
          ;;
      # Stop.
      stop)
          if [ -f $POWERDOWNFLAG ] ; then
              echo -n "NUT shutting down the UPS: "
              echo_success
              echo
              $INSTALL_PATH/bin/upsdrvctl shutdown
          fi
          ;;
      # Help text (pushing it, I know).
      *)
          echo $"Usage: $0 {start|stop}"
          exit 1

esac

     exit 0
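
Putting the installation steps mentioned above together (assuming you saved the script as /etc/rc.d/init.d/upsdown):

```shell
su
chmod ugo+x /etc/rc.d/init.d/upsdown
chkconfig --add upsdown
chkconfig upsdown on
```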

Go back to the beginning of this section and look at the steps for the quick add of a new UPS. You may need to do some of those steps too, to complete your installation of any UPS that are on your machine.

LMSensors

Install lm_sensors if you want to monitor your system's health in real time. Note that your system may already have lm_sensors installed because of some screwed up dependencies in one of the other RPMs that you needed to install. If this is the case, you can probably install right over it, providing you don't ever update lm_sensors with newer packages from the mother ship.

Also, you may need to delete the old sensor libraries from /usr/lib or monkey with the library options on your compiles to get newer code to use the newer libraries. However, this may not be a great idea because older modules that referenced the older lm_sensors may stop working. Since the install puts its library files in /usr/local/lib, you can aim new stuff there and it should work OK while not breaking older modules that rely on the libraries in /usr/lib.

Begin by downloading the appropriate source archive for your kernel version. For 2.4.10+ kernels, you need:

     lm_sensors-2.x.y.tar.gz
     i2c-2.m.n.tar.gz

For 2.6.x kernels, you need:

     lm_sensors-3.x.y.tar.bz2

To build and install the user space tools for 2.6 kernels, do:

     bunzip2 lm_sensors-3.x.y.tar.bz2
     tar -xvf lm_sensors-3.x.y.tar
     cd lm_sensors-3.x.y
     make user
     su
     make user install

/etc/ld.so.conf.d/usr-local.conf:

Pay attention to the stuff that the install tells you about /usr/local/lib and running /sbin/ldconfig. If /usr/local/lib is missing from /etc/ld.so.conf, the easiest way to add it is to create the file usr-local.conf in /etc/ld.so.conf.d. It should contain one line:

     /usr/local/lib

Once you've created this file, run:

     su
     /sbin/ldconfig
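
Afterwards, you can confirm that the dynamic linker now sees the libraries in /usr/local/lib; libsensors is the one that matters here:

```shell
# List the linker cache and look for the sensors library.  If nothing
# shows up, /usr/local/lib didn't make it into /etc/ld.so.conf.d properly.
/sbin/ldconfig -p | grep libsensors
```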

Run sensors-detect to find out what's up with your mobo. Note that the idiots who built the shell script assume that /usr/sbin will be in your PATH environment variable (although nothing could be further from the truth) and the script will crap out if it isn't. So, you need to set it first if it isn't there.

     su
     PATH=$PATH:/usr/sbin
     sensors-detect

Note that, if you have an Asus P4B mobo, the clever people at Asus have decided that the PCI device in the IHC chip, that does mobo health monitoring, should be hidden so that mere mortals like us can't monkey with it. See below for instructions on how to fix this problem.

If you have a 2.6.x kernel, in order for lm_sensors to see the device, you must run the unhide_ICH_SMBus script. But first, note that the writer of the script unwisely assumed that /sbin will be in your PATH environment variable (although nothing could be further from the truth) and the unhide script will crap out if it isn't. So, you need to set it first if it isn't there.

     su
     PATH=$PATH:/sbin
     prog/hotplug/unhide_ICH_SMBus

If this works, you can go back and run sensors-detect again and it should work. Note that, under no circumstances should you use suspend/resume after you run this script because unpleasantries will surely ensue.

/etc/modprobe.conf or /etc/modules.conf (on older systems):

The sensors-detect script will output several lines of stuff to add to modprobe.conf (or modules.conf). You may or may not need to add the lines it suggests. Typically, they look like:

     # I2C module options.
     alias char-major-89 i2c-dev

You only need to add them if the lm_sensors startup script doesn't load the i2c-dev module at startup.
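
A quick way to check is to see whether the module is already loaded (note that the module name appears with an underscore in the lsmod output):

```shell
# If i2c-dev is loaded, the alias lines are unnecessary.
lsmod | grep i2c_dev || echo "i2c-dev not loaded; add the alias lines"
```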

/etc/sysconfig/lm_sensors:

The sensors-detect script normally writes this config file. It is used by the startup script in /etc/rc.d/init.d/lm_sensors to load the appropriate modules for your system. If you choose to allow sensors-detect to create this file, you need do nothing else. However, if you don't allow it to create this file, or it can't, you should add the lines that it suggests to some rc* file. Typically, they'd look something like this:

     # I2C adapter drivers
     modprobe i2c-i801
     # I2C chip drivers
     modprobe w83781d

If you are running on an Asus P4B under a 2.6.x kernel and you'd like to take advantage of the changes to the lm_sensors startup script (see below), you can add the following lines to /etc/sysconfig/lm_sensors:

     # Asus P4B motherboard kludge.
     P4BKLUDGE=yes
     PCIBUS="/sys/bus/pci/slots"
     DEVICE="00:1f"

/etc/rc.d/init.d/lm_sensors:

If you are running an Asus P4B mobo and a 2.6.x kernel, you can modify the lm_sensors startup script to make the PCI device visible to lm_sensors at startup time. Here's the complete script:

     #!/bin/sh
     #
     # chkconfig: 2345 26 74
     # description: sensors is used for monitoring motherboard sensor values.
     # config: /etc/sysconfig/lm_sensors
     #
     #    This program is free software; you can redistribute it and/or modify
     #    it under the terms of the GNU General Public License as published by
     #    the Free Software Foundation; either version 2 of the License, or
     #    (at your option) any later version.
     #
     #    This program is distributed in the hope that it will be useful,
     #    but WITHOUT ANY WARRANTY; without even the implied warranty of
     #    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     #    GNU General Public License for more details.
     #
     #    You should have received a copy of the GNU General Public License
     #    along with this program; if not, write to the Free Software
     #    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
     # See also the lm_sensors homepage at:
     #     http://www2.lm-sensors.nu/~lm78/index.html
     # It uses a config file /etc/sysconfig/lm_sensors that contains the modules
     # to be loaded/unloaded. That file is sourced into this one.
     # The format of that file is a shell script that simply defines the modules
     # in order as normal shell variables with the special names:
     #    MODULE_1, MODULE_2, MODULE_3, etc.
     # If sensors isn't supported by the kernel, try loading the module...
     [ -e /sys/bus/i2c ] || /sbin/modprobe i2c-dev &>/dev/null
     [ -e /sys/bus/i2c ] || sleep 1
     # Don't bother if /sys/bus/i2c still doesn't exist, kernel doesn't have
     # support for sensors.
     [ -e /sys/bus/i2c ] || exit 0
     CONFIG=/etc/sysconfig/lm_sensors
     [ -r "$CONFIG" ] || exit 0
     egrep '^MODULE_' $CONFIG &>/dev/null || exit 0
     # Load config file
     . "$CONFIG"
     PSENSORS=/usr/bin/sensors
     # Source function library.
     . /etc/init.d/functions
     RETVAL=0
     prog="lm_sensors"
     start() {
         echo -n $"Starting $prog: "
         /sbin/MAKEDEV i2c
         if [ "x$P4BKLUDGE" == "xyes" ] ; then
             p4bkludge
         fi
         modules=`grep \^MODULE_ $CONFIG | wc -l | tr -d ' '`
         i=0
         while [ $i -lt $modules ] ; do
             module=`eval echo '$'MODULE_$i`
             # echo starting module __${module}__
             /sbin/modprobe $module &>/dev/null
             i=`expr $i + 1`
         done
         $PSENSORS -s
         RETVAL=$?
         if [ $RETVAL -eq 0 ] && touch /var/lock/subsys/lm_sensors ; then
             echo_success
             echo
         else
             echo_failure
             echo
         fi
     }

     stop() {
         echo -n $"Stopping $prog: "
         modules=`grep \^MODULE_ $CONFIG | wc -l | tr -d ' '`
         i=`expr $modules`
         while [ $i -ge 0 ] ; do
             module=`eval echo '$'MODULE_$i`
             /sbin/modprobe -r $module &>/dev/null
             i=`expr $i - 1`
         done
         RETVAL=$?
         if [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/lm_sensors ; then
             echo_success
             echo
         else
             echo_failure
             echo
         fi
     }

     dostatus() {
         $PSENSORS
         RETVAL=$?
     }
     restart() {
         stop
         start
         RETVAL=$?
     }
     condrestart() {
         [ -e /var/lock/subsys/lm_sensors ] && restart || :
     }
     p4bkludge() {
         # Let's begin by checking to see if the PCI device is already there.
         # If so, there's no need to make it visible.
         smbus=`/sbin/lspci -n -s $DEVICE.3 | grep -i '0c05: *8086'`
         if [ -n "$smbus" ] ; then
             return
         fi
         # We use fakephp to hotplug the hidden device.
         /sbin/modprobe fakephp &> /dev/null
         # Make the hidden device visible.
         newval=$( printf '%x' $((0x$(/sbin/setpci -s $DEVICE.0 f2.w) & 0xfff7)) )
         /sbin/setpci -s $DEVICE.0 f2.w=$newval
         # Make sure the device is there.
         echo 1 > $PCIBUS/0000:$DEVICE.0/power 2>/dev/null
         if [ ! -d "$PCIBUS/0000:$DEVICE.3" ] ; then
             echo "Failed to hotplug the P4B PCI device"
             return
         fi
         # Looks good to me.
         return
     }

     # See how we were called.
     case "$1" in
         start)
             start
             ;;
         stop)
             stop
             ;;
         status)
             dostatus
             ;;
         restart|reload)
             restart
             ;;
         condrestart)
             condrestart
             ;;
         *)
         echo $"Usage: $0 {start|stop|status|restart|condrestart}"
         exit 1
     esac
     exit $RETVAL

You should install the script with:

     su
     chkconfig --add lm_sensors
     chkconfig lm_sensors on

Note that, under no circumstances should you use suspend/resume if you use this kludge to start lm_sensors because unpleasantries will surely ensue.

/usr/bin/sensors:

If you installed lm_sensors on top of a preexisting lm_sensors (probably from an ill-begotten RPM), you may want to symlink /usr/bin/sensors to the newer /usr/local/bin/sensors:

     su
     mv /usr/bin/sensors /usr/bin/sensors.orig
     ln -s /usr/local/bin/sensors /usr/bin/sensors

Cron

/etc/crontab:

Change crontab to run fetchmail and the atomic clock, if desired. For example:

     SHELL=/bin/bash
     PATH=/sbin:/bin:/usr/sbin:/usr/bin
     HOME=/
     MAILTO=root
     # Note: if you don't want mail for individual commands, send the output
     # somewhere (e.g. /dev/null).  The "MAILTO=" option doesn't work.  Also note
     # that you should use ">>file 2>&1" or something equivalent so that output
     # to stderr will not generate mail (unless that's what you want).
     # run-parts
     01 * * * * root run-parts /etc/cron.hourly
     02 4 * * * root run-parts /etc/cron.daily
     22 4 * * 0 root run-parts /etc/cron.weekly
     42 4 1 * * root run-parts /etc/cron.monthly
     # Retrieve mail from the ISP's POP server.  Log attempts to fetchmail log.
     00 * * * * root /etc/mail/fetchmail-poll >>/var/log/fetchmail 2>&1
     30 * * * * root /etc/mail/fetchmail-poll >>/var/log/fetchmail 2>&1
     # Once a day at 01:00, notify the users of all of the spam received in the
     # last 24 hours.
     00 1 * * * root /home/mailrobot/SpamNotify.pl --partition
     # Once a day at 00:30, remove all of the filtered mail messages that are
     # over 30 days old.
     30 0 * * * root /etc/mail/filterclean
     # Poll the atomic clock at CMU once a day to set our clock.  Do this at
     # 01:30 so that we're set for the day but don't interfere with switching to
     # or from DST.
     #
     # This has been replaced by ntp.
     #30 1 * * * root /usr/bin/atomclock-poll >> /var/log/fetchmail 2>&1
     # Twice a day, check the reference server used by NTP to see if it is OK.
     00 8,20 * * * root /etc/ntp/ntpcheck
     # Twice a month, have the UPS check its battery.
     00 10 1,15 * * root /etc/ups/batterytest fusion-reactor
     # Reindex any Web pages that have changed since yesterday.
     15 2 * * * root /var/www/reindex Domain1
     15 3 * * * root /usr/local/htDig/bin/htnotify -c \
                     /var/www/Domain1/db/htdig.conf
     45 2 * * * root /var/www/reindex Domain2
     45 3 * * * root /usr/local/htDig/bin/htnotify -c \
                     /var/www/Domain2/db/htdig.conf
     # Pull guide data for TiVo.  Note that there is a lot of congestion around
     # 8:00 PM so we try a little later on, at an odd time.  Once a day, build
     # and push slice files out to the local TiVos.
     05 0,4,8,12,16 * * * root /home/ewilde/TiVo/guidedata-pull \
                               >>/var/log/guidedata 2>&1
     35 20 * * * root /home/ewilde/TiVo/guidedata-pull >>/var/log/guidedata 2>&1
     15 4 * * * root /home/ewilde/TiVo/guidedata-push >>/var/log/guidedata 2>&1

Logrotate

Logrotate rotates log files weekly and saves a predetermined number of older logfiles, deleting logfiles that have expired.

/etc/logrotate.conf:

Basically, this configuration file should not be changed, except for global options. All additions to logrotate should be done in /etc/logrotate.d. Here is an example of a basic logrotate config file:

     # see "man logrotate" for details
     # rotate log files weekly
     weekly
     # keep 4 weeks worth of backlogs
     rotate 4
     # create new (empty) log files after rotating old ones
     create
     # uncomment this if you want your log files compressed
     #compress
     # RPM packages drop log rotation information into this directory
     include /etc/logrotate.d
     # no packages own lastlog or wtmp -- we'll rotate them here
     /var/log/lastlog {
         monthly
         rotate 1
     }
     /var/log/wtmp {
         monthly
         create 0664 root utmp
         rotate 1
     }
     # system-specific logs may also be configured here (although, we put them
     # in /etc/logrotate.d).

/etc/logrotate.d/xxx:

You should use individual files, in the same manner that the RPMs do, to describe rotation information for your system-specific log files, instead of monkeying with the top-level logrotate config file. You can create as many as you wish in the directory "/etc/logrotate.d". What follows are some examples of common ones.

/etc/logrotate.d/clamav:

     /var/log/clamav/*.log {
         missingok
         notifempty
     }

/etc/logrotate.d/fetchmail:

     /var/log/fetchmail {
         missingok
         notifempty
     }

/etc/logrotate.d/guidedata:

     /var/log/guidedata {
         missingok
         notifempty
     }
     /var/log/tivo.log {
         missingok
         notifempty
         create 0666 root root
         copytruncate
     }

/etc/logrotate.d/mailrobot:

     /var/log/mailrobot {
         missingok
         notifempty
     }

/etc/logrotate.d/narc:

     /var/log/firewall {
         missingok
         notifempty
     }

/etc/logrotate.d/robot:

     /var/log/robotdb {
         missingok
         notifempty
     }

/etc/logrotate.d/samba:

It appears that on some of the older installations of Samba the installed logrotate file is incorrect (some log files are missing). The incorrect file reads:

     /var/log/samba/*.log {

The corrected line should be:

     /var/log/samba/log.nmbd /var/log/samba/log.smbd /var/log/samba/*.log {

Without this correction, the log will not be rotated properly (it took mine two years to fill up).

However, newer versions of Samba have corrected this problem by getting rid of the log.nmbd and log.smbd files. So, if your version of Samba doesn't use these files, there's nothing to change.

/etc/logrotate.d/sendmailfilter:

     /var/log/sendmailfilter {
         missingok
         notifempty
     }

/etc/logrotate.d/ups:

This file rotates all of the logs associated with the UPS monitoring package NUT. Here are three examples of individual UPS logfiles being rotated:

     /var/log/upslog.Mast {
         notifempty
         missingok
         create 0664 root ups
         copytruncate
         postrotate
             echo HEADER: Mast >/var/log/upslog.Mast
         endscript
     }
     /var/log/upslog.Nova {
         notifempty
         missingok
         create 0664 root ups
         copytruncate
         postrotate
             echo HEADER: Nova >/var/log/upslog.Nova
         endscript
     }
     /var/log/upslog.Tokamak {
         notifempty
         missingok
         create 0664 root ups
         copytruncate
         postrotate
             echo HEADER: Tokamak >/var/log/upslog.Tokamak
         endscript
     }

/etc/logrotate.d/yum:

If either the automatic software update daemon yum-updatesd is used or if yum is invoked manually, it will write a log entry for each package that it installs, updates or deletes to its logfile. Unfortunately, the log entries are only stamped with the month and day, not year, when they are written.

Having log entries that are not stamped with their year of occurrence is not normally a problem, except that the frequency of software updates is often quite low and the log entries are so small that the logfile grows very slowly. In an effort not to create too many tiny logfiles, the author of the yum module in logrotate.d set a minimum size for rotation.

The minimum size for file rotation probably seemed like a good idea at the time but one must consider that a whole year's worth of updates could fit into a single logfile. Thus, in certain circumstances, part or all of a second year's worth of software updates could be added to the end of the logfile.

Normally, this wouldn't be a problem if only humans were reading the logfile. But, if an automatic log reading program such as logwatch is employed, the lack of years in the log entry timestamps, coupled with the fact that the logfile can roll over to more than a year's worth of data, means that, once a year elapses, logwatch can begin to generate reports about last year's software updates.

The typical rotate module, installed along with yum, reads:

     /var/log/yum.log {
       missingok
       notifempty
       size 30k
       create 0600 root root
     }

We prefer an altered module that reads:

     /var/log/yum.log {
       missingok
       notifempty
       yearly
       rotate 100
       create 0600 root root
     }

This will rotate the logfile yearly, unless it is empty. The chances that it will grow too big (e.g. a couple of megabytes) in a year are slim and none. Of course, what we'd really like to do is rotate it yearly or if its size is greater than some limit but you don't get that choice with logrotate.
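
One hedge: newer releases of logrotate do add a maxsize directive, which rotates the logfile when either the interval elapses or the file exceeds the given size. If your logrotate is recent enough to support it, a module like the following (the 1M limit is an arbitrary choice) gets the best of both:

```
/var/log/yum.log {
  missingok
  notifempty
  yearly
  maxsize 1M
  rotate 100
  create 0600 root root
}
```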

Logwatch

Logwatch scans log files (usually on a daily basis) and reports on any unusual (according to it) activity that it finds. Later versions of the OS come with Logwatch already installed. Or, you can download the latest source from http://www.logwatch.org/. Place the tar file in an appropriate directory and untar it:

     cd /rpm/logwatch
     tar -xvzf logwatch-7.3.6.tar.gz

Basically, the logwatch install consists of copying files to the install directory. The distribution contains a shell script that can assist with this:

     cd logwatch-7.3.6
     ./install_logwatch.sh

For our purposes, we assume you picked /usr/local/logwatch for the install directory (when asked by the script) and /tmp for the temporary directory.

Once you have run this script, logwatch will be installed in the directory that you pick (for preinstalled versions it is usually /usr/share/logwatch), plus /etc/logwatch. You should make any changes that you need to make in /etc/logwatch (the changes there will override the defaults found in the installed directory). The most common are described below. Also, a symlink from /etc/cron.daily to the logwatch script will be added, called 0logwatch. This will cause cron to run logwatch on a daily basis.

/etc/logwatch/conf/logwatch.conf:

The default logwatch configuration parameters are either found in /usr/local/ or /usr/share/ logwatch/default.conf/logwatch.conf and logwatch/dist.conf/logwatch.conf. Any changes that you make in /etc/logwatch/conf/logwatch.conf override these defaults. The easiest way to override the configuration parameters is to copy default.conf/logwatch.conf to /etc/logwatch/conf/logwatch.conf and then hack it directly.

     ############################################################################
     #
     # Local overrides to the default logwatch configuration.
     #
     # Defaults are in /usr/local/logwatch/default.conf/logwatch.conf and
     # /usr/local/logwatch/dist.conf/logwatch.conf.
     #
     # All of these, and the default, options can be overridden on the logwatch
     # command line.
     #
     ############################################################################
     # You can put comments anywhere you want to.  They are effective for the
     # rest of the line.
     # this is in the format of <name> = <value>.  Whitespace at the beginning
     # and end of the lines is removed.  Whitespace before and after the = sign
     # is removed.  Everything is case insensitive.
     # Yes = True  = On  = 1
     # No  = False = Off = 0
     # Default Log Directory
     # All log-files are assumed to be given relative to this directory.
     # This should be /var/log on just about all systems...
     LogDir = /var/log
     # Default person to mail reports to.  Can be a local account or a
     # complete email address.
     MailTo = root
     # The alternate mailer install possibly messes this up.
     mailer = "/usr/sbin/sendmail -t"
     # If set to 'Yes', the report will be sent to stdout instead of being
     # mailed to above person.
     Print = No
     # The default time range for the report...
     # The current choices are All, Today, Yesterday
     Range = yesterday
     # The default detail level for the report.
     # This can either be Low, Med, High or a number.
     # Low = 0
     # Med = 5
     # High = 10
     Detail = Low
     # The 'Service' option expects either the name of a filter (in
     # /etc/log.d/scripts/services/*) or 'All'.  This should be left as All for
     # most people.
     Service = All
     # By default we assume that all Unix systems have sendmail or a sendmail-like
     # system.  The mailer code Prints a header with "To:", "From:" and
     # "Subject:".  At this point you can change the mailer to any thing else that
     # can handle that output stream.
     mailer = "/usr/sbin/sendmail -t"
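After making changes, you can sanity-check the overrides by running logwatch by hand with the report sent to stdout instead of being mailed. A minimal sketch (the flags are standard logwatch options; the command is guarded so that it is a no-op on systems where logwatch isn't installed or isn't in the PATH):

```shell
# Run logwatch once, printing the report to stdout instead of mailing it,
# to confirm that the /etc/logwatch overrides are being picked up.
if command -v logwatch >/dev/null 2>&1; then
    logwatch --print --range today --detail Med
fi
```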

/etc/logwatch/conf/logfiles:
/etc/logwatch/conf/services:
/etc/logwatch/scripts/logfiles:
/etc/logwatch/scripts/services:

Any local modifications to logwatch to watch logfiles that it doesn't know about should be made in these directories. You can read the notes in the install directory and/or the source distribution directory to find out how to make up new logfile scanners.
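To give the flavor of what a new scanner involves (the file names and contents here are hypothetical, for a made-up daemon called "myapp"; see the HOWTO notes in the distribution for the authoritative format), you would define a logfile group and a service that uses it:

     /etc/logwatch/conf/logfiles/myapp.conf:

          LogFile = myapp.log
          Archive = myapp.log.*

     /etc/logwatch/conf/services/myapp.conf:

          Title = "My Application"
          LogFile = myapp

The actual filter is then an executable placed in /etc/logwatch/scripts/services/myapp, which reads the matching log lines on stdin and writes its report to stdout.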

IDE Bus Speed

If your kernel auto detects the IDE bus speed incorrectly, you can change this in lilo.

/etc/lilo.conf:

To change the IDE bus speed, add:

     append="idebus=66"

either in the general section for all kernels or in the section for only a specific kernel.

Also, notes about hard disk tuning can be found at:

     http://www.linuxnetmag.com/en/issue7/printm7hdparm1.html

Netport Express

Support for Intel Netport Express (XL or EL) network-attached printer controllers requires that the controllers be able to obtain their boot text from a sharepoint on a networked machine. Since the Samba server is usually up and running, it is a good place from which to supply the boot text, as the Netport Express may request it at any time.

/usr/lib/netport:

This directory must be created and then all of the Netport boot text files copied into it.

First, it should be given permissions such that the whole world can read its contents. For example:

     drwxrwxr-x    2 root     root         4096 Feb 14 09:23 /usr/lib/netport

All of the files placed therein should also be given read permissions for the whole world. For example:

     -rwxrwxr-x    1 root     root       489026 Dec 18  1998 en.npx

The files needed to support RBL are all those that end with the ".npx" extension. The Netport Express Manager will try to copy the files to the shared directory when you select it for RBL. If it cannot do so, you will have to copy them manually. Here is a list of what I copied (you may find more):

     en.npx  enl.npx  tn.npx  tnl.npx

/etc/samba/smb.conf:

Samba must be configured to create a sharepoint for the Netport Express RBL directory. Add the following section to your /etc/samba/smb.conf file at the end in the share definitions area:

     ; Sharepoint where Intel Netport Express print servers can get their RBL
     ; boot text files from.
     [Netport]
         comment = Netport RBL Directory
         path = /usr/lib/netport
         public = yes
         guest ok = yes
         browseable = yes

Configure the Netport Express RBL page to point the primary or alternate RBL path to the sharepoint created (you may have to recycle the Samba service to make the sharepoint visible). The login user can be "guest".

Note that this all works because the sharepoint is created with "guest ok = yes" and the guest user is mapped to some userid (probably "nobody") that is given access to the files in the Netport directory by the permissions "-rwxrwxr-x".

If you wish to use some other scheme to enhance security, you can do so but be advised that "guest ok = yes" must always be used, since the Netport Express cannot supply passwords. Probably the best method to use is to set the guest userid via the guest account parameter in the "netport" sharepoint section of /etc/samba/smb.conf. For example:

     guest account = netport

If you do this, you'll need to create a userid (e.g. "netport") that should have no password (i.e. so nobody can actually login with it) and that has permissions to read the files in the "netport" directory.

PowerChute

The PowerChute initialization file is a complicated file that should be set up by the PowerChute install script /usr/lib/powerchute/Config.sh. However, if you just want to change the serial port that the UPS is connected to, you need only edit the one section in the init file that describes which port to use.

/usr/lib/powerchute/powerchute.ini:

Set the serial port to something else (choices are ttyS0 and ttyS1).

     [ Ups ]
       SignallingType = smart
       PortName = /dev/ttyS0
       AutoUpsRebootEnabled = Yes
       AllowedPortNames = /dev/ttyS0,/dev/ttyS1
       CableType = Normal

Berkeley DB

The Berkeley DB comes preconfigured on many Linux systems. It will be in an RPM called DBn, where 'n' is a number like 3, 4 or 5. If it isn't there already, get a copy from SleepyCat and install it.

To get the Perl support for Berkeley DB, you need DB_File. To install DB_File, execute the following command, while running as root:

     perl -MCPAN -e 'install DB_File'
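Once the install completes, you can quickly confirm that Perl can actually load the module. A one-liner sketch (it prints the module version on success, or a notice if the module, or perl itself, is missing):

```shell
# Try to load DB_File and print its version; fall back to a notice if the
# module (or perl) isn't available.
perl -MDB_File -e 'print "DB_File $DB_File::VERSION\n"' 2>/dev/null \
    || echo "DB_File not installed"
```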

No-IP

The No-IP notification program should be downloaded from the No-IP Web site and compiled, according to their instructions. The program should be placed somewhere that it can be executed by the PPP up script. A convenient spot is the /etc/ppp directory, with all of the other PPP stuff.

If you wish to set up No-IP to redirect a domain name to a port other than 80, you can do a two-step redirect, as follows:

  1. Define joeblow-webserv.zapto.org as a DNS host and allow wildcards. This is the host whose IP address is updated by the No-IP program.
  2. Define joeblow.zapto.org as a Web server redirect with a port number and possibly initial Web page: joeblow.zapto.org pointing to joeblow-webserv.zapto.org:8180/welcome.html

When you surf over to joeblow.zapto.org, you'll end up at port 8180 of the Web server that is defined by joeblow-webserv.zapto.org. You may also use joeblow-webserv.zapto.org with a port number and starting Web page directly.

Dynamically Addressed Web Servers

To use dynamically addressed Web servers, a pointer to the server must be set up on each permanent Web server that links to the dynamically addressed Web server. This is done through a script that determines the externally visible IP address and uses this information to modify a file (either a link file which can be used by the LoadDynamic.cgi script or a home page HTML file that references the dynamically addressed Web server). Whichever kind of file is used, the PropagateIP script (as it is called) then FTPs the file to the appropriate spot on a statically addressed Web server.

A good spot to call the PropagateIP script from is the ppp ip-up or ip-up.local script. This way, the links to the dynamically addressed Web server will be set every time the ppp connection is brought up and its external IP address changes. Another possibility is to add a line to the cron table to cause the PropagateIP script to be invoked every half hour. This can be done for those servers that reside behind an edge router which handles all of the work involved with the Internet connection and which obscures any changes in the external IP address. It also protects against the case where the PPP link comes up and the FTPs fail. If both methods are employed, it is virtually certain that the files will be FTPed to the permanent Web servers, one way or another.

Note that, to use this feature, you must have a permanently available Web site somewhere (e.g. your own domain on a site hosting company's servers or one of those small user domains that many ISPs offer). It is on this site that the links to the dynamically addressed Web server are set up.

/etc/dyndns/PropagateIP:

A script that is invoked when the dynamic WAN IP address is assigned (e.g. from PPP's ip-up or ip-up.local script or periodically by cron to handle lease renewals by an edge router).

     #!/bin/sh
     #
     # Shell script that can either be called from PPP's ip-up or ip-up.local
     # script, or executed at regular intervals by cron, to set up all of the
     # links to any dynamically addressed Web servers on this machine.
     #
     # Depending on what kind of Internet connection this machine has, this
     # script is normally called from /etc/ppp/ip-up.local each time the PPP
     # link is brought up, or run from a line in crontab at regular intervals
     # (e.g. every couple of hours) to send the machine's assigned WAN IP
     # address to all of the external sites that need to know it.  In either
     # case this script is called with no parameters.  For example, if it is
     # being called from /etc/ppp/ip-up.local, it is called like this:
     #
     #      /etc/dyndns/PropagateIP
     #
     # On the other hand, if it is being invoked at regular intervals, the
     # crontab entry should look something like this:
     #
     #      19 0,2,4,6,8,10,12,14,16,18,20,22 * * * root /etc/dyndns/PropagateIP
     #
     # Or, if you have a version of cron that allows steps after ranges, you
     # can try this:
     #
     #      19 0-23/2 * * * root /etc/dyndns/PropagateIP
     #
     # Note that this script may be called on its own with a parameter of "force"
     # to force new copies of all the dynamic Web pages to the appropriate
     # servers, if said pages are changed.  Or, it may be called with a
     # parameter of "dump" to cause the dynamic Web pages to be dumped to
     # stdout for debugging porpoises.
     #
     # This script may also be called with a parameter of "noip" to force the
     # assigned WAN IP address to be sent to the dynamic DNS site.  No attempt
     # is made to push any copies of the dynamic Web pages to the appropriate
     # servers.  Typically, this would be invoked by cron every 15 days to
     # prevent the DNS entry from expiring.  For example, this crontab entry
     # might be used:
     #
     #      30 10 10,25 * * root /etc/dyndns/PropagateIP noip
     #
     # And, it may also be called with a parameter of "test" and an IP address
     # to test that everything is working correctly.  The IP address given will
     # be used instead of the assigned WAN IP address.
     #
     # Note that this script uses the WANCONNECTION setting in
     # /etc/sysconfig/clustering to determine whether anything should actually
     # be done or not.  If this machine does not connect directly to the Internet
     # via an edge router, ADSL or Diald, nothing is done.  These are the only
     # type of connections that need to have their WAN address broadcast so there
     # is no point in doing anything unless the system connects this way.  If no
     # setting is found, the default is to assume an ADSL connection.
     #
     # Finally, prior to using this script, if you plan on using the no-ip
     # notification feature, you'll need to install the noip program and
     # no-ip.conf in the directory /etc/dyndns.  If this directory doesn't exist,
     # you'll need to create it, then populate it like this:
     #
     #      su
     #      mkdir /etc/dyndns
     #      chown root:root /etc/dyndns
     #      chmod u=rwx,go=rx /etc/dyndns
     #      cp ..../noip /etc/dyndns
     #      cp ..../no-ip.conf /etc/dyndns
     #      chown root:root /etc/dyndns/noip /etc/dyndns/no-ip.conf
     #      chmod u=rwx,go=rx /etc/dyndns/noip
     #      chmod u=rw,go= /etc/dyndns/no-ip.conf
     #
     # Once you've installed the noip program and its config file, you will need
     # to configure it by editing the no-ip.conf file as required.
     #
     #
     # Dynamic page type.  This will determine what type of Web page we build
     # for the appropriate remote Web servers.  We can either build a clone of
     # the home page of each of our dynamic Web servers, or we can just build a
     # redirect page which redirects the user straight to the dynamic Web server.
     #
     # Pick either "clone" or "redirect".
     #
     DynPageType="redirect"
     #
     # Check whether the statically addressed Web server is up and connected.
     #
     IsConnected()
     {
     #
     # Ping the Web server and see if we get a response.  If so, it's up.
     #
     if ping -c1 -w120 $1 &> /dev/null; then
         return 0  #Success
     else
         return 1  #Failure
     fi
     }
     #
     # Check to see whether the WAN IP address is new.  When the lease is
     # renewed by the ISP and the edge router or when a new PPP connection is
     # brought up, the IP address doesn't always change.  If it doesn't change,
     # there's no sense in doing FTPs for no reason.
     #
     IsIPNew()
     {
     #
     # Put our current IP address into the current file so that we can compare
     # it and it will be there next time around.
     #
     echo $1 >/etc/dyndns/current_ip
     #
     # Check to see if the current IP address is the same as the previous one.
     #
     if [ -f /etc/dyndns/previous_ip ]; then
         if /usr/bin/diff /etc/dyndns/current_ip /etc/dyndns/previous_ip \
                 >/dev/null; then
             return 1  #The same
         else
             return 0  #New
         fi
     else
         return 0  #New
     fi
     }
     #
     # Make a Web page template and then substitute the dynamic addresses within
     # the template to make a dynamic Web page.
     #
     MakeWebPage()
     {
     /var/www/Scripts/HTMLPreProc.pl --intact \
         /var/www/$1/html/$2 /var/www/$1/html/$1.tplt
     sed "s/::::DynURL::::/$IPAddr:$3/" /var/www/$1/html/$1.tplt | \
         sed "s/::::DynIP::::/$IPAddr/" >/etc/dyndns/web_page
     }
     #
     # Substitute the dynamic address within a redirect page.
     #
     MakeRedirectPage()
     {
     echo "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 Transitional//EN\">" \
         >/etc/dyndns/web_page
     echo "<html>" >>/etc/dyndns/web_page
     echo "<head>" >>/etc/dyndns/web_page
     echo "<title>Redirect Page</title>" >>/etc/dyndns/web_page
     echo "<meta http-equiv=\"REFRESH\" content=\"0;url=http://${IPAddr}:${1}/\">" \
         >>/etc/dyndns/web_page
     echo "</head>" >>/etc/dyndns/web_page
     echo "<body>" >>/etc/dyndns/web_page
     echo "</body>" >>/etc/dyndns/web_page
     echo "</html>" >>/etc/dyndns/web_page
     }
     #
     # Source the clustering configuration.  Default to ADSL, if there's no
     # WAN connection information.
     #
     # Note that, even though this script can be used to set up non-ADSL
     # connections, all of the other scripts that use the clustering
     # configuration pick ADSL as the default so we also do so here, for
     # consistency's sake.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         WANCONNECTION=ADSL
     fi
     #
     # If this machine is the primary server in the cluster and it is connecting
     # to the Internet through an external, gateway/edge router, the user will
     # have specified a tuple consisting of the address of a dedicated network
     # device that connects to the gateway/edge router, an IP address for that
     # local network device, and an IP address for the gateway/edge router.
     # Typically, the gateway/edge router will be an ISP's router (such as an
     # EVDO or FIOS router) that is set to bridge the packets sent to it on one
     # of its ports, to the WAN.
     #
     DEVICE=`echo $WANCONNECTION | grep -e "eth[0-9]\+," -o`
     if [ -n "$DEVICE" ]; then
         DEVICE=${DEVICE%,}
         LOCALADDR=`echo $WANCONNECTION | grep -e ",[^,]\+," -o`
         LOCALADDR=${LOCALADDR#,}
         LOCALADDR=${LOCALADDR%,}
         WANADDR=`echo $WANCONNECTION | grep -e ",[^,]\+\$" -o`
         WANADDR=${WANADDR#,}
     else
         DEVICE=""
         LOCALADDR=""
         WANADDR=$WANCONNECTION
     fi
     #
     # If we're not executing the test or force command, and this system
     # doesn't connect to the Internet via an edge router or PPP-type
     # connection, we're all done.
     #
     if [ "$1" != "test" ] && [ "$1" != "dump" ] && [ -z "$DEVICE" ] \
             && [ x"$WANCONNECTION" != xADSL ] \
             && [ x"$WANCONNECTION" != xDiald ]; then
         exit 0
     fi
     #
     # Get the remote IP address for this machine.
     #
     # If the caller wants to do a test using a particular IP address that
     # they supply, we'll let them do that now.
     #
     # Otherwise, we either get the external IP address of the PPP
     # connection, if this is a PPP-type connection, or we call the WANIP
     # Perl script to get this machine's WAN IP address from the perspective
     # of remote servers on the Internet, if this machine is connected
     # through an edge router.
     #
     if [ "$1" = "test" ]; then
         IPAddr=$2
     else
         if [ -z "$DEVICE" ]; then
             IPAddr=`/sbin/ifconfig ppp0 | grep "inet addr" | \
                 sed "s/ *inet addr://;s/ P-t-P:.*//"`
         else
             IPAddr=`/etc/dyndns/WANIP.pl`
         fi
     fi
     #
     # If the caller is just trying to force new copies of the dynamic Web
     # pages up to the appropriate servers or just wants to dump the dynamic
     # Web pages for debugging porpoises, do that and exit.
     #
     if [ "$1" = "force" ] || [ "$1" = "dump" ]; then
         #
         # Create either an index page or a redirect page, with our IP address
         # in it.  Either send it to our Web host or dump it on stdout.
         #
         if [ x"$DynPageType" = xclone ]; then
             MakeWebPage xyz123 welcome.html 8180
         else
             MakeRedirectPage 8180
         fi
         if [ "$1" = "dump" ]; then
             cat /etc/dyndns/web_page
         else
             #
             # Note that FTP doesn't set any return code if the connection
             # times out.  Thus, we can't detect any errors by testing the
             # return code.  Instead, we capture the output and look for the
             # error string.
             #
             FTPResp=`echo -e "user xyz123 letmein\\nput \
                 /etc/dyndns/web_page public_html/welcome.html" | \
                 ftp -n members.myisp.net 2>&1`
             echo $FTPResp | grep -q "timed out"
             FTPVal=$?
             if [ $FTPVal = 0 ]; then
                 echo "Looks like the FTP to myisp didn't work so good"
             fi
         fi
         exit 0
     fi
     #
     # If the caller just wants to notify no-ip of the new address, do that
     # and exit.
     #
     if [ "$1" = "noip" ]; then
         if [ -x /etc/dyndns/noip ]; then
             /etc/dyndns/noip -i $IPAddr -c /etc/dyndns/no-ip.conf
         fi
         exit 0
     fi
     #
     # If the external IP address isn't new, we can get out now.
     #
     if (! IsIPNew $IPAddr); then
         exit 0
     fi
     #
     # FTP either the welcome page or a redirect page, with our IP address
     # in it, to xyz123 at myisp.
     #
     FTPMY=""
     if (IsConnected members.myisp.net); then
         if [ x"$DynPageType" = xclone ]; then
             MakeWebPage xyz123 welcome.html 8180
         else
             MakeRedirectPage 8180
         fi
         FTPResp=`echo -e "user xyz123 letmein\\nput \
             /etc/dyndns/web_page public_html/welcome.html" | \
             ftp -n members.myisp.net 2>&1`
         echo $FTPResp | grep -q "timed out"
         FTPVal=$?
         if [ $FTPVal = 0 ]; then
             FTPMY="FTP"
         fi
     else
         FTPMY="not connected"
     fi
     #
     # FTP the IP address to Domain1.  This way, we can always find it, even
     # if no-ip is down.
     #
     FTPDM1=""
     if (IsConnected ftp.domain1.com); then
         echo $IPAddr:8280 >/etc/dyndns/web_ip
         FTPResp=`echo -e "user xxxxx yyyyyy\\nput \
             /etc/dyndns/web_ip www/mysys.dyn" | \
             ftp -n ftp.domain1.com 2>&1`
         echo $FTPResp | grep -q "timed out"
         FTPVal=$?
         if [ $FTPVal = 0 ]; then
             FTPDM1="FTP"
         fi
     else
         FTPDM1="not connected"
     fi
     #
     # FTP the IP address to Domain2.  This way, we can always find it, even
     # if no-ip is down.
     #
     FTPDM2=""
     if (IsConnected ftp.domain2.com); then
         echo $IPAddr:8380 >/etc/dyndns/web_ip
         FTPResp=`echo -e "user xxxxx yyyyyy\\nput \
             /etc/dyndns/web_ip www/mysys.dyn" | \
             ftp -n ftp.domain2.com 2>&1`
         echo $FTPResp | grep -q "timed out"
         FTPVal=$?
         if [ $FTPVal = 0 ]; then
             FTPDM2="FTP"
         fi
     else
         FTPDM2="not connected"
     fi
     #
     # Notify no-ip of the new address.
     #
     if [ -x /etc/dyndns/noip ]; then
         if [ "$1" = "test" ]; then
             /etc/dyndns/noip -l -i $IPAddr -c /etc/dyndns/no-ip.conf
         else
             /etc/dyndns/noip -i $IPAddr -c /etc/dyndns/no-ip.conf
         fi
     fi
     #
     # Check to see if all of the FTPs went well.
     #
     if [ x"$FTPMY" = x ] && [ x"$FTPDM1" = x ] && [ x"$FTPDM2" = x ]; then
         if [ -f /etc/dyndns/current_ip ]; then
             cp -f /etc/dyndns/current_ip /etc/dyndns/previous_ip
         fi
     else
         #
         # Construct a status message.
         #
         FTPStr=""
         if [ x"$FTPMY" != x ]; then
             FTPStr="${FTPStr}FTP of home page to members.myisp.net failed \
                 (${FTPMY})"$'\n'
         fi
         if [ x"$FTPDM1" != x ]; then
             FTPStr="${FTPStr}Push of IP address to domain1.com failed \
                 (${FTPDM1})"$'\n'
         fi
         if [ x"$FTPDM2" != x ]; then
             FTPStr="${FTPStr}Push of IP address to domain2.com failed \
                 (${FTPDM2})"$'\n'
         fi
         HostStr=`hostname -s`
         #
         # Email the message to root-ski.
         #
         /bin/mail -s "Problem with PropagateIP" root <<-ENDMSG
             The PropagateIP script, running on $HostStr, seems to have
             encountered a problem with one or more of the servers that it
             contacts.  Here is a synopsis:
             ${FTPStr}
             To rerun the script, after you correct any errors, you should
             delete the /etc/dyndns/previous_ip file (if it exists) and then
             run /etc/dyndns/PropagateIP.
             ENDMSG
         exit 1
     fi
     exit 0

Note: be sure to use tabs on the beginning of the lines between <<-ENDMSG and ENDMSG (including ENDMSG itself).
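One idiom from the script above is worth calling out on its own: ftp exits with status 0 even when a transfer times out, so the script captures ftp's output and greps it for the error string instead of testing the return code. A self-contained sketch of that check (the sample transcripts here are made up):

```shell
# Detect an FTP timeout by searching the captured transcript, since ftp's
# exit status is useless for this.
ftp_timed_out() {
    echo "$1" | grep -q "timed out"
}

good="226 Transfer complete."
bad="ftp: connect: Connection timed out"

if ftp_timed_out "$good"; then echo "unexpected"; fi
if ftp_timed_out "$bad"; then echo "FTP transfer failed"; fi
```

Run as-is, this prints only "FTP transfer failed" for the second transcript.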

/etc/ppp/no-ip.conf:

If you are using no-ip.com to point to your Web site, the config file for the noip program (invoked in the PropagateIP script) should look something like this.

     LOGIN    = user@host.com
     PASSWORD = mypass
     GROUP    = ;
     HOSTNAME = user-webserv
     DOMAIN   = zapto.org
     DAEMON   = N
     PROXY    = Y
     INTERVAL = 10
     NAT      = Y
     DEVICE   = unused

/etc/ppp/ip-up or ip-up.local:

You'll need to hack this script if you use PPP or some other non-permanent link to run the PropagateIP script (above). Add the following lines near the end of the file:

           .
           .
     #
     # Set up all of the dynamically addressed Web server links and
     # advertise our WAN IP address.
     #
     /etc/dyndns/PropagateIP

/etc/crontab:

If your system is situated behind an edge router or you just want to ensure that your external address is always up to date (even if the ip-up call fails), you can add something like this to crontab:

     #
     # Every couple of hours, propagate this system's WAN IP address to the
     # external servers that need to know what it is.  Note that this only does
     # something if this system is directly connected to the Internet via an
     # edge router or PPP link.  And, the propagation of the IP address is only
     # carried out if the WAN IP address has changed since the last time.
     # Consequently, it is harmless to run this command every couple of hours,
     # and on systems that are not the primary system in a cluster.
     #
     19 0,2,4,6,8,10,12,14,16,18,20,22 * * * root /etc/dyndns/PropagateIP

HtDig

HtDig is a Web site indexer that works by crawling all of the pages linked from a starting page. It builds an index that can be searched via a search tool, which can be incorporated into a search Web page.

After htDig has been installed, the following setups might be useful.

.../htDig/conf/htdig.conf:

HtDig wants its configuration file to be in a subdirectory under the directory where the binaries and other parts of the product are found. This restriction means that, if you will be indexing multiple Web sites, you may need to put multiple configs in this one directory. I prefer to put the index and config for each Web site in a "db" directory under the subdirectory where the actual Web site is stored. In this case, a symlink should be made from the ".../htDig/conf/" directory to the "db" directory. For example:

     ln -s /var/www/MyWebSite/db/htdig.conf .../htDig/conf/MyWebSite.conf
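With the symlink in place, the index itself is built (and later refreshed) by running the ht://Dig tools against the per-site config. A guarded sketch, using the example paths from above (htdig crawls and indexes the site; htmerge builds the searchable databases from the results):

```shell
# Rebuild the MyWebSite index from scratch (-i) using its own config file,
# then merge the results into the searchable databases.  This is a no-op
# on systems where ht://Dig isn't installed.
if command -v htdig >/dev/null 2>&1; then
    htdig -i -c /var/www/MyWebSite/db/htdig.conf
    htmerge -c /var/www/MyWebSite/db/htdig.conf
fi
```

A cron entry that reruns these two commands nightly keeps the index current.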

Here is the config file for a sample Web site:

     #
     # BSM Development config file for ht://Dig.
     #
     # This configuration file is used by all the programs that make up ht://Dig.
     # Please refer to the attribute reference manual for more details on what
     # can be put into this file.  (http://www.htdig.org/confindex.html)
     # Note that most attributes have very reasonable default values so you
     # really only have to add attributes here if you want to change the defaults.
     #
     #
     # Specify where the database files need to go.  Make sure that there is
     # plenty of free disk space available for the databases.  They can get
     # pretty big.
     #
     database_dir: /var/www/MySite2/db
     #
     # This specifies the URL where the robot (htdig) will start.  You can specify
     # multiple URLs here.  Just separate them by some whitespace.
     # The example here will cause the ht://Dig homepage and related pages to be
     # indexed.
     # You could also index all the URLs in a file like so:
     # start_url: `${common_dir}/start.url`
     #
     start_url: http://www.mywebsite.com/
     #
     # If the Web site has a local mirror, define that here so that references to
     # URLs can be resolved through accesses to the local file system.
     #
     local_urls:             http://www.mywebsite.com/=/var/www/MySite2/html/
     local_urls_only:        true
     local_default_doc:      index.html welcome.html
     #
     # This attribute limits the scope of the indexing process.  The default is to
     # set it to the same as the start_url above.  This way only pages that are on
     # the sites specified in the start_url attribute will be indexed and it will
     # reject any URLs that go outside of those sites.
     #
     # Keep in mind that the value for this attribute is just a list of string
     # patterns. As long as URLs contain at least one of the patterns it will be
     # seen as part of the scope of the index.
     #
     limit_urls_to: ${start_url}
     #
     # If there are particular pages that you definitely do NOT want to index, you
     # can use the exclude_urls attribute.  The value is a list of string
     # patterns.  If a URL matches any of the patterns, it will NOT be indexed.
     # This is useful to exclude things like virtual web trees or database
     # accesses.  By default, all CGI URLs will be excluded.  (Note that the
     # /cgi-bin/ convention may not work on your web server.  Check the path
     # prefix used on your web server.)
     #
     exclude_urls: /cgi-bin/ .cgi
     #
     # Since ht://Dig does not (and cannot) parse every document type, this
     # attribute is a list of strings (extensions) that will be ignored during
     # indexing. These are only checked at the end of a URL, whereas
     # exclude_url patterns are matched anywhere.
     #
     bad_extensions: .wav .gz .z .sit .au .zip .tar .hqx .exe .com .gif .jpg \
          .jpeg .aiff .class .map .ram .tgz .bin .rpm .mpg .mov .avi .css .pdf
     #
     # The string htdig will send in every request to identify the robot.  Change
     # this to your email address.
     #
     maintainer: admin@mywebsite.com
     #
     # The excerpts that are displayed in long results rely on stored information
     # in the index databases.  The compiled default only stores 512 characters of
     # text from each document (this excludes any HTML markup...)  If you plan on
     # using the excerpts you probably want to make this larger.  The only concern
     # here is that more disk space is going to be needed to store the additional
     # information.  Since disk space is cheap (! :-)) you might want to set this
     # to a value so that a large percentage of the documents that you are going
     # to be indexing are stored completely in the database.  At SDSU we found
     # that by setting this value to about 50k the index would get 97% of all
     # documents completely and only 3% was cut off at 50k.  You probably want to
     # experiment with this value.
     # Note that if you want to set this value low, you probably want to set the
     # excerpt_show_top attribute to false so that the top excerpt_length
     # characters of the document are always shown.
     #
     max_head_length: 10000
     #
     # To limit network connections, ht://Dig will only pull up to a certain limit
     # of bytes. This prevents the indexing from dying because the server keeps
     # sending information. However, several FAQs happen because people have files
     # bigger than the default limit of 100KB. This sets the default a bit higher.
     # (see <http://www.htdig.org/FAQ.html> for more)
     #
     max_doc_size: 200000
     #
     # Most people expect some sort of excerpt in results. By default, if the
     # search words aren't found in context in the stored excerpt, htsearch shows
     # the text defined in the no_excerpt_text attribute:
     # (None of the search words were found in the top of this document.)
     # This attribute instead will show the top of the excerpt.
     #
     no_excerpt_show_top: true
     #
     # Depending on your needs, you might want to enable some of the fuzzy search
     # algorithms.  There are several to choose from and you can use them in any
     # combination you feel comfortable with.  Each algorithm will get a weight
     # assigned to it so that in combinations of algorithms, certain algorithms
     # get preference over others.  Note that the weights only affect the ranking
     # of the results, not the actual searching.
     # The available algorithms are:
     #     accents
     #     exact
     #     endings
     #     metaphone
     #     prefix
     #     soundex
     #     substring
     #     synonyms
     # By default only the "exact" algorithm is used with weight 1.
     # Note that if you are going to use the endings, metaphone, soundex, accents,
     # or synonyms algorithms, you will need to run htfuzzy to generate
     # the databases they use.
     #
     search_algorithm: exact:1 synonyms:0.5 endings:0.1
     #
     # The following are the templates used in the builtin search results
     # The default is to use compiled versions of these files, which produces
     # slightly faster results. However, uncommenting these lines makes it
     # very easy to change the format of search results.
     # See <http://www.htdig.org/hts_templates.html> for more details.
     #
     # template_map: Long long ${common_dir}/long.html \
     #      Short short ${common_dir}/short.html
     # template_name: long
     #
     # The following are used to change the text for the page index.
     # The defaults are just boring text numbers.  These images spice
     # up the result pages quite a bit.  (Feel free to do whatever, though)
     #
     next_page_text: <img src="/icons/buttonr.gif" border="0" align="middle" \
         width="30" height="30" alt="next">
     no_next_page_text:
     prev_page_text: <img src="/icons/buttonl.gif" border="0" align="middle" \
         width="30" height="30" alt="prev">
     no_prev_page_text:
     page_number_text: '<img src="/icons/button1.gif" border="0" align="middle" \
         width="30" height="30" alt="1">' \
          '<img src="/icons/button2.gif" border="0" align="middle" width="30" \
              height="30" alt="2">' \
          '<img src="/icons/button3.gif" border="0" align="middle" width="30" \
              height="30" alt="3">' \
          '<img src="/icons/button4.gif" border="0" align="middle" width="30" \
              height="30" alt="4">' \
          '<img src="/icons/button5.gif" border="0" align="middle" width="30" \
              height="30" alt="5">' \
          '<img src="/icons/button6.gif" border="0" align="middle" width="30" \
              height="30" alt="6">' \
          '<img src="/icons/button7.gif" border="0" align="middle" width="30" \
              height="30" alt="7">' \
          '<img src="/icons/button8.gif" border="0" align="middle" width="30" \
              height="30" alt="8">' \
          '<img src="/icons/button9.gif" border="0" align="middle" width="30" \
              height="30" alt="9">' \
          '<img src="/icons/button10.gif" border="0" align="middle" width="30" \
              height="30" alt="10">'
     #
     # To make the current page stand out, we will put a border around the
     # image for that page.
     #
     no_page_number_text: '<img src="/icons/button1.gif" border="2" \
         align="middle" width="30" height="30" alt="1">' \
          '<img src="/icons/button2.gif" border="2" align="middle" width="30" \
              height="30" alt="2">' \
          '<img src="/icons/button3.gif" border="2" align="middle" width="30" \
              height="30" alt="3">' \
          '<img src="/icons/button4.gif" border="2" align="middle" width="30" \
              height="30" alt="4">' \
          '<img src="/icons/button5.gif" border="2" align="middle" width="30" \
              height="30" alt="5">' \
          '<img src="/icons/button6.gif" border="2" align="middle" width="30" \
              height="30" alt="6">' \
          '<img src="/icons/button7.gif" border="2" align="middle" width="30" \
              height="30" alt="7">' \
          '<img src="/icons/button8.gif" border="2" align="middle" width="30" \
              height="30" alt="8">' \
          '<img src="/icons/button9.gif" border="2" align="middle" width="30" \
              height="30" alt="9">' \
          '<img src="/icons/button10.gif" border="2" align="middle" width="30" \
              height="30" alt="10">'
     #
     # Templates used to display the search results.  The wrapper template shows
     # the actual search results.  The nothing found template is displayed if the
     # search fails.  The syntax error template is shown if the search has a
     # syntax error.
     #
     search_results_wrapper: ${database_dir}/SearchWrapper.html
     nothing_found_file: ${database_dir}/SearchNoMatch.html
     syntax_error_file: ${database_dir}/SearchSyntax.html
     # local variables:
     # mode: text
     # eval: (if (eq window-system 'x) (progn (setq font-lock-keywords (list
     #     '("^." . font-lock-keyword-face) '("^[a-zA-Z][^ :]+" .
     #     font-lock-function-name-face) '("[+$]:" . font-lock-comment-face) ))
     #     (font-lock-mode)))
     # end:

.../htDig/bin/rundig:

The shell script that is used to run the indexer (rundig) needs to be modified to point to the individual Web site configuration file. I use one copy of the script for each Web site. Make the following changes:

     DBDIR=/var/www/MyWebSite/db
     COMMONDIR=.../htDig/common
     BINDIR=.../htDig/bin

/var/www/reindex:

Since htDig works by actually pulling the pages off the Web site being indexed, it is only necessary to reindex the site when pages change. This script will take care of reindexing one or more sites. It can be run from cron, at regular intervals, once for each site to be indexed (e.g. "/var/www/reindex MySite1").

     #!/bin/sh
     #
     # Shell script (run by cron) to check whether any Web pages in this directory
     # have changed and reindex them for searching, if so.
     #
     # This script takes one argument, the name of the Web directory under
     # /var/www that is to be indexed (e.g. MyDir).
     #
     #
     # Check to see whether any pages in the Web directory tree that we are given
     # are newer than the last indexed date.
     #
     ArePagesNew()
     {
     #
     # See if the timestamp file exists.  If not, we are all done.  If so, we
     # must look at the Web directory tree.
     #
     if [ -f /var/www/$1/db/previous_index ]
     then
         #
         # Run a find command that traverses the Web directory tree, looking for
         # any HTML files that are newer.
         #
         /usr/bin/find /var/www/$1/html -name \*\.html -type f -newer \
             /var/www/$1/db/previous_index \
             -exec touch -f /var/www/$1/db/current_index \;
         #
         # Compare the current stamp file with the previous stamp file.  If their
         # times are different, there's been a change.
         #
         if [ /var/www/$1/db/current_index -nt /var/www/$1/db/previous_index ]
         then
             return 0  #New
         else
             return 1  #The same
         fi
     else
         touch -f /var/www/$1/db/current_index
         return 0  #New
     fi
     }
     #
     # If there is a reference directory, build an index page that points to all
     # of the reference material.  This material is loaded dynamically and is
     # never directly linked to.  Thus, the crawler will never find it unless
     # there is a pointer to it.  The page we create is secretly linked to by the
     # top level index of this directory so that the crawler can find and index
     # all of the reference pages.
     #
     LinkRefPages()
     {
     #
     # See if the reference directory exists.  If not, we are all done.  If so, we
     # must look at the directory tree for reference documents.  Currently they
     # are all documents that start with:
     #
     #      DocIdx_
     #      Inst_
     #      PR_
     #      Samp_
     #      Tech_
     #
     if ! test -d /var/www/$1/html/Reference; then return 0; fi
     #
     # Run a find command that traverses the reference directory, looking for any
     # HTML files that match the pattern.
     #
     /usr/bin/find /var/www/$1/html/Reference -name DocIdx_\*\.html -type f \
         -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
             >/var/www/$1/html/Reference/refindex_1.html \;
     /usr/bin/find /var/www/$1/html/Reference -name Inst_\*\.html -type f \
         -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
             >>/var/www/$1/html/Reference/refindex_1.html \;
     /usr/bin/find /var/www/$1/html/Reference -name PR_\*\.html -type f \
         -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
             >>/var/www/$1/html/Reference/refindex_1.html \;
     /usr/bin/find /var/www/$1/html/Reference -name Samp_\*\.html -type f \
         -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
             >>/var/www/$1/html/Reference/refindex_1.html \;
     /usr/bin/find /var/www/$1/html/Reference -name Tech_\*\.html -type f \
         -exec echo \<br\>\<a href=\{\}\>@\{\}@\</a\> \
             >>/var/www/$1/html/Reference/refindex_1.html \;
     #
     # If we didn't find any files, we're all done.
     #
     if ! test -s /var/www/$1/html/Reference/refindex_1.html; then return 0; fi
     #
     # Adjust the index entries to be human/robot readable.
     #
     sed "s/@\/var\/www\/$1\/html\/Reference\///" \
         /var/www/$1/html/Reference/refindex_1.html | sed "s/.html@//" \
         | sed "s/\/var\/www\/$1\/html\/Reference/./" \
         >/var/www/$1/html/Reference/refindex_2.html
     #
     # Start the file out with the requisite HTML.
     #
     echo \<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\"\> \
         >/var/www/$1/html/Reference/refindex.html
     echo \<html\>\<head\> >>/var/www/$1/html/Reference/refindex.html
     echo \<meta name=\"robots\" content=\"All\"\> \
         >>/var/www/$1/html/Reference/refindex.html
     echo \</head\>\<body\> >>/var/www/$1/html/Reference/refindex.html
     #
     # Include the index we generated.
     #
     cat /var/www/$1/html/Reference/refindex_2.html \
         >>/var/www/$1/html/Reference/refindex.html
     #
     # Finish off the HTML page.
     #
     echo \</body\>\</html\> >>/var/www/$1/html/Reference/refindex.html
     #
     # Clean up and make the index visible.
     #
     rm -f /var/www/$1/html/Reference/refindex_1.html \
         /var/www/$1/html/Reference/refindex_2.html
     chgrp webmin /var/www/$1/html/Reference/refindex.html
     #
     # See which Web server we need to send the index to.
     #
     WebServ=""
     if test "$1"x = MySite1x; then
         WebServ="members.myisp.net"
         WebUser="mrcool"
         WebPW="coolpass"
         WebDir="public_html/MyIndex"
     fi
     if test "$1"x = MySite2x; then
         WebServ="ftp ftp.mysite2.com"
         WebUser="mysite2"
         WebPW="letmein"
         WebDir="www/Reference"
     fi
     if test "$1"x = MySite3x; then
         WebServ="ftp ftp.mysite3.net"
         WebUser="joeschmo"
         WebPW="itsasecret"
         WebDir="www"
     fi
     if test "$WebServ"x = x; then return 0; fi
     #
     # Send the generated index to the static Web server.
     #
     #if IsConnected $WebServ; then
     #    echo -e "user $WebUser $WebPW\\nput \
     #    /var/www/$1/html/Reference/refindex.html $WebDir/refindex.html" \
     #    | ftp -n $WebServ >/dev/null
     #fi
     return 0
     }
     #
     # Check whether the statically addressed Web server is up and connected.
     #
     IsConnected()
     {
     #
     # Ping the Web server and see if we get a response.  If so, its up.
     #
     #if ping -c3 $1 2>&1 | grep "0% packet loss" >/dev/null
     if ping -c4 $1 >/dev/null
     then
         return 0  #Success
     else
         return 1  #Failure
     fi
     }
     #
     # Check if any pages are newer than the last index time and reindex, if so.
     #
     if (ArePagesNew $1); then
         LinkRefPages $1
         /var/www/$1/db/rundig
         touch -f /var/www/$1/db/previous_index
     fi
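
To run the script from cron at regular intervals, crontab entries along these lines (the run times and site names here are examples, not from the original setup) will reindex each site nightly:

     #
     # Reindex each site nightly, staggered so the runs don't overlap.
     #
     15 2 * * *  /var/www/reindex MySite1
     45 2 * * *  /var/www/reindex MySite2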

.../htDig/bin/htsearch:

A symbolic link to htsearch from the cgi-bin directory of each Web site that will be doing searches can be used instead of copying htsearch to each of the sites. This conserves space and allows htsearch to find the dictionary tables, etc. that it needs to run. To create the link:

     ln -s .../htDig/bin/htsearch /var/www/MyWebSite/cgi-bin/htsearch

Partition Editing

GNU parted works well for editing DOS and Linux partitions. You can download the source from http://ftp.gnu.org/gnu/parted/. To compile and install this program, you need the e2fsprogs-devel RPM (available from RPM Find) and the progsreiserfs library (for Reiser file system support), available from http://reiserfs.osdn.org.ua. Install the RPM first, then the Reiser support library, and then parted itself.

This program allows you to extend and manipulate file systems, particularly the Reiser file system (which is used by SuSE). If you do extend a file system, be sure to run fsck on it to repair any errors. In the case of the Reiser file system, you may have to run fsck.reiserfs directly on the new partition.

Direct Copying

If you have an existing system drive and you'd like to make a direct copy to a new drive, you can use dd. For example:

     dd if=/dev/hdc of=/dev/hdb bs=64M

This can be used to copy a small drive to a larger drive with identical partitioning as the small drive plus free (unused) space. Once this is done, you can add a new partition to take advantage of increased drive space.
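After the dd completes, it's worth confirming that the copy really is byte-for-byte identical before you start adding partitions. A minimal sketch (the helper name is ours, and cmp's -n option is GNU-specific) that compares only the first N bytes, so you needn't read the whole drive:

```shell
#!/bin/sh
#
# Compare the first N bytes of a source and a destination device (or file).
# Succeeds silently if they are identical.
# Usage: verify_copy SRC DST NBYTES
#
verify_copy()
{
    cmp -s -n "$3" "$1" "$2"
}
```

For example, "verify_copy /dev/hdc /dev/hdb 1048576" checks the first megabyte, which covers the boot sector and partition table.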

Note that for a typical 1TB drive, dd can take two or three hours to copy. If you'd like a faster alternative, get a copy of sg_dd from:

     http://www.torque.net/sg/#Utilities:%20sg_utils%20and%20sg3_utils

and install the following script (already installed as copydisks):

     #!/bin/bash
     #
     # Copy two disks, from /dev/hdc (master) to /dev/hdb (slave), using raw mode
     # for fast copying.  The "rev" option can be used to copy from /dev/hdb to
     # /dev/hdc.
     #
     # Used to ensure accurate copying and no screw-ups from typos, etc.  Be
     # warned that whatever disk is mounted on /dev/hdb is gone so exercise
     # caution.
     #
     SourceHD=/dev/hdc
     DestHD=/dev/hdb
     if test x"$1" == xrev; then
         echo Reversed!  Copying from /dev/hdb to /dev/hdc!
         SourceHD=/dev/hdb
         DestHD=/dev/hdc
     fi
     Sectors=`/sbin/fdisk -lu $SourceHD 2>&1 | grep total \
         | sed -r "s/^.+?total ([0-9]+) sectors$/\1/"`
     /usr/bin/raw /dev/raw/raw1 $SourceHD
     /usr/bin/raw /dev/raw/raw2 $DestHD
     /sbin/fdisk -lu $SourceHD 2>&1
     /sbin/fdisk -lu $DestHD 2>&1
     echo
     echo Will copy $Sectors sectors from $SourceHD to $DestHD
     read -p "Is this OK? [n|y] " PromptResp
     if test x"$PromptResp" != xy; then
         echo Responded \"$PromptResp\", exiting!
         exit 1
     fi
     echo
     echo Starting copy:
     echo /usr/local/bin/sg_dd bs=512 bpt=16384 count=$Sectors time=1 \
         if=/dev/raw/raw1 of=/dev/raw/raw2
     /usr/local/bin/sg_dd bs=512 bpt=16384 count=$Sectors time=1 \
         if=/dev/raw/raw1 of=/dev/raw/raw2

Resizing

A single disk system usually has two or more partitions on its only disk (at the very least, a swap partition and a system partition). Expanding the storage capacity of the video storage partition (or of the system partition, if the videos are stored along with the system files) is basically like expanding a single partition volume, except that there are a couple of extra steps before you get started, and you must do the appropriate type of copy for each partition, not just a single partition.

The extra steps are necessitated by the fact that the single disk system must have a bootable disk and the system also needs some swap space on the disk.

To illustrate the complete steps necessary to expand a single disk system, let's take the example of a system that has three partitions, one for /boot, one for swap space, and one for root. Furthermore, let's assume that the /boot partition is formatted using ext3 and the root partition is formatted using XFS (as is the case with many of the latest Linux installs).

As we note in the previous section, the duration of the copy operation that is used to expand a volume depends on the amount of data on it (duh) so you may wish to clean up all of the old stuff that you were thinking of deleting, before you proceed. On the other hand, if you could do that, you probably wouldn't need a bigger disk in the first place....

Again, we begin by getting ourselves a copy of Knoppix (the CD works just swell but later versions will only fit on a DVD):

     http://knopper.net/knoppix/index-en.html

And burning the ISO image onto a CD (or DVD, if you took that route).

Next, configure the single disk machine whose disk is to be expanded with both the old and the new disk attached, and a CD/DVD drive added, if necessary. In this example, we're presuming the original drive is on SATA0 and the new drive is on SATA1, which should map to /dev/sda and /dev/sdb respectively, when Knoppix is booted.

Put the Knoppix CD/DVD into the CD/DVD drive and boot it. Once Knoppix comes up (you're booting from a CD/DVD, remember, so be patient), open up a console window and become super user. To do this, simply type the "su" command. There's no password (how refreshing).

Before proceeding, you should check that the original and new drives are installed where you expect them to be. You can do this by typing:

     /sbin/fdisk -lu /dev/sda
     /sbin/fdisk -lu /dev/sdb

The first drive should show a valid partition table that looks something like this:

     Disk /dev/sda: 640.1 GB, 640135028736 bytes
     255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
     Units = sectors of 1 * 512 = 512 bytes
     Disk identifier: 0x63c6c105
     Device Boot      Start         End      Blocks   Id  System
     /dev/sda1   *          63      996029      497983+  83  Linux
     /dev/sda2          996030     4996214     2000092+  82  Linux swap / Solaris
     /dev/sda3         4996215  1250263728   622633756   83  Linux

Be sure that the original drive's size and partition information is correct, to ensure that it is mounted where you think it is. Also verify that the new drive has nothing on it (or has the expected layout, if you're reusing it). Since we're going to be wiping out all data on the new drive in a minute, it pays to make sure you've got the right target in your sights.

You can also check the various partitions to see what type of file system is on each of them (note that df only reports on mounted file systems; under Knoppix, where nothing is mounted yet, "blkid /dev/sda1" will print the file system type straight from the partition). In our example, we'd do:

     df -T /dev/sda1
     df -T /dev/sda3

We're expecting to see an ext3 file system on /dev/sda1 and an XFS file system on /dev/sda3.

If everything looks good, you can avoid all the trouble of setting up the boot sector with the boot loader by just copying the first few tracks of the source disk directly with dd:

     dd bs=512 count=63 if=/dev/sda of=/dev/sdb

This will also copy the partition table, which probably won't be correct. However, since we are expanding the disk size, usually the only required change to the copied partition table is that the last partition be expanded to fit the new disk. This can be done by deleting the last partition and adding a new partition that uses all of the available space:

     su
     /sbin/fdisk /dev/sdb
       d                         (Delete a partition)
       3                         (Partition 3)
       n                         (To create a new partition)
       p                         (As a primary partition)
       3                         (Partition 3)
       <cr>                      (Accept previous end cylinder + 1 as the start)
       <cr>                      (To accept the last cylinder as the end)
       w                         (Write the partition table to the disk)

Be sure that the appropriate partition is marked as the boot partition, if it hasn't already been marked that way.
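
In fdisk, the boot flag is toggled with the "a" command; a sketch, assuming partition 1 is the boot partition, as in our example:

     su
     /sbin/fdisk /dev/sdb
       a                         (Toggle the bootable flag)
       1                         (Partition 1, the boot partition)
       w                         (Write the partition table to the disk)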

Alternately, if you want or need to create the partition table from scratch, you should do so using fdisk like this:

     su
     /sbin/fdisk /dev/sdb
       n                         (To create a new partition)
       p                         (As a primary partition)
       1                         (Partition 1)
       <cr>                      (To accept the first cylinder as the start)
       62                        (To set cylinder 62 as the end)
       n                         (To create a new partition)
       p                         (As a primary partition)
       2                         (Partition 2)
       <cr>                      (Accept previous end cylinder + 1 as the start)
       311                       (To set cylinder 311 as the end)
       n                         (To create a new partition)
       p                         (As a primary partition)
       3                         (Partition 3)
       <cr>                      (Accept previous end cylinder + 1 as the start)
       <cr>                      (To accept the last cylinder as the end)
       t                         (To set the partition type)
       2                         (Partition 2)
       82                        (Partition type 82, swap space)
       w                         (Write the partition table to the disk)

On most older disks (i.e. less than or equal to 1TB), this creates a new partition layout that has approximately 500MB for the boot partition, 2GB for swap space, and the remaining disk space for the root partition. This layout has proven to work well during years of constant use.
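You can sanity-check those sizes with a little shell arithmetic; on an LBA-addressed disk, one fake "cylinder" is 255 heads x 63 sectors x 512 bytes:

```shell
#!/bin/sh
#
# Back-of-the-envelope check of the partition sizes created above.
#
cyl=$((255 * 63 * 512))                   # bytes per fake cylinder (8225280)
echo "boot: $((62 * cyl)) bytes"          # cylinders 1-62, about 510MB
echo "swap: $(((311 - 62) * cyl)) bytes"  # cylinders 63-311, about 2GB
```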

After you've created the new partitions, you can check your work with:

     /sbin/fdisk -lu /dev/sdb

You should see something like this:

     Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
     255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
     Units = sectors of 1 * 512 = 512 bytes
     Disk identifier: 0x0002c738
     Device Boot      Start         End      Blocks   Id  System
     /dev/sdb1   *          63      996029      497983+  83  Linux
     /dev/sdb2          996030     4996214     2000092+  82  Linux swap / Solaris
     /dev/sdb3         4996215  1953520064   974261925   83  Linux

Unfortunately, for advanced format hard drives (e.g. 1.5TB, 2TB, or larger), one can no longer continue in the same old manner as described above. If the partitions on the drive are not laid out properly and/or the file systems are not constructed carefully, you will suffer from huge performance problems. This all but mandates that you lay the partitions out by hand, set the boot loader up manually, set the file systems up ahead of time, and copy all of the files manually; you cannot use dd or another direct partition copy program to speed up the process.

Note that, with the later versions of Knoppix that we have used (i.e. 7.2), fdisk knows about advanced format hard drives and handles alignment of the partitions on the proper boundaries, if you let it. It should start the first partition on sector 2048 and, if you use end sector offsets like +512M or +4G, calculate the proper end sectors so that the next partition will always begin on the proper boundary. Of course, once the disk is set up for advanced format use, the next time you do an upgrade (e.g. from a 2TB to a 4TB disk), you can revert to the good old techniques. In the meantime, begin by creating the partition table as follows:

     su
     /sbin/fdisk /dev/sdb
       u                         (All sizes are given in sectors, not
                                   cylinders)
       n                         (To create a new partition)
       p                         (As a primary partition)
       1                         (Partition 1)
       2048                      (Start the partition on a 4K boundary [i.e.
                                   0 mod 8])
       +512M                     (To define a 512 MB partition)
       n                         (To create a new partition)
       p                         (As a primary partition)
       2                         (Partition 2)
       <cr>                      (To accept the next block as the start)
       +2G                       (To define a 2 GB partition [or use +4G])
       t                         (To set the partition type)
       2                         (Partition 2)
       82                        (Partition type 82, swap space)
       n                         (To create a new partition)
       p                         (As a primary partition)
       3                         (Partition 3)
       <cr>                      (To accept the next block as the start)
       <cr>                      (To accept the last cylinder as the end)
       w                         (Write the partition table to the disk)

This creates a new partition layout that has approximately 500MB for the boot partition, 2GB for swap space, and the remaining disk space for the root partition. Just by way of example, if you were interested in upping the swap space to 4GB from 2GB, you might use +4G instead of +2G for the second partition.

After you've created the new partitions, you can check your work with:

     /sbin/fdisk -lu /dev/sdb

You should see something like this (in this example, we used +4G for the swap space):

     Disk /dev/sdb: 2000.0 GB, 1999988850688 bytes
     255 heads, 63 sectors/track, 243151 cylinders, total 3906228224 sectors
     Units = sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 512 bytes / 4096 bytes
     Disk identifier: 0x0000144d
     Device Boot      Start         End      Blocks   Id  System
     /dev/sdb1   *        2048     1050623      524288   83  Linux
     /dev/sdb2         1050624     9439231     4194304   82  Linux swap / Solaris
     /dev/sdb3         9439232  3906228223  1948394496   83  Linux

If you laid out the partition table using sector numbers instead of cylinders, you'll probably see some whining from fdisk about the partitions not starting on a cylinder boundary. Since all of the cylinder/sector numbers have been fake for many years, and since all sector addressing is done using LBAs, it doesn't much matter where the partitions start/end, as long as they all start on a sector whose address is 0 mod 8. You can safely ignore the warnings from fdisk about cylinder alignment.
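A quick way to convince yourself that a layout is aligned is to check each start sector against the 0 mod 8 rule; a sketch (the helper name is ours) using the start sectors from the fdisk listing above:

```shell
#!/bin/sh
#
# A start sector is 4K-aligned when it is 0 mod 8 (there are eight 512-byte
# logical sectors per 4K physical sector).
#
is_aligned()
{
    test $(($1 % 8)) -eq 0
}
#
# Check the start sectors from the listing above.
#
for s in 2048 1050624 9439232
do
    if is_aligned $s
    then
        echo "sector $s: aligned"
    else
        echo "sector $s: NOT aligned"
    fi
done
```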

However, if you care (which you probably don't), you can carefully choose all of the partitions to end on a sector boundary that is a multiple of the magic number 128520, minus 1. This number of sectors gives a chunk roughly 64MB in size; 128520 is the smallest number that is both 0 mod 8 and 0 mod 16065 (16065 being 255 x 63, the values used for the number of heads and sectors on all LBA-addressed disks). Using a multiple of 128520, and then subtracting one, for all of your partition calculations will ensure that they all end on a virtual cylinder boundary and fdisk will be happy. Mind you, you will have to calculate actual sector numbers instead of using offsets like "+2G", which is tedious in the extreme. Best to just forget about this "feature" and move on with your life.

Once all of the partitions have been created, you must install the boot loader (most versions of Linux use GRUB or GRUB2) on the disk. As above, you can avoid the trouble of setting up the boot sector with the boot loader by just copying the first few tracks of the source disk directly with dd. However, since we've already set up the partition table, we must do the copy very carefully to avoid wiping it out:

     dd bs=446 count=1 if=/dev/sda of=/dev/sdb
     dd bs=2 skip=255 seek=255 count=1 if=/dev/sda of=/dev/sdb
     dd bs=512 skip=1 seek=1 count=62 if=/dev/sda of=/dev/sdb

This copies the first 446 bytes of the first sector (a.k.a. the MBR), which contains the boot loader code. It skips over the next 64 bytes, which contain the partition table. Then, it copies the two-byte magic cookie that defines it as the MBR. Finally, it copies the next 62 sectors (a.k.a. the DOS Compat Space), which contain the GRUB Stage 1.5 code. Once that is done, you should have a bootable disk that is partitioned the way you want it.
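If you'd like to double-check the result, the 0x55AA signature in bytes 510-511 of sector zero is easy to inspect; a sketch (the helper name is ours) that works on a device or an image file:

```shell
#!/bin/sh
#
# Report whether the given device or image file carries the 0x55AA MBR
# signature in the last two bytes of its first sector.
#
has_mbr_sig()
{
    sig=`dd if="$1" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' '`
    test "$sig" = "55aa"
}
```

For example, "has_mbr_sig /dev/sdb && echo OK" should print OK after the copy above.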

You might want to test that the system can boot the disk (admittedly, it won't get far, but you should at least see GRUB Stage 1.5 running) before you proceed any further. If it didn't work, now is the time to redo the partition table and go with Plan B, before you copy any data to the new disk.

Plan B is used if you are using a GUID Partition Table (i.e. you set the disk up with parted), dd wiped out the partition table, or GRUB wouldn't boot into Stage 1.5. Plan B consists of installing GRUB from scratch, which is best done using the original OS installation disk, in rescue mode. To use Plan B, you must first copy the files to the /boot partition, as outlined below.

All of the ext2/ext3 file systems that must appear in the partitions should be made with mkfs. Similarly, the swap space must be initialized with mkswap. However, the XFS file systems need not be created ahead of time, as the copy operation will take care of that for us. Here's an example of how we'd set up one ext2 or ext3 file system and one swap space, first using mkfs.ext2:

     mkfs.ext2 -L /boot -T news /dev/sdb1
     mkswap -v1 /dev/sdb2

or mkfs.ext3:

     mkfs.ext3 -j -L /boot -T news /dev/sdb1
     mkswap -v1 /dev/sdb2

Once you've done this, copy any files needed for the OS boot sequence to the boot partition. Knoppix thoughtfully provides predefined mount points in the "/media" directory for each of the attached devices that it finds (and even adds mount points for any new partitions created using fdisk) so you can do:

     mount -r /dev/sda1 /media/sda1
     mount /dev/sdb1 /media/sdb1
     cp -R --preserve=all /media/sda1/* /media/sdb1
     umount /media/sda1
     umount /media/sdb1

Now is the time, if you are going with GRUB Plan B, to set up the boot loader (as noted above, if you set up the boot sector using one of the dd methods, you can omit this step). There are so many ways of doing this that it is beyond the scope of these notes. Hopefully, you won't have to do it this way but, if you do, there are plenty of notes on the Internet that give the details. Find them and read them first. Just to make sure that you do it correctly, the basic steps should look something like this:

  1. Boot with your system's Installation CD and get into rescue mode.
  2. If you are asked whether you want to look for an existing Linux installation, you can decline; this will get you to the console prompt a lot quicker.
  3. Once the console prompt appears, type "grub".
  4. Set GRUB's root device to the partition containing the boot directory by typing "root (hd1,0)". In our case, this would be /dev/sdb1. But, you may not want to live dangerously and trust that GRUB (which uses a truly strange numbering scheme for boot devices) will pick the right device to whack. In that case, you could uncable your original disk and leave only the new disk cabled. Then, you'd set GRUB's root device using "root (hd0,0)".
  5. Write GRUB to the hard drive with "setup (hd1)" or "setup (hd0)".
  6. Bail out by typing "quit".

Note that you may be able to do all of this with Knoppix and skip the rescue disk, depending on how compatible your system is with the Knoppix version of GRUB.

The data on the original XFS partition should now be copied to the new drive. This is done with:

     xfs_copy -d /dev/sda3 /dev/sdb3

The copy will take quite a while, depending on the size of the original partition and how full it is (if you were smart and deleted all of the junk shows off the original partition, it will be faster). Once the copy completes, the new disk will contain an exact copy of the original disk and the new XFS file system will be the same size as the original (bummer).

The next step requires that you mount the new file system, using the mount points that Knoppix provides in the "/media" directory:

     mount /dev/sdb3 /media/sdb3

You can check that the mounted file system looks like the original with:

     ls -l /media/sdb3

If you're happy with the new file system, you can increase its size to fill the entire partition (nice) with:

     xfs_growfs /media/sdb3

Once that's done, unmount the new file system:

     umount /media/sdb3

Shut down Knoppix and recable the new disk to replace the original. Boot the machine with the new disk installed and see how it works.

After a direct copy, ext3 partitions can be resized as follows (full example at: https://www.howtoforge.com/linux_resizing_ext3_partitions_p2):

     <boot Knoppix>
     (here we assume that the partition to be resized is /dev/sda3)
     su
     (if partition is mounted, unmount it)  umount /media/sda3
     fsck -n /dev/sda3
     tune2fs -O ^has_journal /dev/sda3
     fdisk -u /dev/sda  (yes, we mean /dev/sda)
       p  (note start address of existing partition 3)
       d
       3
       n
       p
       3
       (choose original start address, if it isn't the default)
       (choose the full disk, or your new partition size)
       w
     partprobe
     e2fsck -f /dev/sda3
     resize2fs /dev/sda3
     tune2fs -j /dev/sda3
     tune2fs -c 0 -i 0 /dev/sda3


Copying

Should you wish to copy an existing system drive to a drive with different capacity (possibly even smaller), you can do so with pax. In this exercise, we'll assume that the existing system drive is on /dev/hdc and the new target drive is on /dev/sdb.

First, if you wish to avoid all the trouble of setting up the boot sector with one of the boot loaders, you can just copy the first few tracks of the source disk directly with dd:

     dd bs=512 count=1024 if=/dev/hdc of=/dev/sdb
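
To see what this dd invocation actually does, you can try it on file-backed stand-ins instead of real disks. In this sketch the temporary files are hypothetical substitutes for /dev/hdc and /dev/sdb, and cmp verifies that the copied byte range matches:

```shell
# Copy the first 1024 "sectors" between two file-backed stand-in disks.
src=$(mktemp)                     # stands in for the source disk (/dev/hdc)
dst=$(mktemp)                     # stands in for the target disk (/dev/sdb)
dd if=/dev/urandom of="$src" bs=512 count=1024 2>/dev/null  # fill the fake source
dd bs=512 count=1024 if="$src" of="$dst" 2>/dev/null        # the copy, as above
if cmp -s "$src" "$dst"; then match=yes; else match=no; fi  # byte-for-byte check
rm -f "$src" "$dst"
echo "first 1024 sectors copied intact: $match"
```

On a real disk, those first sectors hold the MBR (with the partition table) and the gap where the boot loader's next stage is usually stashed.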

Then, you should partition the new drive as you would like it to appear when you are done. This can be done with fdisk or some other utility such as Disk Druid. Before you proceed, you should list the partition sizes of the source disk:

     fdisk -lu /dev/hdc

Then, using the information listed, alter the partition table that was copied with dd (above) to reflect the new disk sizes. Usually this only requires that the last partition be expanded or shrunk to fit the new disk. This can be done by deleting the last partition and adding a new partition that uses all of the available space. Be sure to mark the appropriate partition (e.g. first partition) as the boot partition, if it hasn't already been marked that way.
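
The arithmetic for the resized last partition is simple: its maximum size is the new disk's total sector count minus the partition's start sector. The numbers below are hypothetical; take the real values from the fdisk -lu listings:

```shell
# Hypothetical sector counts; substitute the figures from "fdisk -lu".
disk_sectors=976773168    # total sectors reported for the new disk
part3_start=41945715      # start sector of the last (third) partition
part3_sectors=$(( disk_sectors - part3_start ))
echo "the last partition can span up to $part3_sectors sectors"
```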

Make the file systems that you wish to appear in each partition with mkswap and mkfs. Pick either an ext2 or ext3 file system and use mkfs.ext2:

     mkfs.ext2 -L /boot -T news /dev/sdb1
     mkswap -v1 /dev/sdb2
     mkfs.ext2 -L / -T news /dev/sdb3

or mkfs.ext3:

     mkfs.ext3 -j -L /boot -T news /dev/sdb1
     mkswap -v1 /dev/sdb2
     mkfs.ext3 -j -L / -T news /dev/sdb3

If you haven't already done so, make a couple of mount points on the system used for copying so that the source and destination partitions can be mounted:

     mkdir /mnt/src
     mkdir /mnt/dst

Once you've done this, copy any files needed for the OS boot sequence to the boot partition and then set up the boot loader with LILO or GRUB (this step is pretty complicated and is omitted here). If you set up the boot sector using the dd method, above, you only need to copy the boot files:

     mount -r /dev/hdc1 /mnt/src
     mount /dev/sdb1 /mnt/dst
     cd /mnt/src
     pax -r -w -X -pe . /mnt/dst
     umount /mnt/src
     umount /mnt/dst

At this point, I'm assuming you have a disk with a master boot record, boot partition and swap space plus an empty third partition where all the system and user files will be copied. Now, mount the source and destination partitions, change to the source partition and do the copy of all files in the source tree:

     mount -r /dev/hdc3 /mnt/src
     mount /dev/sdb3 /mnt/dst
     cd /mnt/src
     pax -r -w -X -pe . /mnt/dst

This will copy all the files, preserving the owner, group and permissions, from the source partition to the destination partition. The copy will work even if the destination partition is smaller than the source, as long as all of the files fit. Using dd, by contrast, will not work unless the destination is equal to or larger in size than the source.
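
You can convince yourself that a preserving copy really keeps permissions with a quick experiment. This sketch substitutes cp -a for pax (pax is not installed everywhere) and compares file modes with GNU stat:

```shell
# Build a tiny source tree with a known mode, copy it, and compare modes.
workdir=$(mktemp -d)
mkdir -p "$workdir/src/etc"
echo "test file" > "$workdir/src/etc/motd"
chmod 640 "$workdir/src/etc/motd"
mkdir -p "$workdir/dst"
cp -a "$workdir/src/." "$workdir/dst/"        # preserving copy, like pax -pe
srcmode=$(stat -c %a "$workdir/src/etc/motd")
dstmode=$(stat -c %a "$workdir/dst/etc/motd")
rm -rf "$workdir"
echo "source mode: $srcmode, destination mode: $dstmode"
```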

Note that, unless you retain the same partitions (i.e. number and order on the disk), you will need to monkey with the copied fstab (i.e. /mnt/dst/etc/fstab) so that the system will mount the file systems and enable the swap space at startup. Here's an example of the hard disk entries in /etc/fstab (note that the copied drive is assumed to be mounted on /dev/sda during booting):

     /dev/sda3               /                       ext3    defaults        1 1
     /dev/sda1               /boot                   ext3    defaults        1 2
     /dev/sda2               swap                    swap    defaults        0 0

Or, if you prefer to live dangerously, try this:

     LABEL=/                 /                       ext3    defaults        1 1
     LABEL=/boot             /boot                   ext3    defaults        1 2
     /dev/sda2               swap                    swap    defaults        0 0

You can tune any of the other files on the copied disk, if you'd like. When you're done, unmount the source and destination partitions:

     umount /mnt/src
     umount /mnt/dst

An alternative scheme might be to use parted to resize the larger partition on the source disk to one that will fit into the smaller partition on the target disk and then use dd to copy it directly. However, this requires that the source disk be reorganized, which may be living too dangerously.

Creating a Local Repository With Rsync

If your Linux distribution is RedHat or one of those that is based on RedHat, such as CentOS, and you have more than one machine, and you keep all of the machines at the same distribution level, you might want to consider creating a local repository to contain a cached copy of the distribution and all of its updates and patches, so that all of your machines need not repeatedly download them off the Internet.

The local repository can be primed using the distribution disk and then all of the updates can be downloaded regularly using rsync to keep the local repository in sync with one of the OS distribution mirrors. Since rsync only downloads differences in the files that it mirrors, it is quite efficient at reducing bandwidth usage.

These notes assume that you are using CentOS and are, therefore, specific to it. However, you should be able to get things working with RedHat by altering some of the file/path names herein, as appropriate.

First, create two directories that will be the local repository for all of the RPM files for the OS and its updates. We like to put this directory in the /var/cache tree so that we know what it's for:

     su
     mkdir -p /var/cache/CentOS/6/os/i386
     mkdir -p /var/cache/CentOS/6/updates/i386

This creates the directory structure for the i386 distribution of the OS. If you are using the x86_64 version instead, make these directories:

     su
     mkdir -p /var/cache/CentOS/6/os/x86_64
     mkdir -p /var/cache/CentOS/6/updates/x86_64

Note that, regardless of which point release of the OS you are using, you must only use the major release number in the directory tree. Thus, for CentOS 6.0, 6.1, 6.2, etc., just use 6.
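
If you script any of this, the major release number is easily derived from a point release with shell parameter expansion (the release value here is just an example):

```shell
release=6.3                   # hypothetical point release
major=${release%%.*}          # strip the first dot and everything after it
repodir=/var/cache/CentOS/$major/updates/i386
echo "$repodir"
```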

If you'd like to change the permissions on the subdirectory tree, say to give group permissions to one of the worker bees who will maintain it, now's the time to do so. For example:

     su
     chown root:wbee -R /var/cache/CentOS
     chmod g+w -R /var/cache/CentOS

Now, mount the CentOS installation DVD somewhere and copy all of the relevant files (e.g. for the i386 distribution, copy the i386.rpm and noarch.rpm files, as well as the repository files in the mounted i386 .iso directory or on the i386 DVD). You can either create the required subdirectories and copy the files directly into them from another machine on the network, as shown for 32-bit CentOS 5:

     su
     mkdir /var/cache/CentOS/5/os/i386/CentOS
     mkdir /var/cache/CentOS/5/os/i386/repodata
     (copy using FTP, NFS or Samba)

Or, as shown for 32-bit CentOS 6:

     su
     mkdir /var/cache/CentOS/6/os/i386/Packages
     mkdir /var/cache/CentOS/6/os/i386/repodata
     (copy using FTP, NFS or Samba)

Or, you can try mounting the .iso file, in loopback mode, if it is available to you, as shown for 32-bit CentOS 5:

     su
     mount -o loop,unhide -t iso9660 \
           -r /my/iso/images/CentOS-5.6-i386-bin-DVD.iso /mnt
     cp -rp /mnt/CentOS /mnt/repodata /var/cache/CentOS/5/os/i386/
     umount /mnt

Or, as shown for 32-bit CentOS 6:

     su
     mount -o loop,unhide -t iso9660 \
           -r /my/iso/images/CentOS-6.3-i386-bin-DVD1.iso /mnt
     cp -rp /mnt/Packages /mnt/repodata /var/cache/CentOS/6/os/i386/
     umount /mnt
     mount -o loop,unhide -t iso9660 \
           -r /my/iso/images/CentOS-6.3-i386-bin-DVD2.iso /mnt
     cp -rp /mnt/Packages /var/cache/CentOS/6/os/i386/
     umount /mnt

Or, you can actually mount the DVD directly on the repository machine and do the copy from it, as shown for 32-bit CentOS 5, like this:

     su
     mount /dev/cdrom /mnt
     cp -rp /mnt/CentOS /mnt/repodata /var/cache/CentOS/5/os/i386/
     umount /mnt

Or, as shown for 32-bit CentOS 6, like this:

     su
     mount /dev/cdrom /mnt  (the first DVD)
     cp -rp /mnt/Packages /mnt/repodata /var/cache/CentOS/6/os/i386/
     umount /mnt
     mount /dev/cdrom /mnt  (the second DVD)
     cp -rp /mnt/Packages /var/cache/CentOS/6/os/i386/
     umount /mnt

Again, if you don't like the permissions that got applied to the copied files or preserved from the DVD, you can set them to your liking. For example:

     su
     chown root:wbee -R /var/cache/CentOS/6/os/i386/
     chmod ug=rw,o=r -R /var/cache/CentOS/6/os/i386/
     chmod ugo+x /var/cache/CentOS/6/os/i386/*

In order for yum (the package manager) to access the local repository cache, it must be set up as a Web service. We presume that you don't want to set it up on the main Web service port (i.e. port 80) for a number of reasons, not the least of which is that you are probably using that one for a real Web service. Another reason is so that you can block access, with your firewall, to the local repository port and not make your CentOS archive available to the entire world. Even if you do nothing, the port we've picked herein is not exposed to the outside world by most firewalls, whereas port 80 probably is.

We'll add a virtual host that listens on port 3142 (the same port as is used by apt-cacher) to the Apache configuration on your system. The following lines should be added to your Apache config file, in the virtual hosts section:

/etc/httpd/conf/httpd.conf:

          .
          .
          .

     ##
     ## CentOS repository Virtual Host Context
     ##
     Listen 3142

     <VirtualHost _default_:3142>
     #
     #  Document root directory for the CentOS repository.  This overrides the
     #  main Web server's document root.
     #
     DocumentRoot "/var/cache/CentOS/html"
     <Directory "/var/cache/CentOS/html">
         AllowOverride None
         Require all granted
     </Directory>
     #
     #  The centos directory is how yum gets its files.
     #
     Alias /centos "/var/cache/CentOS"
     <Directory "/var/cache/CentOS">
         Options Indexes
         AllowOverride None
         Require all granted
     </Directory>
     #
     #  Directories defined in the main server that we don't want people to see
     #  under this port.
     #
     Alias /manual "/var/cache/CentOS/limbo"
     Alias /doc "/var/cache/CentOS/limbo"
     </VirtualHost>
          .
          .
          .

Note that we must provide a document root directory that points somewhere or Apache will thoughtfully show you the index.html file from your main Web service (how convenient). In this case, we've aimed it at /var/cache/CentOS/html and we'll address that problem in the next step. You could aim the document root at a non-existent directory, let's say /var/cache/CentOS/limbo, if you want, provided you don't mind Apache whining that the directory doesn't exist every time it starts.

Also note that the directory permissions for /var/cache/CentOS apply equally to the document root directory, since it is a subdirectory thereof. Finally, most default Apache config files come with /manual and /doc defined. We alias these to limbo so that you don't see what's in those directories, by default.

Presuming that you chose to aim document root at /var/cache/CentOS/html, we now need to create an index file that will be shown whenever the user hits the bare URL (otherwise, Apache thoughtfully shows the one from the main Web service). Start with:

     su
     mkdir /var/cache/CentOS/html
     chown root:wbee /var/cache/CentOS/html
     chmod ug=rwx,o=rx /var/cache/CentOS/html

Then create an index.html file in this directory that will be shown by Apache whenever someone hits the bare URL. We find it convenient to give a little blurb and link to the actual repository:

/var/cache/CentOS/html/index.html:

     <html>
     <head>
     <title>CentOS Repository</title>
     </head>
     <body>
     <p> Welcome to the local CentOS repository cache.  The links will take you
     to the various distributions that are cached.
     <p><a href="/centos/6">CentOS 6</a>
     </body>
     </html>

Note that you can update this page, as more distributions of the OS are added to the repository.

Recycle Apache so that the changes to its config file take effect:

     su
     /etc/init.d/httpd restart

At this point you have a local mirror for the distribution's installation media set up. To verify that it is working, launch your Web browser and open this URL:

     http://myrepository:3142/centos/6/os/i386

or

     http://myrepository:3142/centos/6/os/x86_64

Replace "myrepository" with the name or IP address of your repository server. You should see a directory listing containing two directories, repodata and CentOS (or Packages, for CentOS 6). There must not be a "File not found" message.
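
If you'd rather script the check, the URL is assembled from the same pieces used throughout these notes. The host name and port here are the placeholder values from above:

```shell
repohost=myrepository     # your repository server's name or IP address
repoport=3142             # the port given to the Apache virtual host
release=6                 # major release in the repository tree
arch=i386                 # or x86_64
url="http://$repohost:$repoport/centos/$release/os/$arch"
echo "$url"
# Against a live server, something like this would confirm reachability:
#     curl -fsI "$url" >/dev/null && echo "repository answers"
```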

Now, on to the updates. Unlike the installation media, updates change often. This being the case, we need a way to keep our local repository in sync with the distribution updates on the update servers. Thanks to Tridge, rsync is the tool for the job. It compares a remote directory tree with a local one, scanning for any changes, and then applies the changes to the local directory. Since it transfers only the deltas between the remote and local files, it is very efficient and economical on bandwidth.

To keep our repository up to date, we'll set up rsync to run once per day and resync the local copy of the repository by pulling all of the updates since yesterday. This can be done when cron runs the daily jobs, in the wee hours of the morning. To do so, we create a new daily job file (as root) for the resync and put the following content into it:

/etc/cron.daily/yum-repos-update:

     #!/bin/sh
     #
     # Source the clustering configuration.
     #
     if [ -f /etc/sysconfig/clustering ] ; then
         . /etc/sysconfig/clustering
     else
         SERVERROLE=Standalone
     fi
     if [ x"$SERVERROLE" = x ]; then
         SERVERROLE=Standalone
     fi
     #
     # Run rsync to synchronize the current CentOS 6 local repository with the
     # remote repository.
     #
     if [ "${SERVERROLE}" != "Secondary" ] ; then
         /usr/local/bin/rsync -av --delete \
             --exclude-from=/var/cache/CentOS/6/CentOS-6.3.excludes \
             rsync://your.rsync.mirror.server/centos/6.3/updates/i386 \
             --exclude=debug/ /var/cache/CentOS/6/updates/ > /dev/null
         chown root:wbee -R /var/cache/CentOS/6/updates/
         chmod g+w -R /var/cache/CentOS/6/updates/
     fi

Replace "rsync://your.rsync.mirror.server/centos" with an rsync server (such as "rsync://mirror.trouble-free.net/centos/") that's near you. You can choose the correct URL from the list at:

     http://www.centos.org/modules/tinycontent/index.php?id=30

Also, replace "6.3" with the version of the OS that you are actually mirroring. And, replace "i386" with "x86_64", if the archive that you're mirroring is the 64-bit archive. And, if you don't care about (or are happy with) the permissions set on the retrieved packages, you can remove the two lines that set the owner/group and permissions after the rsync.
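
Rather than editing the version and architecture in several places within the script, the rsync source and destination can be derived from a few variables. A sketch, using the same placeholder mirror name:

```shell
mirror=rsync://your.rsync.mirror.server/centos   # replace with a nearby mirror
release=6.3                                      # point release being mirrored
arch=i386                                        # or x86_64
src="$mirror/$release/updates/$arch"
dst="/var/cache/CentOS/${release%%.*}/updates/"  # major number only, as noted
echo "rsync source: $src"
echo "rsync destination: $dst"
```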

Next, make the file executable:

     su
     chmod u=rwx,go=rx /etc/cron.daily/yum-repos-update

Now, before we go any further, you'll note that the rsync command line, in the yum-repos-update script has an --exclude-from file name supplied. This is where you put file name patterns that specify all of the crap that's in the repository. If you put the names of packages that you never install in this file, you'll save a lot of bandwidth by not downloading them.

You should create this file, before you run the script, as follows:

     su
     touch /var/cache/CentOS/6/CentOS-6.3.excludes
     chown root:wbee /var/cache/CentOS/6/CentOS-6.3.excludes
     chmod ug=rw,o=r /var/cache/CentOS/6/CentOS-6.3.excludes

You can leave the file empty, if you wish, or use your favorite text editor to edit it and add all of the packages you'd like to exclude from the local repository. Look at the remote HTML directory that corresponds to the rsync server's updates directory for the mirrored version of the OS and follow the directory tree down to either the RPMS or Packages/drpms directories. There, you'll see all of the files that will be mirrored. Pick the ones that you don't care about and make up file name patterns to exclude them. Here are our suggestions:

/var/cache/CentOS/6/CentOS-6.3.excludes:

     kde*
     libreoffice*

Note that the rules for the patterns that are used, etc., can be found in the rsync man page under the FILTER RULES section.
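
To get a feel for what a pattern will catch before burning a night's rsync on it, you can dry-run it against a few package names with shell globbing, which behaves like rsync's wildcard name matching for simple, slash-free patterns. The package names below are illustrative:

```shell
pattern='kde*'            # candidate exclude pattern
matched=0
for pkg in kdebase-4.3.4-6.el6.i686.rpm kdelibs-4.3.4-11.el6.i686.rpm \
           apr-1.3.9-5.el6_2.i686.rpm; do
    case "$pkg" in
        $pattern) matched=$((matched + 1)) ;;    # this name would be excluded
    esac
done
echo "$pattern would exclude $matched of 3 packages"
```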

You can also look at your local repository in the CentOS/Packages directory under the os/i386 or os/x86_64 tree to see the file names of all of the packages that can be installed. This will give you an idea of other possibilities for file name patterns.

OK, to prime the repository and ensure that the script works properly, run the update once, manually:

     su
     /etc/cron.daily/yum-repos-update

It will probably take several hours and afterwards there should be quite a few files in your updates directory that, for CentOS 5, will look like this:

     /var/cache/CentOS/5/updates/i386/repodata
                                     /repodata/filelists.sqlite.bz2
                                               filelists.xml.gz
                                               other.sqlite.bz2
                                                    .
                                                    .
                                                    .
                                     /RPMS
                                     /RPMS/acpid-1.0.4-9.el5_4.1.i386.rpm
                                           acpid-1.0.4-9.el5_4.2.i386.rpm
                                           autofs-5.0.1-0.rc2.131.el5_4.1.i386.rpm
                                                .
                                                .
                                                .

For CentOS 6, your updates directory will look like this:

     /var/cache/CentOS/6/updates/i386/drpms
                                     /drpms/389-ds-base-1.2.10.2...i686.drpm
                                            apr-1.3.9-3...i686.drpm
                                            bind-chroot-9.7.0...i686.drpm
                                                 .
                                                 .
                                                 .
                                     /Packages
                                     /Packages/389-ds-base-1.2.10.2...i686.rpm
                                               apr-1.3.9-5.el6_2.i686.rpm
                                               bind-9.8.2-0.10...i686.rpm
                                                    .
                                                    .
                                                    .
                                     /repodata
                                     /repodata/0041b...lists.sqlite.bz2
                                                00446...-prestodelta.xml.gz
                                                00787...-primary.xml.gz
                                                    .
                                                    .
                                                    .

You can visually check that the files were mirrored properly by looking at the remote HTML directory that corresponds to the rsync directory. As before, if you follow the directory tree down to the repodata and RPMS or Packages/drpms directories, you'll see all of the files that should have been mirrored. Inspect a few names to see if they all appear to be present except for those that you've specifically excluded in your excludes file.

Note that, as usual, if your repository is 64-bit, you should replace "i386" wherever it appears, above, with "x86_64".

To begin actually using your new repository, all of the individual clients will have to be modified so that they point to the local repository, instead of the standard, default repositories. The list of repositories that the CentOS package manager (yum) uses is maintained in files under the /etc/yum.repos.d directory.

In the files in that directory, you'll see either a parameter named "baseurl" or "mirrorlist". The former points directly at a repository whereas the latter uses the yum mirror plugin to point at a mirror list server, thereby allowing yum to automatically select the optimal repository server to use. Depending on how your default repositories are set up, you could see either of these two parameters. Regardless of which one is employed, you need to comment them out and replace them with a baseurl that points to the local repository. Do that with your favorite text editor:

/etc/yum.repos.d/CentOS-Base.repo:

          .
          .
          .

     [base]
     name=CentOS-$releasever - Base
     #lm mirrorlist=http://mirrorlist.centos.org/?release=$releasever\
     #lm    &arch=$basearch&repo=os
     #baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
     baseurl=http://myrepository:3142/centos/$releasever/os/$basearch/
     gpgcheck=1
     gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

     #released updates 
     [updates]
     name=CentOS-$releasever - Updates
     #lm mirrorlist=http://mirrorlist.centos.org/?release=$releasever\
     #lm    &arch=$basearch&repo=updates
     #baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
     baseurl=http://myrepository:3142/centos/$releasever/updates/$basearch/
     gpgcheck=1
     gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
          .
          .
          .

In the above changes, we commented out the mirrorlist parameter with "#lm". The baseurl parameter was already commented out by the package installer when yum was upgraded to use the mirror list plugin. We added the baseurl parameter that points to our local repository. The server name "myrepository" should be replaced with your actual server's name or IP address and the port "3142" should be replaced with the port number that you used for the repository virtual host in the Apache config file (above). Note that the changes are the same, regardless of what release of CentOS you are mirroring (or even if you have mirror repositories for multiple versions of CentOS, for that matter).
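
With many client machines, it is worth scripting the edit. This sketch applies the same two changes with sed, on a throwaway copy of a trimmed-down repo section ("myrepository" remains the placeholder server name):

```shell
# Make a scratch copy of a minimal [updates] section to operate on.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
EOF
# Comment out the mirrorlist, then add a baseurl for the local repository.
sed -i -e 's|^mirrorlist=|#lm mirrorlist=|' "$repo"
sed -i -e '/^#lm mirrorlist=/a baseurl=http://myrepository:3142/centos/$releasever/updates/$basearch/' "$repo"
ncomment=$(grep -c '^#lm mirrorlist=' "$repo")
nbase=$(grep -c '^baseurl=' "$repo")
rm -f "$repo"
echo "$ncomment mirrorlist line commented out, $nbase baseurl line added"
```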

Finally, we can run yum on the modified machine to see if the new repository works:

     su
     yum clean all
     yum update

If all runs well, you have properly set up a local CentOS repository. You can apply the repository list changes to all of the client machines that are to use the local repository for their updates.

Moving The Local Rsync Repository To The Next Revision

If your Linux distribution is RedHat or one of those that is based on RedHat, such as CentOS, and you set up a yum mirror repository to keep all of the machines at the same distribution level, you will eventually need to move the local repository up to the next revision. Your first clue that you need to do this is that the nightly rsync job (that was set up in the "Creating a Local Repository With Rsync" section) fails, probably like this:

     rsync: change_dir "/6.2/updates" (in centos) failed: No such file or
       directory (2)

Not to worry. You can move your mirror repository to the next release quite easily.

But first, before you do, you might want to go around to any systems that you wish to keep at the previous revision and apply any of the updates that you want to apply using the unchanged mirror repository. Once you move the mirror repository up to the next revision, all of the systems that use it will see the new package updates for the next revision and try to apply them. There seems to be no way of going back easily.

Of course, you can revert the /etc/yum.repos.d/CentOS-Base.repo file to the way it was before you modified it, and the standard CentOS repository for your old release will continue to be used for as long as that release is supported. Mind you, we recommend that you move up to the latest release, so that you won't have any problems, but that's up to you. To do that, revert the /etc/yum.repos.d/CentOS-Base.repo file, upgrade to the latest release, upgrade the local repository as outlined below, and then put the changes enabling the local repository back into the CentOS-Base.repo file.

Anyway, the first step in moving the mirror repository up to the next revision is to move the current mirror directory aside so that we can create a new mirror directory for the new release (as above, these notes assume that you are using CentOS and are, therefore, specific to it):

     su
     mv /var/cache/CentOS/6  /var/cache/CentOS/6.2

Replace the numbers 6 and 6.2 with your current revisions of the OS.

Now, create two new directories that will be the local repository for all of the RPM files for the OS and its updates, for the new revision:

     su
     mkdir -p /var/cache/CentOS/6/os/i386
     mkdir -p /var/cache/CentOS/6/updates/i386

This creates the directory structure for the i386 distribution of the OS. If you are using the x86_64 version instead, make these directories:

     su
     mkdir -p /var/cache/CentOS/6/os/x86_64
     mkdir -p /var/cache/CentOS/6/updates/x86_64

Note that, regardless of which point release of the OS you are using, you must only use the major release number in the directory tree. Thus, for CentOS 6.1, 6.2, 6.3, etc., just use 6.

If you'd like to change the permissions on the subdirectory tree, say to give group permissions to one of the worker bees who will maintain it, now's the time to do so. For example:

     su
     chown root:wbee -R /var/cache/CentOS
     chmod g+w -R /var/cache/CentOS

Now, mount the CentOS installation DVD for the new release somewhere and copy all of the relevant files (i.e. for the i386 distribution, copy the i386.rpm and noarch.rpm files, as well as the other files in the i386 .iso or on the i386 DVD). You can either create the two subdirectories and copy the files directly into them from another machine on the network:

     su
     mkdir /var/cache/CentOS/6/os/i386/Packages
     mkdir /var/cache/CentOS/6/os/i386/repodata
     (copy using FTP, NFS or Samba)

Or, you can try mounting the .iso file, in loopback mode, if it is available to you:

     su
     mount -o loop,unhide -t iso9660 \
           -r /my/iso/images/CentOS-6.3-i386-bin-DVD1.iso /mnt
     cp -rp /mnt/Packages /mnt/repodata /var/cache/CentOS/6/os/i386/
     umount /mnt
     mount -o loop,unhide -t iso9660 \
           -r /my/iso/images/CentOS-6.3-i386-bin-DVD2.iso /mnt
     cp -rp /mnt/Packages /var/cache/CentOS/6/os/i386/
     umount /mnt

Or, you can actually mount the DVD directly on the repository machine and do the copy from it, like this:

     su
     mount /dev/cdrom /mnt  (the first DVD)
     cp -rp /mnt/Packages /mnt/repodata /var/cache/CentOS/6/os/i386/
     umount /mnt
     mount /dev/cdrom /mnt  (the second DVD)
     cp -rp /mnt/Packages /var/cache/CentOS/6/os/i386/
     umount /mnt

Again, if you don't like the permissions that got applied to the copied files or preserved from the DVD, you can set them to your liking. For example:

     su
     chown root:wbee -R /var/cache/CentOS/6/os/i386/
     chmod ug=rw,o=r -R /var/cache/CentOS/6/os/i386/
     chmod ugo+x /var/cache/CentOS/6/os/i386/*

The script that keeps the repository up to date needs to be changed to use the new directory for the updated revision level. Typically, this merely requires that you edit the existing file with your text editor and change the revision level:

/etc/cron.daily/yum-repos-update:

          .
          .
          .

     #
     # Run rsync to synchronize the current CentOS 6 local repository with the
     # remote repository.
     #
     if [ "${SERVERROLE}" != "Secondary" ] ; then
         /usr/local/bin/rsync -av --delete \
             --exclude-from=/var/cache/CentOS/6/CentOS-6.3.excludes \
             rsync://your.rsync.mirror.server/centos/6.3/updates/i386 \
             --exclude=debug/ /var/cache/CentOS/6/updates/ > /dev/null
         chown root:wbee -R /var/cache/CentOS/6/updates/
         chmod g+w -R /var/cache/CentOS/6/updates/
     fi

Since we already created the exclude file when we originally set up the yum mirror repository, we simply need to copy it to the new mirror directory:

     su
     cp /var/cache/CentOS/6.2/CentOS-6.2.excludes \
        /var/cache/CentOS/6/CentOS-6.3.excludes
     chown root:wbee /var/cache/CentOS/6/CentOS-6.3.excludes
     chmod ug=rw,o=r /var/cache/CentOS/6/CentOS-6.3.excludes

Next, you should prime the new repository by running the update once, manually:

     su
     /etc/cron.daily/yum-repos-update

It will probably take several hours and afterwards there should be quite a few files in your updates directory that look like this:

     /var/cache/CentOS/6/updates/i386/drpms
                                     /drpms/389-ds-base-1.2.10.2...i686.drpm
                                            apr-1.3.9-3...i686.drpm
                                            bind-chroot-9.7.0...i686.drpm
                                                 .
                                                 .
                                                 .
                                     /Packages
                                     /Packages/389-ds-base-1.2.10.2...i686.rpm
                                               apr-1.3.9-5.el6_2.i686.rpm
                                               bind-9.8.2-0.10...i686.rpm
                                                    .
                                                    .
                                                    .
                                     /repodata
                                     /repodata/0041b...lists.sqlite.bz2
                                                00446...-prestodelta.xml.gz
                                                00787...-primary.xml.gz
                                                    .
                                                    .
                                                    .

You can visually check that the files were mirrored properly by looking at the remote HTML directory that corresponds to the rsync directory. As before, if you follow the directory tree down to the repodata and Packages/drpms directories, you'll see all of the files that should have been mirrored. Inspect a few names to see if they all appear to be present.

Note that, as usual, if your repository is 64-bit, you should replace "i386" wherever it appears, above, with "x86_64".

At this point, all of the machines that were using the original mirror repository will begin using the new one and any updates that are applied on them will move the system up to the next revision level. On any of these machines, we can run yum to see if the new repository works:

     su
     yum clean all
     yum update

If all runs well, you have properly upgraded the local CentOS repository.

Caching Updates With Apt-Cacher

If your Linux distribution is Debian, or one of those based on Debian, such as Ubuntu, and you have more than one machine, all kept at the same distribution level, you might want to consider using apt-cacher to cache a local copy of distribution updates and patches, so that your machines need not repeatedly download them from the Internet.

You should choose a machine that will hold the cache repository and set up apt-cacher there. We usually pick a machine that has enough spare disk space to support the cache, is visible to all of the other systems, and is running at all times.

Begin by installing the apt-cacher package thusly:

     sudo apt-get install apt-cacher

Once this is done, edit the apt-cacher configuration file (/etc/apt-cacher/apt-cacher.conf) to set up your specific options. Although you shouldn't have to change much, here are a few things for your consideration.

The default port that apt-cacher runs on is port 3142. You might want to change it if it collides with something else on your system (we can't think what, but ya never know).

By default, all hosts are allowed to use the repository cache. You probably want to lock this down so that only hosts on your local subnet can use it. For example, if your local subnet is 192.168.1, and you want to allow all machines on it and the local host to use the cache, you might set:

     allowed_hosts=192.168.1.0/24, 127.0.1.1

The traditional local host (127.0.0.1) is allowed by default so it is not necessary to add it. Apparently, on Ubuntu boxes, 127.0.1.1 is also considered to be the local host (if this is really true, nice going boneheads -- just what we need is another way of doing the same old thing differently) so, just in case somebody has a go at the cache with that IP address we allow it too.

By default, apt-cacher creates a report on a daily basis on how efficient your cache was. You might think, what's the point, who needs this? If so, you can turn it off:

     generate_reports=0

The part of the configuration that you do have to set up is the path map. This tells apt-cacher where to look for the packages that are to be fetched for its clients and stored in the cache. You'll probably want this list to include all of the repositories that are going to be used by the clients.

We begin making up the list by examining the original /etc/apt/sources.list files on one or more, perhaps all, of the client machines. Here's a sample:

/etc/apt/sources.list:

     deb http://us.archive.ubuntu.com/ubuntu jaunty \
         main restricted universe multiverse
     deb http://us.archive.ubuntu.com/ubuntu jaunty-updates \
         main restricted universe multiverse
     deb http://security.ubuntu.com/ubuntu jaunty-security \
         main restricted universe multiverse
     deb http://archive.canonical.com/ubuntu jaunty partner
     deb-src http://us.archive.ubuntu.com/ubuntu jaunty-updates \
         main restricted universe multiverse
     deb-src http://us.archive.ubuntu.com/ubuntu jaunty \
         main restricted universe multiverse
     deb-src http://security.ubuntu.com/ubuntu jaunty-security \
         main restricted universe multiverse
     deb-src http://archive.canonical.com/ubuntu jaunty partner

There's a lot of commonality in the URLs that are found in this sample file. Essentially, there are three types: the Ubuntu archives; the security archives; and the archives of the bonus packages from Canonical. In the apt-cacher path list, we need to map these URLs to an alias name that the clients can use to reference those archives. The names chosen for the aliases don't matter but it is better to choose something that makes sense to you. Based on the URLs in the original /etc/apt/sources.list, we might set the path list up like this:

     path_map = ubuntu us.archive.ubuntu.com/ubuntu ; \
                security security.ubuntu.com/ubuntu ; \
                canonical archive.canonical.com/ubuntu

Normally, it's OK to use the URLs that your system installation chose. But, if you are experiencing performance delays or other problems, now is a good time to choose mirrors that actually work for you. If you need help deciding which ones to use, the list can be found at:

     https://launchpad.net/ubuntu/+archivemirrors

According to the Ubuntu documentation, "if you are unsure which mirror to select the best option is [iso-country-code].archive.ubuntu.com where iso-country-code is the two character country abbreviation of a country near you. For example, if you live in the United States you could choose us.archive.ubuntu.com".

Once you've set up aliases to all of the URLs in the path list, you can access them on the client machines like this:

     repository_cache_machine:port/alias_name

So, for instance, we can access the ubuntu and security repositories through:

     http://myrepository:3142/ubuntu
     http://myrepository:3142/security
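As a quick smoke test from any client (assuming, as in the examples above, that the cache machine is named "myrepository" and that curl is installed), you can fetch a release file through the cache and look at the HTTP status code; a 200 means apt-cacher fetched and cached it:

```shell
# Ask apt-cacher for a release file via the "ubuntu" alias and print just
# the HTTP status code (curl prints 000 if the connection itself failed).
code=$(curl -s -o /dev/null -w '%{http_code}' \
       http://myrepository:3142/ubuntu/dists/jaunty/Release) || true
echo "$code"
```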

Now that we've chosen all the local options, here is a complete listing of a sample apt-cacher configuration file:

/etc/apt-cacher/apt-cacher.conf:

     #################################################################
     # This is the config file for apt-cacher. On most Debian systems
     # you can safely leave the defaults alone.
     #################################################################
     # cache_dir is used to set the location of the local cache. This can
     # become quite large, so make sure it is somewhere with plenty of space.
     cache_dir=/var/cache/apt-cacher
     # The email address of the administrator is displayed in the info page
     # and traffic reports.
     admin_email=root@localhost
     # For the daemon startup settings please edit the file
     # /etc/default/apt-cacher.
     # Daemon port setting, only useful in stand-alone mode. You need to run the
     # daemon as root to use privileged ports (<1024).
     daemon_port=3142
     # optional settings, user and group to run the daemon as. Make sure they have
     # sufficient permissions on the cache and log directories. Comment the
     # settings to run apt-cacher as the native user.
     group=www-data
     user=www-data
     # optional setting, binds the listening daemon to specified IP(s). Use IP
     # ranges for more advanced configuration, see below.
     # daemon_addr=localhost
     # If your apt-cacher machine is directly exposed to the Internet and you are
     # worried about unauthorised machines fetching packages through it, you can
     # specify a list of IPv4 addresses which are allowed to use it and another
     # list of IPv4 addresses which aren't.
     # Localhost (127.0.0.1) is always allowed. Other addresses must be matched
     # by allowed_hosts and not by denied_hosts to be permitted to use the cache.
     # Setting allowed_hosts to "*" means "allow all".
     # Otherwise the format is a comma-separated list containing addresses,
     # optionally with masks (like 10.0.0.0/22), or ranges of addresses (two
     # addresses separated by a hyphen, no masks, like '10.100.0.3-10.100.0.56').
     allowed_hosts=192.168.1.0/24, 127.0.1.1
     denied_hosts=
     # And similarly for IPv6 with allowed_hosts_6 and denied_hosts_6.
     # Note that IPv4-mapped IPv6 addresses (::ffff:w.x.y.z) are truncated to
     # w.x.y.z and are handled as IPv4.
     allowed_hosts_6=fec0::/16
     denied_hosts_6=
     # This thing can be done by Apache but is much simpler here - limit access
     # to Debian mirrors based on server names in the URLs
     #allowed_locations=ftp ftp.uni-kl.de,ftp ftp.nerim.net,debian.tu-bs.de
     # Apt-cacher can generate usage reports every 24 hours if you set this
     # directive to 1. You can view the reports in a web browser by pointing
     # to your cache machine with '/apt-cacher/report' on the end, like this:
     #      http://yourcache.example.com/apt-cacher/report
     # Generating reports is very fast even with many thousands of logfile
     # lines, so you can safely turn this on without creating much 
     # additional system load.
     generate_reports=1
     # Apt-cacher can clean up its cache directory every 24 hours if you set
     # this directive to 1. Cleaning the cache can take some time to run
     # (generally in the order of a few minutes) and removes all package
     # files that are not mentioned in any existing 'Packages' lists. This
     # has the effect of deleting packages that have been superseded by an
     # updated 'Packages' list.
     clean_cache=1
     # Apt-cacher can be used in offline mode which just uses files already
     # cached, but doesn't make any new outgoing connections by setting this to 1.
     offline_mode=0
     # The directory to use for apt-cacher access and error logs.
     # The access log records every request in the format:
     # date-time|client ip address|HIT/MISS/EXPIRED|object size|object name
     # The error log is slightly more free-form, and is also used for debug
     # messages if debug mode is turned on.
     # Note that the old 'logfile' and 'errorfile' directives are
     # deprecated: if you set them explicitly they will be honoured, but it's
     # better to just get rid of them from old config files.
     logdir=/var/log/apt-cacher
     # apt-cacher can use different methods to decide whether package lists need
     # to be updated,
     # A) looking at the age of the cached files
     # B) getting HTTP header from server and comparing that with cached data.
     # This method is more reliable and avoids desynchronisation of data and index
     # files but needs to transfer a few bytes from the server every time somebody
     # requests the files ("apt-get update")
     # Set the following value to the maximum age (in hours) for method A or to 0
     # for method B
     expire_hours=0
     # Apt-cacher can pass all its requests to an external http proxy like
     # Squid, which could be very useful if you are using an ISP that blocks
     # port 80 and requires all web traffic to go through its proxy. The
     # format is 'hostname:port', eg: 'proxy.example.com:8080'.
     #http_proxy=proxy.example.com:8080
     # Use of an external proxy can be turned on or off with this flag.
     # Value should be either 0 (off) or 1 (on).
     use_proxy=0
     # External http proxy sometimes need authentication to get full access. The
     # format is 'username:password'.
     #http_proxy_auth=proxyuser:proxypass
     # Use of external proxy authentication can be turned on or off with this
     # flag.  Value should be either 0 (off) or 1 (on).
     use_proxy_auth=0
     # This sets the interface to use for the upstream connection.
     # Specify an interface name, an IP address or a host name.
     # If unset, the default route is used.
     #interface=
     # Rate limiting sets the maximum bandwidth in bytes per second to use
     # for fetching packages. Syntax is fully defined in 'man wget'.
     # Use 'k' or 'm' to use kilobits or megabits / second: eg, 'limit=25k'.
     # Use 0 or a negative value for no rate limiting.
     limit=0
     # Debug mode makes apt-cacher spew a lot of extra debug junk to the
     # error log (whose location is defined with the 'logdir' directive).
     # Leave this off unless you need it, or your error log will get very
     # big. Acceptable values are 0 or 1.
     debug=0
     # To enable data checksumming, install libberkeleydb-perl and set this option
     # to 1. Then wait until the Packages/Sources files have been refreshed once
     # (and so the database has been built up). You can also nuke them in the
     # cache to trigger the update.  
     # checksum=1
     # Print a 410 (Gone) HTTP message with the specified text when accessed via
     # CGI. Useful to tell users to adapt their sources.list files when the
     # apt-cacher server is being relocated (via apt-get's error messages while
     # running "update")
     #cgi_advise_to_use = Please use http://cacheserver:3142/ as apt-cacher \
     #                    access URL
     #cgi_advise_to_use = Server relocated. To change sources.list, run \
     #                    perl -pe "s,/apt-cacher\??,:3142," \
     #                    -i /etc/apt/sources.list
     # Server mapping - this allows to hide real server names behind virtual paths
     # that appear in the access URL. This method is known from apt-proxy. This is
     # also the only method to use FTP access to the target hosts. The syntax is
     # simple, the part of the beginning to replace, followed by a list of mirror
     # urls, all space separated. Multiple profile are separated by semicolons
     # Note that you need to specify all target servers in the allowed_locations
     # options if you make use of it. Also note that the paths should not overlap
     # each other. FTP access method not supported yet, maybe in the future.
     # path_map = debian ftp ftp.uni-kl.de/pub/linux/debian \
     #            ftp ftp2.de.debian.org/debian ; ubuntu archive.ubuntu.com/ubuntu ; \
     #            security security.debian.org/debian-security \
     #            ftp ftp2.de.debian.org/debian-security
     path_map = ubuntu us.archive.ubuntu.com/ubuntu ; \
                security security.ubuntu.com/ubuntu ; \
                canonical archive.canonical.com/ubuntu
     # Permitted package files - this is a perl regular expression which matches
     # all package-type files (files that are uniquely identified by their
     # filename).  The default is: 
     #package_files_regexp = (?:\.deb|\.rpm|\.dsc|\.tar\.gz|\.diff\.gz|\.udeb|\
     #                       index\.db-.+\.gz|\.jigdo|\.template)$
     # Permitted Index files - this is the perl regular expression which matches
     # all index-type files (files that are uniquely identified by their full path
     # and need to be checked for freshness). 
     # The default is:
     #index_files_regexp = (?:Index|Packages\.gz|Packages\.bz2|Release|\
     #                     Release\.gpg|Sources\.gz|Sources\.bz2|\
     #                     Contents-.+\.gz|pkglist.\.bz2|release|release\..|\
     #                     srclist.*\.bz2|Translation-.+\.bz2)$

Note that, at this point in time, there is a bug involving Translation files. Essentially, it is OK for the client to ask for a translation file that does not exist, but when it does so and the file is not found, apt-cacher returns an incorrect error code. This doesn't appear to cause any real problems (and there is a fix in the works) but you will notice the errors on your repository cache client machines. You may be tempted to try to fix it by altering index_files_regexp (in the apt-cacher configuration file) but don't bother. It's a bug in apt-cacher itself, so changing the configuration won't help.

We never could understand the reasoning behind this but, when you install apt-cacher, it is installed deactivated. The startup script is put in the proper place in /etc/init.d and symlinks are added for the right run levels. Nonetheless, you still need to activate it manually:

/etc/default/apt-cacher:

     AUTOSTART=1

Once you've done this, you can restart apt-cacher:

     sudo /etc/init.d/apt-cacher restart

The next time you boot the system, it should come up all on its own.
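If you want to confirm that the daemon actually came back up, check that something is listening on the port you configured (3142 by default). This sketch uses ss; on older systems, netstat -tln does the same job:

```shell
# Report whether anything is listening on the apt-cacher port.
state=$(ss -tln 2>/dev/null | grep -q ':3142' && echo up || echo down)
echo "apt-cacher appears to be $state"
```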

After apt-cacher is successfully set up and running, it's time to update all of the repository cache clients so that they will use the local cache instead of the WAN-based repositories. This is done by editing each client's sources list so that it points to the local repository cache. And, the first candidate for this treatment might as well be the machine where the repository cache actually resides, so that any packages it requests will prime the cache.

If the sample sources list shown above is altered to use the local repository cache, it should look something like this:

/etc/apt/sources.list:

     deb http://myrepository:3142/ubuntu jaunty \
         main restricted universe multiverse
     deb http://myrepository:3142/ubuntu jaunty-updates \
         main restricted universe multiverse
     deb http://myrepository:3142/security jaunty-security \
         main restricted universe multiverse
     deb http://myrepository:3142/canonical jaunty partner
     deb-src http://myrepository:3142/ubuntu jaunty-updates \
         main restricted universe multiverse
     deb-src http://myrepository:3142/ubuntu jaunty \
         main restricted universe multiverse
     deb-src http://myrepository:3142/security jaunty-security \
         main restricted universe multiverse
     deb-src http://myrepository:3142/canonical jaunty partner

Go through the configuration files on all of the client machines and update them in a similar fashion. Once that is done, you should be able to apply updates to them, via the local repository cache, in your usual fashion.
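Editing every client's sources.list by hand gets old quickly, so the mechanical part can be done with sed. The sketch below runs against a scratch file so that you can try it safely; on a real client, you would set f=/etc/apt/sources.list and run it under sudo. The mirror host names are the ones from the sample file above, and "myrepository" is our example cache machine; substitute your own.

```shell
# Scratch copy for demonstration; on a client use f=/etc/apt/sources.list.
f=./sources.list.demo
echo 'deb http://us.archive.ubuntu.com/ubuntu jaunty main' > "$f"

cp "$f" "$f.orig"    # always keep a backup of the original
sed -i \
    -e 's|http://us\.archive\.ubuntu\.com/ubuntu|http://myrepository:3142/ubuntu|g' \
    -e 's|http://security\.ubuntu\.com/ubuntu|http://myrepository:3142/security|g' \
    -e 's|http://archive\.canonical\.com/ubuntu|http://myrepository:3142/canonical|g' \
    "$f"
cat "$f"
```

Each URL is rewritten to point at the matching apt-cacher alias, while the backup in "$f.orig" keeps the original mirror names in case you need to back out.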

If you set up apt-cacher after you've already applied packages to the machine that holds the repository cache, it is possible that it already has those package files cached in its local apt repository. If you wish, you can import them into the apt-cacher repository, where they will be available to everyone. One of the ancillary scripts installed in the /usr/share/apt-cacher directory, created when you installed apt-cacher, is apt-cacher-import.pl, which handles this task.

In a misguided attempt at security, the writer of the script forces it to run as the user/group set in /etc/apt-cacher/apt-cacher.conf. Unfortunately, this prevents it from doing anything useful with the local repository (it wants to move all of the packages that it finds from the local repository to the cache), since all of the files in the local repository are owned by whoever installed them (typically root).

One could change the user/group in /etc/apt-cacher/apt-cacher.conf to root/root temporarily, we suppose, but this defeats the purpose of using a separate user for the cache. If you'd like to keep the separate user/group (www-data, by default) for the cache files, just comment out the code at about line 75 that reads:

     setup_ownership($cfg);

Change it to read:

     # setup_ownership($cfg);

People go way too far overboard with security. If the guy is running as super user, he should be able to do whatever he wants. Peeling his permissions back to something less ain't going to cut it. He's just going to edit your stupid code and do what he wants. Think about that....

Anyway, once you've fixed the code, to import the package files from the system's local repository (/var/cache/apt/archives) to the apt-cacher repository, run:

     sudo /usr/share/apt-cacher/apt-cacher-import.pl /var/cache/apt/archives

The apt-cacher directory (/var/cache/apt-cacher/packages) should now be filled up with all of the packages that were in the apt local repository. At the same time, the local repository should now be empty, since apt-cacher-import.pl moves the packages that it finds to the repository cache.
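A quick, rough sanity check of the import is just to count the files on each side (the paths are the defaults used throughout this section; adjust them if you changed cache_dir):

```shell
# Count packages now held by apt-cacher versus those left in apt's own
# archive; ls is silenced so a missing directory simply counts as zero.
cached=$(ls /var/cache/apt-cacher/packages 2>/dev/null | wc -l)
left=$(ls /var/cache/apt/archives/*.deb 2>/dev/null | wc -l)
echo "apt-cacher holds $cached files; apt's local archive holds $left"
```

After a successful import, the first number should be large and the second should be zero, since apt-cacher-import.pl moves, rather than copies, the packages.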

If you left the directive generate_reports set to 1 in the apt-cacher config file, apt-cacher will generate a report on cache usage every day. You can view the report by pointing your Web browser at the URL:

     http://myrepository:3142/report

Where "myrepository" is the name of the machine that runs apt-cacher.

If you need to regenerate the report at any time other than when it is normally done, run:

     sudo /usr/share/apt-cacher/apt-cacher-report.pl