Install Two Versions of PHP on Ubuntu Lucid 10.04 – PHP 5.3 and PHP 5.2

OK – so after my last long post on how to downgrade Ubuntu Lucid 10.04 back to PHP 5.2 (which happened to be version 5.2.10), I wasn’t too satisfied with this plan.  After all, what is the point of going backwards for every site if you don’t need to?

Well, after I reverted back to PHP 5.2 on Ubuntu 10.04 Lucid, I wanted to research this topic a bit further.

At last, after reading several HOW-TO pages, I found out how to install both PHP 5.2 and 5.3 on Ubuntu Lucid 10.04 so that they both work at the SAME TIME!  Yes, the same time!  No need to remove one and start the other when needing to use it.

The catch is – one of them has to be run in fastcgi mode.  No big deal though.

So, if you followed my other post on how to revert back to PHP 5.2 on Ubuntu 10.04, you will need to do the following before continuing with the next instructions.  If you did not revert back to PHP 5.2, you do not need this.

1. Simply go into the /etc/apt/sources.list.d folder and delete the file you created that pulled packages from the Karmic repositories.  My file was just called karmic.list.
2. Now go into the /etc/apt/preferences.d folder and delete the file you created which listed all of the packages you wanted pulled only from Karmic.  Mine was just called "php" so I deleted this file.
3. Now, just perform the following command and it will download PHP 5.3.2 included with Ubuntu Lucid 10.04 and any other updates that you may need:

sudo apt-get update && sudo apt-get upgrade

Confirm the prompt and let apt install all of those updates!


OK, so if you did not follow a guide on how to downgrade to PHP 5.2 in Ubuntu 10.04 Lucid, you start here!

At the time of this writing, the newest version of PHP 5.2 is PHP 5.2.14.  Heck, even that is an upgrade over the Karmic version of PHP 5.2.10 that was installed previously!

Now do not let these instructions get the better of you.  They are pretty simple to follow.  You have to install some development packages to allow PHP 5.2.14 to build and be used.  I tried to remove these after I finished compiling and installing PHP 5.2.14, but then I began receiving Internal Server Error messages on the one site that needs the old version.

While you may not need all of these packages below, I did in order to replicate the same modules and configuration that I was using.  Note that this DOES NOT include imagick – which I could not get installed.  But, I haven’t noticed any problems with the site without this installed.

First, install the needed dependencies.  Some of these may already be installed:

sudo apt-get install curl libcurl4-openssl-dev libcurl3 libcurl3-gnutls zlib1g zlib1g-dev libzip-dev libzip1 libxml2 libxml2-dev libsnmp-base libsnmp15 libsnmp-dev libjpeg62 libjpeg62-dev libpng12-0 libpng12-dev libfreetype6 libfreetype6-dev libbz2-dev libmcrypt-dev libmcrypt4 libc-client2007e-dev libmysqlclient15-dev

This will install all of the dependencies you need to install PHP 5.2.14 on Ubuntu Lucid 10.04.

OK, now that those are installed, you need to head over to the php.net downloads page and grab the version of PHP you want to use.  Make sure you download the TAR.GZ version and NOT the TAR.BZ2 if you are following my tutorial.
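As a sketch of that download step (the museum.php.net archive host is my assumption – php.net keeps old releases there; verify the link and checksum on php.net before using it):

```shell
# Build the download URL for the release you want, then fetch it with
# wget. The archive host is an assumption -- check php.net first.
PHP_VER=5.2.14
PHP_URL="http://museum.php.net/php5/php-${PHP_VER}.tar.gz"
echo "$PHP_URL"
# wget "$PHP_URL"
```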

After you have it downloaded, you need to move it to the proper directory.  Because I didn’t want all kinds of files scattered throughout the server drive, I installed everything in the /opt location.  So, move your newly-downloaded file to that directory:

sudo mv php-5* /opt

OK, so that is moved. Now you need to go to that directory and untar the file:

cd /opt
sudo tar xvf php-5*

That should then create a new directory. If you are installing PHP 5.2.14, it will be the php-5.2.14 directory. So change the directory (cd) into that folder.

cd php-5.2.*

OPTIONAL:  If you want to install the Suhosin Hardening Patch (which is applied by default to Ubuntu’s PHP packages), you need to download this patch file:

sudo wget http://download.suhosin.org/suhosin-patch-5.2.14-0.9.7.patch.gz
sudo gunzip suhosin*
sudo patch -p1 -i suhosin*

Now, this is the part where most people take a big gulp and think they cannot compile PHP from source. This is how you compile PHP 5.2.14 from source on Ubuntu 10.04 Lucid. Of course, it really helps when you know what to type in! So, copy or type in the following large command. Copying from this site will eliminate any typos or errors:

sudo ./configure --prefix=/opt/php5.2.14 --with-mysql=/usr --with-mysqli=/usr/bin/mysql_config --with-curl=/usr/bin --with-curlwrappers --with-openssl-dir=/usr --with-zlib-dir=/usr --enable-mbstring --with-pdo-mysql=/usr --with-ldap --with-xmlrpc --with-iconv-dir=/usr --with-snmp=/usr --enable-exif --enable-calendar --with-bz2=/usr --with-mcrypt=/usr --with-gd --with-jpeg-dir=/usr --with-png-dir=/usr --with-freetype-dir=/usr --enable-zip --with-pear --enable-cli --enable-fastcgi --enable-discard-path --enable-force-cgi-redirect --with-imap --with-imap-ssl --enable-bcmath --enable-ctype --enable-dba --enable-dom --enable-filter --enable-ftp --with-gettext --enable-hash --enable-json --enable-libxml --with-mime-magic=/usr/share/file/magic.mime --with-pcre-regex --enable-sockets --enable-soap --with-kerberos --enable-shmop --enable-wddx --with-openssl --without-sqlite

That is a HUGE line isn’t it? Well, basically what that is doing is telling the PHP installer what kinds of modules and functions you want installed. Now, you may not need some of those to be installed, but it really isn’t going to hurt if you do – just some extra hard drive space.

Now, run the following commands – they will build and install PHP for you into the /opt/php5.2.14 directory (notice how that is specified in the huge command above where it says "--prefix=/opt/php5.2.14"; adding that option ensures all files are placed there instead of scattered throughout your disk drive):

sudo make
sudo make install

Believe it or not, you have now compiled and installed PHP 5.2.14 on Ubuntu 10.04!  One more step.  I would suggest copying your php.ini file from the /etc/php5 directory (the one for PHP 5.3.2), since you will also need a configuration file for PHP 5.2.14.  That way you can change the configuration for each PHP install separately:

sudo cp /etc/php5/apache2/php.ini /opt/php5.2.14/php.ini

Alright, you are not done yet! Now that you have PHP 5.2 compiled, you need to set up the Apache web server (I’m assuming you already have this installed too) to use both PHP 5.3 and PHP 5.2 simultaneously. To do so, issue this command – which enables the “actions” module for Apache:

sudo a2enmod actions

You need to run that because the file you are about to create uses the “Action” directive. Let’s make that file now. Copy what I have below and create a new file in the /etc/apache2 directory called “php5-2-14.conf” – the name makes it easy to see that this file is the configuration that allows PHP 5.2 and PHP 5.3 to both be installed on Ubuntu 10.04.

ScriptAlias /cgi-bin-php/ "/opt/php5.2.14/bin/"
SetEnv PHP_INI_SCAN_DIR "/opt/php5.2.14/"
AddHandler php-script .php
Action php-script /cgi-bin-php/php-cgi
<FilesMatch "\.php$">
SetHandler php-script
</FilesMatch>

Now, save that file. What does that file do? Well, it points an alias of “cgi-bin-php” to the location of your newly installed PHP 5.2.14 – “/opt/php5.2.14/bin/”. Then it tells PHP where to find the php.ini file (which you copied to that directory per the above instructions, right?). Then there is the “FilesMatch” section, which matches all files that end with a .php extension and tells Apache (via SetHandler) to run them through the php-script Action defined just above it.

OK, enough with providing information on what the file does. Believe it or not, you have two more things to do now!

Go into your /etc/apache2/httpd.conf file (or wherever you keep your VirtualHost settings) and add the following line somewhere within the VirtualHost. This tells Apache to load the file you just made and use its settings, so the VirtualHost runs PHP 5.2.14. Only add this line to the VirtualHost entries that should use the older version of PHP (the one you just installed). If you want the other VirtualHost domains to use the PHP 5.3.2 that ships with Ubuntu 10.04, do not add this line!

Include /etc/apache2/php5-2-14.conf
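For context, a VirtualHost using the old PHP might look like this (the domain and paths are placeholders of mine, not from the original setup):

```
<VirtualHost *:80>
    ServerName legacy.example.com
    DocumentRoot /var/www/legacy
    # This site needs PHP 5.2.14, so pull in the fastcgi configuration:
    Include /etc/apache2/php5-2-14.conf
</VirtualHost>
```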

Now, restart apache and you should be all done!

sudo apache2ctl restart

And there it is! You have just installed both PHP 5.2 and PHP 5.3 on Ubuntu Lucid 10.04 and they will both work at the SAME time! Now, if you so desire, you may want to delete the PHP TAR file that you downloaded and remove the directory that was untarred to save space. These two items take up about 120 MB, while the compiled PHP only takes about 60 MB.

sudo rm /opt/php-5.2.14.tar.gz
sudo rm -r /opt/php-5.2.14

Export Groups and Group Types from Active Directory

A quick post on something I had to do at work today.  Luckily it didn’t take too long to research, but the information wasn’t all in one place.

So, are you looking to get a full export of groups and group types from Active Directory?  Look no further!  This is what you need to do in order to get all of the group names and group types from active directory – whether they be built-in groups, domain local security groups, global security groups, universal security groups, domain local distribution groups, global distribution groups, or universal distribution groups.

First of all, you need a Microsoft utility called CSVDE.EXE.  This is a program that will allow you to query the Active Directory LDAP database and return results you need.

Next, here is the command you will use.  If you have multiple domains, you will need to run this query on one server in each of your Active Directory domains to get all groups in Active Directory for each domain:

csvde.exe -s <server> -f <export-filename> -l groupType,name -r "(objectCategory=group)"

This will then export the file to a CSV file with two fields – the name of the group and the type of group. Now, when you open the file, you are going to see a bunch of numbers in the groupType column. Here is the translation for each of those numbers:

Built-In: -2147483643
Domain Local Security: -2147483644
Global Security: -2147483646
Universal Security: -2147483640
Domain Local Distribution: 4
Global Distribution: 2
Universal Distribution: 8

In Excel, you can quickly replace each of those by hitting Ctrl+H (which opens the Replace window). Start at the top with the Built-In numbers above and then just replace it with the text you want to change it to!
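If you would rather script the translation than do seven find-and-replace passes, a small shell helper (my own sketch, not part of the original workflow) can map the raw values:

```shell
# Map a raw AD groupType value to its friendly name, using the
# translation table above.
translate_group_type() {
  case "$1" in
    -2147483643) echo "Built-In" ;;
    -2147483644) echo "Domain Local Security" ;;
    -2147483646) echo "Global Security" ;;
    -2147483640) echo "Universal Security" ;;
    4) echo "Domain Local Distribution" ;;
    2) echo "Global Distribution" ;;
    8) echo "Universal Distribution" ;;
    *) echo "Unknown ($1)" ;;
  esac
}

translate_group_type -2147483646   # prints: Global Security
```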

Downgrade From PHP 5.3 to PHP 5.2 on Ubuntu Lucid 10.04

After the Decatur server was upgraded to the newest long-term-support release of Ubuntu 10.04 (Lucid), there were a few issues.  This is because Ubuntu 10.04 Lucid ships with PHP 5.3.2.  To me, this was good news, because I would prefer to stay current on these kinds of things – older versions facing the Internet can be prone to security issues.  Before the upgrade, I was using PHP 5.2.4 on the servers.

The graphing utility that I use to chart network status, e-mail status, and other server status items had some graph issues.  That is because some “Deprecated” errors were interfering with the utility’s output.  That was easily fixed by turning off error reporting in the command-line interface (cli) version of the php.ini file found in the /etc/php5/cli folder.

Then I came across another problem: Joomla 1.0.x sites.  I have several Joomla 1.0.15 sites running on my systems.  Well, unfortunately, Joomla no longer supports the 1.0 version.  This is going to be pretty hairy if there are problems in the future.

The problems with the Joomla 1.0.15 sites were that the Contacts page would be completely blank.  If you tried to go into the Administrator portal and look at the contacts, that was a white page as well.  In addition, the content area of all of the web pages was blank!  The modules and other aspects of the template would load fine, but the main body content was completely empty.

Searching through some forums on this issue, I was able to fix the Joomla 1.0.15 blank content problems by doing the following.

Open the "includes/Cache/Lite/Function.php" file.  Search for this line:

$arguments = func_get_args();

Now, you need to replace it with this code:

$arguments = func_get_args();
$numargs = func_num_args();
for ($i = 1; $i < $numargs; $i++) {
    $arguments[$i] = &$arguments[$i];
}

Now, one more update.

Open the "includes/vcard.class.php" file, and before line 38, you need to add this.  Note that the braces need to wrap code that is already there – which is lines 38 to 77:

if (!function_exists('quoted_printable_encode'))
{
/* lines 38 to 77 */
}

Your site should now work.  If you have difficulties with that, download the Joomla 1.0.15 patch for PHP 5.3 and above here.  The files are text files so rename them with a “.php” extension and then place them in the proper directories (as I outlined above):

Function.php
vcard.class.php

OK, so all of the Joomla 1.0.x sites were back up and running on PHP 5.3.2 after making those changes!

Well, the next morning, I saw a few e-mails about another website that was not working.  The main page and other areas worked, but the pages where users logged in, registered, or did other activities were completely blank!

I messed with this for hours upon hours and tried to fix what I could before it was time to quit.  The individual had a developer create the website for them – using the CakePHP Framework.  The version they used was old as the hills – version 1.2.1.8004.  Unfortunately, CakePHP does not support PHP 5.3 until the CakePHP 1.3 versions.  Still, there were ways to fix it, and as many other folks online posted, they got their sites working on PHP 5.3 with CakePHP 1.2 without trouble.

However, the pages that were not working were completely white – nothing; no debug information or help to be provided at all.  I did manage to get some errors out of the page by trying other methods – which were:

Deprecated: Assigning the return value of new by reference is deprecated in /<path>/core/cake/libs/inflector.php on line 131
Deprecated: Assigning the return value of new by reference is deprecated in /<path>/core/cake/libs/configure.php on line 136
Deprecated: Assigning the return value of new by reference is deprecated in /<path>/cake/libs/configure.php on line 226
Deprecated: Assigning the return value of new by reference is deprecated in /<path>/core/cake/libs/configure.php on line 911
Deprecated: Assigning the return value of new by reference is deprecated in /<path>/core/cake/libs/configure.php on line 951
Deprecated: Assigning the return value of new by reference is deprecated in /<path>/core/cake/libs/cache.php on line 71
Deprecated: Assigning the return value of new by reference is deprecated in /<path>/core/cake/libs/cache.php on line 155

So, trying to fix it like the other folks did with CakePHP 1.2 on PHP 5.3, I went into those files and replaced some text on those lines.  PHP 5.3 deprecates assigning the return value of new by reference.  So, on the lines above, I changed:

=& new

To this code:

= clone new

This completely got rid of the errors and there was nothing reported.  I even went into the config/core.php file and changed the debug mode to 1, 2, and 3 at different times to try to get something out of the site – but NOTHING would give!  It still continued to be a blank white page with no warnings or errors.
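To apply the same replacement across all of the flagged files at once, a sed helper like this could work (a sketch of my own – back up the originals first; the file paths come from the deprecation notices above):

```shell
# Replace "=& new" with "= clone new" in a file, keeping a .bak backup.
fix_new_by_ref() {
  sed -i.bak 's/=& new/= clone new/g' "$1"
}

# e.g. fix_new_by_ref core/cake/libs/inflector.php
```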

So, I gave up.  The developers indicated that the site would not work if the PHP version was 5.3.2 or greater.

So, I then had no other choice but to downgrade Ubuntu 10.04 Lucid from PHP 5.3.2 to 5.2 – which happens to be the 5.2.10 that gets installed (hey, at least it is newer than 5.2.4 – but the build is still a year old).

So more work and stress later, I am back to PHP 5.2.10 on the server without any troubles (so far!).  Here is how PHP was downgraded from 5.3 to 5.2 on Ubuntu Lucid 10.04.

You first need to find out what PHP items you have installed – which can be done by:

sudo dpkg -l | grep "php"

It will list all of the PHP packages that are installed on your server. Write all of them down! You will need to remove all of those. So, do that here:

sudo apt-get remove php5 php-date php-http-request php-log php-mail-mime php-mimedecode php-net-socket php-net-url php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-imagick php5-imap php5-mcrypt php5-mysql php5-snmp libapache2-mod-php5 libphp-adodb

Your list WILL be different from what I have above.  Also, when you remove those items, apt may throw in a few other packages that need to be removed due to dependencies.  Ensure that you write all of them down so you can re-install them after this process.
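Rather than writing the list down by hand, you could capture it to a file (a sketch of mine; column 2 of the dpkg -l listing is the package name, and "ii" marks packages that are actually installed):

```shell
# Save the installed PHP package names for re-installation later.
# Guarded so it is a no-op on machines without dpkg.
if command -v dpkg >/dev/null 2>&1; then
  dpkg -l | awk '/^ii/ && /php/ {print $2}' > ~/php-packages.txt
fi
```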

Good, now you have all of those removed. Now, a few more steps to go. First, you need to make a copy of your /etc/apt/sources.list file and point it to Karmic – which was the previous release that has PHP 5.2.10 on it. To do so, simply issue this command at the prompt:

sudo sed s/lucid/karmic/g /etc/apt/sources.list | sudo tee /etc/apt/sources.list.d/karmic.list

The above command makes a copy of your /etc/apt/sources.list, replaces all “lucid” occurrences with “karmic”, and places the result in the /etc/apt/sources.list.d folder as karmic.list.  Now, do a:

sudo apt-get update

This will refresh apt and download all of the package lists for the Ubuntu Karmic repositories (since apt is reading that new file now).

OK – now you need to create one file that tells the package manager NOT to upgrade these packages and to only use the versions from the Karmic release.  You can name this file whatever you want and place it in the /etc/apt/preferences.d folder (I just called mine php).  In this file, you need to list EVERY package that you want downloaded from the Karmic repositories and NEVER from the Lucid repositories.  Below is my list.  You should list EVERY package that you just removed.

Package: php-date
Pin: release a=karmic
Pin-Priority: 991

Package: php-http-request
Pin: release a=karmic
Pin-Priority: 991

Package: php-log
Pin: release a=karmic
Pin-Priority: 991

Package: php-mail-mime
Pin: release a=karmic
Pin-Priority: 991

Package: php-mimedecode
Pin: release a=karmic
Pin-Priority: 991

Package: php-net-socket
Pin: release a=karmic
Pin-Priority: 991

Package: php-net-url
Pin: release a=karmic
Pin-Priority: 991

Package: php-pear
Pin: release a=karmic
Pin-Priority: 991

Package: php5
Pin: release a=karmic
Pin-Priority: 991

Package: php5-cli
Pin: release a=karmic
Pin-Priority: 991

Package: php5-common
Pin: release a=karmic
Pin-Priority: 991

Package: php5-curl
Pin: release a=karmic
Pin-Priority: 991

Package: php5-dev
Pin: release a=karmic
Pin-Priority: 991

Package: php5-gd
Pin: release a=karmic
Pin-Priority: 991

Package: php5-imagick
Pin: release a=karmic
Pin-Priority: 991

Package: php5-imap
Pin: release a=karmic
Pin-Priority: 991

Package: php5-mcrypt
Pin: release a=karmic
Pin-Priority: 991

Package: php5-mysql
Pin: release a=karmic
Pin-Priority: 991

Package: php5-snmp
Pin: release a=karmic
Pin-Priority: 991

Package: libapache2-mod-php5
Pin: release a=karmic
Pin-Priority: 991

Package: libphp-adodb
Pin: release a=karmic
Pin-Priority: 991

Now save that file. Two more commands. First, install PHP 5.2.10 from the Karmic repositories. Again, install only the packages that you removed with the earlier remove command.

sudo apt-get install php5 php-date php-http-request php-log php-mail-mime php-mimedecode php-net-socket php-net-url php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-imagick php5-imap php5-mcrypt php5-mysql php5-snmp libapache2-mod-php5 libphp-adodb

Now, just restart Apache!

sudo apache2ctl restart

Apache should now be using PHP 5.2.10. After running this, the website that was having all the problems jumped back to life and was fixed.

Cannot Boot After Installing DMRAID

As I am getting ready to upgrade my hosting servers, I’ve been doing some research on how to get a mirrored RAID-1 array set back up.

Back when the Paris server was the main server, it was built with two hard drives in a RAID-1 array using the BIOS RAID – on an nVidia RAID motherboard.

Well, this kind of RAID array is not a hardware RAID array – it is a software RAID array.  It does have firmware in the BIOS that provides an nVidia RAID Configuration Utility, reached by pressing F10 after the BIOS screen is done.  That is handy – and that is where you can set up your RAID array.

Of course, the drivers for this are made for Windows – the package is called nVidia MediaShield.  You have to install these drivers to make use of the RAID array, because this type of onboard nVidia RAID requires both the software drivers and the functions in the BIOS.  Hence, most people call this a FakeRAID or SoftRAID setup.  Unless you shell out a large sum of money for a real hardware RAID card with onboard processing and memory, you are stuck with this low-cost alternative.

FakeRAID setups use the main processor on the motherboard for most of their work.  That is fine in today’s world of 3.0 GHz dual-core and quad-core processors.  It is also fine if you are running Windows with nVidia MediaShield installed, where you can manage your array and ensure it is in good health.

But, it is a little different in the Linux world.  The issues below originally happened while using Ubuntu 8.04 – which had the older 2006 version of “dmraid” (you can check your version of dmraid by typing ‘dmraid --version’ at a terminal).

Well, I already have a full server install on my PC.  All of the documentation on how to set up a FakeRAID array in Ubuntu is for folks that want to install Ubuntu fresh – not after the fact like what I need done.  I’m sure I’ll have another post on this issue soon once my new hardware arrives and I go through the process – as I have a fairly good idea how I can set up a FakeRAID on Ubuntu after installing.

So, I just wanted to see what ‘dmraid’ was all about so I did the usual ‘sudo apt-get install dmraid’ at the command prompt.

Very strange – after I installed it, I issued a ‘dmraid -r’ command and it showed that I had one drive in a mirror (the only drive in the system currently).  Hmm – I never set up a mirror on this drive, so how could this be possible?  Well, when I set up the Decatur server, I took the mirrored drive from the Paris server and put it in the Decatur server – which broke the RAID-1 array on the Paris server.  Because of that, the RAID metadata was probably still on the hard drive that I put in the Decatur server.

Well, I didn’t think this would be a problem.  I was cleaning the server, getting ready for the new RAID cage and hard drives to show up, and shut the server down.  Upon reboot, the server would not boot!  It would go to the Grub menu, dmraid would show an ERROR – degraded array – and when Grub tried to continue the boot process, it failed with ‘device or resource busy’.

I tried to search all that I could on this issue but there just wasn’t a solution.  In my Grub menu.lst file, the boot sequence is done by using the UUID of the hard drive.  So the Grub menu.lst was not changed after installing dmraid – but the computer still would not boot.

I went into my BIOS and checked things.  Sure enough, I had the SATA in Normal mode and not in RAID mode.  I was confused.

So, I decided to go ahead and change the mode to RAID and ensure that SATA-1 was enabled for use in RAID.  Upon reboot, I got the nVidia RAID Configuration notice to press F10, along with blinking red letters indicating my RAID array was degraded.  Hmm.. maybe I am on to something now.

In the nVidia RAID Configuration Utility, it showed that the drive was part of a mirror.  Again – this may have been caused from this drive being used in the Paris server in a RAID array and it still had the RAID meta-data on the drive.  So, I deleted the RAID array but left the MBR and data intact and rebooted.

Success!  Now when Grub loaded, dmraid said ‘NO RAID DISKS’ and then the server continued to boot up!

So what happened here?  It seems that dmraid will take over any disks with RAID metadata and assign them to a /dev/mapper/……. location.  Since dmraid was loaded before Grub started the full boot-up process, the hard drive was already in use – hence the ‘device or resource busy’ error message.
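If you would rather clear the stale metadata from inside Linux instead of the BIOS utility, dmraid itself can do it (a sketch, not what I did: -r lists the disks dmraid claims, and adding -E erases the on-disk metadata, so triple-check the device first):

```shell
# Guarded so it is a no-op on machines without dmraid installed.
if command -v dmraid >/dev/null 2>&1; then
  sudo dmraid -r           # list the disks dmraid claims as RAID members
  # sudo dmraid -r -E      # erase the stale metadata (destructive!)
fi
```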

Today I upgraded the Decatur server from Ubuntu 8.04 to Lucid 10.04.  While there were a few things that needed to be fixed (like websites running Joomla 1.0.x versions, because of incompatibilities with PHP 5.3) and other small configuration changes here and there, the upgrade process went smoothly.  And Lucid 10.04 includes the most updated version of dmraid – version 1.0.0.rc16 (2009.09.16).  I’ve heard that the old version I was running had problems trying to rebuild RAID arrays using the nVidia FakeRAID, and this issue is supposedly fixed in the new version – although I’ve not been able to confirm that from anyone.

While a hardware RAID solution is by far the best option, I’m not going to throw $100+ into a RAID card for each server.  I’m just hoping for the best using FakeRAID in Ubuntu, and hoping that all of these little glitches and problems have been taken care of.  The whole point of using a RAID-1 array is to have that backup mirror.  But if you cannot rebuild the mirror after a drive fails, what is the point?  At least it will protect you from a single drive failure.

How To Set Up Exim to Allow User-Customized Spam Threshold Settings

Over the past month, I have been working to write my own panel-like application for clients that use my hosting services.  This “one-stop-shop” portal login will allow users to create/edit/delete accounts for their domain, access the file manager for their website, create/delete a database or access their database if they have one created, set up mail forwarding for e-mail addresses to their own mailbox or to an external address, and many other features.

One thing that I had a very hard time figuring out was how to allow users to modify their own spam threshold settings.  Now, this works if you use Spamassassin with Exim4.  In addition, you will need some kind of database with at least three new fields added.  Since I use Horde GroupWare, I simply added three fields to the "horde_users" table, which contain the following:

  • Whether the user has opted to have spam checking on or off
  • The user-specified threshold at which the subject is tagged as spam
  • The user-specified threshold at which the message is rejected to prevent delivery

I was trying to figure it out by asking on the exim-users mailing list, but the responses I received indicated it would be quite difficult to do.  Scouring the Internet, I finally uncovered a solution that works!

The issue is this – when the mail server runs the acl_check_data routine, you cannot use the $local_part and $domain variables – which would hold the recipient’s e-mail address.  However, you can use them in the acl_check_rcpt routine, because it runs for each recipient.  The acl_check_data routine runs afterwards – just before the mail is delivered to the user’s inbox.  The reason the variables don’t work in acl_check_data is that there may be several recipients for the message by that point.

So, the trick is to check for the user-specified spam settings in the acl_check_rcpt routine.  However, there is another issue with this.  If a message is sent to multiple recipients and they have different user-specified settings, you must defer the message for the mismatched recipients.  As long as the first recipient has the same settings as the second, third, and other recipients, the mail will be delivered to all of them at once.  But for recipients whose settings differ from the first recipient’s, the message is deferred, and the sending mail server will retry just those recipients within its retry time period.

OK, so here is the code that was used.  This was added to the acl_check_rcpt routine.  Note that order IS IMPORTANT when it comes to adding this to the acl_check_rcpt routine – so ensure that you put this entire block of code in the correct sequence to your other ACLs.

# Lines below set the recipient e-mail address in acl_m0
require set acl_m0 = ${local_part}@${domain}
require set acl_m7 = ${lookup mysql{SELECT login FROM aliases WHERE alias="${acl_m0}"}}
require set acl_m0 = ${if !eq{$acl_m7}{}{$acl_m7}{$acl_m0}}

# Lines below get the spam_flag setting from the user for checking
require set acl_m1 = ${lookup mysql{SELECT spam_flag FROM horde_users WHERE user_uid="${acl_m0}"}}

defer
message = Spam Threshold Mismatch
condition = ${if and{{def:acl_m2}{!={$acl_m1}{$acl_m2}}}}

require set acl_m2 = $acl_m1

# Lines below get the spam_delete setting from the user for checking
require set acl_m3 = ${lookup mysql{SELECT spam_delete FROM horde_users WHERE user_uid="${acl_m0}"}}

defer
message = Spam Delete Mismatch
condition = ${if and{{def:acl_m4}{!={$acl_m3}{$acl_m4}}}}

require set acl_m4 = $acl_m3

# Lines below will automatically accept if the user has spam filtering disabled
require set acl_m5 = ${lookup mysql{SELECT spam_enable FROM horde_users WHERE user_uid="${acl_m0}"}}

defer
message = Spam Checking Mismatch
condition = ${if and{{def:acl_m6}{!eq{$acl_m5}{$acl_m6}}}}

require set acl_m6 = $acl_m5

# Lastly, set the values to the defaults if the values cannot be looked up.
require set acl_m2 = ${if eq{$acl_m2}{}{45}{$acl_m2}}
require set acl_m4 = ${if eq{$acl_m4}{}{53}{$acl_m4}}
require set acl_m6 = ${if eq{$acl_m6}{}{Y}{$acl_m6}}

accept
condition = ${if eq{$acl_m6}{N}}

OK, now that you have the above added, hopefully I can explain what is going on.

In the first three lines (the three "require set" lines), the acl_m0 variable is set to the actual e-mail address that the message should be delivered to.  For instance, if a user set up the e-mail account me@bob.com to be forwarded to their real mailbox of bob@bob.com, this ensures we grab bob@bob.com – because the "user_uid" field in the "horde_users" table is bob@bob.com, not me@bob.com.  If the REAL e-mail address isn’t captured, the code above will not match any row in the database (because there won’t be a row in the "horde_users" table with a "user_uid" of me@bob.com).

So, acl_m0 is first set to the e-mail address that the sender wants to send to.  Next, acl_m7 is a variable that will hold the alias e-mail account – for instance, me@bob.com.  A MySQL lookup is then done on the "aliases" table (which contains two fields – login and alias).  It captures the REAL e-mail address – which is in the "login" field – by looking up the alias.  Now realize that this lookup may come back empty.  If the result is empty, that means the address the sender wants to send to already IS the real user account e-mail address (the login ID that is in the user_uid field).

So the next line will then set acl_m0 to the REAL e-mail address (the "login" field) IF and ONLY IF acl_m7 is not empty.

Moving on to the next part.  Variable acl_m1 is then set to the user’s subject flagging threshold.  In this case, the field in the "horde_users" table is called "spam_flag".  The lookup finds the row where the "user_uid" field matches the user’s login (the REAL e-mail address).

Now for the two lines under the “defer” ACL.  The first one is the message – which simply tells the sending mail server that the message has been deferred, so it needs to try again later.  Why do this?  Well, this brings me to the condition.  The condition basically says that IF acl_m2 is defined and acl_m2 and acl_m1 do NOT match, then defer.  This is what defers the message when there is more than one recipient and the recipients do not all have the same spam subject-tagging threshold set.  If there are 50 recipients and all of them have the exact same settings, all 50 will get the message at the same time.  If all except two have the exact same settings as the first user, then everyone except those two will get the message delivered at the same time.  Those two will be “deferred”, so the sending mail server will retry – just for those two users – after its retry period (usually five minutes, or maybe 15 minutes on some systems).

Now to the next line of code – the “require set acl_m2 = $acl_m1”.  This line copies the threshold value held in acl_m1 into acl_m2.  This is how the defer code works: the first recipient WILL get past the defer check – and therefore acl_m2 will be set to their subject-tagging threshold.  Subsequent recipients go through the same process – but if acl_m2 (the previous user’s threshold) does not match acl_m1 (the current user’s threshold), then of course – the “defer” ACL runs.
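Putting those two pieces together, the defer-then-set pair described above would look something like this.  Again, a sketch under the naming used in this post, not necessarily the exact lines in the configuration:

```
# Sketch only: defer when this recipient's threshold differs from the previous one
defer
  message   = Temporary problem, please try again later
  condition = ${if and{{def:acl_m2}{!eq{$acl_m2}{$acl_m1}}}}

# First recipient reaches this line and records their threshold
require set acl_m2 = $acl_m1
```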

Now all of that instruction above basically is done three times.  The first time sets the subject tagging threshold (uses variables acl_m1 and acl_m2).  The second time will set the e-mail reject threshold (uses variables acl_m3 and acl_m4).  And lastly, the third bit of code will check to see if the user even has spam checking turned on (uses variables acl_m5 and acl_m6).

After that is run – as noted in the code above – I then set default values IF the user data cannot be looked up.  When would the lookup fail?  Well, if a forwarding account points to another provider.  Using the example with me@bob.com – if that was a forwarding address to me@yahoo.com, clearly I do not host Yahoo, so I won’t have a REAL user in my system for “me@yahoo.com”; the me@bob.com address is nothing but a forwarder to the Yahoo account.  This is why I set default values; the defaults are what I have found to be good, reliable settings for SpamAssassin (at least on my system).  Each of the “require set acl_mX” lines checks the data.  If the acl_mX variable is empty, it is changed to the default value.  For subject tagging, that is 45.  For rejecting, that is 53.  (Remember that Exim’s $spam_score_int holds the spam score multiplied by ten, so 45 and 53 correspond to SpamAssassin scores of 4.5 and 5.3.)  For spam checking, it is set to “Y” – which means checking is enabled.

And now lastly – the “accept” code at the bottom.  This code runs IF and only IF the user has decided that they do not want spam checking enabled.  So if the acl_m6 variable is equal to “N”, the mail will be automatically accepted in the acl_check_rcpt routine.  This is set because I have “deny” conditions below the above configuration – but I don’t want those to run if the user has spam checking disabled.


Setting the acl_check_data Routine to Use The User-Specified Spam Values in Exim

OK, now you need to use those variables in the acl_check_data routine where Spamassassin actually checks the e-mail.  Again, ensure you have these set in the proper location because order is important in each of the routines.

# Will accept if the user has disabled spam checking
accept
  condition = ${if eq{$acl_m6}{N}}

# Deny/Check for Spam
warn
  spam      = Debian-exim:true
  message   = X-Spam_score:  $spam_score\n\
              X-Spam_report: $spam_report
  condition = ${if <{$spam_score_int}{$acl_m4}}

warn
  spam      = Debian-exim:true
  message   = Subject: SPAM SCORE: $spam_score $h_Subject
  condition = ${if >{$spam_score_int}{$acl_m2}}

deny
  spam      = Debian-exim:true
  message   = E-mail cannot be delivered: $spam_score spam points. $h_Subject
  condition = ${if >{$spam_score_int}{$acl_m4}}

In the code above, it first checks to see if the user has disabled spam-checking.  Again, if variable acl_m6 is equal to “N” (spam-checking disabled), it will automatically accept the message and skip the other three conditions.

The next condition – the “warn” condition – will add the spam score and spam report to the header of the e-mail message if the spam score is less than the reject threshold.  Now, the user-specified spam reject threshold is stored in variable “acl_m4” – hence why you see that in the code.

The next “warn” condition is set to only tag the subject line.  It will add “SPAM SCORE: <score>” to the subject line of the e-mail.  Now, “acl_m2” was set to the subject-tagging threshold – so this is why the condition contains that.  If the spam score is higher than the “acl_m2” variable, the e-mail subject will be tagged.

Lastly – the “deny” condition.  This simply rejects the e-mail at SMTP time to prevent the message from even getting to the user’s mailbox.  In the condition, you see that “acl_m4” is specified.  Again, this variable holds the user-specified reject threshold.  So if the spam score is higher than the “acl_m4” variable, the message is rejected and will not be delivered.

So there you have it.  Not a very easy solution to user-specified spam settings in Exim, but it does work.  The only thing that may cause trouble is the “defer” condition when the spam settings are not the same for each of the users the message should be sent to.  If there are ten recipients and all ten of them have different settings, the message will only be delivered one recipient at a time, and the sending mail server will be deferred nine times until all users get the message.  Deferring nine times may take a substantial period of time depending upon the sending mail server’s retry settings.

Migrating Links from Joomla 1.0 to Joomla 1.5

Recently I upgraded my Joomla installation from the 1.0 to the 1.5 release.  Yes, late I know – but the site just “worked”, and there is no reason to fix what isn’t broken, right?

During the upgrade, I used the Migrator component – which takes a snapshot of all your content items, menu items, modules, and other “main features” from Joomla.  However, it does not migrate over third-party plugins.

On my site, I have two plugins that are used.  One of them is to allow comments to content items (like this one) and the other is the Zoom Media Gallery.

After installing Joomla 1.5, I re-installed the old comment plugin – and then did an export of those specific tables for that plugin – and then imported them in the new database.  Perfect – worked like a dream.  Just ensure you turn on the Legacy plugin if you are using old templates or non-native components for Joomla 1.5.

I had a little more difficulty with Zoom Media Gallery.  I was able to install the component just fine – then exported the tables from the old database and imported them into the new – no problem there.  However, when I tried to view the gallery or even the configuration, it was nothing but a white screen.

Luckily someone made a “patch” for this.  I uninstalled the Zoom Media Gallery component and installed the patched version – which makes only minor changes to the original component – but it worked.  Re-imported the tables again – and I was in business!

So the site was converted over to Joomla 1.5 in a day.

Next problem – whoops!  I had posted many links on forums around the Internet using the old way that Joomla built URLs with the mod_rewrite Apache module.

It went from links like this:

  • http://www.bsntech.com/content/blogcategory/41/281/
  • http://www.bsntech.com/content/view/1301/281
  • http://www.bsntech.com/component/option,com_zoom/Itemid,261/
  • http://www.bsntech.com/component/option,com_zoom/Itemid,261/catid,2/

To links like this:

  • http://www.bsntech.com/bsntech-blog-mainmenu-321/computers-mainmenu-281
  • http://www.bsntech.com/bsntech-blog-mainmenu-321/computers-mainmenu-281/1301/
  • http://www.bsntech.com/index.php?option=com_zoom&Itemid=261
  • http://www.bsntech.com/picture-gallery-mainmenu-261?catid=2

See the big difference there?  Wow – I quickly made a backup of the new Joomla 1.5 installation, zipped it up, and reverted back to Joomla 1.0 until this problem was fixed.  Why?  I don’t want to lose my Google Page Ranks – and I certainly don’t want to lose all of the visitors on those forums where I posted direct-links to my blogs or specific content-items!  This would have been dreadful and would have left all of my users upset that the links no longer worked – and would reduce my visitors by at least half (if not more)!

What I needed was a Joomla 1.0 to Joomla 1.5 Link Converter.  Yeah – no such thing available.

Instead, I remembered that there was a file in the root directory called “.htaccess” – which holds the rules for the Apache mod_rewrite module.  So, this was the key to getting all of those old links to work and automatically redirect to the correct pages.

After probably 12 hours of attempting to work on this, I just opted to go to the Apache mod_rewrite documentation and look through it.

While not all of the pages are going to redirect using this method (unless you make a rule for every single page on your site), I only redirected the pages that are most used through search-engine hits and the like.

So here is the glory that was added to my .htaccess file.  It was added just under the line that says:

########## Begin – Joomla! core SEF Section

And for the code that was added to allow those links above to work:

RewriteRule ^content/blogcategory/51/301/$ /bsntech-blog-mainmenu-321/gardening-mainmenu-301 [R,L]
RewriteRule ^content/blogcategory/41/281/$ /bsntech-blog-mainmenu-321/computers-mainmenu-281 [R,L]
RewriteRule ^content/view/(.+)/301/$ /bsntech-blog-mainmenu-321/gardening-mainmenu-301/$1 [R,L]
RewriteRule ^content/view/(.+)/281/$ /bsntech-blog-mainmenu-321/computers-mainmenu-281/$1 [R,L]
RewriteRule ^component/option,com_zoom/Itemid,261/$ /picture-gallery-mainmenu-261 [R,L]
RewriteRule ^component/option,com_zoom/Itemid,261/catid,(.+)/$ /picture-gallery-mainmenu-261?catid=$1 [R,L]

After testing for a while, this fully took care of the most-used content on my site.

The first two lines above simply point directly to a blog category – such as when you click “Gardening Blog” or “Computer Blog” at the top of the main page.  With these two lines, it was an immediate conversion – there were not any dynamic numbers or anything to fill in.  You’ll notice that each says “^content……/$”.  The caret in front marks the beginning of the string and the $ sign marks the end.  It will only accept this exact input – any deviation from it and the redirect won’t fire.  In addition, at the very end, there is [R,L] – R means Redirect (so the address bar in a user’s browser changes to the correct link) – and L is for Last.  This tells the mod_rewrite module that if this rule matches, do not process any of the rules below it.

In the other lines, you’ll see a “(.+)” inserted.  This tells the module that any number of characters can appear in this portion of the URL, and whatever matches is stored in the $1 variable.  Hence – that is why there is a $1 at the end of the redirect link on those lines.  Now of course, if someone just types junk in that area, they will get a 404 – because it is passed straight through to the new link format.  You can’t enter the number 1500 if no content item with that ID exists.
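One refinement worth considering: a bare R flag issues a 302 (temporary) redirect by default.  Since these moves are permanent and the goal is to preserve search-engine standing, R=301 tells crawlers to transfer the old URL’s ranking to the new one.  For example, the content-view rule could be written:

```
# Same idea as the rules above, but with an explicit permanent redirect
RewriteRule ^content/view/(.+)/281/$ /bsntech-blog-mainmenu-321/computers-mainmenu-281/$1 [R=301,L]
```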

Well, that takes care of the conversion of links from Joomla 1.0 to Joomla 1.5.  The site is functioning well, and I believe it is very close to fully supporting the old Joomla 1.0 link style.

64 Bit rtl8185 Driver for Linux Issues

While I’ve written previously about some trouble I’ve had with the RealTek 8185 wireless card, I came across some additional issues this past weekend.

This past weekend I installed the newest version of Ubuntu – 10.04 Lucid onto the laptop.  The previous install of 9.10 Karmic was getting pretty sluggish and slow.  I’m a bit surprised that Ubuntu seems to slow down over time – because there isn’t a registry like there is in Windows.  But, it did and it was time to upgrade to the LTS (Long Term Support) version of Ubuntu anyways.

So I first downloaded the 64-bit version of Ubuntu 10.04 Lucid and installed it.  Immediately Ubuntu used the rtl8180 driver on the laptop – and it was stuck in an operating mode of 1 mbps.  Even if I was right next to the router, it still was only at 1 mbps.

I tried three different versions of the RealTek 8185 driver – none of them worked.  I had to look at the /var/log/messages log to find out that there were a lot of “unknown symbol” errors.  I tried both the Vista and XP drivers on RealTek’s site – and another one I found at a place like DriverGuide.  None of them worked.  I also tried the 32-bit version, and of course – I was then greeted with the “bad magic” error in the messages log.

Defeated, I moved back to 32-bit Ubuntu on the Gateway MT3422 laptop.  I had to install the Windows Wireless Drivers (ndiswrapper) GUI (because it makes it easier).

The Windows Wireless Drivers (ndiswrapper) can be installed by opening the Ubuntu Software Center from the menu and then typing in “ndis” in the search box.

Once installed, it will appear under the menu – System – Administration – Windows Wireless Drivers.

Once there, click to install the driver – which is the rtl8185.inf file.  As with my previous post on this issue, you can find the 32-bit drivers here.

After I installed the drivers, networks were being detected, I joined the wireless network, and everything was working fine.  Then, after I began installing updates, I noticed that the wireless kept disconnecting.  I checked the /var/log/messages log and found:

ndiswrapper (iw_set_auth:1602): invalid cmd 12

I started doing a search on it and wasn’t happy with what I saw.  Folks were indicating that there may have been some compilation issues with ndiswrapper causing this.  Ubuntu Lucid 10.04 comes with ndiswrapper 1.55.

Trying over and over again, the wireless card kept trying to reconnect to the wireless network.  Every now and then it would re-prompt me to enter the key, try again, and fail.  Sometimes it would connect – but just for only a few moments – and then disconnect again.

I thought – this was just working without any problem for a few hours before I began installing updates – what could have caused it?  I didn’t want to go back to Karmic 9.10.

I took a long shot and simply rebooted the wireless router.  Amazing – problem fixed.  The RealTek 8185 wireless card with the appropriate rtl8185 32-bit driver is now working just fine.

How to Remotely Shut Down Windows Computers

Something I’ve been working on lately is how to reduce carbon footprints and electric bills at work.  At work, I manage over 600 computers.  Even if each of these drew only 50 watts, 20 computers would draw a kilowatt (50 x 20 = 1,000 watts).  At 600 computers, that means the computers alone draw (600 / 20) 30 kilowatts.  This is 720 kilowatt-hours per day, or an average of 21,900 kilowatt-hours per month.

That is a lot of power!  Now, the question is – how often are all of these computers used?  Well, they are not used for at least eight hours per day (overnight hours when we are closed).  In addition, not all employees are using their computers.  Some come in during evening shifts – and in those cases, the computers can be off until they arrive to work.

After doing some number crunching based upon hours of operation, it is possible to reduce power consumption for these computers by more than 50%.  On weekend days, there are 11 hours when the facility is closed.  On weekdays, there are 7 hours when the facility is closed.  Out of 168 hours in a week, that is immediately 57 hours when the computers are not used – 34%.  Since the average start time for employees is about noon, this gives us another six hours a day during the weekdays.  The average start time on the weekends is 10 am – which gives another two hours per weekend day.  This adds up to an additional 34 hours – a total of 91 out of 168 hours in a week, or 54% idle time.
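The arithmetic above can be double-checked with a quick shell calculation (the 50-watt-per-PC figure is the post’s assumption, not a measurement):

```shell
# Figures taken from the post: 600 computers at an assumed 50 W each
watts_per_pc=50
pcs=600
kw=$(( watts_per_pc * pcs / 1000 ))    # total draw in kilowatts
kwh_per_day=$(( kw * 24 ))             # energy used per day
closed=$(( 7 * 5 + 11 * 2 ))           # hours closed per week: 5 weekdays + 2 weekend days
idle=$(( closed + 6 * 5 + 2 * 2 ))     # plus pre-shift idle hours
echo "${kw} kW, ${kwh_per_day} kWh/day, ${idle}/168 idle hours"
```

Running this prints 30 kW, 720 kWh/day, and 91 of 168 hours idle – matching the numbers above.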

I tried an experiment over a year ago where software was loaded on all of the computers.  The software would shut down a computer after two hours of non-use.  However, this method didn’t work all that well – it threw registry errors and other issues.  In addition, if a user locked their computer (which most individuals do), the program would not shut it down.  So while it had the nice effect of letting you shut down a computer after a period of non-use, it was only marginally effective.

After more research, I found a nice tool called psshutdown.exe (download here).  PSShutdown is a utility that allows you to remotely shut down computers.  However, you need to ensure that you have Administrator access on the computers you are going to remotely shut down.  You also have to ensure that the file-sharing service is installed (the default in Windows domains) and that the Windows Firewall is either off or allows file sharing through it.

PSShutdown connects to the admin$ share on the remote computer – so this is why sharing is needed and the Windows Firewall needs to allow for it.  The admin$ share points to the Windows folder on the remote computer.

PSShutdown only works with clients Windows XP or higher – it will not work with Windows 2000 or lower (95, 98, Me, NT 4).

Now that you have the utility, what can you do with it?  Well, you can call it one by one and shutdown or reboot the computers.  But the great thing is the ability to use it in batch files and automate it by using a Windows Scheduled Task.

Originally I made a batch file that would call “PSShutdown @textfile.txt”.  textfile.txt would be a list of computers that needed to be shut down – one computer on each line.  However – big problem with this, as I discovered.  Computers that are already off would dramatically slow down the process.  If PSShutdown couldn’t connect to one computer, it would take over a minute for the program to move on to the next one.  After looking at log files, I discovered that the script took six hours to complete!  Well, that won’t do, and that didn’t make a very big dent in the power bill.

Just recently I created a new script.  Ingenious I think.  This script will first try to ping the remote computer once.  If the remote computer replies, then the computer name is placed into another text file.  PSShutdown is then called to shutdown all computers in this text file.  How long does it take now?  Well, to go through 600 computers, it takes no more than ten minutes.  Yes, big difference – six hours versus ten minutes.

At this point, you are probably screaming – just show me the script!  I want it!  Well, alright.. here it is.

@ECHO OFF
del poweroff.txt
SetLocal EnableDelayedExpansion
FOR /F %%1 in (shutdown.txt) do (
    C:\Windows\System32\ping.exe -n 1 -w 100 %%1
    IF !ERRORLEVEL! == 0 echo %%1>> poweroff.txt
)
EndLocal
del shutdown.log
psshutdown @poweroff.txt -f >> shutdown.log

Let’s go through the script.

-> @ECHO OFF – This is used so that the script works without echoing the commands to a terminal

-> del poweroff.txt – poweroff.txt is the file populated with the names of the computers that answered the ping; deleting it first clears out the previous run

-> SetLocal EnableDelayedExpansion – This turns on delayed variable expansion, so that !ERRORLEVEL! is evaluated each time through the loop rather than being expanded once when the FOR block is first parsed

-> FOR /F %%1 in (shutdown.txt) do ( – This is the logic to start the loop.  The shutdown.txt file contains a list of all of the computers in the facility.  %%1 is the variable used to hold each computer name drawn from the shutdown.txt file.

-> C:\Windows\System32\ping.exe -n 1 -w 100 %%1 – This calls the ping program, sets the number of echo requests to 1, sets the reply timeout to 100 milliseconds (-w is the timeout, not the packet size), and then %%1 is filled in by the script with the computer name.  I tried to just use “ping.exe” instead of the full path, but the script simply would not work.

-> IF !ERRORLEVEL! == 0 echo %%1>> poweroff.txt – This line writes the computer name to the poweroff.txt file if the ping was successful.  If the ping is not successful, the computer name is not written.  Notice that there IS NOT a space between the %%1 and the >> marks – if you put a space there, a space is echoed into the poweroff.txt file after the computer name, which throws psshutdown.exe off.  (One caveat: if a computer name ends in a digit, cmd can mistake that trailing digit for a file-handle number in front of the >>.  Writing the redirection first – “>> poweroff.txt echo %%1” – avoids both problems.)

-> ) – closing of the FOR loop

-> EndLocal – Ends the special EnableDelayedExpansion module

-> del shutdown.log – this is the log file that was previously created with the last run.  I want it to be deleted so it doesn’t continue to get larger.

-> psshutdown @poweroff.txt -f >> shutdown.log – This is the actual meat of the shutdown procedure.  It calls the psshutdown program and feeds in the poweroff.txt file – which contains the computer names.  The -f switch tells psshutdown to force the computers to shut down – even if users have locked their PCs (Yay – fixes the problem of the other software I used).  Then the >> shutdown.log tells the command shell to append all of psshutdown’s output to the shutdown.log file.

So there it is.  A small bit of technology that can save huge on the power bill.  How much does it cost to set this up?  Nothing.  Free.  Zip.  Provided you are set up as an administrator on the computers and file sharing is turned on, the client computers do not need any kind of configuration changes or software installation.

You only need psshutdown.exe on the computer the script will run from, a text file listing all of the computers in your environment, the batch script made above (copy/paste it and save it with a .bat extension), and a Windows Scheduled Task to run it.
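For the scheduling piece, one way to create the task from the command line is with schtasks.  The task name, script path, and time below are placeholders – adjust them for your environment:

```
schtasks /create /tn "NightlyShutdown" /tr "C:\Scripts\shutdown.bat" /sc daily /st 22:00 /ru SYSTEM
```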

I hope others will find this useful – this script can shave countless kilowatt-hours off the power bill.

Creating a Linux Backup Solution on Ubuntu

Before today, my backup solution for my Ubuntu servers was pretty simple – two tar files; one for the web page folders and another for mailbox backups.  This was done by the following command:

rm -f /<location>/daily-www-backup.tgz
rm -f /<location>/daily-mailbox-backup.tgz
tar pzcf /<location>/daily-www-backup.tgz /<location-to-backup>
tar pzcf /<location>/daily-mailbox-backup.tgz /<location-to-backup>

However, there is a problem with this approach.  The above first deletes the backup files and then re-creates them each day – in essence, a “full backup” every day.  But what happens if a file is overwritten and it isn’t noticed until a few days later?  Whoops!  Too late now.

So I began doing a little research on how to perform differential backups in Ubuntu.  I came across “dar” – a program that will do both a full backup and a differential backup based upon the full backup.

I also installed Kdar on the server – a KDE GUI front-end for dar.  It seems this is no longer in the archives after the Ubuntu Dapper release, so in order to get Kdar on my current systems, I had to put the following line at the bottom of the /etc/apt/sources.list file:

deb http://gb.archive.ubuntu.com/ubuntu dapper universe

I then performed the update so this new location was cataloged:

sudo apt-get update

Then I installed kdar and dar:

sudo apt-get install kdar dar

Afterwards, I opened up kdar (I had to run it as root, as well, to back up files that my user account didn’t have access to) and set up a backup job.  It gave me the option to export the dar command to a shell script, which looked like the following:

dar -c "/<location>/SundayFullBackup" -R "/home/" -w -D -y -m 150 -P "Folder1" -P "Folder2" -P "Folder3" -P "Folder4"

Here is how the command works:

-c – This option tells dar the name of the file that will contain the backup
-R – This option tells dar the location that should be backed up
-w – This option tells dar not to warn when overwriting files
-D – This option tells dar to store excluded directories as empty directories in the backup file (see -P for excluded directories)
-y – This option tells dar to use the bzip2 compression technique (instead of -z which uses gzip; bzip compresses more)
-m 150 – This option tells dar not to compress files less than 150 bytes in size
-P – This option tells dar to exclude a directory from the archive.  In my case above (which I’ve changed the folder names), there are four folders that I’ve excluded from the backup

By using the -P option, I could back up both the mailboxes and the web data at once, instead of having two backup files and two separate processes.

With this command over the tar command, I saved about 17 megabytes of space.  Tar uses gzip compression when given the “z” option.  The two combined files using tar were 880 megabytes; the one file made by dar is 863 megabytes.  While this isn’t much of a savings, it still is an improvement over tar.

Another improvement over tar (and the main reason I installed the Kdar GUI) is that you can extract specific files and folders from a dar backup file.  Tar requires you to unpack the entire archive to a directory and then pick and choose what needs to be restored.

Now, how do you create a differential backup?  Let me now show you the command that Kdar made to create one:

dar -v -c "/<location>/MondayDiffBackup" -R "/home/" -A "/<location>/SundayFullBackup" -w -D -y -m 150 -P "Folder1" -P "Folder2" -P "Folder3" -P "Folder4"

While the command looks quite similar to the full backup command, there are a few extra options on this that I’ll go over here.

-v – Verbose output – This will output a list each day the differential is run to show what files have changed since the last full backup.
-A – This option tells dar the location of the full backup that the differential should be based on.  This is how dar can tell what files have been changed/modified and need to be backed up to the newest copy.

That is all there is to it!  However, I had a problem when trying to run a differential.  Since I am going to set these all up as cron jobs, I needed them to run without any intervention.  The full backup worked fine when I ran the shell script, but unfortunately the differential backups would not.  I kept getting this message when I would try the differential backup:

Warning, SundayFullBackup.1.dar seems more to be a slice name than a base name. Do you want to replace it by SundayFullBackup ? [return = OK | Esc = cancel]

I searched online and could not find a solution to the problem.

Originally, I created the full backup using Kdar.  So I pondered whether Kdar did something different when creating the original backup file.  Therefore, I deleted the original full backup file and re-created it using the shell script that Kdar exported.  When I then ran the differential backup – poof!  It worked without any intervention required.

So, a good solution for performing differential backups in Linux would be to use a combination of dar and Kdar.  Kdar is best used only as the restore program so you can pick and choose what files you want to restore – and dar is needed as the command-line program so you can create a cron job and have these run.
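As a sketch of the cron side, entries in /etc/crontab could look like the following.  The script names and times are hypothetical – the scripts would simply wrap the full and differential dar commands shown above:

```
# /etc/crontab entries: weekly full backup Sunday at 2 am, differentials Mon-Sat
0 2 * * 0   root  /usr/local/bin/full-backup.sh
0 2 * * 1-6 root  /usr/local/bin/diff-backup.sh
```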

How to Get Page Title in Joomla 1.0.x

Search terms:
How to get the page title in Joomla 1.0.x
How to get the content title in Joomla 1.0.x
How to get the article title in Joomla 1.0.x
How to show the page title in Joomla 1.0.x
How to show the content title in Joomla 1.0.x
How to show the article title in Joomla 1.0.x

After several long hours of trying many methods of obtaining the page title of a content item, I finally stumbled across the solution.

In my blogs, I wanted to have the pagination and the links to additional blog entries at the bottom of the page (as you can see on this page, where it allows you to click next/previous/etc).  If I turned off pagination, the actual title of the content item would appear in the browser title.  However, if I turn on pagination, it does not do so IF you follow the links using the pagination below.  You can try this yourself – if you click on “next” below, you will see the title of the page at the top of your browser say “BsnTech Networks – BsnTech Blog – Computers” instead of the actual name of the content item.

Why do I want the content item title so bad?  Well, I use a tracker on my site to see what pages visitors are viewing.  If I can’t get the correct title, every Computer blog entry (or any other Blog I have) will show as the very same page – because the actual title never changes!

So I FINALLY found a solution that will allow me to keep the pagination feature on and to log the REAL content item title that each visitor views.

I tried many methods and none worked.  I tried to get the Itemid, id, Contentid, and many other options.  With the pagination feature on, none of the IDs returned were the actual ID of the content item in question – it was some other ID.  I was going to couple this with a database query to pull the title from the content table where the id equaled the page’s ID.  No-Go.

I then tried to use the mosMainBody() function and store its output to a PHP variable – where I would then grab the string after the “contentheading” TD class.  When I tried this, the whole mosMainBody actually appeared on the webpage, so this wasn’t successful either!

So, alas, I have the solution to finally allow you to get your page title in Joomla 1.0.x and store it to a PHP variable.  Here’s the trick:

You need to open up the content.html.php file in the components/com_content directory.  You then need to go down to around line 624 – I cannot tell exactly what line yours may say because I’ve done other tweaking to this file, but the line will look like this:

function Title( &$row,  &$params, &$access ) {

Just under this line, I added the following lines.  The following lines will basically take the page title and store it in a GLOBAL PHP variable called “myTitle”.  It has to be a GLOBAL variable so that it can be accessed in your template.

//below are two lines added to get the page title and store it in myTitle
global $myTitle;
$myTitle = $row->title;

Now, go into your template index.php file and you can use the $myTitle variable anywhere that you want the actual title of the content item to show up.
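For example, a minimal use of the variable in the template might look like this.  This is a hypothetical snippet – the fallback site name is a placeholder, and the global declaration is repeated in case the template is included from inside a function:

```php
<?php
// Hypothetical template snippet: use the content-item title if it was set,
// otherwise fall back to a default site name
global $myTitle;
$pageTitle = !empty($myTitle) ? $myTitle : 'BsnTech Networks';
?>
<title><?php echo htmlspecialchars($pageTitle); ?></title>
```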

So now I have the best of both worlds on my blogging sites – I can let my visitors use the pagination features at the bottom of the blog pages to follow my blog entries in order, list the blog entries on the right-hand side so visitors can click on an entry that interests them (this was other PHP code I wrote to pull the data from the database and construct the link – which actually does show the correct title in the browser’s title bar), and also see what pages visitors view on my statistics pages.