sakuramboo.com

Wildcard Search Strings

by on Jul.07, 2011, under Linux, Programming

I came across this while updating my company's web-based sales tool. I had to create lookup files for all of our sales files. The format of the files is as follows.

(order number):(account name):(sales rep):(ad start date)

We have an advanced search feature where you can search for specific criteria, such as all orders from a particular rep or for a specific account name. The issue I had was with the ad start date. To fix it, I rewrote all dates into a YYYYMMDD format, which has the nice property that string comparison sorts chronologically. This helped, but the real problem was that the user did not have to input all of the information. How I figured out how to handle this was pretty slick, if I say so myself.

I initially created two different methods to achieve this search function. The first method was to take each search criterion and search each line of each lookup file. If the string was in a line, it would push that line to an array. Each additional search criterion would then push matching lines from that array into another array.

I think you can see the problem with that one.

The next method I came up with was to find the search criteria in each line, give each selection its own array, then find all similar lines across the arrays and pass those to yet another array. Again, way too much work for something so simple.

If we look at the actual string, this is what we are looking for.

(.*):(.*):(.*):(.*)

Since .* is a wildcard for any character any number of times, any time a selection is left blank (or, in this case, has “Any” selected), that section of the string becomes (.*):. This is where I figured out that the search criteria should build a single search string, and we should find any line that matches it. This way, we read each lookup file only once and, instead of passing the matched lines into an array, we just print them to the page. This is the bit of code that does all of this.

foreach $num (reverse 4000..$prefix){
    open(BOOK, "$location/$num-list.txt") or die $!;
    foreach $line (<BOOK>){
        $searchfor = "(.*):";
        ($io, $account, $salesrep, $startingdate) = split(/:/, $line);
        if($acctname eq "All"){
            $searchfor = $searchfor . "(.*):";
        } else {
            $searchfor = $searchfor . "$acctname:";
        }
        if($repname eq "All"){
            $searchfor = $searchfor . "(.*):";
        } else {
            $searchfor = $searchfor . "$repname:";
        }
        if($startdate eq "00000000" and $enddate eq "00000000"){
            $searchfor = $searchfor . "(.*)";
        } elsif($startdate le $startingdate and $enddate ge $startingdate){
            $searchfor = $searchfor . "$startingdate";
        } else {
            $searchfor = $searchfor . "00000000";
        }
        if($line =~ $searchfor){
            print qq(<tr><td><a href=io-edit.cgi?iovar=$io&user=$user&type=$type&id=$id target="_blank">$io</a> </td></tr><td></td><td>$account </td><td></td><td></td><td></td><td>$salesrep </td><td></td><td></td><td></td><td>$startingdate</td></tr><br \/>);
        }
        $startingdate = "";
    }
    close(BOOK);
}

Simplicity is really much easier than it looks.

Now, there are some bugs in here, but nothing that can’t be fixed rather quickly.

The thing with this method is, each section between the colons has a wildcard by default. Since the first section does not have a value, the string automatically starts with (.*): and each “Any” value gets (.*): appended to it. This makes comparing each line easy, since we are just looking for characters between the colons.
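The same trick can be sketched outside of Perl; here is a minimal shell version using grep -E, with made-up file names and values:

```shell
# Hypothetical lookup data in the (order):(account):(rep):(date) format.
printf '%s\n' \
  '1001:AcmeCo:jsmith:20110601' \
  '1002:Bizmart:bdoe:20110615' \
  '1003:AcmeCo:kjones:20110620' > /tmp/lookup-example.txt

# "Any" selections become (.*); concrete selections are inserted literally.
acct='AcmeCo'    # account selected
rep='(.*)'       # rep left as "Any"
sdate='(.*)'     # date left as "Any"

searchfor="(.*):${acct}:${rep}:${sdate}"
grep -E "^${searchfor}$" /tmp/lookup-example.txt
```

This matches the two AcmeCo lines (1001 and 1003) and nothing else, in one pass over the file.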


Linux news sites

by on Jun.11, 2011, under Bored, Linux

There are a few Linux News sites that I frequently check out. One day, after reading yet another “Top 5” blog posts, I asked myself, why do I still go here? I am not going to answer that question here, but what I will be doing is going over some of the issues I have with the majority of Linux related blogs and news sites.

There are too many “Top 5” or “10 Must Have” posts that showcase programs that, really, anyone who has been using Linux for even 6 months should already be familiar with. Just looking at the front page of lxer.com, they have 4 such blog posts.

12 killer apps for linux
11 of the Best Free Linux Chemistry Tools
5 Useful Unity Lenses You Can Install Right Now!
5 Links for Developers and IT Pros

Out of those 4, the only one that is somewhat legitimate is the one for Free Chemistry Tools. This is a list that linuxlinks.com does from time to time, where they showcase some of the best of the best of various free software and, once in a while, commercial-ware. All of the others are either the author trying to cater to people who started using Linux yesterday or someone compiling a list of links for people to check out. I feel that these types of posts are useless.

Here is a list of other links from the front page of lxer.com; take a guess at what they all have in common.

Wineskin Pro 2.3 released
Why I Love Bodhi Linux
Ubuntu 11.04 (Natty Narwhal), Reviewed In Depth
Sabayon 5.5 Xfce Review
Review: Pinuy OS 11.04 Mini
Firefox 5 Release: new speed, same illness
Kde 4.6.4 has been released! With installation instructions for Ubuntu
Wine 1.3.22 Released
Peppermint OS Two Review
KDE SC 4.6.4 Is Available for Download

And there are a few more, but I got bored continuing on with this list.

So, what do they all have in common? If you said that they are either reviews of a distribution or press releases for the latest version of some piece of software, you are correct. I, for one, do not care for either of those. I will pick a distribution and use it; if I don't like it, I will try something else. I do not need a review to tell me to try something. Furthermore, newly released software only matters to me if I do not follow releases of said programs on their project pages and I compile everything myself. For example, I wouldn't have known Firefox 5 was released because I don't follow their releases page, but I also don't care, because I will upgrade when my distribution adds it to their repository. If I were compiling all of my software from source, then I would care deeply, but if I cared that deeply, I would be subscribed to their mailing list, applying the patches myself and compiling everything on my own. To me, this is just wasted space on the internet.

The last group of sites that I have a large problem with are those that really cater to people new to Linux. I have been using Linux for 7 years now and I work with Linux professionally as a Sys Admin. I do not need How-Tos on installing Chrome 12 or how to change my wallpaper. I need posts that go over things like building your own lenses in Unity or setting up your own software repository for Debian or CentOS.

This was one of the reasons why I started my blog, because this kind of information is hard to find. I wanted my blog to cater more towards those who have been using Linux for a while but may not know a lot about one particular area, much like my posts on IPTables. I showed a colleague of mine the post on dropping brute force packets with IPTables and he was really impressed with it. So much so that he implemented it on his machine. The reason I showed him that was because he was looking at alternative solutions that worked by blacklisting on the software level and not on the hardware level. But, I digress.

The point of this post was more of a rant on the current state of Linux related blogs and news aggregators. I want to see more detailed and technical posts and fewer posts that hold the hands of people new to Linux.

And before anyone points out the hypocrisy of this post, considering I made a post about gaming in Linux, I just want to say: that post was more of a rant on those who claim there are no games on Linux, and on how many of the bigger-name development companies are wrong in their thinking.


Online Dating Part Two

by on Jan.05, 2011, under Bored

The last time I posted about this, I talked about the word “fun” and how it is such a bogus term for what one enjoys doing. This time, I want to go over some of the commonalities within the female profiles found on match.com. The following are things that I have noticed most, if not all, women post in their profiles.

1) They like to have fun.

Well, of course they do. But, more interestingly is the fact that they rarely say what fun actually is. I talked about this in greater detail in my previous post, so let’s move on to the next one.

2) They enjoy going out and staying home.

This is another one where they really say something without actually saying anything at all. So, what they are saying by this is, they enjoy everything. That is very misleading. I’m sure that when the work day is over, there is an activity that they will more times than not participate in. I guess they leave that part out to let the viewer determine what that activity is.

3) They are sports fans.

Why are all the female sports fans single? What character traits do they have that link sports to their inability to maintain a steady relationship? Typical guys want their significant others to like sports, so why are they still single?

4) They love to laugh.

Laughing seems to be another popular theme among online profiles. However, I feel that this is like the word “fun”, it should be understood, an unwritten rule. Laughter means enjoyment, parallel to pleasure, so it should go without saying that they like to laugh, everyone likes to laugh.

5) They are looking for kind, caring, loving guys.

And all the different similar words used to describe decency. Most will not include other characteristics that they are looking for in a partner. This leads me to believe that other than a decent person, they don’t really know what they want or they don’t really care. And given my list of rejections, I’m more inclined to believe it is the former of the two.

There was one more commonality that I noticed but was inclined not to include in the list, because the words did not show up as often as I expected, though they were still pretty common. And that is their need to not play games. Yet, with so little original information in their profiles, they are playing a game: the game of Figure-Me-Out-On-Your-Own-If-I-Let-You.

With all of this being said, where does that leave us? In short, hit or miss, and expect a lot of misses.


Blocking Script Kiddies

by on Dec.08, 2010, under Computers, Linux

This is a continuation to my previous post on Dealing With Korean Hackers.

When I last left off, I had started banning IP addresses with iptables because the mod_rewrite module in Apache was not giving me the result I wanted. The problem I had with mod_rewrite was that the request was still being made, which meant that my log files were still getting flooded with page requests. I wanted to remove those entries from the logs entirely, and to do that, I needed to stop them at the network level instead of the application level. So, back I go to iptables.

There is one little-known feature of iptables that lets you scan packets for a given string and decide what you want your firewall to do with them. Since I was modifying the chains anyway, I decided to make the setup a little more modular and systematic. To achieve this, we will create two new chains.

iptables -N BANNED
iptables -N IPBAN

I will be using the BANNED chain to match all strings in the incoming packets. IPBAN will be used to ban specific offenders by IP address. Now, we need to apply these chains to the INPUT chain.

iptables -I INPUT -j IPBAN
iptables -I INPUT -j BANNED

This will check the packets against the BANNED chain and then the IPBAN chain. If both check out, it will proceed with the other checks (if there are any) and if it passes that, then they will go on to their final destination.

Now we need to populate the chains with the proper rules. Here is the list of rules I created for the BANNED chain. These will block the top six website scanners.

iptables -A BANNED -m string --string "wantsfly" --algo bm -j DROP
iptables -A BANNED -m string --string "ZmEu" --algo bm -j DROP
iptables -A BANNED -m string --string "w00tw00t" --algo bm -j DROP
iptables -A BANNED -m string --string "Toata" --algo bm -j DROP
iptables -A BANNED -m string --string "proxyjudge" --algo bm -j DROP
iptables -A BANNED -m string --string "Morfeus" --algo bm -j DROP

The only option there that might not make much sense is “--algo bm”. This tells iptables which algorithm to use when scanning the packet for the given string. There are two options (bm for Boyer-Moore and kmp for Knuth-Morris-Pratt), but “bm” will work just fine for our needs. The rest should make sense just by looking at it.
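One way to confirm the rules are doing their job is the standard iptables listing (run as root; nothing here is specific to this setup):

```shell
# -v adds per-rule packet/byte counters, -n skips reverse DNS lookups.
# A non-zero "pkts" count on a string rule means it has been dropping scans.
iptables -L BANNED -v -n
```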

The following is a list of specific offenders that I have gathered from my logs. They are listed because they kept requesting pages that do not exist, and that anyone legitimately on our network would know do not exist.

iptables -A IPBAN -s 222.186.24.74 -j DROP
iptables -A IPBAN -s 61.128.121.138 -j DROP
iptables -A IPBAN -s 207.234.184.149 -j DROP
iptables -A IPBAN -s 210.127.253.99 -j DROP
iptables -A IPBAN -s 196.40.74.18 -j DROP
iptables -A IPBAN -s 174.142.38.185 -j DROP
iptables -A IPBAN -s 72.167.203.63 -j DROP
iptables -A IPBAN -s 202.194.15.192 -j DROP
iptables -A IPBAN -s 123.65.246.154 -j DROP
iptables -A IPBAN -s 173.203.240.14 -j DROP
iptables -A IPBAN -s 188.65.51.246 -j DROP
iptables -A IPBAN -s 206.223.157.244 -j DROP
iptables -A IPBAN -s 180.211.129.38 -j DROP
iptables -A IPBAN -s 83.242.145.34 -j DROP
iptables -A IPBAN -s 94.23.63.40 -j DROP
iptables -A IPBAN -s 218.38.12.0/24 -j DROP
iptables -A IPBAN -s 67.212.67.7 -j DROP
iptables -A IPBAN -s 123.182.6.214 -j DROP
iptables -A IPBAN -s 61.183.15.9 -j DROP
iptables -A IPBAN -s 221.192.199.35 -j DROP
iptables -A IPBAN -s 62.193.225.80 -j DROP
iptables -A IPBAN -s 221.1.220.185 -j DROP

With these chains and rules in place for two weeks now, I have yet to see any malicious activity on our servers. Of course, I will continue to monitor the logs for any other automated scanners attacking our servers, but for the time being, things seem to be flowing smoothly.


Dealing With Korean Hackers

by on Oct.05, 2010, under Computers, Linux

One day at work, I took a look through the logs and noticed that one of our servers was being attacked from a whole bunch of different IP addresses. They were not continuous attacks, really, only happening once a day at a certain time. My guess is that it was an automated script just doing its thing. Even though the attacks were not going to do anything to the system, it would be best if I prevented their scripts from even getting a successful response. So, I decided to set up some security in various places.

The first step was to create two rules in iptables that drop all packets from a source if more than 8 NEW-state packets are sent within 10 seconds. This was a very common pattern; in fact, in about 5 minutes there were more than 100 page requests from their scanner.

iptables -I INPUT -i eth0 -p tcp --dport 80 -m state --state NEW -m recent --set --name DEFAULT
iptables -I INPUT -i eth0 -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 10 --hitcount 8 --rttl --name DEFAULT -j DROP

This seemed to slow their scanning down, but it did not prevent it. On to the second method.

After looking in the httpd access.log file, I saw that the scanner they were using has its own HTTP_USER_AGENT string. Since we already have htaccess files in place to restrict access to certain directories, I figured I would add some rules to the htaccess file, using the mod_rewrite module, to block their access.

RewriteCond %{HTTP_USER_AGENT} ^ZmEu
RewriteRule ^.*$ - [F]
RewriteCond %{HTTP_USER_AGENT} ^Morfeus
RewriteRule ^.*$ - [F]
RewriteCond %{HTTP_USER_AGENT} ^Toata
RewriteRule ^.*$ - [F]
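For these conditions to fire, the rewrite engine must also be enabled in the same .htaccess context. A minimal combined version (a sketch, assuming mod_rewrite is loaded and AllowOverride permits it; the [OR] flag lets the three conditions share one rule) could look like:

```apache
# Return 403 Forbidden to any request whose user agent starts with
# one of the three scanner names.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^ZmEu [OR]
RewriteCond %{HTTP_USER_AGENT} ^Morfeus [OR]
RewriteCond %{HTTP_USER_AGENT} ^Toata
RewriteRule ^.*$ - [F]
```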

This seemed to stop the ZmEu, Morfeus and Toata scripts from accessing the site; however, another problem came up. Someone was using our site as a sort of proxy, accessing certain files from some other website. Since there seemed to be a lot of traffic from Korea, I decided to just block the entire subnet. My company handles local newspapers, not really something people in Korea would be interested in.

iptables -I INPUT -i eth0 -s 218.38.12.0/24 -j DROP

This works, but if I restart the server, the rules get flushed, so I need a way for the rules to be re-inserted on start up.

mkdir /etc/iptables
iptables-save > /etc/iptables/iptables.rules
echo "iptables-restore < /etc/iptables/iptables.rules" >> /etc/rc.d/rc.local

After a few days of watching the logs, there do not seem to be any more scripted attacks coming from Korea.


Software Patents

by on Oct.02, 2010, under Computers, Technology

There have been two stories recently that really set my nerves on edge. Both of them deal with Microsoft. The first is that Microsoft has banded together with a large group of other technology companies in hopes of bringing the problems facing the US patent system to the attention of Congress and getting them fixed. I think this is a very noble and much-needed action on the part of the technology industry as a whole. The second story is Microsoft suing Motorola over their use of the Android operating system in some of their phones, claiming that they have violated nine of Microsoft's patents. Does anyone else feel that this is counterproductive? This brings me to my next blog post: software patents, and why there should not be any.

The idea of a patent system can be a great tool to help innovation, allowing someone with an idea to make money off said idea without having the resources to build it, market it, mass produce it and deliver it to the masses. It creates a 20-year monopoly for the creator to make their millions. This has its own flaws and merits, but in general, I do not have a problem with patents when they deal with products of substance, things that are tangible. But once we cross over to the world of technology, most specifically software, we enter a world where nothing is new anymore. The problem arises when you take something that does not have any substance, does not have any physical form or shape, a concept, a way of doing something, and claim that you are the creator of said way of doing something.

To give you an example, allow me to describe something…

A method of changing, adjusting, altering, modifying a border or frame that encases or surrounds a body of information, audio or video.

What I just described is window borders and being able to change their thickness. Look at the window this website is in. Do you see how there is a border around the entire page, with a status bar at the bottom and some buttons on top? Well, if you wanted to make that border thicker or thinner, you would have to contact the person who owns the patent on that concept and pay them a royalty. If you wanted to add transparency to the border, you would have to contact them. And all of this assumes that they are willing to do business with you.

This is not creating innovation, in fact, it is hindering it. If you put one million people in a room and ask them to come up with a way of doing something, the huge majority of them will come up with the same exact way. How can a method of doing something be patentable? In short, it shouldn’t be, but with how the US handles patents, it most certainly is.

This is how the system needs to be fixed. First, methods and concepts can NOT be allowed to be patentable. Second, software can NOT be patentable; in fact, if it is not tangible, it can NOT have a patent. Now, there is something that can be done to retain ownership, and it rests on the same thing artists use to retain ownership: copyright. How can you copyright software? You copyright the source code. But this also has some problems, such as using APIs that are owned by another company. Say, for example, you wrote a piece of software for Windows using the WinAPI from within the VC++ development environment. You are writing code that uses functions and routines that were created by Microsoft to achieve what it is your program does. Microsoft, therefore, is entitled to royalties for your creation, either as a one-time payment for their devkit or on a yearly license deal like they currently offer with their operating systems. But, they already do charge to use their APIs, you might be saying right about now. And you would be correct. But this also prevents Microsoft from owning your creation.

To give you another example of what I am talking about…

I decided to write some music. In my song, I have a part that says “Happy birthday to you.” In order for me to use that, I need to contact the owners of the Happy Birthday song, Mildred J. Hill and Patty Smith Hill. Since they are dead, I need to contact their next of kin or who ever inherited their copyright. Now, just apply the same logic to the software industry.

I know there are going to be some major problems, such as: how can someone claim ownership of “Hello World!”? This is where we would use the same idea from the patent system: if there is known prior art, the copyright would be null and void. The same is already true of the copyright system, to a point. If someone tries to copyright something that is already copyrighted, they get rejected. I would extend that to: if there is prior art, it becomes public domain.

Now, I know that the patent system has something known as prior art and also an obviousness clause, whereby a patent is denied if the item in question has already been created or is so obvious that it shouldn't be patentable. But there is a problem here as well, not with the idea or method, but with who checks the patents. It has gotten to the point that if you throw enough technological terminology into the patent, it will get passed, even if it is something as obvious as changing the width of a window border.

The easiest thing to do is to just get rid of software patents altogether, and I am all for that. However, that is not going to happen. There needs to be a slow progression; humans do not like drastic change. Therefore, I say take baby steps and change software patents into copyright of source code.

This will also do one other thing: remove patent trolls. Owning a copyright to something would mean that you actually have source code to prove that you created it. However, this will also create a problem: what about revisions? Well… no system is perfect.


Upgrades and Solutions

by on Jun.17, 2010, under Linux

Let’s face it, upgrading a distribution sucks. In my entire career in Linux, I have had to upgrade my distribution of choice at least once (usually because I’m changing distributions, and releases come every 6 months). Every single distribution upgrade I have done broke my system. Not always to the point that I lost all my data, that only happened maybe once or twice, but stability and speed usually took a nose dive every time. There is a simple and easy solution to this that I want to go over.

Installation.

That’s right, do not upgrade, just install the latest version. This will give you a fresh install every 6 months and if your system is set up just right, will seem more like an upgrade because none of your data or settings will change. First, I need to go over partitioning.

If you are running your operating system on one sole partition, you need to be shot. There are so many benefits to partitioning your hard drive that it only makes sense to do so. Let me go over some of the benefits.

  1. Speed. Seek times improve slightly when a file search only has to scan a smaller partition.

  2. Security. Easily specify which partitions should have noexec permissions or read-only file systems mounted, cutting down on a lot of maliciousness.

  3. Organization. Knowing where everything is and should be makes life a whole lot easier.

And now the list of down sides to partitioning.

  1. Takes slightly more time to set up.

Okay, now that is out of the way, next up is what to do with your brand new hard drive. Think of it in this manner.

[Operating system]|[User space]

Now, those two should be jailed off from each other. This will make sure that all user data will stay in one location, untouched by any system operations (unless specified to). Now, within the [Operating system] space, there are more splits needed for things like SWAP space, logs and tmp files. Here is my current partition scheme on my main desktop.

[~]$ df -ht ext4
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6              47G  7.2G   38G  17% /
/dev/sda5             4.7G  138M  4.4G   4% /tmp
/dev/sda1             958M   58M  851M   7% /boot
/dev/sda8             861G  360G  457G  45% /home

Now, the thing to notice is that I have 4 partitions on my hard drive. Really, for my main desktop, that is all I need. Of course, a production server would need a different scheme, but this isn’t one. I will go over each partition a little bit.

/home is my home directory. This is where all my user data and user installed games and applications go. By “user installed” I am talking about games and programs which had a binary already in the tarball or zip and did not require installation into /usr to run. This partition will remain untouched by our installation process so we will never have to deal with it.

/boot is always supposed to be the first partition and placed in the beginning, I don’t feel like I should have to explain why.

/tmp is, of course, the temp directory. It has its own partition for security reasons (remember, I mentioned that before?). This partition gets mounted with noexec at boot time, which prevents someone from running arbitrary code from there.
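A noexec mount like this is typically made permanent in /etc/fstab. A sketch of such an entry (the device name matches the df output above; the extra nosuid/nodev options are my own additions, not part of the original setup):

```
# /etc/fstab — mount /tmp with no execute permission, no setuid, no device nodes
/dev/sda5   /tmp   ext4   defaults,noexec,nosuid,nodev   0   2
```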

And the last partition is /. Nothing else really needs to be said about that. (I could have put /var/log on its own partition, but, like I said, this is a desktop, not a server.) The only other partition that is not displayed in the output of df is the SWAP partition. And that is because, really, you shouldn’t have direct access to that.

Now, out of those 4 partitions, there is only one that will ever get touched when doing an upgrade: the root (/) partition. All of your settings and configurations are stored in ~/.config (for the most part). So, even if you uninstall an application, the user's configuration files will still be there, just in case they decide to reinstall it later.

Now, when it comes time to upgrade your distribution, you just need to download the ISO, burn it to CD, boot off it and, when the installer asks to partition your hard drive, select manual partitioning. All you have to do is tell it to format /dev/sda6, set its mount point to root (/), and mount the other partitions without touching them. And now you have the latest version of your distribution and all of your data is still intact. But there are some other things required to truly get your system back to the way it was. This takes a lot of leg work early on, but makes things much simpler later. You need to create a bash script to handle adding all third party repositories and installing all applications that were installed after the initial installation. I will show you my script to give you an example of what I had to do.

# Install packages from Ubuntu
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y a7xpg a7xpg-data abuse abuse-lib abuse-sfx audacity audacity-data eclipse eclipse-jdt eclipse-pde eclipse-platform eclipse-platform-data eclipse-plugin-cvs eclipse-rcp gimp gimp-data gimp-data-extras gunroar gunroar-data hwinfo kobodeluxe kobodeluxe-data moc mplayer mu-cade mu-cade-data nautilus-open-terminal noiz2sa noiz2sa-data parsec47 parsec47-data pavucontrol pidgin pidgin-data pidgin-libnotify rrootage rrootage-data sound-juicer titanion titanion-data torus-trooper torus-trooper-data ubuntu-restricted-extras val-and-rick val-and-rick-data xchat xchat-common virtualbox-ose virtualbox-guest-additions thunderbird chromium-browser build-essential dpkg-dev checkinstall xz-utils libsdl1.2-dev cmake compizconfig-settings-manager php5-cli
if [ $? != 0 ]; then
echo "The Install failed!"
else
echo "The install finished. Moving on to step two."
fi
# Add Wine PPA and install
sudo add-apt-repository ppa:ubuntu-wine/ppa
if [ $? != 0 ]; then
echo "Could not add the Wine ppa!"
else
echo "Added the Wine ppa."
fi
sudo apt-get update
sudo apt-get install -y wine
if [ $? != 0 ]; then
echo "Installation of Wine failed!"
else
echo "Installed Wine."
fi
# Add PlayDeb and install games
echo "deb http://archive.getdeb.net/ubuntu lucid-getdeb games" | sudo tee /etc/apt/sources.list.d/getdeb-games.list
if [ $? != 0 ]; then
echo "Failed installing the playdeb repo!"
else
echo "Added GetDeb Repository"
fi
wget -q -O- http://archive.getdeb.net/getdeb-archive.key | sudo apt-key add -
if [ $? != 0 ]; then
echo "Failed to install the PGP key for Playdeb!"
else
echo "Installed the PGP key for Playdeb."
fi
sudo apt-get update
sudo apt-get install -y astromenace astromenace-data vavoom autodownloader nazghul nazghul-data soulfu soulfu-data violetland violetland-data warzone2100 warzone2100-data urbanterror urbanterror-data teeworlds teeworlds-data
if [ $? != 0 ]; then
echo "Failed to install the games!"
else
echo "Installed the games."
fi

Yes, I did put checks at almost every step of the script, and there is one part that I will have to manually edit every time, but once I change the location of the getdeb repository, all I have to do is run that script and everything I installed will be reinstalled on the new version. The only problem is that writing this script took a really long time, all because I had to generate the list of applications AFTER having them all installed already. Here is an easier method if this is the first time you have installed Ubuntu. After your fresh install, run the following command…

dpkg -l | tr -s " " | awk '{ print $2 }' > fresh_install_package_list

This will save a list of all installed packages. Then, when you are ready for the upgrade, run the same command into a different file, then use diff to list all names that are in the second list but not in the first, giving you a complete list of all programs you installed after the initial installation.

diff --normal <first file> <second file>

This will produce an output something like this.

[~]$ diff --normal list1 list2
7a8,10
> abuse
> abuse-frabs
> abuse-sfx

The “7a8,10” can be omitted, as well as the greater-than symbols. But, there you have it: another method for handling distribution upgrades. Like I said, yes, this is a slight pain in the ass to set up, but once you are done, it makes life all that much better.
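Putting the steps together, the post-processing can be sketched end-to-end (the package names below are just stand-ins for real dpkg output):

```shell
# Hypothetical before/after package lists; in practice both come from
# `dpkg -l | tr -s " " | awk '{ print $2 }'` runs made months apart.
printf '%s\n' acl adduser apt base-files > /tmp/list1
printf '%s\n' abuse abuse-frabs acl adduser apt base-files > /tmp/list2

# Keep only the lines added in the second list, stripping diff's "> " prefix.
diff --normal /tmp/list1 /tmp/list2 | grep '^>' | cut -c3-
```

This prints just `abuse` and `abuse-frabs`, exactly the packages added after the first snapshot.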


Windows Is Still To Blame For Spam

by on Apr.29, 2010, under Computers, Linux, Technology

Recently, an article came out stating that the ratio of spam-sending computers is higher for Linux than for Windows. Even though the ratio is higher (and the ratio is what should always be looked at, as opposed to the total number), there is a reason for this that many people are not bothering to look at.

If we were to look at the actual number of desktops versus servers, the ratio of Linux-based desktops would be close to non-existent, like its OSX counterpart. Security on the Windows desktop is a joke. Not that it can't be done; it very well can be done right and made secure enough to never get any piece of malware, ever. But the problem is that almost every Windows user knows close to nothing about computer security. They, for the most part, run as administrator and will run just about any unsigned binary, just because they want some free piece of software to copy DVDs or to illegally download games and other commercial-ware. Malware writers know that, and they take advantage of it. Now, can this be done on Linux? Sure. But Linux does not have the market share on the desktop for them to actually gain anything substantial.

One of the articles I read said that part of the reason the ratio is higher is that many ISPs run Linux mail servers, which act as a relay when spam is sent out from someone's infected Windows desktop. To a point this makes sense when explained, but it is the wrong reason.

The real reason the Linux ratio is higher is the server market. Spammers require two things to be considered successful: high bandwidth and high uptime. That is practically the definition of a Linux server. The problem is compounded because a lot of Linux administrators think that, because they are running Linux, they are secure by default. That is one of the biggest reasons Linux servers are heavily attacked and end up running spam relays.

The original article posted by MessageLabs also hinted that the reason for the higher Linux ratio is because of the ISP mail redirect. So, let us look at this logically.

  1. Windows makes up the largest number of spammers.
  2. Linux shows a higher ratio, as seen in the Received fields of the mail headers.
  3. This means that many of the computers behind the Linux email redirects could, in fact, be Windows based.
  4. This means that the Windows ratio is actually much higher and the Linux ratio is much lower.
  5. Many of the Linux numbers are actually Linux-based servers and not desktops.
  6. Email traffic was analyzed but the original sender was not.
  7. They used desktop market share only.
  8. Server traffic was included in the article but was not included in the market share.
  9. This article is flawed.
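The Received trail that reasoning rests on might look something like this (hostnames, addresses, and timestamps here are entirely made up for illustration):

```
Received: from mx1.isp.example.net (mx1.isp.example.net [203.0.113.7])
        by mail.recipient.example.org with ESMTP; Thu, 29 Apr 2010 10:12:31 -0400
Received: from dsl-pool-42.isp.example.net (unknown [198.51.100.42])
        by mx1.isp.example.net with SMTP; Thu, 29 Apr 2010 10:12:29 -0400
```

A study that only looks at the topmost relay sees the ISP's Linux mail server; the actual sender, an infected desktop on a DSL pool one hop further down, never gets counted.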

So, in a nutshell, MessageLabs is posting articles that are bogus.


Ubuntu and The Community

by on Mar.23, 2010, under Linux, Technology

Ubuntu is owned by a company, not by the community. This is the statement that a lot of Ubuntu lovers do not seem to understand. The underlying plan of Canonical is to make money, not to make a free operating system. This means that even if there is something the community as a whole does not like, if it means the company will make money, guess what? The community is out of luck and will have to deal with it.

There have been a few recent changes in Ubuntu that the community seemed to get all up in arms about, changes that, to me, make perfect sense. One of them is the placement of the window buttons, moving from the right side to the left. All that was said officially was that there is a plan for this. Yet the community got all bent out of shape about it. Who knows what the ultimate goal is? Maybe some new button or feature is headed for the upper right corner of the windows. Maybe a new status button is supposed to go there. Maybe a new window-tabbing feature is going there. Maybe that tabbing feature was originally a GNOME extension that defaulted to the right side, and incorporating it by moving the window buttons was easier than recoding the extension. Who knows. How about, instead of bitching and complaining, just sit back and see what happens?

Another thing people seem to have a problem with is Ubuntu partnering with 7digital to provide an online music store for fast and easy music purchases and downloads. The complaint is not with the store itself; it is with the music only being made available in MP3 format. I'm sorry to burst people's bubbles here, but Canonical does not really have a say in the matter. If you have a problem with it, take it up with 7digital. That said, a comment was made that Canonical will try to offer better-quality compression, and possibly other file formats, at a later date.

Personally, I still say: buy the CD and rip it yourself. Odds are, if 7digital were to start offering alternate file formats, they would most likely just convert all their MP3s into Ogg or FLAC and say, "Here you go." Anyone who deals with audio knows that is not a good way to do it: transcoding from one lossy format to another only degrades the quality further, and wrapping an MP3 in FLAC does not bring back what the original encode already threw away.

The thing is, the Ubuntu of back in the day is not the Ubuntu of today. When Ubuntu first started out, it was looked at as a fully community-based distribution, and that worked for the longest time. But then something happened: Canonical realized that if they are dumping all of their money into it, they might as well make some money in return. And that means Ubuntu is now owned by Canonical, as opposed to Canonical just financially supporting it. And with ownership comes the removal of a community voice.

That is just the way it is.


Pitfalls Of A Release Schedule

by on Dec.03, 2009, under Linux, Music

I have a problem with Ubuntu Studio. There, I said it. I think Ubuntu Studio is a wonderful distribution, packed with all the programs needed for my music-composing hobby. But there is a problem: Ubuntu Studio follows the exact same release schedule as Ubuntu, meaning when there is a new version of Ubuntu, there is a new version of Ubuntu Studio. So if you want the latest software updates and the latest programs, you must do a dist-upgrade every six months. Why is that? On an audio workstation, the last thing I want to worry about is whether the upgrade will take, whether all my software will upgrade flawlessly, and whether all of my data and projects will continue to work.

A workstation should not be subject to upgrades every six months. Each upgrade forces the user to run the risk of something not working properly. To give you an example, when 9.04 shipped, there was a bug in the real-time kernel that led the Ubuntu Studio maintainers to not include the rt kernel; they recommended staying on 8.10 if you required it. If you were on 8.10, you could upgrade to 9.04 but keep using the older rt kernel, then either wait until 9.10 came out for the updated rt kernel or compile it yourself. And that is just the kernel, to say nothing of the various other bugs a distribution upgrade can introduce.

If Ubuntu Studio wants to be thought of as a leading operating system for media creation, it should follow only the LTS releases of Ubuntu. That would keep the core of Ubuntu Studio secure and stable. The packages should then be offered in rolling-release fashion, meaning instead of backporting only security and major bug fixes, just release the newer version. I honestly do not see the difference between grabbing the latest source tarball and packaging that, versus patching the older version and recompiling it. Both produce the same output; the only difference is that by patching, you can keep the same version number.

If anything, I would suggest that Ubuntu Studio treat its operating system maintenance the same way SUSE does: provide service packs that update the core of the operating system, while the packages are constantly updated to stay current. When a new LTS comes out, treat that as a whole new version of the operating system.

I bring this up because I have Ubuntu Studio 9.04 installed on a second hard drive, upgraded from 8.10. Now that 9.10 is released, the only reason I would want to do the upgrade is to get the latest kernel, and I would have to recompile the ALSA modules after installation anyway. Why should I go through all of that work? I want to write music, not be the administrator of my workstation.

