Thursday, May 19, 2011

Cloudist: Simple, scalable job queue for Ruby powered by AMQP and EventMachine

Rubyists seeking to move processing to the background have long relied on projects like Delayed Job and Resque. Now, Ivan Vanderbyl offers another option. Cloudist is powered by AMQP and EventMachine and aims to provide a simple yet highly scalable job queue for Ruby apps.

Cloudist workers can take the form of a block:

Cloudist.start {
  log.info("Started Worker")

  job('make.sandwich') {
    log.info("JOB (#{id}) Make sandwich with #{data[:bread]} bread")
    job.started!

    (1..20).each do |i|
      job.progress(i * 5)
      sleep(1)
      raise ArgumentError, "NOT GOOD!" if i == 4
    end

    job.finished!
  }
}

… or a Ruby class:

class SandwichWorker < Cloudist::Worker
  def process
    log.info("Processing queue: #{queue.name}")
    log.info(data.inspect)
    job.started!

    (1..5).each do |i|
      job.progress(i * 20)
      # sleep(1)
      # raise ArgumentError, "NOT GOOD!" if i == 4
    end

    job.finished!
  end
end

Cloudist.signal_trap!

Cloudist.start(:heartbeat => 10, :logging => false) {
  Cloudist.handle('make.sandwich').with(SandwichWorker)
}

For usage, configuration, and more examples, check out the project repo on GitHub.

[Source on GitHub]

rel: Arel-inspired SQL query builder for Node.js

Arguably, Arel was one of the biggest new features introduced in Rails 3. Arel simplifies building complex SQL statements using idiomatic Ruby.

With Rel, Carl Woodward brings the power of Arel to Node.js. Written in CoffeeScript, Rel makes quick work of building SQL statements for a variety of relational databases.

Rel can be installed via npm:

npm install rel

We can then begin building a query:

users = new Rel.Table 'users'

If we want all users in our CMS we could use the star method:

users.project(Rel.star()).toSql()

Rel really shines, however, when using several chained operators, including joins:

users.join(photos).on(users.column('id').eq(photos.column('user_id')))
# => SELECT * FROM users INNER JOIN photos ON users.id = photos.user_id

For a complete list of features, check out the very readable Vows-based specs in the repo on GitHub.

[Source on GitHub]

Large Hadron Migrator: Update huge SQL tables without going offline

With all the NoSQL hotness out there, believe it or not, some people are still using relational databases. (I know, right?).

When it comes to schema changes, Active Record migrations in Rails make them so easy that developers often take them for granted. However, for extremely large sets of data, running an ALTER TABLE might mean taking your database offline for hours. After considering other projects, Rany Keddo and the smart folks at SoundCloud developed their own solution.

Large Hadron Migrator, named for CERN’s high energy particle accelerator, uses a combination of copy table, triggers, and a journal table to move data bit by bit into a new table while capturing everything still coming into the source table in the live application.

To install, add the gem to your Gemfile:

gem 'large-hadron-migrator'

… and run bundle install.

Next, write your migration as you normally would, using the LargeHadronMigration class instead:

class AddIndexToEmails < LargeHadronMigration
  def self.up
    large_hadron_migrate :emails, :wait => 0.2 do |table_name|
      execute %Q{
        alter table %s
          add index index_emails_on_hashed_address (hashed_address)
      } % table_name
    end
  end
end

Be sure to check out the project repo or blog post for advanced usage and caveats.

[Source on GitHub]

Wednesday, May 18, 2011

Node.js on your (jailbroken) iPhone

Nathan “Too Tall” Rajlich has gotten Node.js to run on his jailbroken iPhone 4. If you’ve got SSH access on a jailbroken phone, simply install the .deb package:

dpkg -i node-v0.4.5-ios-arm-1.deb

Now you can see if Node is running:

$ node
> require('os').cpus()
[ { model: 'N90AP',
    speed: 0,
    times: { user: 9209240, nice: 0, sys: 6997410, idle: 255377220, irq: 0 } } ]

Nathan has created node-iOS to play with iOS-specific functionality through Node bindings:

var iOS = require('iOS');

iOS.vibrate(); // Quick vibrate, like when you receive a text message

if (iOS.compass.available) { // true if the iDevice has a digital compass
  iOS.compass.on('heading', function(heading) {
    console.log(heading.magneticHeading); // Degrees relative to magnetic north
  });
}

Of course if you want to play with Node on mobile without jailbreaking your phone, Node.js powers the JavaScript services in WebOS.

[Source on GitHub] [Blog post]

async.js - asynchronous control flow in node.js and the browser

async.js, written by Caolan McMahon, is a fresh take on asynchronous control flow for node.js and the browser. It offers a simple API for executing some of the more difficult asynchronous control flow patterns which can cause even the most seasoned JavaScript developer to give up in an asynchronous rage.

async.js provides around 20 functions that include several async functional methods (map, reduce, filter, forEach, …) as well as common patterns for control flow (parallel, series, waterfall, whilst, queue, until, …). All these functions assume you follow the JavaScript convention of providing a single callback as the last argument of your async function.

Here is a simple example using some of the basic asynchronous iterators. There are a lot more examples on the GitHub page.

var async = require('async'),
    fs = require('fs');

var files = ['file1.txt', 'file2.txt', 'file3.txt'];

var writeFile = function(file, callback) {
  console.log('Attempting to write file ' + file);
  fs.writeFile(file, 'foo', callback);
};

// Writes files in parallel
async.forEach(files, writeFile, function(err) {
  if (err) {
    console.log(err);
    return;
  }
  console.log('all operations complete without any errors');
});

// Writes files in series
async.forEachSeries(files, writeFile, function(err) {
  if (err) {
    console.log(err);
    return;
  }
  console.log('all operations complete without any errors');
});

This is a contrived example, but it illustrates the point of the async.js library: without it or a similar control flow library, you can get stuck writing brittle code with lots of nested callbacks.

Raphters: A web framework for C

For those that thought C had been relegated to the internals of your mobile devices or favorite database engine, Daniel Waterworth wants to string you up by the Raphters.

Raphters is a web framework written in C. Yes, you heard that right: a shiny new framework for the web, written in everybody’s favorite close-to-the-metal programming language. The project gets its name from RAPHT, a pattern that extends MVC and aims for greater security and flexibility:

Resources include things served up to clients, like a database or API.
Actions provide ways to interact with a Resource.
Processors transform data.
Handlers provide the entry point for a request.
Templates render data.

A simple Hello World example to demonstrate the pattern might look something like:

#include "raphters.h"START_HANDLER (simple, GET, "simple", res, 0, matches) { response_add_header(res, "content-type", "text/html"); response_write(res, "hello world");} END_HANDLERSTART_HANDLER (default_handler, GET, "", res, 0, matches) { response_add_header(res, "content-type", "text/html"); response_write(res, "default page");} END_HANDLERint main() { add_handler(simple); add_handler(default_handler); serve_forever(); return 0;}

If you’re a C developer looking for speed (and security) you might give Raphters a look for your next web project.

[Source on GitHub]

Sass 3.1 released, now with functions, lists, and @media bubbling

Sass continues to provide innovative new ways to DRY up our CSS. Version 3.1 is out and offers many new language features, compilation performance improvements, and some new command line options.

Rubyists have long had the ability to extend Sass, but now anyone can create powerful functions using only Sass:

$grid-width: 40px;
$gutter-width: 10px;

@function grid-width($n) {
  @return $n * $grid-width + ($n - 1) * $gutter-width;
}

Sass now includes some handy functions for working with the lists introduced in version 3.0, including nth, append, join, and length:

$ sass -i
>> nth(1px 2px 10px, 2)
2px
>> append(1px 2px, 5px)
(1px 2px 5px)
>> length(5px 10px)
2

There is also a new @each directive to iterate over lists:

@each $animal in puma, sea-slug, egret, salamander {
  .#{$animal}-icon {
    background-image: url('/images/#{$animal}.png');
  }
}

Sass 3.1 also brings changes to the command line tools, including some breaking changes:

There is a new scss command line utility to compile stylesheets, defaulting to the SCSS syntax.
@import used with a path without a file extension will now throw an error.
Old-style ! variable support has been removed.
The css2sass command line utility has been removed in favor of sass-convert.

Check out the Changelog for complete release details. For a deeper look at Sass and Compass, check out our upcoming book Sass and Compass in Action from Manning, now in early access.

[Source on GitHub]

Using a USB modem for wireless 3G internet with Ubuntu 10.04 “Lucid Lynx”


Many of these devices contain their own software to work with Windows.
If you use them with a Windows machine, they act like a USB flash key containing the software and will install the necessary drivers.
Once this is done, they switch into a different mode of operation and act like a modem from then on.
This is not required in Ubuntu, so we just need to make sure the modem skips that first stage. It’s a very simple fix, actually, but it involves first getting an internet connection through another means, e.g. home internet, wifi at a cafe, or a friend’s place.

Open a Terminal window (under the Applications menu, Accessories).
Type the following to install what you need:

sudo apt-get install usb-modeswitch

This will ask you for your password in order to install the software.
Once it is completed, you can just insert the USB modem again and connect using the Network Manager applet near the top right hand corner of the screen.
From here, the details depend on the internet provider the USB modem belongs to, but this should get you a good deal closer to a working connection.
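
To confirm the mode switch happened, you can run lsusb before and after inserting the modem and compare the device IDs shown; the ID usually changes once the device leaves storage mode. The IDs below are purely illustrative examples, not values from your hardware:

lsusb
Bus 002 Device 004: ID 12d1:1446 (storage mode, before the switch)
Bus 002 Device 005: ID 12d1:1001 (modem mode, after the switch)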

Problem getting a Logitech webcam working in Skype on Ubuntu 10.04 “Lucid Lynx”


Here is a pretty simple fix that I found when using the Logitech QuickCam Communicate STX under Ubuntu 10.04 “Lucid Lynx” that may also apply to older versions of Ubuntu.

If the webcam does not show your video, the trick is to force Skype to use an older V4L library.

Under Lucid Lynx, right click on the menus in the top corner and choose “Edit Menus”

Then go to “Internet”, select “Skype” and then “Properties” button.

You have the option of using one of the following as the Command:

skype-wrapper

or for a 32-bit system

bash -c 'export LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so; skype'

or for a 64-bit system

bash -c 'export LD_PRELOAD=/usr/lib32/libv4l/v4l1compat.so; skype'

Then close Skype and restart with the shortcut under “Applications” menu, “Internet”, “Skype”.

To test if the change worked, choose “Options” from the menu in Skype, then “Video Devices”, and then the “Test” button. You should see yourself in the test window.

To find out which camera you have, open a terminal window (go to the “Applications” menu, “Accessories”, then “Terminal”).

Type the following command and then hit Enter to see all of the USB devices connected to your PC, one of which will be the USB webcam.

lsusb

Mine shows

Bus 003 Device 004: ID 046d:08d7 Logitech, Inc. QuickCam Communicate STX

Tuesday, May 17, 2011

Downloading repo keys from behind a corporate firewall


Corporate firewalls commonly block port 11371, which Launchpad PPAs use for their keys.

It is possible though to get these through the normal port 80 for web traffic using the format below (replace the last reference to reflect the key you want to download)…
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0A5174AF
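
Once the key is in your GPG keyring, one way to hand it to apt and refresh the package lists (a follow-up sketch, reusing the example key ID from above) is:

gpg --export --armor 0A5174AF | sudo apt-key add -
sudo apt-get update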

Installing Evolution 2.29.3 with mapi plugin under Ubuntu 9.10 Karmic


UPDATE: 2.29.5 is available.

Just change the version number in the wget lines in Step 2 below, and follow the remaining steps as written, taking care to use the new version number where appropriate.

You should not need to uninstall anything in advance.

Download, compile and install the following 4 files…

gtkhtml-3.29.5.tar.bz2

evolution-data-server-2.29.5.tar.bz2

evolution-2.29.5.tar.bz2

evolution-mapi-0.29.5.tar.bz2

ALERT: This posting relates to an “unstable release” of Evolution.  Although you may improve functionality against an Exchange 2007 server, you may also suffer from degraded performance. Install at your own risk.

These are the steps I followed to install the very latest unstable development version of Gnome Evolution.  It requires you to upgrade a few components over what is included in Ubuntu 9.10 Karmic Koala by default.

1. Run Applications menu-> Accessories-> terminal

2. Get the latest code tarballs by typing the following commands into the terminal window (note that we get two versions of the evolution code as the newest one appears to be missing a required file)

mkdir ~/evolution

cd ~/evolution

wget http://ftp.gnome.org/pub/GNOME/sources/gtkhtml/3.29/gtkhtml-3.29.3.tar.bz2

wget http://ftp.gnome.org/pub/GNOME/sources/evolution-data-server/2.29/evolution-data-server-2.29.3.tar.bz2

wget http://ftp.gnome.org/pub/GNOME/sources/evolution/2.29/evolution-2.29.3.tar.bz2

wget http://ftp.gnome.org/pub/GNOME/sources/evolution/2.29/evolution-2.29.3.2.tar.bz2

wget http://ftp.gnome.org/pub/GNOME/sources/evolution-mapi/0.29/evolution-mapi-0.29.3.tar.bz2

3. Get prereqs for building each of the packages by typing the following commands into the terminal window

sudo apt-get install libdb-dev libnspr4-dev libnss3-dev libical-dev libsqlite3-dev

sudo apt-get install bison intltool gnome-core-devel evolution-data-server-dev libcanberra-gtk-dev

sudo apt-get install libgtkhtml3.8-dev network-manager-dev libunique-dev libhal-dev

sudo apt-get install libgtkimageview-dev libpst-dev libnotify-dev

sudo apt-get install libmapi-dev samba4-dev libglib2.0-dev

4. Extract the source code from the tarballs with the following commands

tar xjvf gtkhtml-3.29.3.tar.bz2

tar xjvf evolution-data-server-2.29.3.tar.bz2

tar xjvf evolution-2.29.3.tar.bz2

tar xjvf evolution-2.29.3.2.tar.bz2

tar xjvf evolution-mapi-0.29.3.tar.bz2

5. Now we should have a folder for each of the components under our ~/evolution folder, so we visit each folder in turn to build and install. Check the screen for any errors, particularly after each install command, to see if the individual component built OK. If you experience any errors, leave a comment here so that we can determine if a prerequisite is missing from your environment.

cd ~/evolution/gtkhtml-3.29.3

./configure

make

sudo make install

cd ~/evolution/evolution-data-server-2.29.3

./configure

make

sudo make install

cd ~/evolution/evolution-2.29.3

./configure

make

sudo make install

cd ~/evolution/evolution-2.29.3.2

./configure

make

sudo make install

cd ~/evolution/evolution-mapi-0.29.3

./configure

make

sudo make install

6. If everything built alright, you should now be able to launch Evolution and check Help menu -> About to confirm that you are running 2.29.3.2 now. You should also have improved (but still buggy) calendar functionality if you have an Exchange 2007 email server. As stated at the top of this posting, this is an unstable release of code under very active development at the moment. Only try these steps if you can cope with Evolution not working or working intermittently.

If you are dependent on Evolution to work and it does not currently meet your requirements with the version you have already installed, then try the steps shown here.

Free graphics card by earning bitcoins


Well, here’s the plan.

Yesterday I bought a PowerColor ATI Radeon HD 5770 for AUD$127 from msy.com.au in Ultimo, Sydney.

This is a good quality graphics card that is below the current 6000 series cards in terms of performance but cheap at the price.

What I plan to do is earn enough money from it to cover the cost within 2 months!

How is that to be achieved? By “GPU mining”.

With the advent of OpenCL support, the newer graphics cards can be instructed to use the onboard graphics chip (known as the “GPU”) to focus on intensive mathematical tasks and free up the main system CPU to do other things.  The GPU is designed for this sort of work so it’s an ideal way to get the most out of the hardware you have purchased.

Ok, here’s where the money comes in.  Bitcoin is a form of virtual currency that is managed by a network of connected computers on the internet that talk to each other at a peer level to manage the transfer of bitcoins from one party to another.  The transfer is anonymous in that you know the identifier of the party you are dealing with but not who they are. There are businesses that are accepting bitcoins as a form of payment. The computers in the bitcoin network belong to any individual that is interested in partaking in the bitcoin economy.

My computer is now one of them.

Every now and again, one of the nodes (i.e. computers) in the network creates some new bitcoins (50 at a time to be precise).  The more processing power you have, the more likely you are to create a block of 50. By adding this graphics card, I have increased my processing power for bitcoin generation by about 150 times and should create 50 within 8 days on average according to the bitcoin calculator. I am processing at 155 Mhash/sec now.

There are markets to swap bitcoins for USD through PayPal transactions, which at the time of writing show that 1 bitcoin is worth close to USD$1. I will earn roughly USD$50 per week on that basis, which would pay for the card in under a month, but I’m being conservative by aiming for 2 months. There has been a significant rise in the relative value of a bitcoin in the past 6 months, which is a critical factor in this plan.

In order to get things running on Ubuntu 10.10 64-bit “Maverick Meerkat”, I had to download and install the proprietary version of the ATI driver for my card (10.11 currently performs best). Ubuntu chose the open source radeon driver by default, which does not provide the required OpenCL support. Also, I chose ATI since its mining performance is far, far better than Nvidia’s.

It was also necessary to get the ATI Stream SDK 2.1 from here. This provides the OpenCL libraries.

I also downloaded this to get the correct ICD files installed under /etc/OpenCL/vendors.

This how-to for Maverick Meerkat and also this link were invaluable in getting to this point.

Having achieved that little success, now we need to get the bitcoin software and also the Diablo GPU miner software.

You should ensure that the bitcoin server is running rather than the actual user interface (see below).

In a terminal window, run the following

sudo gedit ~/.bitcoin/bitcoin.conf

and put in any values for rpcuser and rpcpassword

rpcuser=myself

rpcpassword=somethingsecret

Then you need to run the bitcoin server with

bitcoind -server

Now, finally, you need to run the Diablo miner, making sure it knows where the ATI Stream SDK OpenCL library is (find it with sudo find / -name libOpenCL.so). Run it from the path where you downloaded the miner, passing the username and password you put into bitcoin.conf earlier.

export LD_LIBRARY_PATH=~/Downloads/ati-stream-sdk-v2.1-lnx64/lib/x86_64/

./DiabloMiner-Linux.sh -u myself -p somethingsecret -w 64 -f 10

This should then show the current processing in terms of khash/sec.  Mine is between 150,000 and 160,000.

Now to sit back and see if I improve on my free 0.05 bitcoins in the next 8 days (UPDATE: First 50 bitcoins arrived this morning, 27/02/11 4:14:49 AM, after about 15 days of processing)

Oh, and if you do manage to get some coins and fancy giving them away, my bitcoin wallet is 1Cfdc5DHMABv27eyQ9xcrnuynQmx9dRXTg

Upgrading from ext3 to ext4 in Ubuntu


If you have installed Ubuntu recently, you will find that ext4 is the standard format used for creating filesystems. However, if you upgraded from an older version of Ubuntu, you may still be using ext3.

The following instructions show how to upgrade the filesystem format with the data still in place. I am presuming you have a backup of your data in case this goes completely pear-shaped.

First confirm that you are using ext3 by typing the following command in a terminal window (Go to Applications menu, Accessories, Terminal)

sudo df -Th

One of the lines that showed up for me was

/dev/sdb1     ext3   241263968  93827044 137632456  41% /media/mirror

Now download a Ubuntu live CD image, burn it, and boot from it so that none of your hard drives are in use. Choose the 32-bit option of the latest version of Ubuntu (10.10 at the time of writing).

Then restart the machine with this newly created CD and again confirm the name of the device you want to upgrade from ext3 to ext4.

sudo df -Th

Before making the change in format, let’s check the disk for any errors

sudo e2fsck -fDC0 /dev/sdb1

When that completes, make the change from ext3 to ext4 with:

sudo tune2fs -O extents,uninit_bg,dir_index /dev/sdb1

The only thing left to do is to edit the fstab file so that the filesystem mounts as ext4.

The fstab file lives in the /etc directory of the drive you boot the PC from. Although not the case in my example here, this may be the drive you just converted.

We need to mount the drive that is used for booting (I assume it is sda1 here)

sudo mkdir /mnt/sda1
sudo mount -t ext4 /dev/sda1 /mnt/sda1

Now edit the fstab file

sudo nano /mnt/sda1/etc/fstab

Look for the line which contains your recently changed drive (sdb1 for me) and alter the format from ext3 to ext4 and hit control-x to exit.  Then hit the Y key to accept the changes and hit enter to replace the old fstab file.
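
If you prefer a non-interactive edit, a hypothetical one-liner (assuming sdb1 is the only line in fstab that mentions ext3, and after taking a backup) would be:

sudo cp /mnt/sda1/etc/fstab /mnt/sda1/etc/fstab.bak
sudo sed -i '/sdb1/s/ext3/ext4/' /mnt/sda1/etc/fstab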

Now it’s just a matter of restarting for the drive to be reloaded with ext4.

Thursday, May 5, 2011

5 Best Linux Distributions With No Proprietary Components

Linux is a free and open source operating system. However, Linux (and other open source operating systems) can use and load device drivers without publicly available source code. These are vendor-compiled binary drivers without any source code, known as binary blobs. Die-hard open source fans and the Free Software Foundation (FSF) recommend completely removing all proprietary components, including blobs. In this post I will list the five best Linux distributions that meet the FSF's strict guidelines and contain no proprietary components such as firmware and drivers.

Modification & distribution - Binary blobs can not be improved or fixed by open source developers. You can not distribute modified versions.Reliability - Binary blobs can be unsupported by vendors at any time by abandoning driver maintenance.Auditing - Binary blobs cannot be audited for security and bugs. You are forced to trust vendors not to put backdoors and spyware into the blob.Bugs - Binary blobs hide many bugs. Also, it can motivate people to buy new hardware.Portability - Binary blobs can not be ported on different hardware architectures. It typically runs on a few hardware architectures.

The following are not just distributions; each offers additional benefits too:

Learn how a distribution works on the inside.
Ease of use.
An active community providing quick and helpful support.

gNewSense is a GNU/Linux distribution based on Ubuntu Linux (gNewSense v3.0, however, will be based on Debian instead of Ubuntu). The current version is the same as Ubuntu, but with all non-free software and binary blobs removed. The FSF considers gNewSense to be a GNU/Linux distribution composed entirely of free software.

Fig.01: Default gNewSense Desktop

The latest stable release is v2.3, released on September 14, 2009. By default gNewSense uses GNOME as the official desktop environment. However, users can change the graphical user interface, install other window managers, and add other software from its repositories using the apt-get command.

=> Download gNewSense

Dragora is a GNU/Linux distribution created from scratch. The FSF considers Dragora to be a GNU/Linux distribution composed entirely of free software. It has a very simple packaging system that allows you to install, remove, upgrade and create packages with ease. Dragora features runit, among other things, for its system startup by default, which ensures complete control of system services.

Fig.02: Dragora GNU/Linux Desktop

=> Download Dragora

BLAG Linux is a GNU/Linux distribution based on Fedora Linux. The current version is just like Fedora but with all non-free software and binary blobs removed. The latest stable release, BLAG90001, is based on Fedora 9 and was released on 21 July 2008.

Fig.03: Blag GNU/Linux Desktop

BLAG comes with various server packages including Fedora plus updates, support for third-party repos from Dag, Dries, Freshrpms, and NewRPMS, and includes custom packages. BLAG140000 (a beta version) is based on Fedora 14 and was released on 8 February 2011. The FSF considers BLAG to be a GNU/Linux distribution composed entirely of free software.

Fig.03-1: Blag GNU/Linux 140k Beta Desktop

=> Download BLAG Linux (stable) and BLAG140000 (beta version)

Musix GNU/Linux is a live CD and DVD Linux distribution based on Debian Linux. It is intended for music production, graphic design, audio and video editing, and general-purpose applications. The FSF considers Musix to be a GNU/Linux distribution composed entirely of free software. Musix is developed by a team from Argentina, Spain, Mexico and Brazil. The main language used in development discussion and documentation is Spanish; however, Musix has a community of users who speak Spanish, Portuguese, and English. The default user interface is IceWM; however, users can install other interfaces such as KDE.

Fig.04: Musix GNU/Linux Desktop

=> Download Musix GNU/Linux

Trisquel is a GNU/Linux distribution based on the Debian Linux operating system. The latest version is derived from Ubuntu Linux, but includes only free software, with all blobs removed. The FSF considers Trisquel to be a GNU/Linux distribution composed entirely of free software with its own complete binary repository. It is intended for small businesses / enterprises, domestic users and educational centers. From the project home page:

Trisquel has several editions, designed for different uses: the one called simply Trisquel (the most important one) is intended for home and personal use, and includes a lot of apps for that: networking, multimedia, office, games, etc. The Edu edition is designed for educational centers, and allows the teacher to build a custom digital classroom within minutes. The Pro edition is for enterprises, and includes accounting and business management software. The Mini edition is for netbooks and older computers.

Fig.05: Trisquel GNU/Linux Desktop

Ututo is a GNU/Linux distribution based on Gentoo Linux. It is compiled using Gentoo Linux "ebuilds" and "emerge" software. The FSF considers Ututo to be a GNU/Linux distribution composed entirely of free software with its own complete binary repository. It was the first fully free GNU/Linux system recognized by the GNU Project.

Venenux, a GNU/Linux distribution built around the KDE desktop.

Dynebolic, a GNU/Linux distribution with special emphasis on audio and video editing.

A lot of wireless cards and Nvidia graphics chips do not work with any of the above distros, as the blobs are removed. However, I was able to install one on my old Intel Celeron 1.7GHz desktop with 512MB RAM and a 40GiB disk. Graphics worked well, and the onboard NIC, sound card, and Atheros wireless card all worked without any problems.

OpenBSD developers do not permit the inclusion of closed source binary drivers in the source tree and are reluctant to sign NDAs. If you are serious about running a system with no binary blobs, you may want to try out OpenBSD too. It supports Gnome, KDE and other desktop environments too.

From the project website:

The Debian project has been working in removing non-free firmware from the Linux kernel shipped with Debian for the past two release cycles. At the time of the releases of Debian 4.0 "Etch" and 5.0 "Lenny", however, it was not yet possible to ship Linux kernels stripped of all non-free firmware bits. Back then we had to acknowledge that freedom issues concerning Linux firmware were not completely sorted out.

Debian v6.x will provide the non-free firmware from the official non-free repository.

Jeremy Andrews conducts a free-ranging interview, focused mainly on OpenBSD 3.9 and drivers, that gives Theo a chance to explain how the big North American chip vendors' business practices make it harder for open source projects, talk about "binary blobs" vs firmware in drivers, and more.
Guidelines for free system distributions.
OpenBSD 3.9: "Blob!" lyrics.
Free GNU/Linux distributions.

(Image credit: the respective GNU/Linux distribution projects' webpages and Wikipedia.)

Download Ubuntu 11.04 (Natty Narwhal) CD ISO / DVD Images

The latest version of the popular Linux desktop distribution, Ubuntu 11.04, has been released and is available from the official project web site. This new version uses the Unity user interface as the default desktop (users can switch back to the classic GNOME desktop at any time). New features since Ubuntu 10.10 include Banshee as the default music player, Mozilla Firefox 4, LibreOffice, Linux kernel v2.6.38.2, gcc 4.5, Python 2.7, dpkg 1.16.0, Upstart 0.9, X.org 1.10.1, Mesa 7.10.2, Shotwell 0.9.2, and Evolution 2.32.2.

Fig.01: Default Ubuntu Linux 11.04 Desktop with Unity Graphical Interface

64 Bit DVD version (4G)
32 Bit DVD version (3.9G)

You can directly upgrade to Ubuntu 11.04 from Ubuntu v10.10, see upgrade howto here.

Download Ubuntu 10.10 (Maverick Meerkat) CD ISO / DVD Images

The latest version of the popular Linux desktop distribution, Ubuntu 10.10, has been released and is available from the official project web site. New features since Ubuntu 10.04 include Gnome 2.32, KDE 4.5.0 (QT 4.7), the new KDE browser Rekonq, PulseAudio as the default sound server, Firefox 3.6.9, OpenOffice 3.2.1, Evolution 2.30.3, Shotwell, experimental Btrfs support, kernel 2.6.35, and X.org version 1.9.

Fig.01: Ubuntu 10.10 (Maverick Meerkat) Desktop

64 Bit DVD version
32 Bit DVD version

You can directly upgrade to Ubuntu 10.10 from Ubuntu 10.04, see upgrade howto here.

Wednesday, May 4, 2011

Download Debian Linux 6 Squeeze ISO / CD / DVD Images

Debian GNU/Linux version 6.0 has been released (jump to download) after 24 months of constant development and is available for download in various media formats. Debian 6.0 is a free operating system, coming for the first time in two flavours: alongside Debian GNU/Linux, Debian GNU/kFreeBSD is introduced with this version as a "technology preview". It also supports various processor architectures and includes the KDE, GNOME, Xfce, LXDE and other desktop environments. It also features compatibility with the FHS v2.3 and software developed for version 3.2 of the LSB.

Debian 6.0 introduces a dependency-based boot system, making system start-up faster and more robust due to parallel execution of boot scripts and correct dependency tracking between them. Various other changes make Debian more suitable for small form factor notebooks, like the introduction of the KDE Plasma Netbook shell. The following new or updated major software packages are included:

KDE Plasma Workspaces and KDE Applications version 4.4.5
an updated version of the GNOME desktop environment, version 2.30
the Xfce version 4.6 desktop environment
LXDE version 0.5.0
X.Org version 7.5
OpenOffice.org version 3.2.1
GIMP version 2.6.11
Iceweasel version 3.5.16 (an unbranded version of Mozilla Firefox)
Icedove version 3.0.11 (an unbranded version of Mozilla Thunderbird)
PostgreSQL version 8.4.6
MySQL version 5.1.49
GNU Compiler Collection version 4.4.5
Linux kernel version 2.6.32
Apache web server version 2.2.16
Samba file and print server version 3.5.6
Python versions 2.6.6, 2.5.5 and 3.1.3
Perl version 5.10.1
PHP version 5.3.3
Asterisk version 1.6.2.9
Nagios version 3.2.3
Xen Hypervisor 4.0.1 (dom0 as well as domU support)
OpenJDK version 6b18
Tomcat version 6.0.18
And more than 29,000 other ready-to-use software packages, built from nearly 15,000 source packages.

Debian provides all packages on CD / DVD, and the total size is around 32GB+ for all media files. You only need to download the first CD / DVD and install the base system. Once that is installed, use the Internet to install any additional packages.

For almost all PCs, use the 32 bit version, e.g. most machines with Intel/AMD type processors.
Choose the 64 bit version to take full advantage of computers based on the AMD64 or EM64T architecture (e.g., Athlon64, Opteron, EM64T Xeon, Core 2 Duo).

There are a total of 8 DVD images:

You can use the following shell script to grab all 8 DVD images:

#!/bin/bash
# getdeb6: Download Debian 6 DVD images
# Tip: run it over a screen session
_bit="${1:-64}"
_arch="i386"
_base="http://cdimage.debian.org/debian-cd/6.0.0/i386/iso-dvd"
[ "$_bit" == "64" ] && { _base="http://cdimage.debian.org/debian-cd/6.0.0/amd64/iso-dvd"; _arch="amd64"; }
echo "Downloading Debian Linux v6.0 ${_bit} bit DVD..."
for i in {1..8}
do
  # build image path
  _image="${_base}/debian-6.0.0-${_arch}-DVD-${i}.iso"
  wget -c $_image
done

To grab 32 bit images, enter:
$ mkdir debian6_32 && cd debian6_32
$ ./getdeb6 32
To grab 64 bit images, enter:
$ mkdir debian6_64 && cd debian6_64
$ ./getdeb6
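
Note that both invocations assume the getdeb6 script shown above has been saved into the newly created directory and made executable first:

$ chmod +x getdeb6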

There are a total of 52 CD ISO images; I strongly suggest getting the DVD images instead:

You can use the following shell script to grab all 52 images:

#!/bin/bash
# getdeb6: Download Debian 6 ISO images
# Tip: run it over a screen session
_bit="${1:-64}"
_arch="i386"
_base="http://cdimage.debian.org/debian-cd/6.0.0/i386/iso-cd"
[ "$_bit" == "64" ] && { _base="http://cdimage.debian.org/debian-cd/6.0.0/amd64/iso-cd"; _arch="amd64"; }
echo "Downloading Debian Linux v6.0 ${_bit} bit ISO images..."
for i in {1..52}
do
  _image="${_base}/debian-6.0.0-${_arch}-CD-${i}.iso"
  wget -c $_image
done

To grab 32 bit images, enter:
$ mkdir debian6_32 && cd debian6_32
$ ./getdeb6 32
To grab 64 bit images, enter:
$ mkdir debian6_64 && cd debian6_64
$ ./getdeb6

Download images from the following mirror:

Linux Commands For Shared Library Management & Debugging Problems

If you are a developer, you will re-use code provided by others. Usually /lib, /lib64, /usr/local/lib, and other directories store various shared libraries. You can write your own programs using these shared libraries, and as a sys admin you need to manage and install them. Use the following commands for shared library management, security, and debugging problems.

In Linux and UNIX-like operating systems, a library is nothing but a collection of resources such as subroutines / functions, classes, values or type specifications. There are two types of libraries:

Static libraries - All lib*.a files are included into the executables that use their functions. For example, you can run a sendmail binary in a chrooted jail using statically linked libs.
Dynamic libraries or linking [also known as DSO (dynamic shared object)] - lib*.so* files are not copied into executables; the executable automatically loads the libraries using ld.so or ld-linux.so.

The following commands help you manage and debug shared libraries:

ldconfig : Updates the necessary links for the run time link bindings.
ldd : Tells what libraries a given program needs to run.
ltrace : A library call tracer.
ld.so/ld-linux.so : Dynamic linker/loader.

As a sys admin you should be aware of important files related to shared libraries:

/lib/ld-linux.so.* : Execution time linker/loader.
/etc/ld.so.conf : File containing a list of colon, space, tab, newline, or comma separated directories in which to search for libraries.
/etc/ld.so.cache : File containing an ordered list of libraries found in the directories specified in /etc/ld.so.conf. This file is not in human readable format and is not intended to be edited; it is created by the ldconfig command.
lib*.so.version : Shared libraries stored in the /lib, /usr/lib, /usr/lib64, /lib64, /usr/local/lib directories.

You need to use the ldconfig command to create, update, and remove the necessary links and cache (for use by the run-time linker, ld.so) to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/usr/lib, /lib64 and /lib). The ldconfig command checks the header and file names of the libraries it encounters when determining which versions should have their links updated. It also creates a file called /etc/ld.so.cache which is used to speed up linking.

In this example, you've installed a new set of shared libraries at /usr/local/lib/:
$ ls -l /usr/local/lib/
Sample outputs:

-rw-r--r-- 1 root root 878738 Jun 16 2010 libGeoIP.a
-rwxr-xr-x 1 root root    799 Jun 16 2010 libGeoIP.la
lrwxrwxrwx 1 root root     17 Jun 16 2010 libGeoIP.so -> libGeoIP.so.1.4.6
lrwxrwxrwx 1 root root     17 Jun 16 2010 libGeoIP.so.1 -> libGeoIP.so.1.4.6
-rwxr-xr-x 1 root root 322776 Jun 16 2010 libGeoIP.so.1.4.6
-rw-r--r-- 1 root root  72172 Jun 16 2010 libGeoIPUpdate.a
-rwxr-xr-x 1 root root    872 Jun 16 2010 libGeoIPUpdate.la
lrwxrwxrwx 1 root root     23 Jun 16 2010 libGeoIPUpdate.so -> libGeoIPUpdate.so.0.0.0
lrwxrwxrwx 1 root root     23 Jun 16 2010 libGeoIPUpdate.so.0 -> libGeoIPUpdate.so.0.0.0
-rwxr-xr-x 1 root root  55003 Jun 16 2010 libGeoIPUpdate.so.0.0.0

Now when you run an app that links against libGeoIP.so, you may get an error about a missing library. You can run the ldconfig command manually to link libraries, passing them as command line arguments with the -l switch:
# ldconfig -l /path/to/lib/our.new.lib.so
Another recommended option for sys admins is to create a file called /etc/ld.so.conf.d/geoip.conf as follows:

/usr/local/lib

Now just run ldconfig to update the cache:
# ldconfig
To verify new libs or to look for a linked library, enter:
# ldconfig -v
# ldconfig -v | grep -i geoip
Sample outputs:

libGeoIP.so.1 -> libGeoIP.so.1.4.6
libGeoIPUpdate.so.0 -> libGeoIPUpdate.so.0.0.0

You can print the current cache with the -p option:
# ldconfig -p
Putting a web server such as Apache / Nginx / Lighttpd in a chroot jail minimizes the damage done by a potential break-in by isolating the web server to a small section of the filesystem. It is also necessary to copy all files required by Apache into the filesystem rooted at the /jail/ directory, including web server binaries, shared libraries, modules, configuration files, and php/perl/html web pages. You also need to copy the /etc/{ld.so.cache,ld.so.conf} files and the /etc/ld.so.conf.d/ directory to /jail/etc/. Use the ldconfig command to update, print and troubleshoot in the chrooted jail:

### chroot to jail bash ###
chroot /jail /bin/bash
### now update the cache in /jail ###
ldconfig
### print the cache in /jail ###
ldconfig -p
### copy missing libs ###
cp /path/to/some.lib /jail/path/to/some.lib
ldconfig
ldconfig -v | grep some.lib
### get out of jail ###
exit
### maybe delete bash and ldconfig to increase security (NOTE path carefully) ###
cd /jail
rm sbin/ldconfig bin/bash
### now start nginx jail ###
chroot /jail /usr/local/nginx/sbin/nginx

A rootkit is a program (or combination of several programs) designed to take fundamental control of a computer system, without authorization by the system's owners and legitimate managers. Usually, rootkits use the /lib, /lib64, and /usr/local/lib directories to hide themselves from real root users. You can use the ldconfig command to view the cache of all shared libraries and spot unwanted entries:
# /sbin/ldconfig -p | less
You can also use various tools to detect rootkits under Linux.

You may see errors such as the following:

Dynamic linker error in foo
Can't map cache file cache-file
Cache file cache-file foo

All of the above errors mean the linker cache file /etc/ld.so.cache is corrupt or does not exist. To fix these errors, simply run the ldconfig command as follows:
# ldconfig

The executable requires a dynamically linked library that ld.so or ld-linux.so cannot find. It means a library (say, xyz) needed by the program (say, foo) is not installed or its path is not set. To fix this problem, install the xyz library and set its path in the /etc/ld.so.conf file or create a file in the /etc/ld.so.conf.d/ directory.
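
A minimal sketch of that fix, assuming the hypothetical xyz library installed its files under /opt/xyz/lib:

# register the library directory with the dynamic linker
# echo '/opt/xyz/lib' > /etc/ld.so.conf.d/xyz.conf
# rebuild /etc/ld.so.cache
# ldconfig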

ldd (List Dynamic Dependencies) is a Unix and Linux program to display the shared libraries required by each program. This tool is needed to build and run various server programs in a chroot jail. A typical example follows: to list the Apache server's shared libraries, enter:
# ldd /usr/sbin/httpd
Sample outputs:

libm.so.6 => /lib64/libm.so.6 (0x00002aff52a0c000)
libpcre.so.0 => /lib64/libpcre.so.0 (0x00002aff52c8f000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00002aff52eab000)
libaprutil-1.so.0 => /usr/lib64/libaprutil-1.so.0 (0x00002aff530c4000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00002aff532de000)
libldap-2.3.so.0 => /usr/lib64/libldap-2.3.so.0 (0x00002aff53516000)
liblber-2.3.so.0 => /usr/lib64/liblber-2.3.so.0 (0x00002aff53751000)
libdb-4.3.so => /lib64/libdb-4.3.so (0x00002aff5395f000)
libexpat.so.0 => /lib64/libexpat.so.0 (0x00002aff53c55000)
libapr-1.so.0 => /usr/lib64/libapr-1.so.0 (0x00002aff53e78000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aff5409f000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002aff542ba000)
libc.so.6 => /lib64/libc.so.6 (0x00002aff544bf000)
libsepol.so.1 => /lib64/libsepol.so.1 (0x00002aff54816000)
/lib64/ld-linux-x86-64.so.2 (0x00002aff527ef000)
libuuid.so.1 => /lib64/libuuid.so.1 (0x00002aff54a5c000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00002aff54c61000)
libsasl2.so.2 => /usr/lib64/libsasl2.so.2 (0x00002aff54e76000)
libssl.so.6 => /lib64/libssl.so.6 (0x00002aff5508f000)
libcrypto.so.6 => /lib64/libcrypto.so.6 (0x00002aff552dc000)
libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00002aff5562d000)
libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00002aff5585c000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00002aff55af1000)
libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00002aff55cf3000)
libz.so.1 => /usr/lib64/libz.so.1 (0x00002aff55f19000)
libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00002aff5612d000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00002aff56335000)

Now, you can copy all those libs one by one to the /jail directory:

# mkdir /jail/lib
# cp /lib64/libm.so.6 /jail/lib
# cp /lib64/libkeyutils.so.1 /jail/lib

You can write a bash script to automate the entire procedure:

cp_support_shared_libs(){
  local d="$1"         # JAIL ROOT
  local pFILE="$2"     # binary whose libs we copy
  local files=""
  local _cp="/bin/cp"  # assumed here; defined elsewhere in the original script
  ### use ldd to get shared libs list ###
  files="$(ldd $pFILE | awk '{ print $3 }' | sed '/^$/d')"
  for i in $files
  do
    dcc="${i%/*}"      # get dirname only
    [ ! -d ${d}${dcc} ] && mkdir -p ${d}${dcc}
    ${_cp} -f $i ${d}${dcc}
  done
  # Works with 32 and 64 bit ld-linux
  sldl="$(ldd $pFILE | grep 'ld-linux' | awk '{ print $1}')"
  sldlsubdir="${sldl%/*}"
  [ ! -f ${d}${sldl} ] && ${_cp} -f ${sldl} ${d}${sldlsubdir}
}

Call cp_support_shared_libs() as follows:

cp_support_shared_libs "/jail" "/usr/local/nginx/sbin/nginx"

To perform relocations and report any missing objects, type the following command:
$ ldd -d /path/to/executable

To perform relocations for both data objects and functions, and report any missing objects or functions, type the following command:
$ ldd -r /path/to/executable

TCP Wrapper is a host-based networking ACL system, used to filter network access to the Internet. TCP Wrappers were originally written to monitor and stop cracking activity on UNIX / Linux systems. To determine whether a given executable daemon supports TCP Wrappers, run the following command:
$ ldd /usr/sbin/sshd | grep libwrap
Sample outputs:

libwrap.so.0 => /lib64/libwrap.so.0 (0x00002abd70cbc000)

The output indicates that the OpenSSH (sshd) daemon supports TCP Wrapper.

You can use the ldd command when an executable is failing because of a missing dependency. Once you find a missing dependency, you can install it or update the cache with the ldconfig command as mentioned above.
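
To spot missing dependencies quickly, you can filter ldd's output; the binary path here is a hypothetical example:

$ ldd /usr/local/bin/foo | grep 'not found'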

The ltrace command simply runs the specified command until it exits. It intercepts and records the dynamic library calls made by the executed process and the signals received by that process. It can also intercept and print the system calls executed by the program. Its use is very similar to the strace command.
# ltrace /usr/sbin/httpd
# ltrace /sbin/chroot /usr/sbin/httpd
# ltrace /bin/ls
Sample outputs:

__libc_start_main(0x804fae0, 1, 0xbfbd6544, 0x805bce0, 0x805bcd0
strrchr("/bin/ls", '/') = "/ls"
setlocale(6, "") = "en_IN.utf8"
bindtextdomain("coreutils", "/usr/share/locale") = "/usr/share/locale"
textdomain("coreutils") = "coreutils"
__cxa_atexit(0x8052d10, 0, 0, 0xbfbd6544, 0xbfbd6498) = 0
isatty(1) = 1
getenv("QUOTING_STYLE") = NULL
getenv("LS_BLOCK_SIZE") = NULL
getenv("BLOCK_SIZE") = NULL
getenv("BLOCKSIZE") = NULL
getenv("POSIXLY_CORRECT") = NULL
getenv("BLOCK_SIZE") = NULL
getenv("COLUMNS") = NULL
ioctl(1, 21523, 0xbfbd6470) = 0
getenv("TABSIZE") = NULL
getopt_long(1, 0xbfbd6544, "abcdfghiklmnopqrstuvw:xABCDFGHI:"..., 0x0805ea40, -1) = -1
__errno_location() = 0xb76b8694
malloc(40) = 0x08c8e3e0
memcpy(0x08c8e3e0, "", 40) = 0x08c8e3e0
............... output truncated ...............
free(0x08c8e498) =
free(NULL) =
free(0x08c8e480) =
exit(0
__fpending(0xb78334e0, 0xbfbd6334, 0xb78876a3, 0xb78968f8, 0) = 0
fclose(0xb78334e0) = 0
__fpending(0xb7833580, 0xbfbd6334, 0xb78876a3, 0xb78968f8, 0) = 0
fclose(0xb7833580) = 0
+++ exited (status 0) +++

The ltrace command is a perfect debugging utility in Linux:

To monitor the library calls used by a program and all the signals it receives.
To track the execution of processes.
To show the system calls used by a program.

Consider the following C program:

#include <stdio.h>

int main() {
    printf("Hello world\n");
    return 0;
}

Compile and run it as follows:
$ cc hello.c -o hello
$ ./hello
Now use the ltrace command to track the execution of the process:
$ ltrace -S -tt ./hello
Sample outputs:

15:20:38.561616 SYS_brk(NULL) = 0x08f42000
15:20:38.561845 SYS_access("/etc/ld.so.nohwcap", 00) = -2
15:20:38.562009 SYS_mmap2(0, 8192, 3, 34, -1) = 0xb7708000
15:20:38.562155 SYS_access("/etc/ld.so.preload", 04) = -2
15:20:38.562336 SYS_open("/etc/ld.so.cache", 0, 00) = 3
15:20:38.562502 SYS_fstat64(3, 0xbfaafe20, 0xb7726ff4, 0xb772787c, 3) = 0
15:20:38.562629 SYS_mmap2(0, 76469, 1, 2, 3) = 0xb76f5000
15:20:38.562755 SYS_close(3) = 0
15:20:38.564204 SYS_access("/etc/ld.so.nohwcap", 00) = -2
15:20:38.564372 SYS_open("/lib/tls/i686/cmov/libc.so.6", 0, 00) = 3
15:20:38.564561 SYS_read(3, "\177ELF\001\001\001", 512) = 512
15:20:38.564694 SYS_fstat64(3, 0xbfaafe6c, 0xb7726ff4, 0xb7705796, 0x8048234) = 0
15:20:38.564822 SYS_mmap2(0, 0x1599a8, 5, 2050, 3) = 0xb759b000
15:20:38.565076 SYS_mprotect(0xb76ee000, 4096, 0) = 0
15:20:38.565209 SYS_mmap2(0xb76ef000, 12288, 3, 2066, 3) = 0xb76ef000
15:20:38.565454 SYS_mmap2(0xb76f2000, 10664, 3, 50, -1) = 0xb76f2000
15:20:38.565604 SYS_close(3) = 0
15:20:38.565709 SYS_mmap2(0, 4096, 3, 34, -1) = 0xb759a000
15:20:38.565842 SYS_set_thread_area(0xbfab030c, 0xb7726ff4, 0xb759a6c0, 1, 0) = 0
15:20:38.566070 SYS_mprotect(0xb76ef000, 8192, 1) = 0
15:20:38.566185 SYS_mprotect(0x08049000, 4096, 1) = 0
15:20:38.566288 SYS_mprotect(0xb7726000, 4096, 1) = 0
15:20:38.566381 SYS_munmap(0xb76f5000, 76469) = 0
15:20:38.566522 __libc_start_main(0x80483e4, 1, 0xbfab04e4, 0x8048410, 0x8048400
15:20:38.566667 puts("Hello world"
15:20:38.566811 SYS_fstat64(1, 0xbfab0310, 0xb76f0ff4, 0xb76f14e0, 0x80484c0) = 0
15:20:38.566936 SYS_mmap2(0, 4096, 3, 34, -1) = 0xb7707000
15:20:38.567126 SYS_write(1, "Hello world\n", 12Hello world) = 12
15:20:38.567282 <... puts resumed> ) = 12
15:20:38.567348 SYS_exit_group(0
15:20:38.567454 +++ exited (status 0) +++

You need to carefully monitor the order and arguments of selected functions such as open() [used to open and possibly create a file or device] or chown() [used to change the ownership of a file] so that you can spot simple kinds of race conditions or security-related problems. This is quite useful for evaluating the security of binary programs and finding out what kind of changes they make to the system.
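
For example, reusing the same -e filter syntax shown in the memory example below, a hypothetical audit of a suspect binary (the path and arguments are illustrative) might look like:

# trace only the open() and chown() library calls the program makes
$ ltrace -e open,chown ./suspect-binary arg1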

The ltrace command can be used to trace the malloc() and free() functions in a C program, so you can observe the amount of memory allocated, as follows:
[node303 ~]$ ltrace -e malloc,free ./simulator arg1 agr2 arg3
ltrace will start the ./simulator program and trace its malloc() and free() calls. You can hunt for I/O problems as follows:
[node303 ~]$ ltrace -e fopen,fread,fwrite,fclose ./simulator arg1 agr2 arg3
You may need to change the function names, as your programming language or UNIX platform may use different memory allocation functions.

Linux uses ld.so / ld-linux.so as follows:

To load the shared libraries needed by a program.
To prepare the program to run, and then run it.

Type the following command:
# cd /lib
For 64 bit systems:
# cd /lib64
Pass the --list option, enter:
# ./ld-2.5.so --list /path/to/executable

From the man page:

--verify              verify that given object really is a dynamically linked object we can handle
--library-path PATH   use given PATH instead of content of the environment variable LD_LIBRARY_PATH
--inhibit-rpath LIST  ignore RUNPATH and RPATH information in object names in LIST

The LD_LIBRARY_PATH environment variable can be used to set a search path for finding dynamic libraries, in the standard colon separated format:
$ export LD_LIBRARY_PATH=/opt/simulator/lib:/usr/local/lib
LD_PRELOAD allows an extra library not specified in the executable to be loaded:
$ export LD_PRELOAD=/home/vivek/dirhard/libdiehard.so
Please note that these variables are ignored when executing setuid/setgid programs.

Top 5 Linux DVD RIP Software

DVD ripper software allows you to copy the content of a DVD to a hard disk drive. You can transfer video on DVDs to different formats, make a backup of DVD content, and convert DVD video for playback on media players, streaming, and mobile phones. A few DVD ripper programs can copy protected disks, making them unrestricted and region-free.

Please note that most of the following programs can rip encrypted DVDs, as long as you have libdvdcss2 installed as described here. Please check the copyright laws for your country regarding the backup of any copyright-protected DVDs and other media.
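
On Debian / Ubuntu, once a repository that carries the package is enabled (the Medibuntu repository was the usual source at the time), installing it is a one-liner; this is a sketch, and package availability depends on your release and repositories:

sudo apt-get install libdvdcss2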

AcidRip is an automated front end for MPlayer/Mencoder (DVD ripping and encoding tools) written in Perl, using Gtk2::Perl for its graphical interface. It makes encoding a DVD just one button click! You can install it as follows under Debian / Ubuntu Linux:
$ sudo apt-get install acidrip

Fig.01: Linux Ripping And Encoding DVD's With AcidRip Software


On the Preview tab you can choose to watch a bit of a preview of the resulting movie:
Fig.02: Preview your DVD rip


And when you are ready, click the Start button to rip DVDs.

=> Download acidrip

dvd::rip is a full-featured DVD copy program written in Perl, i.e. a front end for transcode and ffmpeg. It provides an easy to use but feature-rich Gtk+ GUI to control almost all aspects of the ripping and transcoding process. It uses the widely known video processing Swiss Army knife transcode and many other open source tools. dvd::rip itself is licensed under the GPL / Perl Artistic License. You can install dvd::rip as follows under Debian / Ubuntu Linux:
$ sudo apt-get install dvdrip

Fig.03: dvd::rip in action


You need to configure dvd::rip before you actually start a project. See the documentation for more information.

=> Download dvd::rip

HandBrake is an open-source, GPL-licensed, multiplatform, multithreaded video transcoder, available for MacOS X, Linux and Windows. It can rip from any DVD or Bluray-like source such as VIDEO_TS folder, DVD image, real DVD or bluray (unencrypted -- removal of copy protection is not supported), and some .VOB, .TS and M2TS files. You can install HandBrake under Debian or Ubuntu Linux as follows:
$ sudo apt-get install handbrake-gtk

Fig.04: HandBrake in action

=> Download HandBrake

K9copy is a KDE DVD backup tool. It allows copying a DVD9 to a DVD5 and is also known as the Linux DVD Shrink. It supports the following features:

The video stream is compressed to make the video fit on a 4.7GB recordable DVD.
DVD burning.
Creation of ISO images.
Choosing which audio and subtitle tracks are copied.
Title preview (video only).
The ability to preserve the original menus.

To install k9copy, enter:
$ sudo apt-get install k9copy

Fig.05: k9copy - Linux dvd shrink in action

=> Download k9copy

thoggen is a DVD backup utility ('DVD ripper') for Linux, based on GStreamer and the Gtk+ toolkit. Thoggen is designed to be easy and straightforward to use. It attempts to hide the complexity many other transcoding tools expose and tries to offer sensible defaults that work okay for most people most of the time. It supports the following features:

Easy to use, with a nice graphical user interface (GUI).
Supports title preview, picture cropping, and picture resizing.
Language selection for the audio track (no subtitle support yet, though).
Encodes into Ogg/Theora video.
Can encode from a local directory with video DVD files.
Based on the GStreamer multimedia framework, which makes it fairly easy to add additional encoding formats/codecs in future.

You can install thoggen as follows:
$ sudo apt-get install thoggen

Fig.06: Thoggen in action

=> Download thoggen

=> You need to install various libraries to use the above mentioned tools such as (yum or apt-get commands will install them automatically for you):

libdvdcss2 - Simple foundation for reading DVDs - runtime libraries.
libdvdnav4 - DVD navigation library.
libdvdread4 - Library for reading DVDs.

=> mencoder - Personally, I use mencoder to rip my DVDs into .avi files as follows:

mencoder dvd://2 -ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate="1200" -vf scale -zoom -xy 640 -oac mp3lame -lameopts br=128 -o /nas/videos/my-movies/example/track2.avi
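
The dvd://2 above rips title 2 of the disc. To see which titles a disc contains before picking one, one option (assuming the lsdvd package is installed) is:

lsdvd /dev/dvd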

Please note that AcidRip is a graphical frontend for mencoder.

=> VLC - Yes, VLC can rip DVDs too.

=> Transcode is a suite of command line utilities for transcoding video and audio codecs, and for converting between different container formats. Transcode can decode and encode many audio and video formats. Both K9Copy and dvd::rip are graphical frontends for transcode.

=> Wine - It is open source software for running Windows applications on other operating systems. You can use popular MS-Windows applications such as DVDFab to rip encrypted DVDs and DVD Shrink to shrink them to a smaller size. I do not *recommend* or encourage this option, as it goes against the FOSS philosophy. The following screenshot is based on a trial version of DVDFab:

Fig.07: Running DVDFab under Wine v1.2.2

Have a favorite Linux DVD ripper software or ripping tip? Let's hear about it in the comments below.

Download Fedora 14 CD / DVD ISO

Fedora Linux version 14 has been released and is available for download (jump to download link). Fedora Linux is a community-based Linux distribution sponsored by Red Hat, Inc. Fedora is considered the second most popular distro, behind Ubuntu Linux, for desktop and laptop usage.

Fig.01: Fedora Linux v.14 desktop (image credit: Wikipedia)

The new features in Fedora Linux ver. 14 are:

- Updated Boost to the upstream 1.44 release
- Addition of the D compiler (LDC) and D standard runtime library (Tango)
- Concurrent release of Fedora 14 on the Amazon EC2 cloud
- Updated Fedora's Eclipse stack to Helios releases
- Replacement of libjpeg with libjpeg-turbo
- Inclusion of the virt-v2v tool
- Inclusion of the Spice framework for VDI deployment
- Updates to the Rakudo Star implementation of Perl 6
- NetBeans IDE updated to the 6.9 release
- Inclusion of the ipmiutil system management tool
- Inclusion of a tech preview of the GNOME Shell environment

You can download Fedora Linux 14 via the web/ftp server or via BitTorrent (recommended).

- For almost all PCs, select the 32-bit version, e.g., most machines with Intel/AMD type processors. Good for desktop usage; almost all multimedia plugins and software work with the 32-bit edition.
- Choose the 64-bit version to take full advantage of computers based on the AMD64 or EM64T architecture (e.g., Athlon64, Opteron, EM64T Xeon, Core 2 Duo, Core 2 Quad, and so on). For servers and advanced features such as hardware error detection and access to more than 4GB RAM, use the 64-bit version.

There are a total of 5 ISO images (5 CDs):

Download images from the following mirror:

See the complete list of torrents here.

.NET Open Source Community – CodePlex / GitHub Comparison

The .NET segment of the open source ecosystem has been one of the fastest growing over the last few years. The vast majority of all projects on CodePlex are .NET related, and among .NET developers CodePlex is generally the best-known open source project hosting site. The number of new projects started on CodePlex has been accelerating, as shown in the following chart:

[Chart: new projects started on CodePlex over time]

CodePlex / GitHub Comparisons

GitHub is another open source project hosting site that has been rising in popularity. Although GitHub is primarily used by developers preferring Mac or Linux, there are also many .NET developers that use it for their projects. Sometimes we get questions about how the .NET open source developer community compares between CodePlex and GitHub, so below is some information on that.

Project Counts

After CodePlex, GitHub probably has the largest number of .NET projects among the various open source project hosting sites.  The following table shows both the total counts and “Popular Project” counts (projects with at least 5 followers):

[Table: total and popular (5+ followers) .NET project counts on CodePlex and GitHub]

Between the two sites there are over thirty thousand projects, although CodePlex has approximately 2.5x as many .NET projects as GitHub.  For popular projects, CodePlex has approximately 4x as many.  We’re not sure whether this is because popular .NET projects are more likely to choose CodePlex, or the community on CodePlex is more likely to make a .NET project popular, but it is probably some combination of both.

* GitHub does not require developers to specify a license, and typically less than half of them do. Without a license specified, a project is not considered true "Open Source", since project users do not actually have the legal rights that an open source license provides. The above table counts the total number of C# projects, not just those with an open source license specified.

Popular Projects

I think another interesting statistic is the percentage of total projects that are “Popular” using the same metric of having 5 or more followers.  The following table shows the popular project percentage for CodePlex and GitHub, including for just the subset of GitHub projects that are C# and Objective-C:

The percentage of popular projects on CodePlex is higher than for C# projects on GitHub, but both are higher than the percentage of popular projects across all languages on GitHub. For Objective-C projects on GitHub, however, a very high percentage are popular. GitHub is very popular among Mac developers, which presumably explains the correlation.

Overall Summary

I think it is great to see the growth in the .NET open source community, and all indications are that it will only continue growing faster. I believe CodePlex has done a lot to encourage and support .NET open source developers, and I look forward to helping many thousands more open source projects become popular and successful!

Sunday, May 1, 2011

HowTo: Configure Vbulletin To Use A Content Delivery Network (CDN)

The last time I wrote about CDNs, I covered how to configure a CDN for WordPress to display content to users faster and more efficiently. However, a few regular readers asked how to configure Amazon CDN or another CDN network with the vBulletin forum software. In this quick tutorial, I will explain how to configure vBulletin, the Apache/Lighttpd web server, and the BIND DNS server to use a CDN to distribute common files such as css, js, and user-uploaded files, and lighten the load on your web server.

- Forum URL: http://nixcraft.in/ - This is hosted on your own server using Apache, Lighttpd, or Nginx.
- Origin Pull URL: http://cdn-origin.nixcraft.in/ - This is hosted on your own server. You need to configure your web server, vBulletin, and DNS server to use this. This is called an "Origin Pull Host", a CDN method by which content is pulled from your web server.
- CDN URL: http://cdn.nixcraft.in/ - This is a CDN URL hosted by your CDN provider such as Amazon. This URL always points to an edge server via proprietary DNS hacks. cdn.nixcraft.in must be set as a CNAME record pointing to the domain name of the CDN server.
- CDN DNS CNAME: cdn.nixcraft.in.example.com - example.com is your CDN provider. This must be set as the CNAME target for cdn.nixcraft.in.

As I said earlier, the cost varies between CDN providers; check your CDN service provider's website for more information. Next, use the provider's control panel to configure an "Origin Pull Host" for each domain. In other words, configure cdn.nixcraft.in in origin pull mode. The control panel will also provide you an option to set up the CDN DNS CNAME; you need to use the same CNAME as in step #2. Once the configuration is active and the CNAME is resolving, calls to cdn.nixcraft.in will be cached from cdn-origin.nixcraft.in.

I'm assuming that you are using the BIND DNS server. Edit your zone file and add entries as follows (you can skip this step and use your DNS hosting provider's control panel to set up the CNAME and origin host):

; CDN CNAME mapping for cdn.nixcraft.in
cdn 3660 IN CNAME cdn.nixcraft.in.example.com.
; Your cdn-origin url (note: nixcraft.in is also hosted on the same server, IP 123.1.2.3)
cdn-origin 3600 IN A 123.1.2.3

Save and close the file. Reload named:
# rndc reload && tail -f /var/log/messages
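Once the zone reloads, you can verify that both records resolve with dig (the answers below simply mirror the example zone entries above):
$ dig +short CNAME cdn.nixcraft.in
cdn.nixcraft.in.example.com.
$ dig +short A cdn-origin.nixcraft.in
123.1.2.3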
To keep your configuration simple, use the same web server for the origin pull domain and the main domain, i.e. host both cdn-origin.nixcraft.in and nixcraft.in on the same web server. This allows you to directly upload and map files to the CDN server.

You need to configure cdn-origin.nixcraft.in as follows:

- Origin pull DocumentRoot: /home/httpd/cdn-origin.nixcraft.in - All your .css, .js, and uploaded files are hosted here.
- Server forum DocumentRoot: /home/httpd/nixcraft.in - All your vBulletin files are hosted here.
- MaxAge: Set cache-lifetime headers for static files for the CDN network.
- ETag: An ETag (entity tag) is part of HTTP, the protocol for the World Wide Web. It is a response header that may be returned by an HTTP/1.1 compliant web server and is used to determine change in content at a given URL. When a new HTTP response contains the same ETag as an older HTTP response, the client can conclude that the content is the same without further downloading.

Apache configuration for the origin pull host:

ServerAdmin webmaster@nixcraft.in
DocumentRoot /home/httpd/cdn-origin.nixcraft.in
ServerName files.nixcraft.in
ServerAlias file.nixcraft.in
ErrorLog /var/logs/httpd/cdn-error_log
CustomLog /var/logs/httpd/cdn-access_log common

# Files in this directory will be cached for 1 week only.
# After 1 week, the CDN server will check if the contents have been modified or not.
# If not modified, Apache will send a 304 "Not Modified" header
Header set Cache-Control "max-age=604800, must-revalidate"

# Disable ETag as we are on a clustered Apache server
Header unset ETag
FileETag None

# Do not cache
Header set Cache-Control "max-age=0, no-store"

Lighttpd configuration for the origin pull host:

# Configure ETags
etag.use-inode = "enable"
etag.use-mtime = "enable"
etag.use-size = "enable"
static-file.etags = "enable"

###### CDN FILES via WordPress Upload ##############
$HTTP["host"] == "cdn-origin.nixcraft.in" {
  server.document-root = "/home/httpd/cdn-origin.nixcraft.in"
  accesslog.filename = "/var/log/lighttpd/cdn.access.log"
  # Set max age
  $HTTP["url"] =~ "^/" {
    expire.url = ( "" => "access 60 days" )
  }
}

Adjust documentroot as per your setup.

You need to configure files for cdn-origin.nixcraft.in:
# mkdir -p /home/httpd/cdn-origin.nixcraft.in
# cd /home/httpd/cdn-origin.nixcraft.in
Next, soft link your .css, .js, images, and clientscript files from the original forum document root (i.e. /home/httpd/nixcraft.in/) as follows:
# ln -s ../nixcraft.in/clear.gif .
# ln -s ../nixcraft.in/clientscript/ .
# ln -s ../nixcraft.in/customavatars/ .
# ln -s ../nixcraft.in/customprofilepics/ .
# ln -s ../nixcraft.in/images/ .
# ln -s ../nixcraft.in/signaturepics/ .
Again, feel free to adjust the paths according to your setup. Test your new CDN URLs:
http://cdn.nixcraft.in/clientscript/vbulletin_important.css

You need to edit your vBulletin style. Open the admin control panel by visiting http://nixcraft.in/admincp/, then select Styles & Templates > Replacement Variable Manager:

Fig.01: Vbulletin Editing Styles And Templates

Click on [Add New Replacement Variable] link and set it as follows:

- Set "Search for Text" to: href="clientscript
- Set "Replace with Text" to: href="http://cdn.nixcraft.in/clientscript

Sample outputs:

Fig.02: Vbulletin Adding Replacement Variable For CDN


You need to repeat this step for images, javascript, and other shared media as follows (each "Search for Text" value is replaced with the corresponding "Replace with Text" value):

Search for Text => Replace with Text
src="clear.gif => src="http://cdn.nixcraft.in/clear.gif"
src="customavatars/ => src="http://cdn.nixcraft.in/customavatars/
src="customprofilepics/ => src="http://cdn.nixcraft.in/customprofilepics/
src="images/ => src="http://cdn.nixcraft.in/images/
url("clientscript => url("http://cdn.nixcraft.in/clientscript
src="clientscript/ => src="http://cdn.nixcraft.in/clientscript/
href="clientscript/ => href="http://cdn.nixcraft.in/clientscript/
url(images/ => url(http://cdn.nixcraft.in/images/
var imgdir_misc = "images/misc"; => var IMGDIR_MISC = "http://cdn.nixcraft.in/images/misc";

Visit Avatars > Storage Type and set the storage types as follows, moving all of them to the filesystem, to match your CDN rules above:

- Avatars are currently being served from the filesystem at ./customavatars
- Profile pictures are currently being served from the filesystem at ./customprofilepics
- Signature pictures are currently being served from the filesystem at ./signaturepics

Use curl to test the HTTP headers (look for the ETag, max-age, and Expires headers):
$ curl -I 'http://cdn.nixcraft.in/clientscript/vbulletin_important.css?v=385'
$ curl -I http://cdn.nixcraft.in/customavatars/avatarx_y.gif
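If the rules above are active, the response headers should echo the Cache-Control policy you configured; expect something along these lines (the exact header set depends on your web server and CDN provider):
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: max-age=604800, must-revalidate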

The forum home page loading (rendering) time went from 8.5 seconds to 2.2 seconds and average thread loading time went from 14.3 seconds to 5 seconds:

Fig.03: Speed Improvements With CDN


See 6 tools to test web site speed for more information. This blog post is 4 of 4 in the "Networks & Applications of Distributed Computing Tutorial" series.


Linux Commands For Shared Library Management & Debugging Problems


If you are a developer, you will re-use code provided by others. Usually /lib, /lib64, /usr/local/lib, and other directories store various shared libraries. You can write your own programs using these shared libraries. As a sys admin, you need to manage and install these shared libraries. Use the following commands for shared library management, security, and debugging problems.


In Linux or UNIX-like operating systems, a library is nothing but a collection of resources such as subroutines/functions, classes, values, or type specifications. There are two types of libraries:

- Static libraries - All lib*.a files are included into the executables that use their functions. For example, you can run a sendmail binary in a chrooted jail using statically linked libs.
- Dynamic libraries, or dynamic linking [also known as DSO (dynamic shared object)] - lib*.so* files are not copied into executables; the executable automatically loads the libraries using ld.so or ld-linux.so. (A quick compile-time illustration of the difference follows below.)

The main commands for working with shared libraries are:

- ldconfig : Updates the necessary links for the run-time link bindings.
- ldd : Tells what libraries a given program needs to run.
- ltrace : A library call tracer.
- ld.so / ld-linux.so : Dynamic linker/loader.
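A minimal sketch of the static/dynamic difference, assuming a trivial hello.c like the one used later in this post:
$ cc -static hello.c -o hello-static    # code from lib*.a is copied into the binary
$ cc hello.c -o hello-dynamic           # lib*.so* files are loaded at run time by ld.so
$ ldd hello-static                      # reports "not a dynamic executable"
$ ldd hello-dynamic                     # lists libc.so.6 and the dynamic loader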

As a sys admin you should be aware of important files related to shared libraries:

- /lib/ld-linux.so.* : Execution-time linker/loader.
- /etc/ld.so.conf : File containing a list of colon, space, tab, newline, or comma separated directories in which to search for libraries (a sample is shown after this list).
- /etc/ld.so.cache : File containing an ordered list of libraries found in the directories specified in /etc/ld.so.conf. This file is not in a human-readable format and is not intended to be edited; it is created by the ldconfig command.
- lib*.so.version : Shared libraries stored in the /lib, /usr/lib, /usr/lib64, /lib64, and /usr/local/lib directories.
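On many modern distros, /etc/ld.so.conf itself is just a stub that pulls in per-package files from /etc/ld.so.conf.d/; a typical example (exact contents vary by distro):
$ cat /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf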

You need to use the ldconfig command to create, update, and remove the necessary links and cache (for use by the run-time linker, ld.so) for the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/usr/lib, /lib64, and /lib). The ldconfig command checks the header and file names of the libraries it encounters when determining which versions should have their links updated. It also creates a file called /etc/ld.so.cache which is used to speed up linking.


In this example, you've installed a new set of shared libraries at /usr/local/lib/:
$ ls -l /usr/local/lib/
Sample outputs:

-rw-r--r-- 1 root root 878738 Jun 16  2010 libGeoIP.a
-rwxr-xr-x 1 root root    799 Jun 16  2010 libGeoIP.la
lrwxrwxrwx 1 root root     17 Jun 16  2010 libGeoIP.so -> libGeoIP.so.1.4.6
lrwxrwxrwx 1 root root     17 Jun 16  2010 libGeoIP.so.1 -> libGeoIP.so.1.4.6
-rwxr-xr-x 1 root root 322776 Jun 16  2010 libGeoIP.so.1.4.6
-rw-r--r-- 1 root root  72172 Jun 16  2010 libGeoIPUpdate.a
-rwxr-xr-x 1 root root    872 Jun 16  2010 libGeoIPUpdate.la
lrwxrwxrwx 1 root root     23 Jun 16  2010 libGeoIPUpdate.so -> libGeoIPUpdate.so.0.0.0
lrwxrwxrwx 1 root root     23 Jun 16  2010 libGeoIPUpdate.so.0 -> libGeoIPUpdate.so.0.0.0
-rwxr-xr-x 1 root root  55003 Jun 16  2010 libGeoIPUpdate.so.0.0.0

Now, when you run an app linked against libGeoIP.so, you will get an error about a missing library. You need to run the ldconfig command manually to link the libraries, passing them as command line arguments with the -l switch:
# ldconfig -l /path/to/lib/our.new.lib.so
Another recommended option for sys admins is to create a file called /etc/ld.so.conf.d/geoip.conf with the following contents:

/usr/local/lib

Now just run ldconfig to update the cache:
# ldconfig
To verify new libs or to look for a linked library, enter:
# ldconfig -v
# ldconfig -v | grep -i geoip
Sample outputs:

libGeoIP.so.1 -> libGeoIP.so.1.4.6
libGeoIPUpdate.so.0 -> libGeoIPUpdate.so.0.0.0

You can print the current cache with the -p option:
# ldconfig -p
Putting a web server such as Apache / Nginx / Lighttpd in a chroot jail minimizes the damage done by a potential break-in by isolating the web server to a small section of the filesystem. You need to copy all files required by Apache into the filesystem rooted at the /jail/ directory, including web server binaries, shared libraries, modules, configuration files, and php/perl/html web pages. You also need to copy the /etc/{ld.so.cache,ld.so.conf} files and the /etc/ld.so.conf.d/ directory to /jail/etc/. Use the ldconfig command to update, print, and troubleshoot chrooted jail problems:

### chroot to jail bash
chroot /jail /bin/bash
### now update the cache in /jail ###
ldconfig
### print the cache in /jail ###
ldconfig -p
### copy missing libs ###
cp /path/to/some.lib /jail/path/to/some.lib
ldconfig
ldconfig -v | grep some.lib
### get out of jail ###
exit
### may be delete bash and ldconfig to increase security (NOTE path carefully) ###
cd /jail
rm sbin/ldconfig bin/bash
### now start nginx jail ###
chroot /jail /usr/local/nginx/sbin/nginx

A rootkit is a program (or combination of several programs) designed to take fundamental control of a computer system, without authorization by the system's owners and legitimate managers. Usually, rootkits use the /lib, /lib64, and /usr/local/lib directories to hide themselves from real root users. You can use the ldconfig command to view the cache of all shared libraries and spot unwanted entries:
# /sbin/ldconfig -p | less
You can also use various tools to detect rootkits under Linux.


You may see errors such as the following:



Dynamic linker error in foo
Can't map cache file cache-file
Cache file cache-file foo


All of the above errors mean the linker cache file /etc/ld.so.cache is corrupt or does not exist. To fix these errors, simply run the ldconfig command as follows:
# ldconfig


The executable requires a dynamically linked library that ld.so or ld-linux.so cannot find. It means a library called xyz needed by the program foo is not installed, or its path is not set. To fix this problem, install the xyz library and set its path in the /etc/ld.so.conf file, or create a file in the /etc/ld.so.conf.d/ directory.
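For example, a typical failure and fix look like this (foo, libxyz, and /opt/xyz/lib are placeholder names for illustration):
$ ./foo
./foo: error while loading shared libraries: libxyz.so.1: cannot open shared object file: No such file or directory
# echo '/opt/xyz/lib' > /etc/ld.so.conf.d/xyz.conf
# ldconfig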


ldd (List Dynamic Dependencies) is a Unix and Linux program that displays the shared libraries required by a program. This tool is required to build and run various server programs in a chroot jail. A typical example follows; to list the Apache server's shared libraries, enter:
# ldd /usr/sbin/httpd
Sample outputs:

libm.so.6 => /lib64/libm.so.6 (0x00002aff52a0c000)
libpcre.so.0 => /lib64/libpcre.so.0 (0x00002aff52c8f000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00002aff52eab000)
libaprutil-1.so.0 => /usr/lib64/libaprutil-1.so.0 (0x00002aff530c4000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00002aff532de000)
libldap-2.3.so.0 => /usr/lib64/libldap-2.3.so.0 (0x00002aff53516000)
liblber-2.3.so.0 => /usr/lib64/liblber-2.3.so.0 (0x00002aff53751000)
libdb-4.3.so => /lib64/libdb-4.3.so (0x00002aff5395f000)
libexpat.so.0 => /lib64/libexpat.so.0 (0x00002aff53c55000)
libapr-1.so.0 => /usr/lib64/libapr-1.so.0 (0x00002aff53e78000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aff5409f000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002aff542ba000)
libc.so.6 => /lib64/libc.so.6 (0x00002aff544bf000)
libsepol.so.1 => /lib64/libsepol.so.1 (0x00002aff54816000)
/lib64/ld-linux-x86-64.so.2 (0x00002aff527ef000)
libuuid.so.1 => /lib64/libuuid.so.1 (0x00002aff54a5c000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00002aff54c61000)
libsasl2.so.2 => /usr/lib64/libsasl2.so.2 (0x00002aff54e76000)
libssl.so.6 => /lib64/libssl.so.6 (0x00002aff5508f000)
libcrypto.so.6 => /lib64/libcrypto.so.6 (0x00002aff552dc000)
libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00002aff5562d000)
libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00002aff5585c000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00002aff55af1000)
libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00002aff55cf3000)
libz.so.1 => /usr/lib64/libz.so.1 (0x00002aff55f19000)
libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00002aff5612d000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00002aff56335000)

Now, you can copy all those libs one by one to the /jail directory:

# mkdir /jail/lib
# cp /lib64/libm.so.6 /jail/lib
# cp /lib64/libkeyutils.so.1 /jail/lib

You can write a bash script to automate the entire procedure:

_cp="/bin/cp"  # path to the cp command used below

cp_support_shared_libs(){
  local d="$1"     # jail root
  local pFILE="$2" # binary whose shared libs we copy
  local files=""
  ### use ldd to get shared libs list ###
  files="$(ldd $pFILE | awk '{ print $3 }' | sed '/^$/d')"
  for i in $files
  do
    dcc="${i%/*}" # get dirname only
    [ ! -d ${d}${dcc} ] && mkdir -p ${d}${dcc}
    ${_cp} -f $i ${d}${dcc}
  done
  # Works with 32 and 64 bit ld-linux
  sldl="$(ldd $pFILE | grep 'ld-linux' | awk '{ print $1}')"
  sldlsubdir="${sldl%/*}"
  [ ! -f ${d}${sldl} ] && ${_cp} -f ${sldl} ${d}${sldlsubdir}
}

Call cp_support_shared_libs() as follows:

cp_support_shared_libs "/jail" "/usr/local/nginx/sbin/nginx"

To perform data relocations and report any missing objects, type the following command:
$ ldd -d /path/to/executable


To perform relocations for both data objects and functions, and report any missing objects or functions, type the following command:
$ ldd -r /path/to/executable
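With -r, ldd prints a line for each unresolved symbol in addition to the usual library list; illustrative output with placeholder names:
undefined symbol: init_parser   (./executable)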


TCP Wrapper is a host-based networking ACL system, used to filter network access to Internet services. TCP Wrapper was originally written to monitor and stop cracking activity on UNIX / Linux systems. To determine whether a given daemon supports TCP Wrapper or not, run the following command:
$ ldd /usr/sbin/sshd | grep libwrap
Sample outputs:

libwrap.so.0 => /lib64/libwrap.so.0 (0x00002abd70cbc000)

The output indicates that the OpenSSH (sshd) daemon supports TCP Wrapper.
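You can check several daemons in one go with a small shell loop (the daemon paths are assumptions; adjust them for your system):
$ for d in /usr/sbin/sshd /usr/sbin/vsftpd /usr/sbin/xinetd; do echo -n "$d: "; ldd "$d" 2>/dev/null | grep -q libwrap && echo "supports TCP Wrapper" || echo "no libwrap"; done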


You can use the ldd command when an executable is failing because of a missing dependency. Once you find a missing dependency, you can install it or update the cache with the ldconfig command as mentioned above.


The ltrace command simply runs the specified command until it exits. It intercepts and records the dynamic library calls made by the executed process and the signals received by that process. It can also intercept and print the system calls executed by the program. Its use is very similar to the strace command.
# ltrace /usr/sbin/httpd
# ltrace /sbin/chroot /usr/sbin/httpd
# ltrace /bin/ls
Sample outputs:

__libc_start_main(0x804fae0, 1, 0xbfbd6544, 0x805bce0, 0x805bcd0 <unfinished ...>
strrchr("/bin/ls", '/') = "/ls"
setlocale(6, "") = "en_IN.utf8"
bindtextdomain("coreutils", "/usr/share/locale") = "/usr/share/locale"
textdomain("coreutils") = "coreutils"
__cxa_atexit(0x8052d10, 0, 0, 0xbfbd6544, 0xbfbd6498) = 0
isatty(1) = 1
getenv("QUOTING_STYLE") = NULL
getenv("LS_BLOCK_SIZE") = NULL
getenv("BLOCK_SIZE") = NULL
getenv("BLOCKSIZE") = NULL
getenv("POSIXLY_CORRECT") = NULL
getenv("BLOCK_SIZE") = NULL
getenv("COLUMNS") = NULL
ioctl(1, 21523, 0xbfbd6470) = 0
getenv("TABSIZE") = NULL
getopt_long(1, 0xbfbd6544, "abcdfghiklmnopqrstuvw:xABCDFGHI:"..., 0x0805ea40, -1) = -1
__errno_location() = 0xb76b8694
malloc(40) = 0x08c8e3e0
memcpy(0x08c8e3e0, "", 40) = 0x08c8e3e0
... output truncated ...
free(0x08c8e498) = <void>
free(NULL) = <void>
free(0x08c8e480) = <void>
exit(0 <unfinished ...>
__fpending(0xb78334e0, 0xbfbd6334, 0xb78876a3, 0xb78968f8, 0) = 0
fclose(0xb78334e0) = 0
__fpending(0xb7833580, 0xbfbd6334, 0xb78876a3, 0xb78968f8, 0) = 0
fclose(0xb7833580) = 0
+++ exited (status 0) +++

The ltrace command is a perfect debugging utility in Linux:

- To monitor the library calls used by a program and all the signals it receives.
- For tracking the execution of processes.
- It can also show the system calls used by a program.

Consider the following C program:

#include <stdio.h>

int main()
{
	printf("Hello world\n");
	return 0;
}

Compile and run it as follows:
$ cc hello.c -o hello
$ ./hello
Now use the ltrace command to track the execution of the process:
$ ltrace -S -tt ./hello
Sample outputs:

15:20:38.561616 SYS_brk(NULL) = 0x08f42000
15:20:38.561845 SYS_access("/etc/ld.so.nohwcap", 00) = -2
15:20:38.562009 SYS_mmap2(0, 8192, 3, 34, -1) = 0xb7708000
15:20:38.562155 SYS_access("/etc/ld.so.preload", 04) = -2
15:20:38.562336 SYS_open("/etc/ld.so.cache", 0, 00) = 3
15:20:38.562502 SYS_fstat64(3, 0xbfaafe20, 0xb7726ff4, 0xb772787c, 3) = 0
15:20:38.562629 SYS_mmap2(0, 76469, 1, 2, 3) = 0xb76f5000
15:20:38.562755 SYS_close(3) = 0
15:20:38.564204 SYS_access("/etc/ld.so.nohwcap", 00) = -2
15:20:38.564372 SYS_open("/lib/tls/i686/cmov/libc.so.6", 0, 00) = 3
15:20:38.564561 SYS_read(3, "\177ELF\001\001\001", 512) = 512
15:20:38.564694 SYS_fstat64(3, 0xbfaafe6c, 0xb7726ff4, 0xb7705796, 0x8048234) = 0
15:20:38.564822 SYS_mmap2(0, 0x1599a8, 5, 2050, 3) = 0xb759b000
15:20:38.565076 SYS_mprotect(0xb76ee000, 4096, 0) = 0
15:20:38.565209 SYS_mmap2(0xb76ef000, 12288, 3, 2066, 3) = 0xb76ef000
15:20:38.565454 SYS_mmap2(0xb76f2000, 10664, 3, 50, -1) = 0xb76f2000
15:20:38.565604 SYS_close(3) = 0
15:20:38.565709 SYS_mmap2(0, 4096, 3, 34, -1) = 0xb759a000
15:20:38.565842 SYS_set_thread_area(0xbfab030c, 0xb7726ff4, 0xb759a6c0, 1, 0) = 0
15:20:38.566070 SYS_mprotect(0xb76ef000, 8192, 1) = 0
15:20:38.566185 SYS_mprotect(0x08049000, 4096, 1) = 0
15:20:38.566288 SYS_mprotect(0xb7726000, 4096, 1) = 0
15:20:38.566381 SYS_munmap(0xb76f5000, 76469) = 0
15:20:38.566522 __libc_start_main(0x80483e4, 1, 0xbfab04e4, 0x8048410, 0x8048400 <unfinished ...>
15:20:38.566667 puts("Hello world" <unfinished ...>
15:20:38.566811 SYS_fstat64(1, 0xbfab0310, 0xb76f0ff4, 0xb76f14e0, 0x80484c0) = 0
15:20:38.566936 SYS_mmap2(0, 4096, 3, 34, -1) = 0xb7707000
15:20:38.567126 SYS_write(1, "Hello world\n", 12Hello world) = 12
15:20:38.567282 <... puts resumed> ) = 12
15:20:38.567348 SYS_exit_group(0 <no return ...>
15:20:38.567454 +++ exited (status 0) +++

You need to carefully monitor the order and arguments of selected functions such as open() [used to open and possibly create a file or device] or chown() [used to change ownership of a file] so that you can spot simple kinds of race conditions or security related problems. This is quite useful for evaluating the security of binary programs to find out what kind of changes they make to the system.


The ltrace command can be used to trace the malloc() and free() calls in a C program. You can calculate the amount of memory allocated as follows:
[node303 ~]$ ltrace -e malloc,free ./simulator arg1 arg2 arg3
ltrace will start the ./simulator program and trace its malloc() and free() calls. You can investigate I/O problems as follows:
[node303 ~]$ ltrace -e fopen,fread,fwrite,fclose ./simulator arg1 arg2 arg3
You may need to change the function names, as your programming language or UNIX platform may use different memory allocation functions.
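ltrace also offers a summary mode similar to strace -c, printing per-function call counts instead of a full log; a quick sketch using the same placeholder program:
[node303 ~]$ ltrace -c -e malloc,free ./simulator arg1 arg2 arg3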


The ld.so or ld-linux.so dynamic linker/loader is used by Linux as follows:

- To load the shared libraries needed by a program.
- To prepare the program to run, and then run it.

Type the following command:
# cd /lib
For 64 bit systems:
# cd /lib64
Pass the --list option, enter:
# ./ld-2.5.so --list /path/to/executable

Linux / UNIX Desktop Fun: Terminal ASCII Aquarium

You can now enjoy the mysteries of the sea from the safety of your own terminal using ASCIIQuarium, an aquarium/sea animation in ASCII art written in Perl.

First, you need to install a Perl module called Term::Animation. Open a command-line terminal (select Applications > Accessories > Terminal), and then type:
$ sudo apt-get install libcurses-perl
$ cd /tmp
$ wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz
$ tar -zxvf Term-Animation-2.4.tar.gz
$ cd Term-Animation-2.4/
$ perl Makefile.PL && make && make test
$ sudo make install

While still at the bash prompt, type:
$ cd /tmp
$ wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz
$ tar -zxvf asciiquarium.tar.gz
$ cd asciiquarium_1.0/
$ sudo cp asciiquarium /usr/local/bin
$ sudo chmod 0755 /usr/local/bin/asciiquarium

Simply type the following command:
$ /usr/local/bin/asciiquarium
OR
$ perl /usr/local/bin/asciiquarium

Fig.01: ASCII Aquarium

Download - If you're running Mac OS X, try a packaged version that will run out of the box. For KDE users, try a KDE Screensaver based on the Asciiquarium.