Sunday, April 15, 2012

How to install rtorrent 0.9.1 and use magnet links on an Iomega Storcenter ix4-200d


This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! It should work (again, it is unsupported) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995

I explained in a previous post why I wanted to use rtorrent instead of the torrent client supplied with the storcenter.
There is a new development: thepiratebay.se switched to magnet links only for file sharing, and the version of rtorrent previously installed did not support magnets...
The good news is that the new rtorrent (0.9.1) does support IP filtering natively!

The problem is that it was difficult to compile for the storcenter, as the gcc toolchain available on the storcenter is very old... but no worries, I compiled it for you!

1. SSH into your NAS
See my other post: How to ssh into your Iomega StorCenter ix4-200d

2. Install the software
See my other post here to set up, at the minimum, ipkg and ipkg-opt. Then:
ipkg-opt install lighttpd
ipkg-opt install screen

Then, get my pre-compiled version of rtorrent-0.9.1 (works on Iomega Storcenter ix4-200d):
If you want to compile it yourself for a strange architecture, you might want to look at section 3 of my other post How to solve the "undefined reference to '__sync_sub_and_fetch_4'" compilation problem.
Warning: this is going to override the following files:
/opt/etc/rtorrent.conf
/opt/etc/init.d/S99rtorrent
/opt/lib/libtorrent.14.0.3
/opt/lib/libtorrent.14
/opt/bin/rtorrent
Make sure you saved everything that needed to be saved before running it!
cd /opt/tmp/
wget http://dl.dropbox.com/u/50398581/rtorrent-0.9.1/rtorrent-0.9.1-package.tar.gz
cd /
tar -xvf /opt/tmp/rtorrent-0.9.1-package.tar.gz
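You can quickly check that the new binary and its bundled libtorrent resolve correctly (this assumes /opt/lib is listed in /etc/ld.so.conf, see my post on installing software):
ldconfig
ldd /opt/bin/rtorrent
/opt/bin/rtorrent -h
ldd should resolve libtorrent to /opt/lib/, and rtorrent -h should print its usage instead of a linker error.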


If you don't want to connect remotely to rtorrent to manage it from your computer, you can skip the rest of this section...
Install nTorrent on your computer http://code.google.com/p/ntorrent/
Install xml-rpc on the NAS:
ipkg install optware-devel
ipkg install libcurl-dev
cd /opt/tmp/
svn checkout http://xmlrpc-c.svn.sourceforge.net/svnroot/xmlrpc-c/stable xmlrpc-c    
cd xmlrpc-c/
./configure --prefix=/opt
make
make install
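If the build went through, a quick sanity check (make install puts a helper script in /opt/bin):
/opt/bin/xmlrpc-c-config --version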
Note: you can choose something other than nTorrent. Please give me your feedback in the comments if you do.



3. Configure the software

Fix paths and other info in rtorrent.conf.
This is also where you can disable remote access: if you don't want it, comment out the line:
scgi_port = localhost:5000

If you want to use remote access, you need to:
vi /opt/etc/lighttpd/lighttpd.conf
between
#                               "mod_rrdtool",
and
"mod_accesslog" )
add
"mod_scgi",
and at the end add:
scgi.server = (
"/RPC2" => ( 
    "127.0.0.1" => (
        "host" => "127.0.0.1",
        "port" => 5000,
        "check-local" => "disable"
        )
    )
)
Security warning: if you follow these steps, anybody that can access port 8081 of your NAS will be able to send commands to rtorrent! You want to make sure that this port is only accessible from your local network.
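One possible mitigation (a sketch I have not tested on the storcenter; it assumes lighttpd's mod_access is enabled and that your LAN is 192.168.1.x) is to deny /RPC2 to anything but local addresses in lighttpd.conf:
$HTTP["remoteip"] != "192.168.1.0/24" {
    $HTTP["url"] =~ "^/RPC2" {
        url.access-deny = ( "" )
    }
}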

4. Ip filtering
a. download the file
IP filtering support is built into rtorrent-0.9.1, but you still need to configure the download of the filter files:
vi /etc/cron.daily/rtorrent_ipfilter
#!/bin/sh
cd /mnt/pools/A/A0/torrents/rtorrent/ipfilter/
# fetch the latest level1 blocklist (comes gzipped)
wget http://list.iblocklist.com/?list=bt_level1
mv index.html\?list\=bt_level1 level1new.gz
gunzip level1new.gz
# keep only the IP ranges: strip the "name:" prefix and comment lines
sed 's/^.*:\([^:]*\)$/\1/g' level1new | grep -v '^#' > level1new_2
# swap in the new files (-f so the first run doesn't complain)
rm -f level1
mv level1new level1
rm -f level1_2
mv level1new_2 level1_2
then:
mkdir /mnt/pools/A/A0/torrents/rtorrent/ipfilter/
cd /etc/cron.daily/
chmod a+x rtorrent_ipfilter
./rtorrent_ipfilter
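A quick sanity check: the processed file should contain one IP range per line (e.g. 1.2.4.0-1.2.4.255):
head -3 /mnt/pools/A/A0/torrents/rtorrent/ipfilter/level1_2
wc -l /mnt/pools/A/A0/torrents/rtorrent/ipfilter/level1_2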

b. if not already done, make sure the cron daemon is started at boot
The cron daemon is not started at boot by default....

You can start it manually:
/etc/init.d/cron start

But to have it start up every time at boot, we need to add the line:
/etc/init.d/cron start >> /opt/init-opt.log
to our /opt/init-opt.sh script.

See my other post How to run a program at boot on the Iomega Storcenter NAS to see how it works!


5. Test your setup
/opt/bin/rtorrent -n -o import=/opt/etc/rtorrent.conf
if you get:
rtorrent: Fault occured while inserting xmlrpc call.
did you install xmlrpc correctly? Is ld.so.conf updated correctly? Did you run ldconfig?

to connect to the running instance:
/opt/bin/screen -r rtorrent
and press Ctrl-a d to detach (or simply kill the terminal, e.g. putty, to exit).

For remote access: you can start lighttpd on the NAS
/opt/etc/init.d/S80lighttpd start
and then start nTorrent on your computer and connect to your NAS port 8081 (by default) on path /RPC2.
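You can also smoke-test the XML-RPC endpoint from another machine (assuming curl is available there); rtorrent should answer with its version:
curl -s -H 'Content-Type: text/xml' -d '<?xml version="1.0"?><methodCall><methodName>system.client_version</methodName></methodCall>' http://ip_of_nas:8081/RPC2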




6. Get rtorrent to start automatically on reboot
Follow the tutorial How to run a program at boot on the Iomega Storcenter. You just need to add the following lines to the script:
/opt/etc/init.d/S80lighttpd start >> /opt/init-opt.log
/opt/etc/init.d/S99rtorrent start >> /opt/init-opt.log
If you have another brand of NAS (or a regular linux OS), just try to link the startup scripts into /etc/rc2.d/ like you would normally do on a linux box:
ln -s /opt/etc/init.d/S80lighttpd /etc/rc2.d/S80lighttpd
ln -s /opt/etc/init.d/S99rtorrent /etc/rc2.d/S99rtorrent


7. How to deal with magnet links
I suggest creating a /wherever/rtorrent/magnets directory alongside /wherever/rtorrent/torrents and /wherever/rtorrent/download.
And then:
cd /wherever/rtorrent
vi allmagnets.sh
and add:
#!/bin/bash

for f in magnets/*
do
 echo "Processing $f"
 # extract the magnet URI: strip the URL= prefix, the [InternetShortcut] header
 # and any DOS line endings
 CONTENT=`cat "$f" | sed s/^URL=// | grep -v '\[InternetShortcut\]' | tr -d '\r'`
 # wrap the URI in a minimal bencoded "torrent" named after the info-hash
 # (rtorrent resolves it via DHT), then delete the processed magnet file
 [[ "$CONTENT" =~ xt=urn:btih:([^&/]+) ]] && echo "d10:magnet-uri${#CONTENT}:${CONTENT}e" > "torrents/meta-${BASH_REMATCH[1]}.torrent" && rm "$f"
done
chmod a+x allmagnets.sh
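To check the conversion works, drop a fake magnet file in and run the script once (the info-hash below is made up):
echo 'URL=magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567' > magnets/test.url
./allmagnets.sh
ls torrents/
You should see meta-0123456789abcdef0123456789abcdef01234567.torrent appear; delete it afterwards, otherwise rtorrent's watch directory will try to resolve this bogus hash.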
And then, add a cron to run this program every 5 minutes:
vi /etc/cron.d/magnets
and add:
# convert magnets to torrents every 5 minutes
1,6,11,16,21,26,31,36,41,46,51,56 *    * * *   root    cd /mnt/pools/A/A0/data/rtorrent/ && /mnt/pools/A/A0/data/rtorrent/allmagnets.sh 

Make sure the cron daemon is running!!! (see point 4.b above)

Enjoy!

How to solve the "undefined reference to '__sync_sub_and_fetch_4'" compilation problem

If you ran into the following compilation problems:
undefined reference to '__sync_sub_and_fetch_4' problem
or with any of the following functions:
__sync_fetch_and_add, __sync_fetch_and_sub, __sync_fetch_and_or, __sync_fetch_and_and, __sync_fetch_and_xor, __sync_fetch_and_nand,
__sync_add_and_fetch, __sync_sub_and_fetch, __sync_or_and_fetch, __sync_and_and_fetch, __sync_xor_and_fetch, __sync_nand_and_fetch,
__sync_val_compare_and_swap,
__sync_bool_compare_and_swap,
__sync_lock_test_and_set,
__sync_lock_release

Chances are that you are trying to compile for ARM (or an exotic architecture) and your GCC version is too old compared to the source code you are trying to compile!
There is an easy fix: upgrade your GCC.

If you can't upgrade your GCC for any reason (for example you are on embedded hardware you don't have full control over), follow the steps below!

1. Find the source code file that's right for the architecture you are trying to compile on
You are going to find it inside a GCC source tarball.
To find it, go into gcc/config inside your gcc source tree and run
grep '__sync_fetch' */*
to find the right file.
For ARM, it is:
gcc/config/arm/linux-atomic.c

2. Compile the source code file and link it into the program you are compiling
libtool --tag=CC --mode=compile gcc -g -O2 -MT linux-atomic.lo -MD -MP -MF linux-atomic.Tpo -c -o linux-atomic.lo linux-atomic.c
libtool --tag=CC --mode=link gcc -g -O2 -o liblinux-atomic.la linux-atomic.lo
And add liblinux-atomic.la in the Makefile so it is linked to the other .la files (into a .so or a program).
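For instance, with an automake-generated Makefile the edit is a one-line addition (libfoo is an illustrative target name; section 3 below shows the real case for libtorrent):
libfoo_la_LIBADD = $(existing_libs) liblinux-atomic.la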

3. Example to compile libtorrent 0.13.1 and rtorrent 0.9.1 for ARM with GCC 4.2.3
If you wonder, this is to compile rtorrent for my Iomega ix4-200d storcenter NAS.

Compile libtorrent:
PATH=$PATH:/opt/bin
wget http://libtorrent.rakshasa.no/downloads/libtorrent-0.13.1.tar.gz
tar -xvf libtorrent-0.13.1.tar.gz
cd libtorrent-0.13.1
vi configure
and add at the beginning of the configure script:
OPENSSL_CFLAGS='-I/opt/include/'
OPENSSL_LIBS='-L/opt/lib/ -lssl'
STUFF_LIBS='-L/opt/lib/ -lsigc-2.0'
STUFF_CFLAGS='-I/opt/usr/include/sigc++-2.0/ -I/opt/usr/lib/sigc++-2.0/include'

./configure --prefix=/opt/

Add linux-atomic:
cd src
wget http://dl.dropbox.com/u/50398581/rtorrent-0.9.1/linux_atomic.c
libtool --tag=CC --mode=compile gcc -g -O2 -MT linux_atomic.lo -MD -MP -MF linux_atomic.Tpo -c -o linux_atomic.lo linux_atomic.c
vi /opt/bin/libtool

And if necessary, modify libtool for the following entries:
AR="ar"
RANLIB="ranlib"
CC="g++"

libtool --tag=CC   --mode=link gcc  -g -O2  -o liblinux_atomic.la linux_atomic.lo
vi Makefile

add
liblinux_atomic.la
at the end of libtorrent_la_LIBADD

cd ..
make
strip .libs/libtorrent.so
make install


Compile rtorrent:
wget http://libtorrent.rakshasa.no/downloads/rtorrent-0.9.1.tar.gz
tar -xvf rtorrent-0.9.1.tar.gz
cd rtorrent-0.9.1
vi configure
And add:
sigc_LIBS='-L/opt/lib/ -lsigc-2.0 -L/lib/'
sigc_CFLAGS='-I/opt/usr/include/sigc++-2.0/ -I/opt/usr/lib/sigc++-2.0/include -I/opt/include/ncurses'
libcurl_LIBS='-L/opt/lib/ -lcurl'
libcurl_CFLAGS='-I/opt/include/'
libtorrent_LIBS='-L/opt/lib/ -ltorrent'
libtorrent_CFLAGS='-I/opt/include/'

Then:
./configure --prefix=/opt/ --with-xmlrpc-c=/opt/bin/xmlrpc-c-config  --with-ncurses=yes LDFLAGS='-L/opt/lib/' CPPFLAGS='-I/opt/include -I/opt/include/ncurses/' 
cd src
cp ../../libtorrent-0.13.1/.libs/liblinux_atomic.a .
vi Makefile
at the end of rtorrent_LDADD, add
liblinux_atomic.a

Then:
make
strip rtorrent
cd ..
make install

You are done!

Friday, March 2, 2012

How to backup my data (and why!)



It is the digital age, the amount of personal data we produce keeps going up: digital pictures, HD movies and documents take an ever increasing amount of space.
That's a lot of memories and information we don't want to lose (or can't afford to).

Companies have devised backup plans for a long time, but the concept is now entering homes through cloud storage (and other means). When your data is lost, it is too late: you need to devise a plan now!

I will focus on the needs of individuals and deal with 3 different data types:
- photos
- videos
- documents (excel, doc, text, pdf...)

Also, there are different risks to take into account when defining a backup plan:
- hardware failure (crashed hard drive)
- physical destruction of data at a physical location (think fire, theft ...)
- human error (oops! I deleted the file)


1. Put your data in the Cloud

The cloud will shield you from hardware failure and physical destruction, but might not protect you against human error... An added bonus is that you can share your data with other people :)
You also take on additional risks, like the risk of your online account being hacked, or of your data becoming visible to everybody because of a misconfiguration on your side.

The good news is that there usually is a free allocation for each service, but you might have to pay for a feature you really need.

For pictures, you have:
- Picasa: 1GB free + free unlimited storage of pictures up to 800x800 pixels (additional storage available: cost of 20GB is 5USD/year, see all prices here)
- Google plus: free unlimited storage of pictures up to 2048x2048 pixels
- Flickr: free upload of 300MB worth of pictures every month; paid option with unlimited storage (original quality) & bandwidth (25 USD/year or 45 USD/2 years, see all prices here)


For movies, you have:
- Youtube: Videos can be uploaded for free (up to 20GB per video)
The problem is that the videos are automatically re-encoded (and reduced in quality) and it is not easy to download them once they are in the cloud!


For documents, you have:
- Google Docs: storage of documents, presentations and spreadsheets in google format is unlimited; you get 1GB for other types of files. Additional space can be bought (and shared with picasa). See the above link in Picasa for pricing details. The problem is there is no easy way to synchronize a local folder with Google Documents...
- Dropbox: 2GB free storage. Local folders can be synchronized with dropbox.

2. Use a backup Service

This will shield you from hardware failure (but it might be slow to recover the data), physical destruction and human error.

The idea is to send your compressed and encrypted data to a remote server where it is stored. You can usually access your backups from a website and from a specific piece of software.
The problem is that all your data goes through the internet, and the initial upload can be very slow if you have a lot of data. For example, if you have 1TB of data to backup, it can take months to do the initial backup!
The same problem arises when you need to do a full restore: it will usually be an order of magnitude faster than the initial backup, but it can still take a few days.
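To put rough numbers on the initial upload (assuming a typical 1 Mbit/s residential upstream): 1TB is about 8 x 10^12 bits, so uploading it takes roughly 8 x 10^12 / 10^6 = 8 x 10^6 seconds, i.e. around 3 months of continuous uploading.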


If you also need to recover fast from a hard drive failure (the most common hardware failure), you can use a local redundant RAID configuration (like RAID 1 or 5). Please note that RAID alone will not prevent data loss: you are still vulnerable to other hardware failures (like RAID controller failure), physical destruction of the device and human error.

Let's compare the different plans out there. I will focus on 3 providers: Mozy, Carbonite and Crashplan.
Crashplan (Windows, Mac, Linux, Solaris; automated backup of all files):
- +10GB: 25 USD/year, 10 GB of space, 1 computer
- Unlimited: 50 USD/year, unlimited space, 1 computer
- Family unlimited: 120 USD/year, unlimited space, 2-10 computers

Carbonite (unlimited space, 1 computer per plan; Windows and Mac for Home, Windows only for the other plans):
- Home: 59 USD/year, automated backup of all files except video
- HomePlus: 99 USD/year, automated backup of all files
- HomePremier: 149 USD/year, automated backup of all files

Mozy (Windows, Mac; automated backup of all files; 1 computer, additional computer +2 USD/month):
- 50GB: 72 USD/year
- 125GB: 120 USD/year

Whatever your usage, Crashplan always seems cheaper and has more features. I use it myself and I am very satisfied with it...


The free crashplan software also allows you to back up to a friend's computer (running crashplan as well). This means that you can backup your data without paying anything, provided a friend is ready to allocate you some disk space for backups.


3. Case studies

We just need to find the most cost effective combination of the above:

Profile A: 10GB of pictures and a few documents: 5 USD/year
Pay 5 USD/year for Picasa storage (20GB).
Use the DropBox free allocation to store the documents.
Problem: the backup process is manual: if you forget to upload your pictures to Picasa, they are not backed up (unless you use the software I wrote to automatically upload to Picasa: see my post here). You are still vulnerable to human error.

Profile B: 100GB of pictures, 200GB of movies and 10GB of documents: 50 USD/year
Cheapest alternative is Crashplan Unlimited (50USD / year)
The backup process is now automatic: no need to worry about forgetting to backup something. On top of that, you are protected against human error as you can retrieve former versions of a file.

If your data is spread across different computers, you can buy a NAS and run crashplan on the NAS (see my post on how to install crashplan on an Iomega NAS here). Alternatively, you have the simpler option of buying Crashplan unlimited Family.


4. Conclusion

If you care about your data: take the time to devise your backup/data recovery plan now! You can always find a way that fits your budget.

You can get a reduced quality backup for your pictures and videos for free. Trust me, it is better to have a reduced quality backup than nothing!


Thursday, March 1, 2012

How to install Vuze on a NAS




The goal of this tutorial is to install vuze headless (as a command line application). Most of the tutorials found on the web suggest doing the configuration of vuze in the UI before starting it in headless mode. Unfortunately, this is not possible on a NAS where you have no X server and no screen...



Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995 but uses unsupported features on the hardware. Please use at your own risk.


Since Vuze is a java program, the same steps should allow you to install vuze as a headless client on any hardware running java.




Unfortunately, I ran into a lot of JVM crashes with vuze headless and the oracle jvm ejre1.7.0 (for ARM). On top of that, vuze is quite a heavy program in terms of CPU and memory usage, which is annoying for the type of hardware we are looking at (like a NAS). Therefore, I don't recommend installing vuze on a NAS. I suggest you look at rtorrent, which is much more reliable (see my tutorial How to install rtorrent with IP filtering).





1. SSH into your NAS

See my other post:
How to ssh into your Iomega StorCenter ix4-200d if you have an IOMEGA NAS




2. Download and install

Steps adapted from the Console_UI Vuze wiki
cd /opt/tmp
wget http://sourceforge.net/projects/azureus/files/vuze/Vuze_4702/Vuze_4702_linux.tar.bz2
PATH=$PATH:/opt/bin/
tar -xvf Vuze_4702_linux.tar.bz2

wget http://ftp.heanet.ie/mirrors/www.apache.org/dist//commons/cli/binaries/commons-cli-1.2-bin.tar.gz
tar -xvf commons-cli-1.2-bin.tar.gz
mv commons-cli-1.2/commons-cli-1.2.jar vuze/

wget http://ftp.heanet.ie/mirrors/www.apache.org/dist//logging/log4j/1.2.16/apache-log4j-1.2.16.tar.gz
tar -xvf apache-log4j-1.2.16.tar.gz
mv apache-log4j-1.2.16/log4j-1.2.16.jar vuze

To install java, you can look at the java section of my other post: How to install crashplan on an Iomega NAS.

The installation is pretty straightforward...



Now, install the webUI plugin:
cd vuze
cd plugins
mkdir webui
cd webui
wget http://azureus.sourceforge.net/plugins/webui_1.7.0.zip
ipkg-opt install zip
ipkg-opt install unzip
unzip webui_1.7.0.zip
mkdir /opt/var/log/vuze
If you don't have ipkg-opt, see my other post: How to install software into your Iomega StorCenter NAS



3. Configure the Vuze installation

cd /opt/tmp/
mv vuze /opt/
cd /opt/vuze/
/mnt/pools/A/A0/NAS_Extension/ejre1.7.0/bin/java -Xmx128m -Dazureus.config.path=/opt/vuze/.azureus/ -cp "Azureus2.jar:commons-cli-1.2.jar:log4j-1.2.16.jar" org.gudy.azureus2.ui.common.Main --ui=console
Vuze should now be running; we now need to configure it. Adapt the paths to suit your needs and type at the vuze cli:
set "Default save path" "/mnt/pools/A/A0/torrents/vuze/download" string
set "Use default data dir" true boolean
set "Logger.Enabled" true boolean
set "Logging Enable" true boolean
set "Logging Dir" "/opt/var/log/vuze/" string
set "Ip Filter Autoload File" "http://list.iblocklist.com/?list=bt_level1" string
set Plugin.azhtmlwebui.User myusername
set Plugin.azhtmlwebui.Password mypassword password
set "Plugin.azhtmlwebui.Password Enable" true boolean
This sets up IP filtering as well. If you don't want that, just skip the set "Ip Filter Autoload File" command.

You are good to go now. To have vuze automatically start at boot, you need to create a script in /etc/init.d (you can adapt the azureus script provided inside the install); see the sketch below.
If you have an Iomega NAS, look at this tutorial to see how to have the program run at boot.
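As a starting point, here is a minimal, untested sketch of such an init script (the paths assume the setup above; the stop command crudely kills any running Azureus java process, so adjust it if you run other java applications):
#!/bin/sh
case "$1" in
start)
    cd /opt/vuze/
    /mnt/pools/A/A0/NAS_Extension/ejre1.7.0/bin/java -Xmx128m -Dazureus.config.path=/opt/vuze/.azureus/ -cp "Azureus2.jar:commons-cli-1.2.jar:log4j-1.2.16.jar" org.gudy.azureus2.ui.common.Main --ui=console > /opt/var/log/vuze/console.log 2>&1 &
    ;;
stop)
    kill `ps aux | grep org.gudy.azureus2 | grep -v grep | awk '{print $2}'`
    ;;
esac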





You can now connect to the Vuze web UI from your web browser at
http://ip_of_nas:6883/. Please note that the web UI is not as rich as the regular UI (most options are not available there).




Please comment to let me know how stable your install is!

Thanks

Sunday, February 26, 2012

How to backup your google docs documents




I am a fan of google docs: I often need to access and edit my documents while I am away, and google docs offers a great way to do that.
The problem is: I have a lot of large pdfs there, and they can take a while to load: I would love to have a local copy when I am in the office...
On top of that, I always like to have a local copy of stuff... just in case! Call me paranoid, but what happens if your account is hacked? Or if google unilaterally closes your account because they consider you don't respect the terms of use? Better safe than sorry...


I couldn't find an application anywhere that does what I want (get a local backup of my google documents and update it regularly).
There is the google "takeout" application, but you cannot schedule regular downloads...
A project like google-docs-fs seems promising, but it only supports google documents (and not any other file you may have uploaded if you have -like me- a premium account). Plus, my analysis is that there are too many possible points of failure if you rsync this file system... I need something more robust.


I decided to code what I need myself: a java command line application that can be used to schedule regular downloads of all your google docs documents.


1. Presentation of gdocsdownload.jar

The features implemented:
- reuse data from a previous backup to avoid re-downloading files that haven't changed
- rotating backups (for example, a maximum of 7 backups, with backup.zip being the most recent one and backup.7.zip the oldest one)
- zip archive or just a folder archive (takes more space but easier to access)
- configurable document export mode (export google spreadsheets as xls or as csv)
- download only once documents that are in multiple folders (gdocsbackup.removeduplicateddocuments)
- archive without folder structure (all documents in a zip, like google takeout) or with folder structure (much easier to navigate)
- support for any type of files.

TODO:
- use hard links on operating systems that support them (that would substantially reduce the amount of disk space needed for multiple backups with a lot of unchanged documents)
- fix the bug that forces you to use a temp directory on the same partition as the destination directory

In my setup, I want to install gdocsdownload.jar as a daily cron on my NAS, but you can install it anywhere.

The program is configured using the config file gdocsdownload.properties, which reads as follows:
#use system defined proxy
gdocsbackup.usesystemproxy=true
#google account username and password
gdocsbackup.username=xxxx
gdocsbackup.password=xxx
#the path where we want to backup
gdocsbackup.backuppath=C:\\Users\\xxx\\Documents\\Data\\
#the name of the backup archive. 
#the zip archives will be named: backuprootname.zip backuprootname.1.zip
#the folder archives will be named: backuprootname/ backuprootname.1/
gdocsbackup.backuprootname=gdocs_backup
#the number of backup files to keep
gdocsbackup.nbbackupfiles=7
#TRUE if you want to store backups as zip files.
gdocsbackup.usezip=FALSE
#zip compression level (0-9) with 9 being the most compressed (and most CPU intensive)
gdocsbackup.zipcompresslevel=6
#use hard links to link new data identical to older data. This does save a lot of space (you can't use this option with usezip)
#not supported yet!
gdocsbackup.usehardlinks=FALSE
#document export format: one of doc html odt pdf png rtf txt zip
gdocsbackup.documentexportformat=doc
#presentation export format: one of pdf png ppt txt
gdocsbackup.presentationexportformat=ppt
#spreadsheet export format: one of xls csv pdf ods tsv html (NB: first sheet export only for csv and tsv)
gdocsbackup.spreadsheetexportformat=xls
#try to replicate the directory structure in the zip
gdocsbackup.keepdirectorystructure=TRUE
#show documents that appear at different places in the folder tree only once (in the first folder where it is found)
gdocsbackup.removeduplicateddocuments=TRUE
#log file (for linux, good practice is to put it in /var/log/ or /opt/var/log (and make sure logrotate works correctly))
gdocsbackup.logfile=C:\\gdocsbackup.log

All options are self-explanatory. You can customize them as required by your setup.

As the program is java, it can be run on any OS / Architecture supporting Java.

The jar is available for download at http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload.jar
a sample properties file is available at http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload.properties
and source code is available at: http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload-src.zip


Please note that in order to "rotate" backups, the program will delete the oldest backup! Don't modify the backups or store anything there!
The program only gets information from the google server: it does not update or delete anything, so you are safe on that side!


To determine if a file was already downloaded, the last_update tag given by google is checked. I suggest you do a full backup from time to time to avoid an error propagating from backup to backup (to do that, just add the option fulldownload after the "properties" file when launching the jar).


2. Steps to install gdocsdownload on a linux based NAS
The setup is easy to adapt to any machine running linux. I didn't do a tutorial for Windows or Mac as I lack some knowledge to do it, but it can of course be done... feel free to adapt it and post your results and hints in the comments!
This tutorial assumes some vi and linux knowledge...

This is how I installed gdocsdownload.jar on my NAS (an Iomega Storcenter ix4-200d). Please note that the procedure is unsupported by Iomega! Use at your own risk!

a. Download and setup of gdocsdownload
First, you need to ssh into your NAS (see my other post if you have an Iomega Storcenter)
Then:
mkdir /opt/usr/local
mkdir /opt/usr/local/gdocsdownload/
cd /opt/usr/local/gdocsdownload/
wget http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload.jar
wget http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload.properties
Don't forget to change the properties file to make it work for your setup (you at least need to change the account information and paths):
vi gdocsdownload.properties

If you are concerned about security, you should put the properties file into your home folder...
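In any case, since the file contains your google password, restrict who can read it:
chmod 600 gdocsdownload.properties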

If you haven't already done so, you need to install java on your NAS. See the java section of my previous post How to install Crashplan on an Iomega Storcenter to find out how to do it for an Iomega storcenter.

If you followed the java installation procedure of my other post, link java to a more usual location:
ln -s /mnt/pools/A/A0/NAS_Extension/ejre1.7.0/bin/java /opt/bin/java
The setup can already be tested by starting the command:
/opt/bin/java -jar /opt/usr/local/gdocsdownload/gdocsdownload.jar /opt/usr/local/gdocsdownload/gdocsdownload.properties
press Ctrl-C to stop the run


The program needs to be started from a wrapper script so that we can set correct folder permissions and the TMP folder.
You need to make sure there is enough space in your temp folder (my /tmp/ folder is way too small, that's why I use /opt/tmp/).
vi gdocsdownloader

and then type:
#!/bin/sh

#this is to have a backup that's readable by everybody
#but only writeable by the owner.
#change it to suit your needs
umask 022
#use a tmp directory with enough space to fit all your docs
#NB: it seems like there is a bug somewhere and the tmp directory has to
#be on the same partition as the destination directory...
#please choose a tmp directory respecting these conditions
#/opt/bin/java -Djava.io.tmpdir=/opt/tmp/ -jar /opt/usr/local/gdocsdownload/gdocsdownload.jar $@
/opt/bin/java -Djava.io.tmpdir=/mnt/pools/A/A0/data/perso/gdocs/ -jar /opt/usr/local/gdocsdownload/gdocsdownload.jar $@
Make it executable:
chmod a+x gdocsdownloader
And test with:
./gdocsdownloader /opt/usr/local/gdocsdownload/gdocsdownload.properties


b. Set up a cron job to backup google docs data
Create the gdocsdownloader cron (I don't use /etc/cron.daily/ because I want a full download once a week):
vi /etc/cron.d/gdocsdownload
and add:
# download google docs files at 3:45 AM

#full download on sunday
45 3    * * 0   root    /opt/usr/local/gdocsdownload/gdocsdownloader /opt/usr/local/gdocsdownload/gdocsdownload.properties fulldownload > /dev/null 2>&1
#regular download the other days
45 3    * * 1,2,3,4,5,6   root    /opt/usr/local/gdocsdownload/gdocsdownloader /opt/usr/local/gdocsdownload/gdocsdownload.properties > /dev/null 2>&1

The cron will run every day!
You may want to run the first batch manually by starting:
/opt/usr/local/gdocsdownload/gdocsdownloader /opt/usr/local/gdocsdownload/gdocsdownload.properties

c. start the cron daemon

The cron daemon is not started at boot by default....

You can start it manually:
/etc/init.d/cron start

But to have it start up every time at boot, we need to add the line:
/etc/init.d/cron start >> /opt/init-opt.log
to our /opt/init-opt.sh script.

See my other post How to run a program at boot on the Iomega Storcenter NAS to see how it works!

d. set up logrotate
Logrotate is the process that compresses and deletes old logs so that they don't eat all your disk space!
vi /etc/logrotate.d/gdocsdownload
and add:
/opt/var/log/gdocsdownload.log {
    rotate 4
    weekly
    compress
    delaycompress
    missingok
    notifempty
    prerotate
      # wait until the downloader (if running) is done before rotating
      while [ "`ps aux | grep gdocsdownload.jar | grep -v grep | wc -l`" = "1" ]
        do
          sleep 10
        done
    endscript
}


This will rotate your gdocsdownload logs once a week and keep at least 4 weeks' worth of logs. It is easy to modify these parameters in the config file above.

I try to make sure the gdocsdownload run is done before rotating the logs, to avoid conflicts...

Don't forget to change the path if your log is somewhere else!

Saturday, February 11, 2012

How to automatically synchronize a picture folder with picasa (on a NAS or anywhere else)






I like to have a copy of my pictures on picasa to be able to share them with friends and family. I usually upload them in reduced resolution, to stay within the free storage space given by google.
The problem is that uploading them can be a pain: the picasa software can be very slow to upload them, especially if you are accessing the pictures on your NAS over a wireless network.

Instead of using the picasa software, I tried to use the googlecl tools (http://code.google.com/p/googlecl/), but it turns out I couldn't get them to do what I want (no sync folder option + no resizing of pictures on the fly).
There is an unsupported patch to synchronize folders with googlecl (http://code.google.com/p/googlecl/issues/detail?id=170), but that doesn't solve the problem of image resizing... I did not even test it...


1. Presentation of my solution: picasauploader.jar

To solve the problem, I wrote a small piece of java code (picasauploader.jar) that:
- creates any new album in picasa web when a new folder is created on the disk
- uploads (and resizes if necessary) new pictures from the disk to picasa web

In my setup, I want to install picasauploader.jar as a daily cron on my NAS, but you can install it anywhere.

You just need to organize your pictures as
/path/albumname/picture.jpg
and use /path in the picasauploader.properties

The jar is configured using the config file picasauploader.properties which reads as follows:
#use system defined proxy
picasauploader.usesystemproxy=true
#picasa/google account username and password
picasauploader.username=xxxx
picasauploader.password=xxx
#semicolon separated directories
picasauploader.diskpaths=/xxx/yyyy;/aaaa/bbbb
#can be either:
# private: accessible to anybody with the direct link but not without the direct link
# protected: not accessible except from your account
# public: available for everybody to see
picasauploader.albumcreationaccess=private
#if you want to resize images before uploading (aspect ratio is kept)
#Note: only JPEG images are resized...
#max Height in px
picasauploader.maxheigt=1600
#max Width in px
picasauploader.maxwidth=1600
#jpg quality when resizing
picasauploader.resizequality=85
#log file (for linux, good practice is to put it in /var/log/ or /opt/var/log (and make sure logrotate works correctly))
picasauploader.logfile=/opt/var/log/picasaupload.log

All options are self-explanatory. You can customize them as required by your setup.

As the program is java, it can be run on any OS / Architecture supporting Java.

The jar is available for download at http://dl.dropbox.com/u/50398581/picasauploader/picasauploader.jar
a sample properties file is available at http://dl.dropbox.com/u/50398581/picasauploader/picasauploader.properties
and source code is available at: http://dl.dropbox.com/u/50398581/picasauploader/PicasaUploader.java

Please note that for safety, the program does not delete anything on picasa web (nor on the disk, of course). Therefore, it is very safe to use.

Known Limitations:
- only supports the JPG, GIF, PNG and BMP image formats
- picture resizing is only supported for jpg images
- only the name is used to determine if a picture was already uploaded: if a picture was already uploaded and then changed on disk, it won't be uploaded again.


2. Steps to install the picasauploader on a linux based NAS
The setup is easy to adapt to any machine running linux. I didn't do a tutorial for Windows or Mac as I lack some knowledge to do it, but it can of course be done... feel free to adapt it and post your results and hints in the comments!
This tutorial assumes some vi and linux knowledge...

This is how I installed the picasauploader.jar on my NAS (an Iomega Storcenter ix4-200d). Please note that the procedure is unsupported by Iomega! use at your own risk!

a. Download and setup of picasauploader
First, you need to ssh into your NAS (see my other post if you have an Iomega Storcenter)
Then:
mkdir /opt/usr/local
mkdir /opt/usr/local/picasauploader
cd /opt/usr/local/picasauploader
wget http://dl.dropbox.com/u/50398581/picasauploader/picasauploader.jar
wget http://dl.dropbox.com/u/50398581/picasauploader/picasauploader.properties
Don't forget to change the properties file to make it work for your setup (you at least need to change the account information and paths):
vi picasauploader.properties

If you haven't already done so, you need to install java on your NAS. See the java section of my previous post How to install Crashplan on an Iomega Storcenter to find out how to do it for an Iomega storcenter.

If you followed the java installation procedure of my other post, link java to a more usual location:
ln -s /mnt/pools/A/A0/NAS_Extension/ejre1.7.0/bin/java /opt/bin/java
The setup can already be tested by starting the command:
/opt/bin/java -jar /opt/usr/local/picasauploader/picasauploader.jar /opt/usr/local/picasauploader/picasauploader.properties

b. Set up a cron job to synchronize image folders with picasa
Create the picasauploader cron:
cd /etc/cron.daily/
vi picasauploader
and add:
#!/bin/sh
/opt/bin/java -jar /opt/usr/local/picasauploader/picasauploader.jar /opt/usr/local/picasauploader/picasauploader.properties
Then:
chmod a+x picasauploader
And test with:
./picasauploader

c. start the cron daemon

The cron daemon is not started at boot by default....

You can start it manually:
/etc/init.d/cron start

But to have it start up every time at boot, we need to add the line:
/etc/init.d/cron start >> /opt/init-opt.log
to our /opt/init-opt.sh script.

See my other post How to run a program at boot on the Iomega Storcenter NAS to see how it works!

d. set up logrotate
Logrotate is the process that compresses and deletes old logs so that they don't eat all your disk space!
vi /etc/logrotate.d/picasauploader
and add:
/opt/var/log/picasaupload.log {
    rotate 4
    weekly
    compress
    delaycompress
    missingok
    notifempty
    prerotate
      while [ "`ps aux | grep picasauploader.jar | grep -v grep | wc -l`" = "1" ]
        do
          sleep 10
        done
    endscript
}


This will rotate your picasauploader logs once a week and keep at least 4 weeks' worth of logs. It is easy to modify these parameters in the config file above.

As logrotate is started by the same cron that starts the picasauploader (daily cron), you will notice that I try to make sure the picasauploader is done before rotating the logs...

Don't forget to change the path if your log is somewhere else!

Thursday, January 26, 2012

How to install rtorrent with ip filtering into your Iomega StorCenter ix4-200d


EDIT: this post is outdated now, please see this post instead, which uses a much newer version of rtorrent that works with magnets and natively supports ip filtering.

This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! It should work (again, it is unsupported) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995
The torrent software supplied with the Storcenter doesn't work well for me: some torrents never load, some disappear, etc. Plus, there is no ip filtering capability. The aim of this tutorial is to install rtorrent on the NAS, which seems the most logical choice for a NAS (lightweight and reliable), and to explain how to enable ip filtering directly within rtorrent. This is especially useful since peerguardian/moblock can't be installed on the NAS because some kernel modules are missing...


1. SSH into your NAS
See my other post: How to ssh into your Iomega StorCenter ix4-200d


2. Install the software
See my other post here to set up, at the minimum, ipkg and ipkg-opt. Then:
ipkg-opt install rtorrent
ipkg-opt install lighttpd
ipkg-opt install screen

If you don't want to connect remotely to rtorrent to manage it from your computer, you can skip the rest of this section...
Install nTorrent on your computer http://code.google.com/p/ntorrent/
Install xml-rpc on the NAS:
ipkg install optware-devel
ipkg install libcurl-dev
cd /opt/tmp/
svn checkout http://xmlrpc-c.svn.sourceforge.net/svnroot/xmlrpc-c/stable xmlrpc-c    
cd xmlrpc-c/
./configure --prefix=/opt
make
make install
Note: you can choose something other than nTorrent. Please give me your feedback in the comments if you do.


3. Configure the software
Fix paths in the startup script:
vi /opt/etc/init.d/S99rtorrent
Some absolute paths need to be set. Replace screen with /opt/bin/screen in lines like:
su -c "/opt/bin/screen -ls | grep -sq "\.${srnname}[[:space:]]" " ${user} || su -c "/opt/bin/screen -dm -S ${srnname} 2>&1 1>/dev/null" ${user} | tee -a "$logfile" >&2
except this line, where rtorrent also needs its full path:
su -c "/opt/bin/screen -S ${srnname} -X screen /opt/bin/rtorrent ${options} 2>&1 1>/dev/null" ${user} | tee -a "$logfile" >&2
replace:
if ps | grep -sq ${pid}.*rtorrent ; then # make sure the pid doesn't belong to another process
with
if ps auxxx | grep -sq ${pid}.*rtorrent ; then # make sure the pid doesn't belong to another process
Configure rtorrent (you should get real rtorrent help for this, I am just trying to get you to a point where it works!):
vi /opt/etc/rtorrent.conf
set download and torrent directories:
instead of
directory = /opt/share/torrent/work/
set something like
directory = /mnt/pools/A/A0/torrents/rtorrent/download
instead of
schedule = watch_directory,5,5,load_start=/opt/share/torrent/dl/*.torrent
set something like
schedule = watch_directory,5,5,load_start=/mnt/pools/A/A0/torrents/rtorrent/torrents/*.torrent
comment out:
#schedule = untied_directory,5,5,stop_untied=
add at the end:
#files rwx for everybody
system.set_umask = 0000
Run:
mkdir /opt/share/torrent/session
Then, create the directories rtorrent/download and rtorrent/torrents inside the torrent share using regular NAS access (to have the right permissions)


4. Configure the software for remote access
This is only if you want to manage your rtorrent remotely:
Thanks to http://www.nslu2-linux.org/wiki/HowTo/RtorrentWithRemoteGUI for the setup.
Security warning: if you follow these steps, anybody that can access port 8081 of your NAS will be able to send commands to rtorrent! You want to make sure that this port is only accessible from your local network.
vi /opt/etc/lighttpd/lighttpd.conf
between
#                               "mod_rrdtool",
and
"mod_accesslog" )
add
"mod_scgi",
and at the end add:
scgi.server = (
"/RPC2" => ( 
    "127.0.0.1" => (
        "host" => "127.0.0.1",
        "port" => 5000,
        "check-local" => "disable"
        )
    )
)
vi /opt/etc/rtorrent.conf
and at the end add:
scgi_port = localhost:5000


5. Test your setup
/opt/bin/rtorrent -n -o import=/opt/etc/rtorrent.conf
if you get:
rtorrent: Fault occured while inserting xmlrpc call.
did you install xmlrpc correctly? Is ld.so.conf updated correctly? Did you run ldconfig?

to connect to the running instance:
/opt/bin/screen -r rtorrent
and press Ctrl-a d to detach (or simply kill the terminal, e.g. putty, to exit).

For remote access: you can start lighttpd on the NAS
/opt/etc/init.d/S80lighttpd start
and then start nTorrent on your computer and connect to your NAS port 8081 (by default) on path /RPC2.



6. Get rtorrent to start automatically on reboot
Follow the tutorial How to run a program at boot on the Iomega Storcenter. You just need to add the following lines to the script:
/opt/etc/init.d/S80lighttpd start >> /opt/init-opt.log
/opt/etc/init.d/S99rtorrent start >> /opt/init-opt.log
If you have another brand of NAS (or a regular linux OS), just try to link the startup scripts into /etc/rc2.d/ like you would normally do on a linux box:
ln -s /opt/etc/init.d/S80lighttpd /etc/rc2.d/S80lighttpd
ln -s /opt/etc/init.d/S99rtorrent /etc/rc2.d/S99rtorrent


7. Get a peerguardian-like protection
First, I tried to install peerguardian linux but ran into a wall: the LifeLine OS on the Iomega Storcenter does not have the right kernel modules. I tried to recompile the kernel from the sources given by IOMEGA on their website (available for download in the support section). I got it to compile, but insmod of the required module (x_tables.ko) freezes the kernel (hard reboot required). Since I could not think of a safe way to push further in this direction (without risking bricking the NAS), I investigated other possibilities... (post a comment if you want more details on the kernel compilation)
I thought about abandoning rtorrent altogether and trying Vuze (which has ip filtering). I got it to run but it was pretty unstable (jre crashes)...

Luckily, somebody wrote a patch for rtorrent so that it supports ip filtering. I got it to compile on my NAS (version 0.8.6) and here is the result. You just need to:
cd /opt/bin/
mv rtorrent rtorrent.sav
wget http://dl.dropbox.com/u/50398581/rtorrent-0.8.6/rtorrent
That will give you an rtorrent with ip filtering support.
Note: this only works if you previously installed version 0.8.6 of rtorrent!

The precompiled version will only work if you have an "armel" architecture. Otherwise, you need to recompile from source (see point 8 below).

Then some config:
vi /opt/etc/rtorrent.conf
and add at the end
ip_filter=/mnt/pools/A/A0/torrents/rtorrent/ipfilter/level1
schedule = filter,18:30:00,24:00:00,reload_ip_filter=
thanks to http://bogdan.org.ua/2011/04/01/rtorrent-enhanced-with-ipfilter-and-geoip-debian-squeeze-amd64-package.html

Now, we need to download (and regularly update) the ip filter file:
vi /etc/cron.daily/rtorrent_ipfilter
#!/bin/sh
cd /mnt/pools/A/A0/torrents/rtorrent/ipfilter/
wget http://list.iblocklist.com/?list=bt_level1
mv index.html\?list\=bt_level1 level1new.gz
gunzip level1new.gz
rm level1
mv level1new level1
then:
mkdir /mnt/pools/A/A0/torrents/rtorrent/ipfilter/
cd /etc/cron.daily/
chmod a+x rtorrent_ipfilter
./rtorrent_ipfilter
That's it: you just need to restart rtorrent to enjoy ip filtering. The ip filter file will be updated every day thanks to the cron (and rtorrent will reload it).




8. In case you want/need to compile rtorrent with ip filtering yourself!
This is useful if you are compiling a different version or for a different architecture (please comment and report your success if you do so).

First, you need the header files for libsigc++-2.0.
wget http://ftp.de.debian.org/debian/pool/main/libs/libsigc++-2.0/libsigc++-2.0-dev_2.0.18-2_armel.deb
dpkg --instdir=/opt/ --admindir=/opt/dpkg/ -i libsigc++-2.0-dev_2.0.18-2_armel.deb

Then, take care of libtorrent. I recompiled libtorrent to install the correct headers, as I couldn't find them anywhere (I didn't look for a deb archive with the correct headers but that might have done the trick...):
wget http://libtorrent.rakshasa.no/downloads/libtorrent-0.12.6.tar.gz
tar -xvf libtorrent-0.12.6.tar.gz
cd libtorrent-0.12.6
PATH=$PATH:/opt/bin
Then:
vi configure
and add at the beginning of the configure script:
OPENSSL_CFLAGS='-I/opt/include/'
OPENSSL_LIBS='-L/opt/lib/ -lssl'
STUFF_LIBS='-L/opt/lib/ -lsigc-2.0'
STUFF_CFLAGS='-I/opt/usr/include/sigc++-2.0/ -I/opt/usr/lib/sigc++-2.0/include'
Note that I edit the configure script because I didn't manage to get pkg-config to work correctly. Using /opt/bin/pkg-config works better than /bin/pkg-config, but still not well enough...
./configure --prefix=/opt/
make
make install

Then, the main thing: rtorrent
ipkg-opt install libcurl-dev
ipkg-opt install ncurses-dev
PATH=$PATH:/opt/bin
wget http://libtorrent.rakshasa.no/downloads/rtorrent-0.8.6.tar.gz
tar -xvf rtorrent-0.8.6.tar.gz
cd rtorrent-0.8.6
vi configure
and add at the beginning of the configure script:
sigc_LIBS='-L/opt/lib/ -lsigc-2.0 -L/lib/'
sigc_CFLAGS='-I/opt/usr/include/sigc++-2.0/ -I/opt/usr/lib/sigc++-2.0/include -I/opt/include/ncurses'
libcurl_LIBS='-L/opt/lib/ -lcurl'
libcurl_CFLAGS='-I/opt/include/'
libtorrent_LIBS='-L/opt/lib/ -ltorrent'
libtorrent_CFLAGS='-I/opt/include/'
I did not know where to put the ncurses include, that's why you'll find it in sigc_CFLAGS...

Now, install the patch to have ip filtering (more details on the patch here http://libtorrent.rakshasa.no/ticket/239):
wget http://libtorrent.rakshasa.no/raw-attachment/ticket/239/rtorrent-0.8.6-ip_filter_no_boost-fast-bsd2.patch
/opt/bin/patch-patch -p1 < rtorrent-0.8.6-ip_filter_no_boost-fast-bsd2.patch
./configure --prefix=/opt/ --with-xmlrpc-c=/opt/bin/xmlrpc-c-config
Then, I got the following issue:
/opt/arm-none-linux-gnueabi/lib/libdl.so.2: undefined reference to `_dl_tls_get_addr_soft@GLIBC_PRIVATE'
My system is starting to be a mess. The problem comes from the fact that I have 2 libdl.so libs:
root@xxx:/opt/tmp/rtorrent-0.8.6# ls -l /mnt/apps/lib/libdl.so.2
lrwxrwxrwx 1 root root 12 Sep  9 20:46 /mnt/apps/lib/libdl.so.2 -> libdl-2.8.so
root@xxx:/opt/tmp/rtorrent-0.8.6# ls -l /mnt/system/opt/arm-none-linux-gnueabi/lib/libdl.so.2
lrwxrwxrwx 1 root root 12 Jan  6 22:47 /mnt/system/opt/arm-none-linux-gnueabi/lib/libdl.so.2 -> libdl-2.5.so
to fix the issue:
rm /mnt/system/opt/arm-none-linux-gnueabi/lib/libdl.so.2
Then, compile and install:
make
make install
final issue:
Could not compile XMLRPC-C test.
This one came from a mismatch between headers and libs. Recompiling the xml-rpc package from source fixed the issue.

9. What's next
In a different post, I will detail how to install Vuze headless (without a graphical interface)... I don't recommend installing vuze because I ran into stability issues while testing it (several jre crashes). However, a new jre version might solve the issue...
On top of that, you can't do much from the Web UI: when you want to set something up, you often have to use the Vuze command line interface, and I did not find any proper documentation for it.
Note as well that rss feeds features don't work in headless mode.

Monday, January 23, 2012

How to run a program at boot on the Iomega Storcenter NAS


This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! I understand it works (but still isn't supported by IOMEGA) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995
This post is extracted from a previous post (how to install crashplan on an Iomega storcenter). I am planning more tutorials on the Iomega storcenter ix4 and wanted to centralize this part in case it needs to evolve with future firmwares...


1. Enable SSH on the NAS
see my other post How to SSH into your Iomega Storcenter




2. Create a script that runs at boot
Iomega's OS (EMC LifeLine) does not respect what is inside /etc/rcX.d/. If you have another brand of NAS, chances are that installing the script into /etc/rc2.d/ will result in the script running at boot...
If you already downloaded the scripts to have a command run at boot, you can just add the new command below the one you already have in /opt/init-opt.sh. Just make sure the commands all return immediately (or add a & at the end) so that the script does not get stuck before reaching the last command.

If this is the first time you do this, download the scripts:
cd /opt/
wget http://dl.dropbox.com/u/50398581/Storcenter%20add%20on%20boot/editconfig.sh
chmod +x /opt/editconfig.sh
wget http://dl.dropbox.com/u/50398581/Storcenter%20add%20on%20boot/init-opt.sh
chmod +x /opt/init-opt.sh
Now we start editing the XML list of programs that will automatically be started. Run:
/opt/editconfig.sh
You will see lots of Groups. We are going to add one <program> to <group level="1">. We will add:
<program name="init-opt" path="/opt/init-opt.sh">
<sysoption restart="-1"/>
</program>
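In context, the edited fragment of sohoProcs.xml ends up looking roughly like this (the other <program> entries are omitted here):
<group level="1">
    ...
    <program name="init-opt" path="/opt/init-opt.sh">
        <sysoption restart="-1"/>
    </program>
</group>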
The scripts' content, for your reference:
/opt/editconfig.sh
#!/bin/sh
# edit the bootup config of the ix-2
# inspired by http://www.chrispont.co.uk/2010/10/allow-startup-daemons-on-storcenter-ix2-200-nas/
mknod -m0660 /dev/loop3 b 7 3
chown root.disk /dev/loop3
mkdir /tmp/apps
mount -o loop /boot/images/apps /tmp/apps
vi /tmp/apps/usr/local/cfg/sohoProcs.xml
sleep 1
umount /tmp/apps
rm /dev/loop3
/opt/init-opt.sh
#!/bin/sh
# modified from http://techmonks.net/installing-transmission-and-dnsmasq-on-a-nas/
rm /opt/init-opt.log
echo "Last bootup:" >> /opt/init-opt.log
date >> /opt/init-opt.log
#Add your command below
#/etc/init.d/xxxxxx start >> /opt/init-opt.log
while true; do
 sleep 1d
done
Then, you just need to edit init-opt.sh and add the command(s) you wish to run after the
#/etc/init.d/xxxxxx start >> /opt/init-opt.log
line!

How to install software into your Iomega StorCenter ix4-200d


This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! It should work (again, it is unsupported) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995
The aim of this tutorial is to be able to add programs to your NAS without having to go too deep into the system. This is also helpful to compile natively on the NAS without needing to cross compile for your architecture...


1. SSH into your NAS
See my other post: How to ssh into your Iomega StorCenter ix4-200d


2. Directory Structure on the NAS
The Lifeline OS (Iomega's OS) puts most of the root file system in read-only mode. It is not much use to try to put stuff there anyway, because the partition is very small.
You can type:
df
Filesystem           1K-blocks      Used Available Use% Mounted on
rootfs                   51200      5652     45548  12% /
/dev/root.old             6613      2119      4494  33% /initrd
none                     51200      5652     45548  12% /
/dev/md0_vg/BFDlv      4128448    619496   3299240  16% /boot
/dev/loop0              587077    580124      6953  99% /mnt/apps
/dev/loop1                4959      2230      2473  48% /etc
/dev/loop2                 216       216         0 100% /oem
tmpfs                   255748         0    255748   0% /mnt/apps/lib/init/rw
tmpfs                   255748         0    255748   0% /dev/shm
/dev/mapper/md0_vg-vol1
                      16775168   3283704  13491464  20% /mnt/system
/dev/mapper/2602b0ce_vg-lv43ec31bd
                     2867212288 1169119852 1698092436  41% /mnt/pools/A/A0
to see the partitions and their mountpoint.
The idea seems to be that third party programs should be installed in the /opt/ directory, which has ample storage (16GB) whereas root (/) only has 50MB.


3. Use ipkg
ipkg is installed by default on the Iomega storcenter. We just need to specify the right place to find the packages:
vi /etc/ipkg.conf
src cross http://ipkg.nslu2-linux.org/feeds/optware/cs08q1armel/cross/stable
src cross http://ipkg.nslu2-linux.org/feeds/optware/cs08q1armel/cross/unstable
Then type:
ipkg update
to build the list of available packages.


The problem with this setup is that some packages will fail to install because part of the filesystem is read-only.
Thanks to ipkg, there is an easy fix:
ipkg install ipkg-opt
This installs the binary /opt/bin/ipkg-opt. The idea is then to use this binary instead of the regular ipkg: as a result, all packages will be installed in /opt/ and you won't run into problems with the read-only filesystem.
The only drawback is that /opt/bin/ is not in your path... There is a simple remedy for that:
PATH=/opt/bin:$PATH
Note: this is not persistent (if you start another shell, you will need to do that again).
Also, as a one-time persistent thing, I recommend doing
vi /etc/ld.so.conf
and add
/opt/lib/
at the end. That's the main problem with software installed in /opt: you might end up with duplicated libraries between /lib and /opt/lib (ldd and ldconfig are your friends).
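For example, to spot duplicated libraries and see which copies a binary actually loads (rtorrent is just an example here):
ldconfig
ldd /opt/bin/rtorrent
find /lib /opt/lib -name 'libsigc*'
ldconfig rebuilds the loader cache after you edit /etc/ld.so.conf, ldd lists each resolved library with its full path (duplicates show up there), and find locates every copy of a given library.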

You also need to do:
mv /opt/etc/ipkg.conf /opt/etc/ipkg.conf.old
ln -s /etc/ipkg.conf /opt/etc/ipkg.conf
so that your config in /etc/ipkg.conf remains usable with both /opt/bin/ipkg and /opt/bin/ipkg-opt

Then type:
/opt/bin/ipkg update
to set up the list of available packages for /opt/bin/ipkg and /opt/bin/ipkg-opt.

4. Install utilities and optware-devel
First, install the utilities you are missing to do some actual linux work:
ipkg-opt install zip unzip bzip2 gzip

If you want a full gcc toolchain to compile your own applications from source:
ipkg-opt install optware-devel
Compilation can be slow, but this allows you to compile natively on your NAS (I think it is simpler because there is no need to set up cross compiling on another box)...

5. Install armel/debian compiled software
Unfortunately, you will soon discover that some of the packages you want are not available through ipkg.
You can then either compile the software yourself (see next point) or get some ready-made debian archives...
In this case, I suggest using the following commands (for example for libsigc++-2.0-dev):
cd /opt/tmp/
wget http://ftp.de.debian.org/debian/pool/main/libs/libsigc++-2.0/libsigc++-2.0-dev_2.0.18-2_armel.deb
dpkg --instdir=/opt/ --admindir=/opt/dpkg/ -i libsigc++-2.0-dev_2.0.18-2_armel.deb
Note: do not use /tmp/ as the space available there is very small...
Note 2: be careful to choose packages compiled for your architecture (armel in my case)! The above command will install your software as if /opt/ was the root directory (you will end up with /opt/usr/lib directories and the like). As a result, you might need to add stuff to your PATH or edit /etc/ld.so.conf.
Be careful not to make a mess of your system, or you will soon end up with several copies of the same library (with different versions) at different locations... You will need to sort this out manually (ln, rm...)


6. Compile from source
For example, a very classic install for libnfnetlink:
cd /opt/tmp/
wget http://www.netfilter.org/projects/libnfnetlink/files/libnfnetlink-1.0.0.tar.bz2
tar -xvf libnfnetlink-1.0.0.tar.bz2
cd libnfnetlink-1.0.0
PATH=$PATH:/opt/bin
./configure --prefix=/opt/
make
make install
Note: to get bzip2 to work, I had to do this before the tar -xvf:
ln -s /opt/bin/bzip2-bzip2 /opt/bin/bzip2
Another example using svn
cd /opt/tmp/
PATH=/opt/bin:$PATH
svn checkout http://xmlrpc-c.svn.sourceforge.net/svnroot/xmlrpc-c/stable xmlrpc-c    
cd xmlrpc-c/
./configure --prefix=/opt
make
make install
Don't forget the --prefix=/opt to specify where you want your package installed.

When compiling from source, you run into the usual compilation problems you can get with linux (libraries/includes not found, etc.). It gets even more annoying because the default setup does not work well anymore (the package manager is not where it is expected, etc.), and sometimes you end up having to specify the compile flags yourself.
For example, I recently had to edit the configure script of a source tarball to add:
sigc_LIBS='-L/opt/lib/ -lsigc-2.0 -L/lib/'
sigc_CFLAGS='-I/opt/usr/include/sigc++-2.0/ -I/opt/usr/lib/sigc++-2.0/include -I/opt/include/ncurses'
libcurl_LIBS='-L/opt/lib/ -lcurl'
libcurl_CFLAGS='-I/opt/include/'
libtorrent_LIBS='-L/opt/lib/ -ltorrent'
libtorrent_CFLAGS='-I/opt/include/'
-dev packages can be difficult to find with ipkg; this is where you often need to get a .deb package or compile the library from source just to get the header files right...

7. Conclusion
As you noticed, it is just a matter of using the tools (and using them right). It just gets a little bit more complicated because the usual package manager does not work out of the box, the procedure is unsupported by the hardware vendor, and precompiled packages can be difficult to find for armel...

Thursday, January 12, 2012

How to ssh into your Iomega StorCenter ix4-200d


This tutorial uses an unsupported feature of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! I understand it works (but still isn't supported by IOMEGA) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995
This post is extracted from a previous post (how to install crashplan on an Iomega storcenter). I am planning more tutorials on the Iomega storcenter ix4 and wanted to centralize this part in case it needs to evolve with future firmwares...


1. Enable SSH on the NAS
Go to
http://your-nas-address/diagnostics.html
click "enable SSH"
on older firmware versions, I understand an equivalent page could be found at:
http://your-nas-address/support.html

If the admin password of the NAS is "pass", the root password to use in ssh is "sohopass". (thanks to http://planetkris.com/2010/05/iomega-storcenter-ix2-ssh-email-notifications-and-busybox-init-d)