The 10 Best Android Apps that Make Rooting Your Phone Worth the Hassle

0 komentar Sabtu, 11 Juni 2011

Android phones are spectacular little devices because they're able to do so much that others simply can't, but one big snag in that greatness is that many of those best features require the phone to be rooted. Whether you plan on installing custom ROMs or not, you may want to root your phone just to use the great apps that require root access. Here are the ten most essential Android apps that require root.


View the original article here

Read On

VIA KMS Linux Driver Still Far From Being Ready

0 komentar

In the KMS (kernel mode-setting) world there is news today not only on a new open-source Freescale KMS driver, but also on the state of VIA's kernel mode-setting driver. VIA Technologies may have killed off their open-source strategy, but for the past number of months a developer has been writing a VIA KMS/TTM DRM driver that would work with the OpenChrome user-space X.Org driver...


View the original article here

Read On

Mozilla Delivers New Beta of Thunderbird Email Client

0 komentar

As we've recently covered, Mozilla has moved to an aggressive rapid release cycle for its Firefox browser, with the latest version 4 quickly emerging as the most popular version of Firefox. At the same time, though, Mozilla is pursuing a rapid release cycle for other projects, including its Thunderbird email client. Thunderbird is now out in a new beta version 5.01, featuring a number of upgrades.



View the original article here

Read On

To many files in /var/spool/asterisk/monitor/ Not all files processed

0 komentar

The following error may occur when a user logs in and goes into the call monitor menu.

To many files in /var/spool/asterisk/monitor/filename.gsm Not all files processed

This occurs because, by default, the maximum number of files allowed is 3,000, which is defined in /var/www/html/recordings/includes/bootstrap.php.

The number of files allowed is defined in the following section. You can increase 3000 to something higher; however, from what I've read, this may cause performance issues with the system.

function getFiles($path, $filter, $recursive_max, $recursive_count) {
    global $SETTINGS_MAX_FILES;
    $SETTINGS_MAX_FILES = isset($SETTINGS_MAX_FILES) ? $SETTINGS_MAX_FILES : 3000;
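Before raising the limit, it is worth checking how many files the directory actually holds. This is a sketch run against a temporary directory; on a real box you would point MONITOR_DIR at /var/spool/asterisk/monitor instead.

```shell
# Count recordings to see how close you are to the 3000-file default limit.
# Demonstrated on a temp directory; use /var/spool/asterisk/monitor for real.
MONITOR_DIR=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$MONITOR_DIR/call-$i.gsm"; done
COUNT=$(ls "$MONITOR_DIR" | wc -l)
echo "$COUNT files in $MONITOR_DIR"
```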

There is also a typo that can be corrected in the following section. "To many files in" should be "Too many files in".

    $fileCount++;
    if ($fileCount > $SETTINGS_MAX_FILES) {
        $_SESSION['ari_error']
            .= _("To many files in $msg_path Not all files processed") . "
";
        return;
    }

Rather than increasing the number of files allowed, I would suggest creating a cron job to clean up the directory. If you edit /etc/crontab you can add the following line, which deletes all the files on a weekly basis. You can change the frequency with a predefined shortcut or a custom schedule of your own. For more information, check out the Wikipedia article on cron.

@weekly root rm -f /var/spool/asterisk/monitor/*
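A gentler variant of that cleanup (my suggestion, not from the trixbox docs) is to keep recent recordings and delete only old ones with find. The 30-day cutoff is an arbitrary example; the sketch below runs against a temporary directory, whereas the cron line would use /var/spool/asterisk/monitor.

```shell
# Prune only recordings older than 30 days instead of wiping everything.
# Demonstrated on a temp directory; point MONITOR_DIR at
# /var/spool/asterisk/monitor on a real system.
MONITOR_DIR=$(mktemp -d)
touch -d '40 days ago' "$MONITOR_DIR/old-call.gsm"   # simulated stale recording
touch "$MONITOR_DIR/new-call.gsm"                    # simulated fresh recording
find "$MONITOR_DIR" -type f -mtime +30 -delete
```

In /etc/crontab that would become something like `@weekly root find /var/spool/asterisk/monitor -type f -mtime +30 -delete`.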


View the original article here

Read On

404 When Listening To A CDR Report Recording In trixbox

0 komentar

When trying to listen to a recording from the CDR Report section of the PBX menu, you might get a 404 error saying it can't find /maint/cache/monitor/filename.wav.

To resolve this, create a symbolic link to the correct directory where the files reside.

ln -s /var/spool/asterisk/monitor /var/www/html/maint/cache/monitor
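The same ln -s pattern can be rehearsed safely on scratch paths first; the temp directories below merely stand in for the real trixbox paths.

```shell
# Rehearse the symlink on temp paths before touching the trixbox tree.
TARGET=$(mktemp -d)   # stands in for /var/spool/asterisk/monitor
CACHE=$(mktemp -d)    # stands in for /var/www/html/maint/cache
ln -s "$TARGET" "$CACHE/monitor"
readlink "$CACHE/monitor"
```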


View the original article here

Read On

Mindtouch 400 Bad Request / Response Status Code: 0

0 komentar
Read On

Install corkscrew on MacOSX

0 komentar Jumat, 10 Juni 2011
Read On

Build Linux drivers for ATI 6450 on FC15

0 komentar
Read On

Setup CentOS and move WordPress

0 komentar

I got a vServer from Hetzner and I chose a 64-bit CentOS 5.5 install. Now I need to transfer WordPress to it with all my stuff: database, plugins, themes and special settings like pretty permalinks.


I want to achieve all of this without any additional tools or backup plugins. In my experience, WP backup tools mess with SQL tables and produce more headaches than necessary.
If this is a fresh Linux install, you should read on.
 


Why not start by adding a user to your fresh CentOS install. It's never a good idea to log in as root. Rather, elevate your rights with su or sudo when needed and log in via SSH as a user with fewer permissions.
 

[root@CentOS ~]# useradd someName -g someMainGroup
[root@CentOS ~]# passwd someName

The -g someMainGroup switch sets the user's primary group of your liking; -G would add the user to supplementary groups. You can always change these settings later with usermod.
 


I would use Public-key Authentication since it uses identity keys to authenticate individual users. The identity key has its own passphrase so it will protect my system login. Copies of the public key will have to be distributed to every host that I want to access. The private key should stay protected and must not be shared.
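The key-based login flow can be sketched like this. The key is generated into a temp directory here for safety, and someName@yourServer is a placeholder; in practice you would protect the key with a passphrase rather than -N ''.

```shell
# Sketch of public-key authentication setup; yourServer is a placeholder.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$KEYDIR/id_rsa" -q   # use a real passphrase in practice
ls "$KEYDIR"
# ssh-copy-id -i "$KEYDIR/id_rsa.pub" someName@yourServer   # distribute the public key
# ssh -i "$KEYDIR/id_rsa" someName@yourServer               # key-based login
```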


Secondly, I would change the port to a nonstandard one, because automated attack kits are likely to try brute-forcing their way in via port 22. This has the additional benefit of lightening the load on your firewall logfiles.
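The port change itself is a one-line edit to sshd_config. Port 2222 is an arbitrary example; the sketch below operates on a scratch copy, while on a real box you would edit /etc/ssh/sshd_config as root and restart sshd afterwards.

```shell
# Sketch: switch sshd to a nonstandard port on a scratch copy of the config.
CONF=$(mktemp)
echo '#Port 22' > "$CONF"                  # default commented-out entry
sed -i 's/^#\?Port 22$/Port 2222/' "$CONF"
cat "$CONF"
```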
 


As pointed out in the comments below by MidnighToker, using plain FTP is a very bad idea. I decided to remove my FTP section completely because it wasn't secure. Because setting up SFTP on CentOS / RHEL with added security (a chroot jail) is rather involved, I dedicated an entire [blog post] to that topic. The post covers upgrading openSSH 4.6 to 5.6 on RHEL / CentOS by compiling from source and building your own rpm package.


LAMP stands for Linux, Apache, MySQL and PHP. To deal with this in one go we log into our web server and type into the shell:
 

yum install mysql mysql-server httpd php php-mysql -y

The above line uses the Yellowdog Updater, Modified (yum) to install a complete LAMP system. The -y switch means "assume that the answer to any question which would be asked is yes" (yum manpage, 2010).


Next, we have to ensure that the services we installed a minute ago will start up after a reboot. We do this with chkconfig, a system startup linking tool. It manages the symbolic links to the services inside the /etc/rc[0-6].d directories; these links point to the startup scripts inside /etc/init.d. You could do this manually for the runlevels you need, but Red Hat, Fedora and thus CentOS feature chkconfig. On Debian you would use update-rc.d instead.
 

chkconfig --levels 235 mysqld on

Better do it in one go:
 

chkconfig --levels 235 mysqld on && chkconfig --levels 235 httpd on && /etc/init.d/mysqld start && /etc/init.d/httpd start

The && operator executes the next command only if the previous one succeeded. Now that your Apache and SQL servers are running, we will have to set up some security in MySQL.

mysqladmin -u root password mysecretphrase

Better, use this command instead, since it is more secure and also locks down root access:

/usr/bin/mysql_secure_installation

If you now try to log in to mysql and get an access denied error, then most probably your SQL server's grant tables have not been set up. Create them with the following command:
 

mysql_install_db

Create a database and a SQL user, and grant the user permissions.
 

mysql -u adminusername --password=yourPass
CREATE DATABASE databasename;
GRANT ALL PRIVILEGES ON databasename.* TO wordpressusername@hostname IDENTIFIED BY "password";
FLUSH PRIVILEGES;

FLUSH PRIVILEGES makes the SQL server reload its grant tables so the new permissions take effect immediately.


Restore your database with:
 

mysql -h mysqlhostserver -u mysqlusername -p databasename < blog.bak.sql

Go into the httpd.conf file, which you can locate with find / | grep httpd.conf.


I would change the settings below to get things started. In vi you can search for the respective setting with / (slash).
 

KeepAlive On
ServerLimit 40
MaxClients 40
ServerAdmin yourMailAddress
ServerName www.yourDomain.com:80

KeepAlive On is one of the important features of HTTP 1.1, so enable it. With it enabled, Apache and a client use a single TCP/IP connection for successive requests instead of opening many simultaneous TCP connections. Every TCP connection goes through the slow-start algorithm before reaching maximum transfer speed (the slow-start threshold), so in essence KeepAlive On ensures faster loading of Web pages.


Lowering the ServerLimit ensures that your server will not hang or crash due to extreme memory swapping. A server swaps memory from RAM to its pagefile or swap partition when, for instance, Apache requests too much memory due to too many open HTTP requests. The same goes for MaxClients. You should lower this value in both the prefork.c and the worker.c module sections.


Set AllowOverride All to let the .htaccess file do its magic. We will configure it later for pretty permalinks.


You could also add %T/%D to the LogFormat line to log the time your server takes to process each HTTP request, from receipt of the request to delivery of the response.


If you want ErrorDocuments to be served properly, you should uncomment the line following #ErrorDocument 400.


Finally, we set up our default virtual server, which will be used as the default whenever Apache cannot resolve a request to any other virtual server. Since we defined ServerAdmin and ErrorLog earlier, it is not necessary to define them again for our virtual server, but you could of course use alternate log file locations or ServerAdmins.

NameVirtualHost *:80
<VirtualHost *:80>
    DocumentRoot /var/www/html/yourdomain
    ServerName yourdomain.com
    ServerAlias www.yourdomain.com
</VirtualHost>

Now create the directory for your domain under /var/www/html/, otherwise Apache will not start. Give this directory chmod 775 permissions, so that the owner and the group have full permissions. You can change the group of the directory with chgrp.
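The directory setup can be rehearsed on a scratch path first. The temp directory below stands in for /var/www/html, and "yourdomain" is the placeholder name from the virtual host above.

```shell
# Rehearsal of the document-root setup on a scratch path.
WEBROOT="$(mktemp -d)/yourdomain"   # stands in for /var/www/html/yourdomain
mkdir -p "$WEBROOT"
chmod 775 "$WEBROOT"                # owner and group get full permissions
stat -c '%a' "$WEBROOT"
```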


To enable compression in Apache I would add the recommended code taken from the apache 2.2 documentation.
 

<Location />
    # Insert filter
    SetOutputFilter DEFLATE
    # Netscape 4.x has some problems...
    BrowserMatch ^Mozilla/4 gzip-only-text/html
    # Netscape 4.06-4.08 have some more problems
    BrowserMatch ^Mozilla/4\.0[678] no-gzip
    # MSIE masquerades as Netscape, but it is fine
    BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
    # Don't compress images
    SetEnvIfNoCase Request_URI \
        \.(?:gif|jpe?g|png)$ no-gzip dont-vary
    # Make sure proxies don't deliver the wrong content
    Header append Vary User-Agent env=!dont-vary
</Location>

The Location directive makes sense as the apache documentation states:
 



“Location sections are processed in the order they appear in the configuration file, after the Directory sections and .htaccess files are read, and after the Files sections.”


In case you are not only moving WordPress but also changing your domain name, you would have to alter the WordPress address and Site address in the General Settings prior to making a SQL backup. Be aware that you can lock yourself out if you change these settings. I would recommend making the SQL backup first and, after restoring it on your new server, changing the domain settings with these SQL queries:
 

USE yourDatabase;
UPDATE wp_options SET option_value = 'http://newDomain.tld' WHERE option_id = 1 AND blog_id = 0 AND option_name = 'siteurl' LIMIT 1;
UPDATE wp_options SET option_value = 'http://newDomain.tld' WHERE option_id = 46 AND blog_id = 0 AND option_name = 'home' LIMIT 1;

You can inspect the current values in phpMyAdmin with these queries:
 

SELECT option_value FROM wp_options WHERE option_id = 1 AND blog_id = 0 AND option_name = 'siteurl' LIMIT 1;
SELECT option_value FROM wp_options WHERE option_name = 'home' LIMIT 1;

Moving WordPress itself from one server to another is the simplest part. Just download your whole WordPress folder, the one containing wp-content, wp-admin and so forth. Then upload it via your secure openSSH/SFTP connection. Don't forget to include your .htaccess file in case you want to use permalinks.
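One hedged way to avoid losing the .htaccess is to copy the tree with tar, which preserves hidden files that some FTP clients silently skip. The sketch below runs locally on temp directories; over the network you would pipe the first tar through ssh instead.

```shell
# Copy a (mock) WordPress tree with tar so hidden files survive the move.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/wp-content" "$SRC/wp-admin"   # stand-ins for the real tree
touch "$SRC/.htaccess"                       # the file FTP clients often skip
tar -C "$SRC" -cf - . | tar -C "$DST" -xf -
ls -A "$DST"
```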


 


View the original article here

Read On

SFTP with chroot jail on CentOS

0 komentar
SFTP, which runs over openssh, was engineered by the IETF between 2001 and 2007. It has the potential to replace insecure, legacy FTP. Another alternative would be WebDAV. It seems like a great alternative to SFTP since it uses HTTP/TCP pipelining and should considerably speed up file transfers. However, if you want WebDAV to be secure you have to go through the pain of setting up a valid SSL connection.

SFTP should work out of the box if ssh is running on your server. Just connect from your client machine with:

sftp user@hostname.tld

The connection itself is now tunneled over SSH. However, the user will have access to your entire file system, and even worse, anyone who gained these credentials could log in to your server via ssh. Since we don't want that, we will chroot (jail) the sftp user and forbid him to use ssh itself. For this you will need openSSH 5 or greater. At the time of this blog post, CentOS 5.5 still ships with the years-old openSSH 4.6. Therefore we will build an rpm package from the latest openSSH 5.6 sources and install it on our CentOS.

rpm -qa | grep ssh
yum -y install gcc automake autoconf libtool make openssl-devel pam-devel rpm-build
wget http://ftp.halifax.rwth-aachen.de/openbsd/OpenSSH/portable/openssh-5.6p1.tar.gz
wget http://ftp.halifax.rwth-aachen.de/openbsd/OpenSSH/portable/openssh-5.6p1.tar.gz.asc
wget -O- http://ftp.halifax.rwth-aachen.de/openbsd/OpenSSH/portable/DJM-GPG-KEY.asc | gpg --import
gpg openssh-5.6p1.tar.gz.asc
tar zxvf openssh-5.6p1.tar.gz
cp openssh-5.6p1/contrib/redhat/openssh.spec /usr/src/redhat/SPECS/
cp openssh-5.6p1.tar.gz /usr/src/redhat/SOURCES/
cd /usr/src/redhat/SPECS/
perl -i.bak -pe 's/^(%define no_(gnome|x11)_askpass)\s+0$/$1 1/' openssh.spec
rpmbuild -bb openssh.spec
cd /usr/src/redhat/RPMS/`uname -i`
uname -i
ls -l
rpm -Uvh openssh*rpm
/etc/init.d/sshd restart

Alright, what do all these command lines do?

First, yum installs the GNU compiler with important tools like make, then the openssl development packages and rpm-build. Then I choose a repository in Aachen, Germany, which is nearest to my server's location. We also fetch the gpg signature, import the key into our local gpg database, and verify the tarball. The rest builds the rpm package, without GUI support like X11 or gnome, because that is not really useful on a server. I must credit some of these steps to someone else on the interwebs: http://binblog.info/2009/02/27/packaging-openssh-on-centos/

I found perl -i.bak -pe 's/^(%define no_(gnome|x11)_askpass)\s+0$/$1 1/' openssh.spec to be very elegant, and I also liked cd /usr/src/redhat/RPMS/`uname -i`. The latter changes into the build directory matching your machine's architecture. Did I mention you should be root to do this?

I do it in one go, absolute hardcore:

rpm -qa | grep ssh && yum -y install gcc automake autoconf libtool make openssl-devel pam-devel rpm-build && wget http://ftp.halifax.rwth-aachen.de/openbsd/OpenSSH/portable/openssh-5.6p1.tar.gz && wget http://ftp.halifax.rwth-aachen.de/openbsd/OpenSSH/portable/openssh-5.6p1.tar.gz.asc && wget -O- http://ftp.halifax.rwth-aachen.de/openbsd/OpenSSH/portable/DJM-GPG-KEY.asc | gpg --import && gpg openssh-5.6p1.tar.gz.asc && tar zxvf openssh-5.6p1.tar.gz && cp openssh-5.6p1/contrib/redhat/openssh.spec /usr/src/redhat/SPECS/ && cp openssh-5.6p1.tar.gz /usr/src/redhat/SOURCES/ && cd /usr/src/redhat/SPECS/ && perl -i.bak -pe 's/^(%define no_(gnome|x11)_askpass)\s+0$/$1 1/' openssh.spec && rpmbuild -bb openssh.spec && cd /usr/src/redhat/RPMS/`uname -i` && uname -i && ls -l && rpm -Uvh openssh*rpm

Better check that it worked:

rpm -qa | grep ssh
openssh-clients-5.6p1-1
openssh-5.6p1-1
openssh-server-5.6p1-1

These were the prerequisites for using chroot with SFTP. Now I will show how to set up SFTP and prepare your directory structure.

First, set up two new groups. One group, fullssh, is going to have full access to the Linux filesystem; just add your main user to it as a supplementary group. The other group, sftponly, is for our SFTP users, who will be jailed to a particular directory.

groupadd sftponly
useradd pete
usermod -aG sftponly pete

In the snippet above one could also use useradd -G sftponly pete, but in case the user pete already exists, the -a switch ensures he is added to the supplementary group sftponly instead of replacing his existing supplementary groups.

Now go into /etc/ssh/sshd_config and comment out the default entry for the sftp service:

#Subsystem sftp /usr/lib64/misc/sftp-server
Subsystem sftp internal-sftp
AllowGroups fullssh sftponly
Match Group sftponly
    ChrootDirectory /var/www/html
    ForceCommand internal-sftp
    X11Forwarding no
    AllowTcpForwarding no

Now let's set permissions on the directory structure. This has to be done with great care.

chown root:sftponly /var/www/html
chmod 755 /var/www/html
mkdir -p /var/www/html/yourDomain
chown pete:sftponly /var/www/html/yourDomain

Some tutorials on the web suggest setting permissions to 750, but as it turns out this will lock out apache and produce an HTTP 403 Forbidden when accessing your website. We don't want that.

With chown you change the owner of the directory to root and the group to sftponly. The user pete should be a member of that group by now; you can check that with groups pete. Creating the directory with -p shouldn't be necessary in the example above, but I included it nonetheless; the switch takes care of creating all necessary parent folders.

This is how the permissions must look for the ssh / sftp login chain to work:

ls -lishad /var/www/html
inode 4.0K drwxr-xr-x 3 root sftponly 4.0K Nov 9 10:27 /var/www/html
ls -lishad /var/www/html/yourDomain
inode 4.0K drwxr-xr-x 8 joe sftponly 4.0K Nov 9 11:33 /var/www/html/yourDomain

Now when you log in, ssh has root permissions and handles the login process until the user pete is authenticated. The user is then chrooted to the yourDomain directory, at least in my understanding. With SFTP we don't need a proper chroot environment with access to /dev or any hardlinks, so it should be rather difficult for an attacker to escape the jail. An exploit [1] has existed since 1999, which is why Solaris and Linux stopped considering chroot a secure solution. However, it is possible to harden [2] a shell chroot environment.

I hope you enjoyed this article. I would appreciate any thoughts on the topic of server security in the comments below. 
Read On

The social network

0 komentar
Read On

What is Usability Design?

0 komentar

Whether you are a Web designer, programmer or project manager, if you want to know how to create competitive Web sites, software-as-a-service applications or mobile phone software, you have to know about Usability Design, in short UX.
 
This tutorial is mostly, but not completely, copied from Jakob Nielsen, one of the most important figures in Usability Design. He bundled together and described in great detail 3 design models to maximize the usability of any application, whether off- or online.

Competitive testing
Parallel design
Iterative design

You have to try (and test) multiple design ideas. Competitive, parallel, and iterative testing are simply 3 different ways to consider design alternatives. By combining them, you get wide diversity at a lower cost than simply sticking to a single approach.


Iterative: Keep going for as many iterations as your budget allows. Test iterations via Heuristic Evaluation.


The goal of heuristic evaluation is to find the usability problems in the design so that they can be attended to as part of an iterative design process. Heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the “heuristics”).


3-5 evaluators check the proposed Design/Wireframe against the Heuristic list each iteration.
 



‘Heuristic (pronounced /hjʊˈrɪstɪk/) or heuristics (from the Greek “εὑρίσκω” for “find” or “discover”) refers to experience-based techniques for problem solving, learning, and discovery.’ – Wikipedia.


In a parallel design process, you create multiple alternative designs at the same time. You can do this either by encouraging a single designer to really push their creativity or by assigning different design directions to different designers, each of whom makes one draft design.


Use cheap methods if parallel designing, e.g. Wireframing




After wireframing you could move on to interactive wireframing and then the actual visual design. Let the user test up to 2-3 designs. The first one will be the one where the testers are freshest, so vary the order. When finished, take the best elements out of all the different designs and merge them. Then refine this new design with iterative testing.

Out of 4 parallel versions, pick the best one and iterate on it. This approach resulted in measured usability 56% higher than the average of the 4 designs.
Follow the recommended process and use a merged design, instead of picking a winner. Here, measured usability was 70% higher.
After one iteration, measured usability was 152% higher.

Stanford University took this approach to the domain of Internet advertising. Ads created through a parallel design process performed 67% better.


In a competitive usability study, you test your own design and 3–4 other companies’ designs. Competitive testing is advantageous in that you don’t spend resources creating early design alternatives. But as always, quantitative measurements provide weaker insights than qualitative research.


Combining these 3 methods prevents you from being stuck with your best idea and maximizes your chances of hitting on something better.


At each step, you should be sure to judge the designs based on empirical observations of real user behavior instead of your own preferences.


These are ten general principles for user interface design. They are called “heuristics” because they are more in the nature of rules of thumb than specific usability guidelines.

Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time. (Example: BaseCamp by 37signals)

Match between system and the real world: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order. (Example: iTunes)

User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo. (Example: CollabFinder)

Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions. (Example: Gmail)

Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action. (Example: Web Form Design by Luke W.)

Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate. (Example: Keynote)

Flexibility and efficiency of use: Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions. (Example: Numbers)

Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility. (Example: Kontain)

Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution. (Example: Digg)

Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large. (Example: Flickr)


Heuristic evaluation is performed by having each individual evaluator inspect the interface alone. This procedure is important in order to ensure independent and unbiased evaluations from each evaluator. The results of the evaluation can be recorded either as written reports from each evaluator or by having the evaluators verbalize their comments to an observer as they go through the interface.


During the evaluation session, the evaluator goes through the interface several times and inspects the various dialogue elements and compares them with a list of recognized usability principles (the heuristics).


One way of building a supplementary list of category-specific heuristics is to perform competitive analysis and user testing of existing products in the given category and try to abstract principles to explain the usability problems that are found (Dykstra 1993).


One approach that has been applied successfully is to supply the evaluators with a typical usage scenario, listing the various steps a user would take to perform a sample set of realistic tasks.


The output from using the heuristic evaluation method is a list of usability problems in the interface with references to those usability principles that were violated by the design in each case in the opinion of the evaluator. Detailed answers are needed.


Debriefing Meeting: A debriefing is also a good opportunity for discussing the positive aspects of the design, since heuristic evaluation does not otherwise address this important issue.


DeviantArt (Creative Commons)


A good site, service or application also takes into account whether it needs to work with a screen reader for impaired users, or whether your users are on mobile devices such as smartphones, tablet PCs or others. Evaluate whether your users prefer or are bound to a keyboard, and provide shortcuts.
 
I hope you enjoyed this tutorial on Usability. Why don’t you subscribe to my RSS feed? I also appreciate any comments in the section below. Thanks for reading.


View the original article here

Read On

Scalix: push mail on Nokia mobile phone with IMAP IDLE

0 komentar Kamis, 09 Juni 2011

You have a Scalix server and you’re desperately looking for a free and easy solution to push email to your mobile phone. Just like anybody else, when you think  about push mail, you really think BlackBerry. It’s so standard in these days… It has to be the solution! Well maybe not…


I've googled extensively looking for a way to achieve push mail by installing an additional, open source, RIM-like server (Funambol, Notifylink, etc.) on my Scalix box, but I've never been convinced by any of these solutions. They were either too expensive, too limited, too difficult to set up, or insecure; I just didn't like them. Then I discovered IMAP IDLE. In fact, everything was already built in for push mail, both on the Scalix server and on my Symbian mobile phone. There is no extra server needed. Indeed, there is no code or installation trick in this post; it's all there already!


Wikipedia tells us:



The IDLE feature allows IMAP e-mail users to receive immediately any mailbox changes without having to undertake any action such as clicking on a refresh button. This feature provides automated mail updates on the client computer.


In practice, there is a permanent connection between the phone and the server  (just like BB technology). This part is achieved by sending keep alive packets regularly. As soon as a message hits a mailbox, Scalix alerts all IMAP-IDLE clients logged on the matching account, and magically, the message gets to the phone. It’s so fast it even reaches my phone before it appears in SWA or Outlook 50% of the time. Of course when I delete a message or move it to another folder, modifications are applied on the phone automatically. Brilliant!


Just to make things clear, this solution handles messages only; no calendar or contact sync is to be expected. To be precise, Scalix actually handles contacts and calendar items in IMAP folders, which can be seen as emails from your phone. They will not be synced with the phone's internal calendar or contact list, however. Another difference is that only the message header is sent to the phone, not the entire body and attachments. I actually had to check whether I did it on purpose or if it was a limitation. Personally I like this: it saves bandwidth, increases reactivity and allows me to filter what I really need to read immediately. When opening new mail, however, it might take some time to download big messages. So using the IMAP protocol is not equivalent to running a BB server. It's just a smart way of doing what 99% of the people really care about: receiving their emails on their mobile phones as soon as they hit their mailbox.


This is the most cost-effective solution in my opinion. I use a Nokia E71, which is cheaper, better looking, better built, better featured, lighter and smaller than a BlackBerry Bold. Reliability is decent, but the mail application still crashes once in a while (once a month or so) and requires manually disconnecting from the server (simply by editing your email settings). Any recent Symbian or Treo smart phone should be IMAP IDLE compliant, and some mail clients for Windows Mobile exist too. Technically speaking, the overhead in terms of bandwidth is ridiculously low. Expect something like 5Mb/month on a 200Mb mailbox with 20 subfolders synced 24 hours a day.


If you don't manage to get the phone connected to the server, it's probably your mobile ISP blocking the IMAP and/or SMTP ports. Find ports that you don't use and forward them to the standard IMAP/SMTP ports, either directly on your Linux box or on your firewall. You can test these settings with any popular email client that supports IMAP IDLE (Thunderbird, Outlook, Outlook Express, Apple Mail, etc.).
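One way to do that forwarding on the Linux box itself is an iptables REDIRECT rule. This is a hedged sketch only: the 10143/10025 port numbers are arbitrary examples, not values from the article, and the rules must be run as root.

```shell
# Hypothetical port forwards (example ports only): accept IMAP on 10143 and
# SMTP on 10025, redirecting each to the standard service port locally.
iptables -t nat -A PREROUTING -p tcp --dport 10143 -j REDIRECT --to-port 143
iptables -t nat -A PREROUTING -p tcp --dport 10025 -j REDIRECT --to-port 25
```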


For contacts and calendar sync, you still have to use the Nokia PC Suite, but I don't think it's a big deal. It's very easy to sync the phone automatically with Outlook (of course you need the Scalix Outlook connector) as soon as it is physically connected to the computer (USB) or within Bluetooth range. The Nokia PC Suite works so well it can even sync your phone with multiple computers (home and office in my case). I've never seen anything better than this for resolving possible conflicts. Whenever I have access to one of my computers, everything gets synced via Bluetooth without touching a button. When I'm away and don't even have a laptop with me, I only use my phone anyway. Every change that I make will be synced perfectly with the Scalix server as soon as I reach civilization.


View the original article here

Read On

Scalix : change the time messages stay in Deleted Items before being deleted permanently

0 komentar

In Scalix, messages present in the Deleted Items are kept in this folder for one day by default before being deleted.

This behavior can be changed globally at installation by modifying the userconf file or individually for each user.

$ more /var/opt/scalix/XX/s/sys/userconf
# XX being the first and last letter of your mail node

# This file contains the default configuration values for a mail user.
# (The line length should not exceed 128 characters.)
#
# ConfigType=1 The default Waste Basket clearance time in days.
# This value represents how long a message will reside in
# the Waste Basket before it is eligible for deletion
1 1
...

Changing this parameter to

# ConfigType=1  The default Waste Basket clearance time in days.
# This value represents how long a message will reside in
# the Waste Basket before it is eligible for deletion
1 30

will make Scalix keep messages 30 days in the Deleted Items before permanently deleting them.

However, this will only apply to users created after the file modification: the file is read and its parameters copied into each user's own configuration file at the moment the user is created.

So in order to modify this behavior for an existing user, you first have to find out which folder Scalix uses for that user: look for the “User Folder” value in the user's details.

In this folder, edit the 000002g file:

1
# User Configuration File
1 1
...

Change the value line from "1 1" to "1 30" to keep messages in the Deleted Items for 30 days before they are permanently deleted.
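The per-user change can be scripted. Below is a minimal sketch against a throwaway copy of the file; it assumes the value line really has the "1 1" form shown above, so check the file on your own system before pointing sed at a real 000002g.

```shell
#!/bin/sh
# Demo on a throwaway copy - NOT a real per-user file.
conf=$(mktemp)
printf '1\n# User Configuration File\n1 1\n' > "$conf"

# Bump the Waste Basket clearance from 1 day to 30 days.
sed 's/^1 1$/1 30/' "$conf" > "$conf.new" && mv "$conf.new" "$conf"

grep -c '^1 30$' "$conf"   # prints 1 when the change took
rm -f "$conf"
```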


View the original article here

Read On

Select the entire URL on a single click in Firefox’s address bar under linux

0 komentar

For those using Windows and Linux every day, and who prefer the behavior of Firefox under Windows…

In Firefox, go to the page:
about:config

Filter the settings on “urlbar” and right-click/toggle on:
browser.urlbar.clickSelectsAll
to change the value to “true”.

To apply this to all users, you can change the firefox.js file:

/usr/lib/firefox-3.0.1/defaults/preferences/firefox.js on RHEL / CentOS
/usr/share/firefox/defaults/pref/firefox.js on Ubuntu

Change
user_pref("browser.urlbar.clickSelectsAll", false);
to 'true'.
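The system-wide edit can be done in one line with sed. This sketch works on a scratch copy so it can be tried safely; point FIREFOX_JS at the real file on your system (the RHEL path above, for instance), and back it up first. It assumes the pref line appears exactly as quoted above.

```shell
#!/bin/sh
# Demo on a scratch copy; replace the mktemp call with the real path, e.g.
# FIREFOX_JS=/usr/lib/firefox-3.0.1/defaults/preferences/firefox.js (path is an assumption).
FIREFOX_JS=$(mktemp)
printf 'user_pref("browser.urlbar.clickSelectsAll", false);\n' > "$FIREFOX_JS"

# Flip clickSelectsAll from false to true in place.
sed -i 's/\("browser\.urlbar\.clickSelectsAll", \)false/\1true/' "$FIREFOX_JS"

cat "$FIREFOX_JS"   # prints: user_pref("browser.urlbar.clickSelectsAll", true);
rm -f "$FIREFOX_JS"
```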

Here are some useful Firefox shortcuts:

Ctrl + L - select the URL
Ctrl + I - show page information
Ctrl + K - select text in search bar
Esc (when the pointer is in the address bar) - select the URL


View the original article here

Read On

Manage your iPod with Floola

0 komentar

Have you ever wanted to extract songs from your iPod or to modify your playlists when away from your computer? The major problem you’ll face is that your iPod can only be managed by a single installation of iTunes.

So if you plug it into your kitchen computer instead of the office computer, for instance, you won’t be able to play/extract/upload/modify anything on your iPod, as it can only be “synced” with your office computer.

Floola is the perfect solution for bypassing this limitation. It can extract songs from and upload them to your iPod, add/modify playlists and smart playlists, manage podcasts, videos, lyrics, artwork, etc… virtually anything iTunes can do!

This freeware is available for Linux, Mac and Windows, and best of all it’s portable: it doesn’t require any software installation. Just leave the executable files on your iPod and you’ll always be carrying a universal iTunes alternative with you, which you’ll be able to run and use to manage your iPod from any computer!


View the original article here

Read On

Script to monitor assigned IP address on a local network

0 komentar

I wanted to monitor all assigned IP addresses on my local network. Since I have a hardware router/DHCP server, looking at the DHCP table was not an option. So I wrote a script on a CentOS Linux server.

You might need to install nmap on your distribution before using the script. On CentOS, install nmap with:

yum install nmap

The script pings all addresses in a specific range and looks at who has connected/disconnected since the last time the script was run. Whenever activity is detected, it is sent by mail. Of course the accuracy of the results depends on how often the script is run. I use a crontab entry for this purpose.

Don’t forget to change the path, the IP range, the email address, etc… before using !

#!/bin/bash

cd /path/to/script
i=0
unset arr

# Read the hostlist.dat file from a previous run and store it in an array
while read line
do
    arr[i]=$line
    (( i=$i+1 ))
done < hostlist.dat

# Ping all IPs in a range and redirect the output to hostlist.dat in the same directory
nmap -sP 192.168.0.1-255 | grep 192.168.0. | awk -F ' appears' '{ print $1 }' > hostlist.dat

# First loop: detect new hosts on the local network
while read line                      # read the just-created hostlist.dat one line at a time
do
    j=0
    found=0
    while [[ $j -lt ${#arr[*]} ]]    # walk through the array
    do
        if [[ ${arr[$j]} = $line ]]  # compare the hostlist.dat file to the array
        then
            found=1
        fi
        (( j=$j+1 ))
    done
    if [[ $found = 0 ]]
    then
        lineip=$line
        line=`echo $line | egrep -o '192.[0-9.]+'`   # extract the IP address
        line=`nmblookup -A $line`                    # retrieve the machine name
        # I chose to send a mail, but you can change this line to whatever suits you
        echo $line | mailx -s "INFO: $lineip now connected to the local network!!!" name@domain.com
    fi
done < hostlist.dat

j=0
# Second loop: detect hosts disconnected from the local network since the last run
while [[ $j -lt ${#arr[*]} ]]        # walk through the array
do
    found=0
    while read line                  # read the just-created hostlist.dat one line at a time
    do
        if [[ ${arr[$j]} = $line ]]  # compare the hostlist.dat file to the array
        then
            found=1
        fi
    done < hostlist.dat
    line=${arr[$j]}
    (( j=$j+1 ))
    if [[ $found = 0 ]]
    then
        lineip=$line
        line=`echo $line | egrep -o '192.[0-9.]+'`   # extract the IP address; no nmblookup here since the machine is disconnected
        # I chose to send a mail, but you can change this line to whatever suits you
        echo $lineip | mailx -s "INFO: $line now disconnected from the network!!!" name@domain.com
    fi
done
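The nmap parsing step of the script can be sanity-checked in isolation by feeding it a canned line instead of a live scan. The sample below assumes the classic "Host ... appears to be up" output of older nmap releases; newer nmap versions print "Nmap scan report for ..." instead, in which case the grep/awk pair needs adjusting.

```shell
#!/bin/sh
# Canned sample of the old nmap -sP output format (assumption).
sample="Host 192.168.0.10 appears to be up."
echo "$sample" | grep 192.168.0. | awk -F ' appears' '{ print $1 }'
# prints: Host 192.168.0.10
```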

And this is the crontab entry for running the script every two minutes.

*/2 * * * * . /path/to/script/hostlist > /dev/null 2>&1

View the original article here

Read On

Nouveau Gallium3D, LLVMpipe In Ubuntu 11.10?

0 komentar Rabu, 08 Juni 2011

Here's the next chapter of the X.Org / Mesa plans for Ubuntu 11.10, in continuation of the earlier X.Org / Mesa talks at UDS Budapest...


View the original article here

Read On

Open Standards and a Smart Energy Grid: Interview with Green Energy Corp

0 komentar

Green Energy Corp creates software and services for communications and energy companies. They're working towards an open source smart grid solution that will help both new and old companies in the industry for more efficient, greener energy.


View the original article here

Read On

Open-Source Scala Gains Commercial Backing

0 komentar

Will Scala be the next Java, PHP or Ruby? A newly funded start-up aims to find out.


View the original article here

Read On

openSUSE Renames OBS

0 komentar

The openSUSE Build Service Team has decided to rename its cutting-edge packaging and distribution build technology to Open Build Service. The new name, while maintaining the well-known OBS acronym, signals its open and cross-distribution nature.


View the original article here

Read On

OSI Open Reformation Begins in Earnest

0 komentar

The Open Source Initiative (OSI) is creating a process for wider participation through working groups and new affiliate programmes which will influence its thinking on its future mission.


View the original article here

Read On

Paravirtualization With Xen On CentOS 5.6 (x86_64)

0 komentar

This tutorial provides step-by-step instructions on how to install Xen (version 3.0.3) on a CentOS 5.6 (x86_64) system. Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other (e.g. a virtual machine for a mail server, a virtual machine for a high-traffic web site, another virtual machine that serves your customers' web sites, a virtual machine for DNS, etc.), but still use the same hardware. This saves money, and what is even more important, it's more secure. If the virtual machine of your DNS server gets hacked, it has no effect on your other virtual machines. Plus, you can move virtual machines from one Xen server to the next one.


View the original article here

Read On

Perl 5.14 Released

0 komentar Selasa, 07 Juni 2011

The new version of Perl brings better Unicode compliance, improved documentation, new substitution styles, improved IPv6 support and more in the language's now annual evolution


View the original article here

Read On

Platform Gets Graphic with HPC Cluster Manager

0 komentar

Not everybody who needs to build a cluster wants to be a Linux expert. And that is why Platform Computing has slapped an all-encompassing Web-based graphical user interface onto release 3 of its Platform HPC cluster management tool.


Those who are Linux experts, of course, will be able to fly from command to command as they set up clusters, using the command line interface as they have in prior Platform HPC releases. And they will be able to take advantage of a number of performance enhancements that Platform Computing has added with this rev of the product.


View the original article here

Read On

Preparing for LinuxCon Japan: Tsunami Relief, Comic Relief

0 komentar

We're excited to be hosting LinuxCon Japan next week, June 1-3, in Yokohama. Bringing together the Linux community in Japan and supporting the country after its devastating tsunami is very important to us. Linus Torvalds will be there next week, as will a variety of Linux community leaders and contributors.


We recently announced a couple of programs we hoped could contribute to relief efforts in the country and give everyone in the Linux community a chance to show their support, one of which concludes this Tuesday. All new Linux Foundation individual membership dues we receive through the end of the day May 31, 2011 will go to the U.S. Fund for UNICEF Children of Japan. You can sign up to become a member and support this cause at The Linux Foundation membership page.


We have also designed an exclusive T-shirt to be available in the Linux.com Store. All revenue generated from sales of this Japan-specific T-shirt throughout this year will be donated to the same fund. We'll announce here on Linux.com when that T-shirt is available to purchase.


Lastly, we're considering running a regular comic strip on Linux.com but want your thoughts. So, here is one we thought was fun and that helps us prepare to kick-off LinuxCon Japan next week!


View the original article here

Read On

PSC Accelerates Machine-Learning Algorithm with CUDA

0 komentar

Researchers at the Pittsburgh Supercomputing Center and HP Labs have achieved unprecedented speedup of 10X on a key machine-learning algorithm. A branch of artificial intelligence, machine learning enables computers to process and learn from vast amounts of empirical data through algorithms that can recognize complex patterns and make intelligent decisions based on them. For many machine-learning applications, a first step is identifying how data can be partitioned into related groups or “clustered.”


View the original article here

Read On

Rapid-Release Idea Spreads to Firefox 5 Beta

0 komentar

Mozilla has released the first beta of Firefox 5, the initial version of the open-source browser to be produced under a new fast-moving schedule.


View the original article here

Read On

Read-Only Nation: Can Open Source Change the British Way?

0 komentar

We asked if open-source software had a part to play in increasing technological innovation in the UK. It seems that for a nation with such a great engineering heritage, we have too easily passed the tech leadership flag over to the US and to the emerging economies.…


View the original article here

Read On

Replicate MySQL to MongoDB with Tungsten Replicator

0 komentar Senin, 06 Juni 2011

You can now replicate data from MySQL data to MongoDB using Tungsten Replicator, an open source data replication engine for MySQL. It's sponsored by Continuent, makers of Tungsten Enterprise.


The new functionality was added by Continuent CTO Robert Hodges, Flavio Percoco Premoli of The Net Planet and Continuent employee Stephane Giron as part of a hackathon at the Open DB Camp in Sardinia.


View the original article here

Read On

RHEL 6.1 Lays Foundation for Future Servers

0 komentar

Commercial Linux distributor Red Hat has moved the 6.1 release of its Enterprise Linux from beta to prime time.…


View the original article here

Read On

RHEL 6.1 Rolls Out

0 komentar

Red Hat Enterprise Linux 6.1 boasts enhancements, patches, and security updates.


View the original article here

Read On

Say Hello To Linux 3.0; Linus Just Tagged 3.0-rc1

0 komentar

For anyone that was doubting Linus Torvalds would finally part ways with the Linux 2.6 kernel series, you lost your bets. On the eve of Memorial Day in the United States and his departure to Japan for LinuxCon, Linus Torvalds just tagged Linux 3.0-rc1 in Git...


View the original article here

Read On

SCO Becomes TSG in Legal Name-Shift

0 komentar

The SCO Group Inc is no more as the company changes its name to TSG Group Inc


View the original article here

Read On

Six Quick Tips Get You Started with Open Compliance

0 komentar

More and more companies turn to Linux and other open source software for great functionality and competitive advantage in product development.  When they do so, most organizations recognize their responsibility to comply with open source license obligations.  They embrace the responsibility as part of using open source.  Unfortunately, some companies remain unaware of their obligations or choose to ignore them.  Others are simply daunted by the task of putting a compliance program in place.  They needn’t be:  There are lots of resources to turn to for guidance.  The Linux Foundation has created comprehensive training courses on compliance that are delivered confidentially onsite to help companies meet their responsibilities.  We also have instructive white papers and a great checklist of compliance actions compiled from experiences of industry-best compliance programs, and the FOSSBazaar governance community to share thoughts about compliance challenges and solutions.


But those resources may be most useful to companies that have committed themselves to compliance and understand the scope of the task before them.  What about companies that know they have to do something but haven’t even thought about where to start?  To help those companies, we’ve recorded a webinar titled “Six Tips for Getting Started with Open Source Compliance.”   It’s readily understandable, even by someone whose expertise lies outside software development.  The webinar is a great place to start with compliance and lays the groundwork for the more comprehensive Linux Foundation compliance training later.


Who should listen to the webinar?  Whoever will be responsible for establishing their company’s open source compliance program.  This could be someone in product development, or the software engineering department, or the Law Department, or Corporate Compliance, or Supplier Management, or QA.  Whoever it turns out to be, they need to get things rolling and learn enough to designate or recruit the right people to implement a compliance program.


So, check out the Six Tips webinar. It’s well worth the 15 minutes you’ll spend. While you’re at it, take a listen to the Introduction to SPDX™ webinar. Phil Odence provides a great three-minute introduction to the Software Package Data Exchange project, which will transform the way companies inform their trading partners of the open source content in the software they deliver. After listening, you’ll want to visit the SPDX webpages to learn more about the project.


It’s time to get started!


View the original article here

Read On

Snort Creator talks Razorback and ClamAV

0 komentar Minggu, 05 Juni 2011

Snort creator and CTO of Sourcefire, Marty Roesch, talked to The H about the next generation of malware detection in development at Sourcefire – Razorback


View the original article here

Read On

Stable Kernel 2.6.38.7

0 komentar

The 2.6.38.7 stable kernel update is out with another set of important fixes.


View the original article here

Read On

Stable Kernels 2.6.32.41 and 2.6.33.14

0 komentar

The 2.6.32.41 and 2.6.33.14 stable kernels have been released with the usual pile of important fixes.


View the original article here

Read On

Stop Spam with Bogofilter on Linux

0 komentar

There are two things in life that are assured: taxes and spam. There is little that can be done about taxes. As for spam, you can find a solution. Ideally, ISPs handle this and spam never makes its way to an end user system. This rarely happens, though, so measures must be taken to prevent the deluge from coming.


There are a number of tools available for Linux to prevent spam. One of those tools, Bogofilter, is an incredibly well done system that seamlessly integrates into both Evolution and Claws-Mail. This tutorial will walk you through the process of getting Bogofilter integrated with two of the more popular Linux email clients as well as helping you train Bogofilter for spam and ham.


Some distributions (such as Ubuntu 11.04) will already have Bogofilter installed by default. If unsure, open a terminal window and issue the command which bogofilter. If the command returns /usr/bin/bogofilter, congratulations, Bogofilter is installed. If not, it's time to install it.
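If you are setting up several machines, that check is easy to script; a small sketch:

```shell
#!/bin/sh
# Report whether bogofilter is already on the PATH.
if command -v bogofilter >/dev/null 2>&1; then
    echo "Bogofilter is installed at $(command -v bogofilter)"
else
    echo "Bogofilter not found - install it from your package manager"
fi
```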


The easiest method of installation is to open up your Add/Remove Software tool (such as The Ubuntu Software Center or Synaptic) and search for "bogofilter". The results should come back with the Bogofilter application. If so click to install.


Although Evolution includes everything it needs to integrate with Bogofilter, Claws-Mail needs a little help. With the Add/Remove Software tool open, do a search for "claws" (no quotes). Within the results there should be a package called claws-mail-plugins. This package must be installed in order for Claws-Mail to be able to integrate with Bogofilter.


That should do it for the installation. Now for the configuration of the mail clients.


The first step for Evolution is to ensure the Bogofilter plugin is enabled. To do this fire up Evolution and then click Edit > Plugins. When the new window opens, scroll down until the Bogofilter Junk Filter plugin is visible. Make sure that plugin is checked to be enabled. Once enabled, close the Plugin window. Now it's time to configure Evolution to work with Bogofilter.


With Evolution open, click Edit > Preferences and then click on the Junk tab. In this tab (see Figure 1) there are a few settings to take care of. The first configuration is to make sure Evolution is using Bogofilter. In the Default Junk Plugin drop-down select Bogofilter.



Once Bogofilter is selected as the default junk plugin, Evolution will let you know if everything necessary is installed.


With Bogofilter set as the junk filter, it's time to take care of the other options, including:

Check incoming messages for junk: This is set by default and must be on in order to have Bogofilter check new mail.
Delete junk messages on exit: If you want the Junk folder emptied upon exit of Evolution, set this. This option can be set to run upon every exit, once per day, once per week, or once per month.
Check custom headers for junk: This option allows the user to configure a custom header to be marked as junk. Some ISPs use spam tools that add extra headers to email (such as X-Spamscore:). More on this in a moment.
Do not mark messages as junk if sender is in my address book: This is helpful if you find Bogofilter catching false positives, marking mail as junk that should not be marked as such.

Let's add a custom header to the Bogofilter configuration. This custom header assumes the ISP tags email with the X-Spamscore header and rates the mail with the standard system (0-10). An ISP might set a lower spam-score threshold in order to avoid stopping legitimate email from getting into the inboxes of users. Say the ISP rejects at level 10 and allows all other mail through (with the new X-Spamscore header added). If email marked 7 or higher is consistently spam, custom headers could be added for 7, 8, and 9 to make sure all email with a spam score of 7 or higher is caught by Bogofilter.


To create the headers click the Add button and then create the filter for a spam score of 7 with the following information:


Name: Spam Score 7


Header value contains: X-Spamscore: 7


Now create headers for spam scores of 8 and 9 and it's complete.


I want to address one last issue before I move on to Claws-Mail; this applies to both Evolution and Claws-Mail. Just because the Bogofilter system is set up, don't assume it will start working perfectly out of the box. Most spam filters must first be trained before they will work. How is this done? Simple: a collection of both ham (good email) and spam (junk email) must be marked as such to begin the training. To this end, I keep a small archive of spam handy that can be used to train a spam filter. A good collection of 100+ spam emails will help begin the training process. But it's not sufficient to simply open up a large folder of spam and mark it as such. The ham must also be marked, so the spam filter learns what legitimate email looks like as well. For this, simply allow email to begin pouring into the email client, make sure everything in the inbox is ham, select the entire contents of the inbox, and mark all of it as ham.
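If you keep your spam and ham archives as mbox files, the same training can be done in bulk from the command line with Bogofilter's own flags: -s registers spam, -n registers ham, -M treats the input as an mbox, and -I reads it from a file (see bogofilter(1)). A guarded sketch; the archive paths are hypothetical:

```shell
#!/bin/sh
# Hypothetical archives of known-bad and known-good mail.
SPAM_MBOX=$HOME/mail-archive/spam.mbox
HAM_MBOX=$HOME/mail-archive/ham.mbox

if command -v bogofilter >/dev/null 2>&1 && [ -f "$SPAM_MBOX" ] && [ -f "$HAM_MBOX" ]; then
    bogofilter -s -M -I "$SPAM_MBOX"   # register everything in the archive as spam
    bogofilter -n -M -I "$HAM_MBOX"    # register the good mail as ham
else
    echo "bogofilter or the mail archives are missing; nothing trained"
fi
```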


Your spam filter has now had a great first lesson in what is spam and what is ham.


To configure Claws-Mail for Bogofilter, the plugin must first be loaded. To do this click Configuration > Plugins. With the Plugins window open, click the Load button, which will open up a file navigator. Locate the bogofilter.so file, select it, and click Open. The plugin is now loaded and ready for configuration.


To configure Bogofilter on Claws-Mail click Configuration > Preferences and then scroll down to the Plugins section in the left navigation. Select the Bogofilter entry to reveal the configuration options (see Figure 2.)


Claws-Mail offers a very different take on the Bogofilter setup.


There are really two main configurations that need changing here:

Save spam in: This is the folder that will hold messages tagged as spam.
When unsure, move to: This folder will hold messages that Bogofilter is not 100% sure of.

The Unsure option is not necessary. However, I find this setup ensures you do not lose false positives to the ether. When messages are sent to the Unsure folder, it is just a matter of opening the folder, selecting the false positives, and marking them as ham. In doing this, Bogofilter gets further training and, over time, fewer messages will wind up in the Unsure folder.


Like Evolution, senders in the address book can be automatically whitelisted. To enable this, check the box Whitelist senders found in address book. When selected, either an individual address book or all books can be chosen.


Don't forget, it is important to help train Bogofilter for Claws-Mail, just as described above. The more training Bogofilter gets, the more accurate it will be.


As with anything, nothing is perfect and stopping spam is one of the best examples of that in the world of computers. As much as we try, spam still gets through. But by doing everything we can, the amount of spam that actually makes it through to the inbox is greatly lessened. Follow the processes outlined above and spam will soon become nothing more than an insignificant bother instead of a menace.


View the original article here

Read On

Switched On: Adding to Android's Army

0 komentar
Android, as Andy Rubin (no relation) has pointed out on multiple occasions, plays a game of numbers. And at Google I/O, the company carrying on its development shared some large ones: 100 million activated devices with 400,000 being added each day. However, like in many games, different players can catch up or overtake each other at different points. Just ask Nokia and RIM. To stay on top, operating system vendors implement strategies that lock consumers in. The more money consumers sink into iPhone apps, for example, the more incentive they have to stay with that platform; the same is true for accessories that use Apple's 30-pin dock connector that has been around since the third-generation iPod.

View the original article here

Read On

TDF Announces the Engineering Steering Committee

0 komentar

The Document Foundation has announced the members of the Engineering Steering Committee. "The 10 members of the ESC are Andras Timar (localization), Michael Meeks and Petr Mladek of Novell, Caolan McNamara and David Tardon of RedHat, Bjoern Michaelsen of Canonical, Michael Natterer of Lanedo, Rene Engelhard of Debian, and the independent contributors Norbert Thiebaud and Rainer Bielefeld (QA). The ESC convenes once a week by telephone to discuss the progress of the time-based release schedule and coordinate development activities. Their meetings routinely include other active, interested developers and topic experts."


View the original article here

Read On