Calxeda Announces ARM Server Alliance

0 comments Saturday, 25 June 2011

Officials with Calxeda, the startup that's building ARM-based chips for low-power data center servers, announced a "Trailblazer" program designed to create an ecosystem around its technology. But, while Calxeda touted support from Ubuntu Linux sponsor Canonical, among other companies, there's been no hint from Microsoft that it will create a server edition of its ARM-based "Windows 8." ...


View the original article here

Read On

Can Project Harmony Streamline Rules for Open Source Contributions?

0 comments

OStatic's open source theme of the day is whether open source contributions are tracking with increases in open source usage, especially by businesses and organizations. In this post, we discussed how many organizations that now use open source aren't giving back at all. On this topic, one of the more interesting projects currently running isn't an open source software development project, but rather a coordinated effort to establish rules and guidelines for making contributions to open source. It's called Project Harmony; it's heavily backed by Canonical, and on June 23 it will mark its first year of notable effort to establish rules for open source contributions.


According to the Project Harmony page:



"Project Harmony is a community-centered group focused on contributor agreements for free and open source software (FOSS). As a group, we represent a diverse collection of perspectives, experiences, communities, projects, non-profit and for-profit entities. In that diversity, we share a common belief in the future of FOSS, and a common interest in using our skills (whether they're legal, organizational, editorial, technical, or otherwise) to the benefit of collaborative FOSS communities."


View the original article here

Read On

Chrome May Become Ubuntu's Browser

0 comments

Canonical founder Mark Shuttleworth says there is "a real possibility" that Chrome will replace Firefox as the bundled browser in future releases of the Ubuntu Linux distribution.


View the original article here

Read On

Cloud Adoption Survey Says Linux is OS of Choice

0 comments

Cloud.com, BitRock, and Zenoss have surveyed more than 500 members of the open source and systems management community about trends in cloud computing and users' preferences and plans. The result? There's a strong correlation between open source and cloud usage — and the survey found that Linux looms large in plans for deployments.


The survey was taken by 521 IT professionals in a broad variety of institutions, with 9% working for public companies, 51% for privately held companies, 11% in educational institutions, 5% in government, and 4% at non-profits. The respondents include CTOs (11%), IT managers (18%), technical support staff (7%), and developers (12%).


Planning for cloud infrastructure varies widely, with only 7% having an "approved cloud computing strategy," and 20% with "no plans to develop" a cloud computing strategy. About 44% of the respondents have at least a partially developed strategy for cloud computing and — good news for the marketing folks — 32% are still gathering input for their 2011 cloud computing strategy. (The survey was run earlier this year, so those who were still gathering input may well have finished by now.)


Now that we have a profile of the people responding, let's take a look at the results. One of the most interesting, here at Linux.com at least, is the OS that respondents plan to run. Overwhelmingly, Linux was on the shopping list for 83% of the respondents — compared to 66% for Windows, 8% looking to BSD, only 5% for Solaris, and 12% choosing "other." Naturally, many shops are looking at mixed deployments to satisfy needs for applications that run only on Linux or Windows, but it's clear from the survey that Linux is doing quite well.


And it's not just Linux, of course: open source in general is doing quite well too. Most organizations (69%) plan to use open source "whenever possible," and only 3% of the organizations are against using open source in the cloud.


What do organizations want to do with all this open cloud Linux-based goodness? Right now there's a fairly even mix of plans to use cloud computing for compute (59%), storage (51%), and Platform as a Service (PaaS) at 47%.


Application choice for the cloud shows strong interest in content management and Web publishing (57%), document management (39%), and network monitoring and management (34%). See Figure 1 for the chart of results.


The majority of organizations want to run cloud computing on their own hardware, with 57% of the respondents wanting to use their own hardware and facilities. Only 18% wanted to use dedicated hardware at a managed service provider, and 23% of the organizations want to use their own hardware at a service provider using a shared infrastructure.


Why are organizations turning to cloud computing? The reasons are varied, and most organizations have a number of reasons for wanting to use cloud computing. The top reason, at 61%, is scalability, followed closely by cost savings (54%) and ease of management (53%).


My favorite reason, redundancy, came in fourth with only 49% of respondents. Greater flexibility also came in at 49%, and elasticity was right there with 48%. It's a bit surprising that elasticity isn't higher on the list, given that scalability features so highly. You'd think that the two go hand-in-hand, with a need to meet fluctuating demand. See Figure 2 for the full results.


The organizations also have some notions about what the cloud is good for. Though only 54% listed cost savings as a reason for cloud computing, 68% believe it will save on hardware costs, and 66% believe it will be faster to deploy infrastructure. And 57% say that it will reduce the burden of systems management. Though less than half of the respondents cite elasticity as a reason for choosing the cloud, 51% say that elasticity is a benefit of cloud computing.


It doesn't look like most of the organizations are depending too heavily on cloud computing just yet. Many of the organizations (61%) plan to use the cloud for development and testing. Far behind development comes Software-as-a-Service (SaaS), with 37% of the organizations planning to use the cloud to offer SaaS. Note that this doesn't measure the companies that want to use SaaS that's hosted in the cloud. A third (33%) of the organizations want to use cloud computing to mimic public cloud services behind their firewall, and just 27% want to use cloud computing for High Performance Computing (HPC).


Cloud computing does have some hurdles to overcome. A lot of respondents are worried about the security of the cloud, and inertia (otherwise known as a "conservative IT strategy") is in the way for 30% of organizations. The lead inhibitor, though, is training — 43% of organizations see a lack of cloud training as a problem for deploying cloud computing.


It's also worth noting that regulatory compliance is cited by more than 20% of the organizations. That's worth paying attention to for those companies supplying solutions related to cloud computing. No doubt regulatory compliance features highly on the list of the 9% of public companies that are mulling cloud computing.


Security is also seen as a challenge for management in the cloud, with 36% of users saying that security is a headache, while only 12% said that performance management is a problem. Configuring guest instances is a challenge for only 10% of the users, and provisioning Linux instances came in dead last at 7%.


Finally, a whopping 53% say that their existing systems management tools do not translate well to managing their cloud computing environment. That's something for systems management vendors to pay attention to.


If you're hoping to make use of the survey in your own work, note that the survey results are provided under the Creative Commons Attribution 3.0 Unported (CC BY 3.0) license.


The bottom line? It looks like cloud computing is following a typical adoption pattern. Organizations are finding out what cloud computing is good for, and what it isn't. Naturally, Linux is featuring significantly in most organizations' plans for cloud computing — as is open source software.


Does this fit your expectations? Tell us in the comments how your organization is using Linux and cloud computing!


View the original article here

Read On

Debian Squeeze, Squid, Kerberos/LDAP Authentication, Active Directory Integration And Cyfin Reporter

0 comments

This document covers setup of a Squid Proxy which will seamlessly integrate with Active Directory for authentication using Kerberos with LDAP as a backup for users not authenticated via Kerberos. Authorisation is managed by Groups in Active Directory. This is especially useful for Windows 7 clients which no longer support NTLMv2 without changing the local computer policy. It is capable of using white lists and black lists for site access and restrictions.


View the original article here

Read On

Development Release: Scientific Linux 5.6 RC1

0 comments

Troy Dawson has announced that the first release candidate for Scientific Linux 5.6 is out and ready for testing: "Scientific Linux 5.6 RC 1 is now available. We have pushed out the latest update to Scientific Linux (SL) 5.6. Changed since beta 3: SL 5.6 has a new....


View the original article here

Read On

Development Release: Scientific Linux 6.1 Alpha 1

0 comments Friday, 24 June 2011

Troy Dawson has announced the availability of the first alpha release of Scientific Linux 6.1, a distribution built from source packages for Red Hat Enterprise Linux 6.1 and enhanced with extra applications useful in academic environments: "The first alpha for Scientific Linux 6.1 has been released. This release....


View the original article here

Read On

Distribution Release: Chakra GNU/Linux 2011.04-r1

0 comments

Phil Miller has announced the release of Chakra GNU/Linux 2011.04-r1, a new respin of the Arch-based desktop distribution: "The Chakra development team is proud to announce the first respin of 'Aida'. Some weeks passed since Chakra 2011.04, we have added lots of package updates, KDE got updated to....


View the original article here

Read On

Distribution Release: Toorox 06.2011

0 comments

Jörn Lindau has announced the release of Toorox 06.2011 "GNOME" edition, a Gentoo-based distribution showcasing the new GNOME 3 desktop: "A new version of Toorox 'GNOME' has been finished. This one contains the GNOME desktop 3.0.2. What's new? The kernel is Linux 2.6.39-gentoo and USB 3.0 support has....


View the original article here

Read On

Do More with Tor: Running Bridges and Invisible Services

0 comments

Last time, we took a look at basic browsing with Tor, the anonymizing Web relay network. At the very end of that article, we touched on how to actively participate in Tor by running your own relay. That's when your local copy of Tor functions as a node in the network, funneling encrypted Tor traffic peer-to-peer to help increase the overall Tor network's bandwidth. But there is even more you can do, such as running invisible services and bridges for those who need even more privacy than vanilla Tor provides out of the box.


As a refresher, all active Tor nodes are called "relays" — they pass packets between other relays. Each connection is encrypted, and no relay knows the starting point or ultimate destination of any of the traffic it relays. That's what makes Tor so hard to snoop: the route is calculated out-of-band (so to speak), and because no one on the network knows it, no one can intercept it.


But the end-user's HTTP (or IM, or IRC, or whatever else) traffic does have to enter the Tor network somewhere. By default, whenever you launch Tor, it requests addresses of some Tor network "on-ramp" relays. Although the topology of the Tor network is constantly changing, and although the connection between the user and the on-ramp is encrypted, these addresses are public information, so adversaries could still watch the user's connection and interfere somehow — even by crude means such as switching off the user's connectivity.


The solution is to have secret, unpublished on-ramp relays. The Tor project calls them bridges, in order to denote the distinction. How many bridges there are is unknown, because there is no list. The most an ISP or attacker can do to block Tor is cut off access to the public relays, but if a user has the address of a Tor bridge, he or she can still connect.


Running a Tor bridge is as simple as running a normal Tor relay. The simplest way is to install the Vidalia GUI client, which allows you to start and stop Tor functionality on demand. The project recommends you use the latest files directly from them, rather than use a distribution's package management system, because security fixes can take too long to pass through distro review. The Linux/Unix download page links to repositories for Debian-based, RPM-based, and Gentoo distributions, as well as the three BSD flavors and source packages.


Note that this is not the "Browser Bundle" which is geared towards end-users only. You'll need to install the "vidalia" package, which will pull in the necessary Tor dependencies. Launch Vidalia, then choose the "Setup Relaying" button. Selecting "Relay traffic for the Tor network" configures your node as a standard relay. "Help censored users reach the Tor network" is the bridge option.


There are a few options to consider in the "Basic Settings" tab. Stick with the default relay port (9001) unless you know that your ISP blocks it. Unless you have a compelling reason not to, the project also wants you to provide some sort of contact information — but it is not published. Your IP address and port number are all that Tor users see. By default, you should check "Mirror the Relay Directory," because this is how Tor users establish connections. At the very bottom, you see "Automatically distribute my bridge address." To run a generic bridge, leave this checked. If, however, you are setting up your bridge for the benefit of some particular friend (including yourself), you can leave it unchecked — but you will need to tell the person in question your bridge IP address and port number.
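If you'd rather not use the GUI at all, the same choices can be made directly in Tor's torrc configuration file. Here is a minimal sketch of a bridge configuration; the port and contact address are placeholders, and setting PublishServerDescriptor to 0 corresponds to unchecking the automatic-distribution box:

# torrc sketch for an unpublished bridge
ORPort 9001                     # default relay port; change it if your ISP blocks it
BridgeRelay 1                   # run as a bridge rather than a public relay
ContactInfo admin@example.com   # optional, and not published to users
PublishServerDescriptor 0       # keep this bridge out of the public directories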


You'll notice that the "Exit Policies" tab is grayed-out when you configure a bridge. When running a normal relay, you can set options here to limit access to particular types of traffic or block specific site requests from exiting the network at your node. But since a bridge is an entry point, those options do not apply.


That's all there is to bridge setup. To use a bridge as your own entry point to the Tor network, visit Vidalia's Network tab. Check the "My ISP blocks connections to the Tor network" option, which will reveal a list box where you can enter individual bridges. If someone you know is running an unpublished bridge, you can enter it directly. Otherwise, you will need to request bridge information from the Tor project.
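On the client side, too, the same settings can go straight into torrc instead of Vidalia. This is a sketch with a made-up bridge address:

UseBridges 1
Bridge 203.0.113.7:9001   # the IP:port of a bridge you were given privately or by the project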


How that works securely is a bit complicated. You can request a bridge list by visiting a special SSL-encrypted page on the Tor site; my understanding is that the project keeps track of what bridges it sends to what requesting IPs, so that evildoers cannot harvest the entire bridge collection. You can also send an email to the Tor project, and as long as you use one of the few well-known email address domains, it will return a set of bridge IDs. I assume that this information is also tracked; how to allow access to bridges without compromising their security is a hard problem.


But however you get them, simply enter the bridge IP:port information into Vidalia's Network tab, and you can browse and network without getting blocked. All bridge IDs consist of an IP address and port number separated by a colon, and can optionally include a cryptographic fingerprint, although that feature does not seem to be in widespread use.


Essentially, bridges simply offer an alternate, harder-to-block access method to the Tor network. A more intriguing use of the software is to run an IP-based service that can only be accessed through Tor (as opposed to the Internet at large). You can publish a Web site, run a POP/IMAP/IRC server, or even make an SSH server accessible, all without ever revealing your address to visitors, and even from behind a firewall.


How is that possible? The actual traffic is routed through the Tor network, just like any other Tor data. The tricky part is making the service reachable. Tor does this by maintaining a distributed hash table of services, each of which is identified by a pseudo-random host name in the .onion domain. Whenever a new service launches, it connects to a few Tor relays (like any other relay would), then tells the hash database which relays those are. When a client makes a request to the ABCDEFWXYZ.onion host, the hash database picks one of the relays associated with the service and forwards the request on. The relays involved never know that the packets they are carrying are destined for a particular service, because the data is mixed in with all other Tor-based, encrypted traffic.


There are a few other checks-and-balances involved to protect everyone; if you're interested, the entire protocol is documented on the Tor Web site. There you can also find a link to the Tor hidden service search engine (based on DuckDuckGo), as well as an example Web site run by the project. A key point to remember, however, is that you must be running Tor on the client side to access these services, because they are accessible only within the Tor network.


It is also important to remember that the hidden service should probably only connect to Tor on the server side, too. It can be extremely tricky to run a normal Web server setup and a Tor-based .onion site from the same Apache configuration, and someone who finds the hidden content on your existing IP could then prove that you are the host, which defeats the purpose of running a "hidden" service entirely.


Tor recommends you take a look at a lightweight Web server like thttpd. Whichever HTTP server you choose, you should make it accessible only to localhost. Next, in your .torrc configuration file, find the location-hidden services block, and add a pair of lines like

HiddenServiceDir /some/path/to/a/place/where/you/can/keep/files/for/your/hidden_service/
HiddenServicePort 80 127.0.0.1:5222

The HiddenServiceDir directory is merely a location where Tor will dump a text file containing the .onion address for your service. The HiddenServicePort line has three parts: the "fake" port number advertised to visitors (80 here, to serve as a standard Web server), the address to bind to (here, 127.0.0.1, which is localhost), and the local port number (5222). You can also provide this information in Vidalia, in the Setup: Services tab.
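On the server side, the matching piece is an HTTP daemon bound to localhost on that local port. With thttpd, for example, a couple of lines like these in thttpd.conf would do (a sketch; the port matches the HiddenServicePort example above):

# thttpd.conf sketch: serve only on localhost for the hidden service
host=127.0.0.1   # listen only on the loopback interface
port=5222        # the local port from the HiddenServicePort line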


Now, when you restart Tor, it will fetch a .onion host name for you, and save a private key file in your HiddenServiceDir directory. This key verifies that you are, in fact, the service listed in the distributed hash database, so that clients can connect with confidence — so don't lose it. That's all there is to it; you can set up as many services as you like, running anything that you care to configure and that can be ferried by Tor.
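Retrieving the generated address is straightforward; assuming the HiddenServiceDir used in the example above, something like this will print it:

cat /some/path/to/a/place/where/you/can/keep/files/for/your/hidden_service/hostname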


How you spread the word about your service is another matter — if you post about it on the public Internet, your foes can almost certainly associate you with it. There are in-Tor-only message boards, however, as well as community forums where people often post links to .onion services. Of course, that's assuming you want to publish your content. As with bridges, you may also need to make something available only to specific people, or only for a short amount of time, in which case person-to-person is probably best.


There is definitely a trade-off involved with both of these techniques. You cannot simply run an invisible Tor bridge and expect dissidents to find it and use it — they will have to set up and run Tor. Likewise, you cannot run an anonymous Web server dishing out Truth by the barrel-full to the whole wide world — you can only make it accessible to other people running Tor. Nevertheless, these are both exciting opportunities that without Tor wouldn't exist at all. The initial Tor concept didn't include either — it just goes to show you that a solid technology like Tor has more and better uses than casual Web surfing, as long as users are willing to push the boundaries. Who knows what else can be built on top of Tor?


View the original article here

Read On

First Step Towards openSUSE 12.1 with Milestone 1

0 comments

Milestone 1, the first step towards the upcoming openSUSE 12.1 release, is now available. It is the first milestone, hence far from stable, but the images are now finally building, so we have a good starting point for further development.


With over 800 updates, including minor and major updates, the current milestone is ready for some serious testing. This iteration already sees some major upgrades taking place, with the kernel moving on to 2.6.39 and GNOME to 3.0. In addition we have popular GNOME applications like Evolution, Eye of GNOME and others all synchronized, and KDE’s Plasma Desktop coming along nicely with a minor version upgrade to 4.6.3. You will also find upgrades to GCC, glibc, Perl, Python, and the RPM package manager. Much work has also been put into the much-lauded systemd which has now been upgraded to version 26.


You can read more about the progress in Andreas Jaeger's recent blog post on Factory.


As expected from a development release, there is still a lot of work to do, so your input at this early stage will be a huge help in making the final release into the beautifully polished work that we aim for. openSUSE 12.1 Milestone 1 has a list of the most annoying bugs here; please add issues you find and help fix them. As Will Stephenson recently blogged, fixing an issue is a matter of BURPing on build.opensuse.org! Find a how-to here.


View the original article here

Read On

Friday Five: Linux Stories for the Weekend

0 comments Thursday, 23 June 2011

It's Friday, and your body may still be at work — but your brain has checked out for the weekend. Let's give it something to do, by checking out these five posts you might have missed over the week.

Here at Linux.com we link to the stories of the day related to Linux and open source. But sometimes I run into posts and articles that don't quite fit our news categories, or maybe they're just worth calling out in particular. So I wanted to try something new, and post five pieces on Friday that are really worth reading and thinking about. (Hat tip to Ron Miller, from whom I've borrowed the idea...)

Why a JavaScript hater thinks everyone needs to learn JavaScript in the next year: A strong argument in favor of JavaScript, food for thought for anybody who's thinking about learning a new (or their first) programming language.

Samba 3.6 release soon, Samba 4 pushed to late 2011, 2012: Paula Rooney looks at the upcoming Samba release and the long road to Samba 4.0.

Rebooting: Matthew Garrett looks at what happens when you reset a computer. (Technically, this was the week prior, but it's interesting and this is the first week I'm doing this feature...)

Presenting GNOME Contacts: This looks pretty snazzy. Allan Day previews GNOME Contacts, a feature for GNOME 3.2 — and a bunch of mockups that look quite nice.

Living off Freedom: Lars Wirzenius, a longtime Debian contributor, writes about being laid off and pondering doing crowd-funded free software development. Would you pay someone to develop free software?

And of course, I'm sure you've checked out today's Weekend Project on Xfce from Carla Schroder, and my piece from earlier this week on Linux Learners' Student Day, and our other tutorials. Thoughts or comments? Suggestions for next week's five? Let me know, and have a great weekend!


View the original article here

Read On

From the MeeGo Conference: The State of MeeGo

0 comments

Last week I was in San Francisco for MeeGoConf SF, the second large-scale MeeGo event. A lot has changed since the Dublin get-together last November — or at least that's how it looks from the outside. Nokia (one of the co-founders of the project) hired a new CEO from Microsoft, who announced in February that the Finnish phone maker would start using Microsoft's Windows Phone 7 instead of its own smartphone operating systems. To a lot of mobile-phone-industry watchers, that looked like bad news for MeeGo, and it certainly disappointed a huge portion of Nokia's MeeGo and Qt engineers, not to mention Maemo fans. But there is more to the MeeGo picture, which frames those events in a different light — as last week's event showed.


The truth is, handsets aren't the whole story for MeeGo — they're simply the current darling platform of the gadget-blog set. In fact, they may not make up a significant portion of MeeGo's revenue stream for device makers, considering that the margins on handsets get smaller and smaller all the time. The Linux Foundation's Jim Zemlin raised that point in Monday's keynote (note that the LF hosts Linux.com, in addition to curating MeeGo project resources, although it does not provide donations or engineering resources), which featured a cavalcade of industry types and community hackers showing off the latest work in MeeGo's 1.2 release.


Selling software services across non-PC computing devices, on the other hand, is a high-margin and ever-growing business. Everything from games to books to specialty content to cloud-based music and storage depends on a user installing an app on some device with a screen and an OS, but no keyboard. Right now, most consumers in the US think of these services on phones, and to a lesser degree tablets. But they're only thinking about today. It won't be long before connected televisions are commonplace instead of a high-end novelty, kids are demanding games and social apps in the back seat of the minivan, and a slew of other appliances need to connect to something, somewhere. When the other non-PC platforms catch up to handsets in volume, what are they going to run under the hood?


Zemlin made a very strong case for Linux being the answer, with twenty minutes' worth of slides and IDC analysis to substantiate it. Lower development cost, faster time-to-market, all the usual reasons any open source fan already knows. But buried deep within the statistics was an easy-to-overlook point that only MeeGo has going for it: when non-PC computing is pervasive, service vendors are not going to want to re-write their applications for every device.


That's MeeGo's secret weapon: because the core OS is the same, all applications are compatible across all of the deployment platforms. Right now, even the other embedded Linux vendors aren't pursuing cross-device compatibility (see LiMo or Mobilinux, for example).


In contrast, there were MeeGo vendors from a wide variety of hardware angles on display in San Francisco. Lots of tablets from the likes of Intel and WeTab, plus set-top boxes, car navigation units (several already on the road in China, plus Nissan's Chief Service Architect Tsuguo Nobe dropped by to announce the Japanese car-maker was adopting MeeGo), and even music consoles.


Without a doubt, Nokia's decision to ship Windows Phone 7 on its next round of smartphones (it still has one MeeGo phone nearing completion and scheduled for release this year) looks grim. It makes some people think the phone maker didn't find Linux and MeeGo up to snuff, and (worse) it keeps devices off the market. But the other MeeGo "verticals" don't seem to be affected in the least.


Of course, "the other verticals" essentially means OEMs: hardware device manufacturers. Most of them are interested in the MeeGo Core platform, with an eye towards customizing the interface to fit their own branding and "product differentiation" strategies. What is certainly more important to MeeGo's viability is the health of the developer and contributor community, which makes for another interesting MeeGo Conf assessment.


By all reports, turnout at MeeGo Conf SF was higher than it was last fall in Dublin (an unofficial estimate pegged it at 850; it is trickier to count because as a free event there are always no-shows and people who register, grab a badge, then wander away). Partly the higher attendance reflects the more tech-centric location, but the really interesting factoid was that attendance was "significantly" up among the non-sponsor-attendees — meaning the community.


Close to half of the program was targeted at developers: the application framework, designing interfaces for multiple devices, the build and packaging systems, etc. Based on session attendance and conversations, the MeeGo developer community remains fired up about the platform. On the other hand, it is also frustrated at the lack of commercial MeeGo-based consumer products. Set-top boxes and car dashboard units are good for the foundations of the project, but hardly generate buzz. Most of the community members I talked to were resigned to the fact that public perception of the project is simply going to stall until more devices reach users. They do seem to be using the out-of-the-spotlight time wisely, however, working on the QA process and infrastructure.


But there are two areas where the project leadership does not seem to be getting its message out to the broader open source community. The first is the compatibility between MeeGo and desktop Linux. While the core set of APIs is smaller, by and large porting desktop applications to MeeGo is not difficult, thanks to the availability of Qt, GTK+, and the usual Linux stack underneath. Yet there remains a perception that MeeGo is a different beast, and most ports of desktop applications to the platform come from MeeGo community volunteers, not the upstream projects themselves.


The second message misfire surrounds the demo UX layers. Officially, the screenshots you see of tablet, handset, and even IVI MeeGo front-ends are "reference" designs: the project expects device makers to customize (or even custom-build) their own user interface layers. That concept is a difficult one for the outside world to grasp; you routinely see reviews and criticism of the look and feel or application offerings in the reference UXes, and some of them — netbook in particular — are actually in regular use. By leaving them in the bare-bones, not-quite-polished state that ships in the semi-annual releases, the project gives the public at large a bad impression.


The "reference only" concept is probably a relic of Nokia's involvement; the phone maker steadfastly kept its own UI layer closed-source so that it could "differentiate" itself in the market. That's a fair enough concern, but the rest of the project doesn't need to let "unpolished" remain the public face of the MeeGo UX. Slicker UX releases can only help build excitement.


Luckily, there does seem to be some movement on that point; the N900 Developer Edition team is a volunteer squad building a more polished, usable MeeGo Handset experience for the Nokia N900 hardware. Better still, it is providing its changes back to the project. The community itself can build a slick UX layer.


Ultimately, as the hallway consensus indicated, MeeGo will probably continue to have a bit of a public perception issue as long as no mass-market phones and tablets are shipping for the gadget-hungry consumer sector. That's too bad, but that's life. It's good to see that the community is taking it in stride, however, and actually committing its time towards improving the platform. Android and Apple both had to wait until after their devices launched to start building a developer ecosystem: MeeGo actually has an advantage because it already has one just waiting for the hardware to hit the shelf.


View the original article here

Read On

Get Ready for LibreOffice 3.4

0 comments

LibreOffice 3.4 is approaching. The second release candidate for 3.4 was released on May 27, and has improvements for Writer, Calc, and much more. Ready for a look?


The upcoming release of LibreOffice 3.4 is slightly overshadowed by the announcement that Oracle is proposing OpenOffice.org as an Apache Incubator project. What does that mean for the free office suite landscape? It's far too soon to tell, though Apache president Jim Jagielski has reached out to The Document Foundation about cooperation. I'm cautiously optimistic that the projects will find a way to work together and benefit the rest of the FOSS community.


But for now, LibreOffice is the only project with an imminent release — so let's take a look at that and what's in store.


LibreOffice is focusing on more modest, time-based releases. This means that 3.4 doesn't have massive new features, but it does have a slew of performance improvements and minor new features that make life a little better. Let's take a look at some of the highlights.


Sadly, the LibreOffice folks still haven't implemented vi-like keybindings for Writer. (OK, that may only be sad for some of us, but still...) But Writer does have a few minor new features that you might enjoy.


If you do a lot of footnotes and bullets, you're going to find this release interesting. LibreOffice now has support for Greek (upper and lower case) letters for bullets — not something that I've had call for yet, but might be of interest to some users. (Testing this feature shows that I'm not, in fact, up on my Greek alphabet...) You'll find this in the Options tab of the Bullets and Numbering dialog.


If you're working on a paper or document that will be printed in color, or distributed as a PDF, you now have the option of defining the style and color of the footnote separator. You'll find that one in the Footnote tab of the Page Style dialog.


The LibreOffice folks have also been working on "flat ODF" import and export filters — so if you have a need for the .fodt document type, you might want to check this out. What's flat ODF? In a nutshell, it's uncompressed ODF — the standard ODF document is a zipped file with XML data. Most users probably will want to stick with the traditional ODF — but this is a way to use LibreOffice to produce documents that can be worked with by other programs.


The Pivot Table support in Calc has been stepped up a notch in 3.4, and heavy spreadsheet users may want to look at upgrading to 3.4 right away. You now have support for unlimited fields (as opposed to a limit of 8 fields) using Pivot Table. The Pivot Table feature now allows users to define named ranges as a data source as well.


The 3.4 release also adds support for OLE links in Excel documents — so if you're working with a lot of Excel documents, this means that you won't be seeing import errors from Excel docs with OLE links.


A couple of features have been refined to allow per-sheet support as opposed to global document support. Autofilter and named ranges can now be defined on a per-sheet basis rather than being applied to the entire document.


Are you an Ubuntu Unity user? If so, you now have support for the global menu.


The 3.4 release also has a few features for improved Graphite font handling and for drawing text with Cairo, as well as improved GTK+ theme support. This means that LibreOffice should look much nicer than 3.3 as a native Linux app.


Do you do presentations, and want to put them up on the Web? (One of the first — and most annoying — questions I get when doing a presentation is "will the slides be online?") Web export has been, let's say, not one of LibreOffice/OpenOffice.org's strong points. I tried it out with a couple of my old presentations, and it works like a charm now. So if you need to put a presentation online, LibreOffice 3.4 has you covered.


There are also the usual under-the-hood improvements, bug fixes, and so on. The 3.4 release is not a big leap forward — but it's an improvement, and it seems stable enough for most users to dive in.


Remember, the LibreOffice project recommends the .0 releases for more adventurous users. If you want to contribute to LibreOffice, or just like to live a bit closer to the edge, the 3.4.0 release is for you. Odds are, if you're reading this article, you like to try new features and want to be running the latest and greatest. But if not, then just hang on until the latest LibreOffice turns up in your favorite Linux distribution, or at least wait for one of the point releases (like 3.4.1 or 3.4.2) that will have cleaned up any nagging bugs that slipped through in 3.4.0.


According to the release notes, you should be able to install 3.4 side-by-side with 3.3. Of course, I read this after I removed the 3.3 packages from Linux Mint and installed 3.4 — but it should save you some trouble if you want to test 3.4 without removing the older release.


Naturally, you'll find packages for most major Linux distributions — the pre-release page has RPM and Debian packages for 32- and 64-bit systems.


The release plan calls for 3.4.1 to be out in late June, and for 3.4.2 to be released in late July. The next major release of LibreOffice is set for next February. Whether the OpenOffice.org news will impact LibreOffice releases, if at all, is unclear. With LibreOffice ramping up, OpenOffice.org apparently moving to the Apache Software Foundation, and Calligra picking up steam, it looks to be an interesting time for free office suites.


View the original article here

Read On

Get Your Fresh Kernels from openSUSE and Test Linux 3.0!

0 comments

The openSUSE kernel developers have recently announced that the kernel git trees have moved to kernel.opensuse.org/git, providing better reliability than Gitorious, which sometimes had trouble with clones of the nearly 1GB repository. The developers will keep syncing to Gitorious, though, so nothing should break. Moreover, kernel.opensuse.org offers an easy interface to install openSUSE kernels on a variety of openSUSE releases.


More things are planned for kernel.opensuse.org, including the introduction of LXR; cgit was already added during the writing of this article!


View the original article here

Read On

Github Has Surpassed Sourceforge and Google Code in Popularity

0 comments

Github is now the most popular open source forge, having surpassed Sourceforge, Google Code and CodePlex in total number of commits for the period of January to May 2011, according to data released today by Black Duck Software. This should probably come as no surprise, but it's good to have data to back up assumptions.


During the period Black Duck examined, Github had 1,153,059 commits, Sourceforge had 624,989, Google Code had 287,901, and CodePlex had 49,839.


Black Duck also found that C++ and Java were the most popular languages for commits in these forges during this period of time.


Black Duck didn't look at language-specific forges such as RubyForge, and it excluded shell, XML and assembler commits.


View the original article here

Read On

GPUs Demonstrate Potential for NASA Science Simulations

0 comments

At NASA’s Goddard Space Flight Center, simulations of Earth and space phenomena are getting a boost from GPUs. Early results on a new GPU cluster at the NASA Center for Climate Simulation (NCCS) demonstrate potential for significant speedups versus conventional computer processors.


“GPUs offer a large number of streaming processing cores for performing calculations. For instance, the NCCS cluster’s 64 NVIDIA Tesla M2070 GPUs have 28,672 total cores, nearly as many cores as the rest of the Discover supercomputer. However, since GPUs only perform simple arithmetic, they must be connected to conventional CPUs for executing complex applications. ‘A GPU does not see the outside world, and one frequently must copy data from CPU to GPU and back again,’ explained Tom Clune, Advanced Software and Technology Group (ASTG) lead in the Software Integration and Visualization Office (SIVO).”


View the original article here

Read On

Has Open Source Made Google’s Software Stack Obsolete?

0 comments Wednesday, 22 June 2011

Google has succeeded as much as, or perhaps more than, any other technology company ever. They have accomplished all of this while supporting, and being supported by, the open source community. They are known for their corporate drive to have the best of everything as much as for their corporate motto of "don't be evil". They have the best hardware, the best data centers, they hire the best people....


View the original article here

Read On

Hone Your Desktop Clipboards with Parcellite on Linux

0 comments

If you're a normal desktop Linux user, it has probably been a while since you thought about the X Window System. Modern distros let you configure your video card without ever touching xorg.conf, and by and large the window managers and GUI toolkits just work without getting in your way. But there is still one lingering pain point: the clipboard. Between most user apps, cut, copy, and paste work without a hitch, but terminals, text editors like Emacs and Vim, and a few other stragglers refuse to cooperate. If that sounds familiar, consider checking out Parcellite.


Parcellite is a lightweight clipboard manager that sits unobtrusively in the top panel, but smooths over the rough spots of inter-app cut-and-paste behavior. The latest version is 1.0.2, which fixes a few bugs that slipped in after the recent 1.0 stable release. You can get source code packages on the project's download site, but there is a pretty good chance your distribution already packages it. Distro-provided packages might not include the latest release, simply due to how recent it is, however.


Whether or not it is worth installing from source depends on your exact needs. 1.0.x adds a few interesting features over the 0.9 series provided by most distros, such as the ability to search through Parcellite's clipboard history in as-you-type mode, and the ability to reposition where the clipboard history pop-up appears. Neither one of those is core functionality, but your mileage may vary. If you do build from source, Parcellite depends on basic GNOME stack libraries: GTK+, Pango, Cairo, etc.; nothing out of the ordinary.


At the heart of Parcellite's behavior is its ability to synchronize the two distinct X selection methods: the CLIPBOARD selection and the PRIMARY selection. In typical X fashion, the official designations for both are in all caps, so we'll use that format to distinguish them from generic usage of either term.


The PRIMARY selection is whatever is highlighted by the cursor; you often see this functionality when you highlight text in a terminal window using the mouse. Mouse-1 (the left button) starts the selection, Mouse-3 (the right button) extends it, and Mouse-2 (the middle button) pastes it. This old relic is still useful today because the keyboard shortcuts Control-C and Control-X are often captured by the shell and mapped to other functions if you type them inside a terminal window, but mouse clicks aren't.


In contrast, the CLIPBOARD selection is what is copied whenever you explicitly give a "copy" command to the current app — generally by Control-C or a button press, but potentially a different key combination. Yes, the world would be simpler if PRIMARY and CLIPBOARD were always the same, but that isn't going to happen any time soon.


You can launch Parcellite from the command line with the parcellite command. The current version places a clipboard icon in the GNOME 2 panel's notification area; no word yet on an app-indicator for Unity or a GNOME Shell port. When in use, the panel icon is essentially a system-wide clipboard menu: click it and you'll get a history of your copied items (25 entries long, by default), from which you can select whichever suits your fancy. The history is preserved across sessions, which is a nice touch.


Right-click on the icon to open the preferences dialog. The Behavior tab holds the important options. At the top, the "Clipboards" section has three checkboxes: one for "Use Copy" (which makes Parcellite watch CLIPBOARD), one for "Use Primary" (which watches PRIMARY), and one for "Synchronize clipboards" (which keeps them in sync with each other). That is the core of the magic, of course, but the other options are worth looking at, too.


You can set how many history items to save — including none, in which case Parcellite just syncs CLIPBOARD and PRIMARY to whichever is most-recently-used — plus alter where the history list pops up (which honestly I have not discovered a use-case for). Another use case entirely is available in the "Miscellaneous" section, where you can have Parcellite copy only URLs to the history. That can be useful if you share a lot of links. Miscellaneous also lets you toggle on the search-as-you-type functionality, which can be good if you like to save a lengthy history.


Finally, the Display tab allows you to tweak a few display settings, such as the sort order (newest-to-oldest or vice versa), how many characters wide to make the history pop-up, and whether to truncate the beginning, middle, or end of selections that are too long.


If all you care about is freeing up the mental energy you used to expend keeping CLIPBOARD and PRIMARY straight in your mind, you're all set. But Parcellite has a few other tricks up its sleeve.


For one thing, despite the fact that the history menu shows you your actual selections, it also makes them all editable. That's not particularly handy if you're cutting and pasting in an email, but when you're writing something complex (code, an article, your new free software license), it can turn the clipboard into a handy saved-block-of-text storage system. Particularly when writing tutorials, I reuse lots of HTML fragments to mark up commands (imagine a long ifup command, littered with markup tags). With Parcellite I can write the first one, copy it, then cut down the selection to just the generic portion and re-use it at will.


Parcellite also supports configuring "actions" (i.e., custom commands) that take the clipboard selection as an argument. The simplest example would be an action like wget %s: the %s variable is replaced by the selection, so whenever you copy a URL, you can bring up Parcellite's actions menu and download it with one click. Users can also write actions to perform search-and-replace, dictionary lookup, or anything else. By default, Control-Alt-A opens the actions menu, but this is configurable.
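For illustration only, here are a couple of hypothetical action commands in the same vein; the %s placeholder is filled in with the current selection when the action runs, and the dictionary URL is just an example:

wget %s                                        # download the selected URL
xdg-open "http://en.wiktionary.org/wiki/%s"    # look the selection up in a Web dictionary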


Another nice touch is that Parcellite is usable from the command line. So if you have to SSH into your machine from a remote location, you can retrieve your clipboard history, or add to it. From a shell, you can type echo "http://some.url.youll/otherwise/forget" | parcellite and the URL will be added to your Parcellite selection history. Type parcellite -c to see your CLIPBOARD history, and parcellite -p to see your PRIMARY history (although naturally they will be the same if you sync them).
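Collected in one place, that command-line usage looks like this:

echo "http://some.url.youll/otherwise/forget" | parcellite   # add a URL to the history
parcellite -c   # print the CLIPBOARD history
parcellite -p   # print the PRIMARY history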


If you ask me, Parcellite's ability to unify the two main X selection methods is something that really ought to be built right into GNOME and the other desktop environments; it's just that useful, and I've never had occasion to want the two to be separate. On the other hand, I do use Emacs every day, and I really like having Emacs' slightly-different "kill" and "yank" behavior available when I'm writing. Owing to its heritage, Emacs is one of those programs that, like the terminal, has always exposed the difference between CLIPBOARD and PRIMARY.


You can use Parcellite with Emacs naively to keep the program's kill-ring synchronized with the system-wide clipboard, but if you're a heavy Emacser, you may want some more flexibility. For that, EmacsWiki.org has a nice summary of the copy-and-paste options and how they affect Emacs's interaction with the system clipboard, and thus with Parcellite.


Personally, I've never been unlucky enough to have to use Vim, but if that lot does befall you, I would recommend looking at the corresponding page on the Vim wiki for a detailed look at that application's copy and paste behavior. It, like Emacs, has different expectations of CLIPBOARD and PRIMARY, which can lead to conflicts if you're not careful.


These days, most other common applications will sync with Parcellite without incident, but if you do find another one with its own brand of clipboard behavior, be sure to share it. Parcellite is under active development again (after a few years off following the 0.9 series), so your feedback could improve the app and help out some other user at the same time.


View the original article here

Read On

How to Build a Distributed Monitoring Solution with Nagios

0 comments

With Nagios, the leading open source infrastructure monitoring application, you can monitor your whole enterprise by using a distributed monitoring scheme in which local slave instances of Nagios perform monitoring tasks and report the results back to a single master. You manage all configuration, notification, and reporting from the master, while the slaves do all the work.


This design takes advantage of Nagios’s ability to utilize passive checks – that is, external applications or processes that send results back to Nagios. In a distributed configuration, these external applications are other instances of Nagios.
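As a rough sketch of what a passive result looks like in practice, this is the external-command format that a slave (or any other process) can write to the master's command file; the file path and the host and service names here are examples, not part of the original article:

# format: [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<return code>;<plugin output>
printf "[%s] PROCESS_SERVICE_CHECK_RESULT;web01;HTTP;0;OK - site responding\n" "$(date +%s)" \
  > /usr/local/nagios/var/rw/nagios.cmd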


View the original article here

Read On

Install and Configure OpenVPN Server on Linux

0 comments

The VPN is very often critical to working within a company. With working from home being such a popular draw to many industries, it is still necessary to be able to access company folders and hardware that exists within the LAN. When outside of that LAN, one of the best ways to gain that access is with the help of a VPN. Many VPN solutions are costly, and/or challenging to set up and manage. Fortunately, for the open source/Linux community, there is a solution that is actually quite simple to set up, configure, and manage. OpenVPN is that solution and here you will learn how to set up the server end of that system.


I will be setting up OpenVPN on Ubuntu 11.04, using Public Key Infrastructure with a bridged Ethernet interface. This setup allows for the quickest route to getting OpenVPN up and running, while maintaining a modicum of security.


The first step (outside of having the operating system installed) is to install the necessary packages. Since I will be installing on Ubuntu, the installation is fairly straightforward:

Open up a terminal window.
Run sudo apt-get install openvpn to install the OpenVPN package.
Type the sudo password and hit Enter.
Accept any dependencies.

There is only one package left to install — the package that allows the enabling of bridged networking. Setting up the bridge is simple, once you know how. But before the interface can be configured to handle bridged networking, a single package must be installed. Do the following:

Install the necessary package with the command sudo apt-get install bridge-utils.
Edit the /etc/network/interfaces file to reflect the necessary changes (see below).
Restart networking with the command sudo /etc/init.d/networking restart.

Open up the /etc/network/interfaces file and make the necessary changes that apply to your networking interface, based on the sample below:

auto lo
iface lo inet loopback

auto br0
iface br0 inet static
    address 192.168.100.10
    network 192.168.100.0
    netmask 255.255.255.0
    broadcast 192.168.100.255
    gateway 192.168.100.1
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off

Make sure to configure the bridge section (shown above) to match the correct information for your network. Save that file and restart networking. Once the bridge checks out (see below), it's time to start configuring the VPN server.
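These standard bridge-utils commands (my addition, not part of the original walkthrough) will confirm the bridge actually came up:

sudo brctl show      # br0 should be listed with eth0 attached
ifconfig br0         # br0 should hold the static address configured above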


The OpenVPN server will rely on a certificate authority for security. Those certificates must first be created and then placed in the proper directories. To do this, follow these steps:

Create a new directory with the command sudo mkdir /etc/openvpn/easy-rsa/.
Copy the necessary files with the command sudo cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0/* /etc/openvpn/easy-rsa/.
Change the ownership of the newly copied directory with the command sudo chown -R $USER /etc/openvpn/easy-rsa/.
Edit the file /etc/openvpn/easy-rsa/vars and change the variables listed below.

The variables to edit are:

export KEY_COUNTRY="US"
export KEY_PROVINCE="KY"
export KEY_CITY="Louisville"
export KEY_ORG="Monkeypantz"
export KEY_EMAIL="admin@example.com"

Once the file has been edited and saved, several commands must be run in order to create the certificates:

cd /etc/openvpn/easy-rsa/
source vars
./clean-all
./build-dh
./pkitool --initca
./pkitool --server server
cd keys
sudo openvpn --genkey --secret ta.key
sudo cp server.crt server.key ca.crt dh1024.pem ta.key /etc/openvpn/

The clients will need to have certificates in order to authenticate to the server. To create these certificates, do the following:

cd /etc/openvpn/easy-rsa/
source vars
./pkitool hostname

Here the hostname is the actual hostname of the machine that will be connecting to the VPN.
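If several machines will be connecting, that same command can simply be repeated once per host. A small sketch (the hostnames here are examples, so substitute your own, and note that vars must be sourced in the same shell first):

cd /etc/openvpn/easy-rsa/
source vars
for host in laptop01 laptop02 desktop01; do
    ./pkitool "$host"
done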


Now, certificates will have to be created for each host needing to connect to the VPN. Once the certificates have been created, they will need to be copied to the respective clients. The files that must be copied are:

- /etc/openvpn/ca.crt
- /etc/openvpn/ta.key
- /etc/openvpn/easy-rsa/keys/hostname.crt (where hostname is the hostname of the client)
- /etc/openvpn/easy-rsa/keys/hostname.key (where hostname is the hostname of the client)

Copy the above using a secure method, making sure they are copied to the /etc/openvpn directory.
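For example, scp works well here. A sketch, assuming a client named laptop01 and a remote user carla (both hypothetical); the files land in the user's home directory first, since /etc/openvpn on the client is owned by root:

scp /etc/openvpn/ca.crt /etc/openvpn/ta.key \
    /etc/openvpn/easy-rsa/keys/laptop01.crt \
    /etc/openvpn/easy-rsa/keys/laptop01.key \
    carla@laptop01:~/

Then, on the client, move them into place with sudo mv ~/ca.crt ~/ta.key ~/laptop01.crt ~/laptop01.key /etc/openvpn/.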


It is time to configure the actual VPN server. The first step is to copy a sample configuration file to work with. This is done with the command sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/. Now decompress the server.conf.gz file with the command sudo gzip -d /etc/openvpn/server.conf.gz. The configuration options to edit are in this file. Open server.conf up in a text editor (with administrative privileges) and edit the following options:

local 192.168.100.10
dev tap0
up "/etc/openvpn/up.sh br0"
down "/etc/openvpn/down.sh br0"
server-bridge 192.168.100.101 255.255.255.0 192.168.100.105 192.168.100.200
push "route 192.168.100.1 255.255.255.0"
push "dhcp-option DNS 192.168.100.201"
push "dhcp-option DOMAIN example.com"
tls-auth ta.key 0 # This file is secret
user nobody
group nogroup

If you're unsure of any of the options, here's a quick rundown:

- The local address is the IP address of the bridged interface.
- The server-bridge is needed in the case of a bridged interface.
- The server will push out the IP address range of 192.168.100.105-200 to clients.
- The push directives are options sent to clients.

Before the VPN is started (or restarted), a couple of scripts are needed to add the tap interface to the bridge (if bridged networking is not being used, these scripts are not necessary). These scripts will then be used by the OpenVPN executable. The scripts are /etc/openvpn/up.sh and /etc/openvpn/down.sh.

#!/bin/sh
# This is /etc/openvpn/up.sh
BR=$1
DEV=$2
MTU=$3
/sbin/ifconfig $DEV mtu $MTU promisc up
/usr/sbin/brctl addif $BR $DEV

#!/bin/sh
# This is /etc/openvpn/down.sh
BR=$1
DEV=$2
/usr/sbin/brctl delif $BR $DEV
/sbin/ifconfig $DEV down

Both of the scripts will need to be executable, which is done with the chmod command:

sudo chmod 755 /etc/openvpn/down.sh
sudo chmod 755 /etc/openvpn/up.sh

Finally, restart OpenVPN with the command sudo /etc/init.d/openvpn restart. The VPN server is now ready to accept connections from clients (the topic of my next tutorial).


One thing that is a must for a VPN is that the machine hosting the VPN has to be accessible to the outside world — assuming users are coming in from the outside world. This can be done by either giving the server an external IP address or by routing traffic from the outside in with NAT rules (which can be accomplished in various ways). It will also be critical to employ best security practices (especially if the server has an external IP address) to prevent any unwanted traffic or users from getting into the server.
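As a rough illustration of the NAT route, here is a minimal iptables sketch. It assumes the gateway is itself a Linux box with its WAN side on eth0, that the VPN server sits at 192.168.100.10, and that OpenVPN listens on its default port of UDP 1194; adapt all of these to your network:

# forward incoming VPN traffic from the WAN side to the internal OpenVPN server
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1194 \
    -j DNAT --to-destination 192.168.100.10:1194
iptables -A FORWARD -p udp -d 192.168.100.10 --dport 1194 -j ACCEPT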


View the original article here

Read On

Instant Messaging in the Enterprise with Openfire

0 komentar

Used responsibly, instant messaging (IM) offers the benefit of instant communication and collaboration on the corporate intranet. However, many companies, fearing IM’s adverse effect on productivity, tweak their corporate firewalls to block all ports ferrying IM traffic. A better approach is to control the IM server by bringing it in-house. The Java-based cross-platform Openfire application makes it easy to host your own instant messaging server.


View the original article here

Read On

It’s a Wrap! (LinuxCon Japan 2011)

0 komentar



LinuxCon Japan 2011 just concluded in early June. While many industry and cultural groups have canceled scheduled conferences and performances in Japan due to the triple tragedy of March 11, the Linux Foundation moved forward with its annual meeting. Turnout was great – reportedly 500 strong – and the technical program was equally solid. There were also some great opportunities for socializing and renewing acquaintances.


The Compliance Mini-Summit drew an impressive audience of about 40 people for a four-hour program that included presentations and a panel session from open source compliance leaders:

- Shane Coughlan of Opendawn discussed the evolution of FOSS governance
- Sunil Kumar D and Timo Jokiaho of Huawei shared a corporate perspective on GPL compliance
- Bill McQuaide of Black Duck Software and Steve Grandchamp of OpenLogic, respectively, shared their insights into corporate adoption of FOSS and their wisdom and experience about FOSS compliance and governance
- Phil Koltun of the Linux Foundation discussed ways to get started with a compliance program
- Tsugikazu Shibata, a member of the LF Board of Directors, joined Bill and Steve to provide a Japanese perspective on compliance program implementation.

LinuxCon Japan also enjoyed a lively kickoff session, with Linus Torvalds reminiscing about 20 years of Linux kernel work, as prompted by questions from Greg Kroah-Hartman and the audience.   Part of the dialogue was of particular pertinence to those of us involved with compliance work.  A member of the audience asked, regarding the kernel:  “Are you still happy with the license or do you think it needs an upgrade or do you regret having chosen the GPL back then?”  Linus’ response was worth transcribing:



“I’m very happy with the GPL.  The reason – the original Linux license – I don’t know how many people know this – probably most – I did not actually start out with the GPL.  I started out with my own personal license that I wrote that was, like, one paragraph and the license – I have it somewhere – but it basically said you can charge no money for this.  You have to give source code back.  And that was it. And it was not a license that would probably ever stand up in court, or at least it wasn’t well known.  And then the “no money can change hands” turned out to be a problem very early on.  Even in, like, early ’92, you had small distributions that would copy floppies for people at Unix user’s groups or selling them in Byte Magazine or something like that.  And they wanted to charge, like, five bucks for the service of copying two or four or twelve floppies at that time.  And they said “I really need to charge money for this because it’s my time and my floppies.”   So I said OK, I will change the license.  I looked around and I thought the GPL version 2 was exactly what I was looking for, saying that I give this out because I like doing it, but I want people who make changes and improvements … I  want those changes and improvements to come back to me under the same license.   And I think it’s a very fair license.  I think it’s a license that is also very successful. And I think it’s something that really speaks to people at a very deep level, the whole fairness notion that I give you something, you give me something back.  And I’m very happy with the license.  It’s worked very well.”


If you’re interested in the entire hour-long discussion with Linus, check out the video at the Linux Foundation site. (The GPL remarks come at around the 53-minute mark.) For more LF resources on compliance, including white papers, webinars, a self-assessment checklist, and open source tools, go to the Linux Foundation’s open compliance program webpage. The Linux Foundation also offers on-site and confidential training on how to implement a compliance program; please see our training descriptions for contact information.


View the original article here

Read On

KQ ZFS Linux Is No Longer Actively Being Worked On

0 komentar Selasa, 21 Juni 2011

Remember KQ Infotech? KQ Infotech was the Indian company that ported the ZFS file-system to Linux as an out-of-tree kernel module (after deriving the code from the LLNL ZFS Linux work), and whose methods of engagement in our forums were interesting. The company was successful in delivering an open-source ZFS module...


Remember KQ Infotech? KQ Infotech was the Indian company that ported the ZFS file-system to Linux as an out-of-tree kernel module (after deriving the code from the LLNL ZFS Linux work), and whose methods of engagement in our forums were interesting. The company was successful in delivering an open-source ZFS module for Linux that performed semi-well and didn't depend upon FUSE (the file-system in user-space module) like other implementations. However, this ZFS Linux code appears to no longer be worked on by KQ Infotech...


View the original article here

Read On

LexisNexis Will Open-Source Its Hadoop Alternative for Handling Big Data

0 komentar

LexisNexis announced today that it will open-source its High Performance Computing Cluster (HPCC) technology, as well as offer an enterprise version with commercial support.


LexisNexis announced today that it will open-source its High Performance Computing Cluster (HPCC) technology, as well as offer an enterprise version with commercial support. The company is positioning HPCC Systems, developed internally by its Risk Solutions unit, as an alternative to Apache Hadoop. A virtual machine for testing purposes will be available soon, and code will be available in a few weeks.


View the original article here

Read On

Linaro Non-Profit is Rapidly Hitting Embedded Linux Milestones

0 komentar

For years, many Linux users wished for it to achieve a level of success on the desktop that it never did achieve; however, a funny thing happened on the way to that state of affairs: Linux succeeded off the desktop. Linux is growing very rapidly on servers, and already powers much of the server infrastructure behind the Internet and many corporate networks. Linux is also gaining traction as infrastructure within mobile operating systems such as Android, and within the cloud-centric Google Chrome OS. Another non-desktop arena where Linux does very well is embedded systems and applications. On that front, Linaro, a non-profit organization concentrating on embedded Linux, is maturing.


View the original article here

Read On

Linus Jumps Ahead to 3.0

0 komentar

Last week it looked like we were, finally, going to get a version bump from 2.6 to 2.8. Instead, Linus Torvalds has bitten the bullet and tagged the first release candidate of the next kernel to 3.0.

That's right — it looks like the next kernel release is going to go all the way to 11, er, 3.0. If you missed the discussion last week, this isn't because the kernel is gaining massive new functionality (as it did from the 1.x to 2.0.x series), but because "it will get released close enough to the 20-year mark, which is excuse enough for me." Sounds like a good enough reason here, too.

To be clear, 3.0 will not be a radical change. According to Torvalds, "Sure, we have the usual two thirds driver changes, and a lot of random fixes, but the point is that 3.0 is *just* about renumbering, we are very much *not* doing a KDE-4 or a Gnome-3 here. No breakage, no special scary new features, nothing at all like that. We've been doing time-based releases for many years now, this is in no way about features. If you want an excuse for the renumbering, you really should look at the time-based one ("20 years") instead."

Want to test the new kernel? Check for it in the /pub/linux/kernel/v3.0 directory, though the git tree is still linux-2.6.git for now.
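For the git-inclined, grabbing the new release candidate from the still-2.6-named tree looks something like this (a sketch, assuming the kernel.org layout of the time):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
cd linux-2.6
git checkout v3.0-rc1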

If we follow the "once per decade" model, it looks like we'll have Linux 4.0 sometime in 2020.


View the original article here

Read On

Linux 3.1 Kernel Looking To Bring New KVM Option

0 komentar

Back in April I reported on a lightweight QEMU-free Linux KVM host tool. This written-from-scratch solution is designed to just boot guest Linux images with the Kernel-based Virtual Machine (KVM) while being just a few thousand lines of code. A second version of the Native Linux KVM Tool has been...


Back in April I reported on a lightweight QEMU-free Linux KVM host tool. This written-from-scratch solution is designed to just boot guest Linux images with the Kernel-based Virtual Machine (KVM) while being just a few thousand lines of code. A second version of the Native Linux KVM Tool has been released and it's being targeted for inclusion into the Linux 3.1 kernel...


View the original article here

Read On

LinuxCon North America Is Coming, and the Schedule Is Set

0 komentar

The Linux Foundation is out with its schedule of events for this year's LinuxCon North America conference. It's slated for August 17th to 19th in Vancouver, Canada.


The Linux Foundation is out with its schedule of events for this year's LinuxCon North America conference. It's slated for August 17th to 19th in Vancouver, Canada. It will take place at the Hyatt Regency hotel in Vancouver, and you can register now. The lineup of speakers looks to be outstanding. Linus Torvalds is speaking, as are Eben Moglen, Red Hat CEO Jim Whitehurst, Eucalyptus Systems' CEO and MySQL pundit Marten Mickos, and many others.


In addition to keynotes, roundtable panels and 75 conference sessions, LinuxCon will feature a range of tutorials, lightning talks, and other events. There will be a number of developer lounges available.


View the original article here

Read On

Manage Passwords, Encryption Keys, and More with Seahorse

0 komentar Senin, 20 Juni 2011



You've got half a dozen passwords for work, encryption keys, and SSH keys — how do you keep them all straight? If you're on Linux, you have an excellent option in the form of Seahorse. It's easy to use, and you'll be able to tackle all your credentials with little effort.


Generally speaking, most average users assume a password is nothing more than something that gets in the way of reaching their "stuff" quickly. Most Linux users, on the other hand, know that passwords are the keys to the kingdom and encryption keys are keys to the universe. Like me, many Linux users have passwords and encryption keys for multiple uses. Because of this, a tool for the management of those keys can make life so much easier. One such tool is Seahorse, the GNOME application charged with managing encryption keys and passwords, and I want to demonstrate how both passwords and encryption keys can be easily managed with this tool.


Although Seahorse was intended to be used on the GNOME desktop, it can be used with other desktops (such as KDE or Enlightenment). However, since Seahorse was created for GNOME, it will not properly integrate into KDE applications. That is fine, because encryption and secure shell keys, as well as passwords, can still be easily managed. If Seahorse is not already installed (it should be found in the Start > Settings menu in KDE and Applications > Accessories in GNOME), simply open up the Add/Remove Software tool, search for Seahorse, and install it.


Since almost all users deal with passwords, the first (and most logical) place to start is password management. Seahorse is an incredible tool for the management of passwords. What can be done?

- Store passwords.
- Remove a password from cache.
- Add details about a password.
- Create a new password.
- Create new password keyrings.

The default keyring is the Login keyring, which is where keys are stored. When Seahorse is fired up it will offer a very simple interface (see Figure 1). From that one main pane, expand the keyring entry to list out all of the various passwords that are stored. This may, at first, seem like a glaring security issue in and of itself, since no password has been entered to reach this point. Fear not, this keyring can be (and should be) locked and unlocked. To lock the keyring, do the following:


Figure 1: More than one keyring can be retained within Seahorse for further expansion.

1. Right click the target keyring.
2. Select Lock from the menu.
3. If you have not added a lock password, add one when prompted.

To unlock the keyring, follow the same steps, only select Unlock and then enter the locking password. It is also possible to change the keyring password, but the old password will be required to do so. If you are going to take advantage of Seahorse, I highly recommend you lock all keyrings with strong passwords.


Adding a password to this keyring will highlight how important it is to have that keyring locked. So, assuming the keyring lives in a locked state, here is the process for adding a password:

1. Click File > New.
2. From the new window, select Stored Password and click Continue.
3. In the Add Password window, select the keyring this password should belong to from the drop-down.
4. Give the password a description.
5. Enter the password in the password field.
6. Click the Add button.

The password has been successfully saved. Now, to view that password, follow these steps:

1. Right click the keyring and click Unlock.
2. Enter the unlock password and click OK.
3. Expand the keyring to view the newly created password.
4. Double click the password entry to be viewed.
5. In the new window (see Figure 2), expand the Password field.
6. To view the password, check the box for Show password.

The next time this password entry is opened, the Show password check box will once again be unchecked.


Encryption keys are where Seahorse really shines. Not only can encryption keys be created and managed from within Seahorse, so too can secure shell keys. These two features alone make Seahorse worth using.


In order to create a PGP encryption key, follow these steps:

1. Open Seahorse.
2. Click File > New.
3. In the resulting window, select PGP Key and click Continue.
4. Fill out the settings in the new window (Name, Email Address, Comment).
5. If advanced options (such as Encryption type, Key strength, and Expiration date) are needed, expand the Advanced section and fill out that information.
6. Click Create.
7. Type and confirm the passphrase to be used for the key and click OK.
8. Do some work on your computer to help with the generating of the random seed and wait for the key to be created.
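Since Seahorse manages standard GnuPG keys, the same key can also be created from a terminal, if that's preferred; a minimal sketch:

gpg --gen-key            # answer the interactive prompts (key type, size, expiry, name, email)
gpg --list-secret-keys   # confirm the new key exists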

Once that is done, go back to the Seahorse main window and click on the Personal Keys tab (see Figure 3) to see all personal keys managed by Seahorse.


From this tab keys can be signed, exported, and more.


Signing an encryption key allows the recipient of the key to verify the authenticity of the key. To digitally sign a key, do the following:

1. Select the key to sign from the Personal Keys tab.
2. Click the Sign Key button in the toolbar.
3. Fill out the necessary information in the resulting window (making sure to select how carefully the key has been checked and the correct signer for the key).
4. Enter the passphrase for the key.
5. Click OK.

The key is now signed. It is possible to sign a key with multiple email addresses. But the only addresses available to sign with will be those that are already associated with keys in Seahorse. This can be helpful if more than one key is associated with one user (but different email addresses).


The public keys can also be exported (in the form of .asc files) and handed out to those who need to send encrypted email; messages encrypted with a public key can then be decrypted with that encryption key's private key. These keys should only be handed out to trusted users.


Many users like to publish their public keys so they are easier to distribute. Seahorse has a built-in mechanism for publishing keys (so nothing is really necessary outside of simply creating the keys, signing the keys, and then publishing the keys). To publish keys, click Remote > Sync and Publish Keys. From the new window, click on the Key Servers button and then, from the Key Servers Preferences window, select the Keyserver the key should be published to (from the drop-down), click Close, and finally click Sync. All keys should then be available on the Keyserver selected.
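Because these are ordinary GnuPG keys, the same publishing step can also be done from a terminal. A sketch, where the keyserver and the key ID are examples to replace with your own (find your ID with gpg --list-keys):

gpg --keyserver hkp://keyserver.ubuntu.com --send-keys DEADBEEF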


Secure Shell is one of the best tools for remotely logging into Linux/UNIX machines because it is far more secure than telnet. But even with that extra security, it's always best to add yet another layer. This can be achieved with the help of secure shell keys that can be handled by Seahorse. Here are the steps:

1. Open Seahorse.
2. Click File > New.
3. From the resulting window, select Secure Shell Key.
4. Give the key a description and click Create And Setup.
5. Enter (and confirm) a passphrase for the key and click OK.
6. In the new window, enter the address (IP or FQDN) of the machine to receive the key and the login name on the remote machine.

If all went as planned, instead of a password prompt, a bash prompt should appear ready for work. This, of course, requires that private keys are guarded, otherwise the obvious security hole will make itself quite apparent. There is a known bug in Seahorse that affects some installations and causes the originating machine to not be able to send the ssh key to the target (during creation). The development team is aware of this bug and will (hopefully) have it fixed soon. If that bug rears its ugly head, as a temporary stop-gap, use ssh-copy-id username@address, where username is the remote username and address is either the IP address or domain of the recipient.
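Spelled out as a sketch, with hypothetical names to adjust (carla and 192.168.1.50 stand in for your own user and host):

ssh-keygen -t rsa                 # only needed if no key exists yet
ssh-copy-id carla@192.168.1.50    # appends the public key to the remote authorized_keys file
ssh carla@192.168.1.50            # should now ask for the key passphrase, not the account password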


The reality is, no matter how many security tools are used, if they are not used wisely those tools will not help much. Even when employing encryption, caution must be used to ensure keys do not wind up in the wrong hands. But when encryption is used properly, it is a great tool that will lend a level of security otherwise not found. And, when encryption is needed, thankfully there are tools like Seahorse to make the management of those keys a cinch.


View the original article here

Read On

Meeks: LibreOffice Progress to 3.4.0

0 komentar

Michael Meeks digs into the changes that went into LibreOffice 3.4, including better translation support, merging changes from OpenOffice.org (part of which was a "multi-million-line" OOo cleanup patch), adding more build bots, and more. One major area of work was in doing some cleanup to reduce the size of LibreOffice: "First - ridding ourself of sillies - there is lots of good work in this area, eg. big cleanups of dead, and unreachable code, dropping export support from our (deprecated for a decade) binary filters and more. I'd like to highlight one invisible area: icons. Lots of volunteers worked on this, at least: Joseph Powers, Ace Dent, Joachim Tremouroux and Matus Kukan. The problem is that previously OO.o had simply tons of duplication, of icons everywhere: it had around one hundred and fifty (duplicate) 'missing icon' icons as an example. It also duplicated each icon for a 'high contrast' version in each theme (in place of a simple, separate high contrast icon theme), and it also propagated this effective bool highcontrast all across the code bloating things. All of that nonsense is now gone, and we have a great framework for handling eg. low-contrast disabilities consistently."


View the original article here

Read On

My Highlights from the Newly Announced LinuxCon Schedule

0 komentar



Today we announced the full schedule for LinuxCon North America that will take place in Vancouver from August 17 - 19th. This year we had even more of a challenge than usual in putting together the program. Why? There were so many great submissions. (I’d like to take this opportunity to thank all of you who submitted a talk and please don’t be discouraged if you didn’t make it this year. You had a lot of competition and we would love to see your submissions at subsequent events or in subsequent years. Check our events schedule for more events in Europe, Brazil and elsewhere.)


This year is the 20th anniversary of Linux, which we have already been celebrating with a special video and contests. But this year’s speaker line up for LinuxCon will be a celebration on its own, with topics and speakers from across the industry, across the community and across the globe. While we do have a focus on enterprise Linux development and administration, I hope our speaker line up reflects the reach of Linux in technology and culture. Here are some of the sessions I am most looking forward to:

- What’s Inside Benchmarks? Wim Coekaerts is the Senior Vice President of Linux and Virtualization Engineering for Oracle. He is responsible for Oracle’s complete desktop to datacenter virtualization product line and the Oracle Unbreakable Linux support program. This is a great opportunity to learn from a leader in the Linux industry. Wim isn’t a suit who dabbles in Linux; he’s the real deal.
- A Conversation with Linus Torvalds. We’re extremely lucky to have Linus take part in LinuxCon. This will be a great discussion between two lively developers: Linus and Greg KH. And the big question: will Linus wear a tux to the LinuxCon Gala?
- Linux: a short retrospective and an opinion on the future. There are people who claim they are thought leaders and then there are those like Dr. Irving Wladawsky-Berger who shape the future of technology with their insight. Dr. Wladawsky-Berger is Chairman Emeritus of the IBM Academy of Technology, and Visiting Professor of Engineering Systems at the Massachusetts Institute of Technology. This will be thought provoking and a truly special opportunity to hear from a master.
- Linux: How it Runs the World of Finance. Without Linux there would be no high frequency trading. Christoph Lameter is a well recognized expert in high performance computing and Linux on Wall Street. This talk will open your eyes about just how important Linux is to our economy.
- Linux filesystem and storage tuning. This is an in-depth tutorial given by a kernel expert, Christoph Hellwig. This session will deliver real benefit to advanced system administrators. I think this session alone would give enough value to justify the trip to LinuxCon. It’s a rare opportunity to learn from the best.

View the original article here

Read On

New!! Updated Intel(r) AMT Linux Drivers

0 komentar

Linux/AMT developers have no doubt been waiting a long time for this: updated MEI and LMS drivers for Linux. They can be downloaded HERE (and below as well).


Are you a Linux Developer interested in writing tools for supporting Intel AMT?  We would love to hear from you.

Here is what these drivers do:

Intel® Active Management Technology (Intel® AMT) Linux support includes two components that allow interaction between the Intel® AMT FW and the Linux OS: the Intel® MEI (Intel® Management Engine Interface) driver and the LMS (Local Management Service) driver. The Intel® MEI driver allows applications to communicate with the FW using the host interface, and the LMS driver allows applications to access the Intel® AMT FW via the local Intel® Management Engine Interface (Intel® MEI).

Intel® Management Engine Interface driver: The Intel® MEI driver allows applications to access the Intel® Management Engine FW via the host interface (as opposed to a network interface). The Intel® MEI driver is meant to be used mainly by the Local Manageability Service (LMS). Messages from the Intel® MEI driver are sent to the system log (i.e., /var/log/messages). Once the Intel® MEI driver is running, an application can open a file to it, connect to an application on the firmware side, and send and receive messages to that application.

View the original article here

Read On

Node.js PaaS Nodejitsu Open-Sources Several Tools

0 komentar

Nodejitsu, the original Node.js platform-as-a-service, has open-sourced several of its tools, some of which are used in its own production stack. These could be useful to those running their own Node.js servers or private clouds. Some of the tools are very simple, like forever, which ensures that a script runs continuously. Others are more involved, such as the application server haibu and the cloud deployment tool jitsu.
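As a taste of the simplest of these, here is typical forever usage from the command line; a quick sketch, with app.js standing in for your own script:

npm install -g forever   # install the CLI globally
forever start app.js     # run the script, restarting it automatically if it crashes
forever list             # show everything forever is currently managing
forever stop app.js      # stop it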


View the original article here

Read On

NVIDIA Linux Driver Now Does GL_EXT_x11_sync_object

0 komentar

NVIDIA's Linux/Unix engineering team has issued a new Linux beta driver in the 275.xx series. To succeed the first 275.xx Linux beta that was put out a few weeks back, NVIDIA has released the 275.09.04 Beta. There's only a few changes in this beta released today, but among them is...


NVIDIA's Linux/Unix engineering team has issued a new Linux beta driver in the 275.xx series. To succeed the first 275.xx Linux beta that was put out a few weeks back, NVIDIA has released the 275.09.04 Beta. There's only a few changes in this beta released today, but among them is support for the GL_EXT_x11_sync_object extension...


View the original article here

Read On

openSUSE conference looking for sponsors

0 komentar Minggu, 19 Juni 2011



As in previous years, the openSUSE conference team is looking for sponsors. The conference grew 35% last year and we expect it to grow even more this year, so financial help is needed!


View the original article here

Read On

The Official X.Org Notes For Ubuntu 11.10

0 komentar



This shouldn't be news for anyone who has followed the Phoronix articles for Ubuntu 11.10, particularly from the UDS Budapest event, but here's the official X.Org plans for this next Ubuntu Linux release...


View the original article here

Read On

Things You Can't Do With a GUI: Finding Stuff on Linux

0 komentar



What's better, a graphical interface or the Linux command line? Both of them. They blend seamlessly on Linux so you don't have to choose. A good graphical user interface (GUI) has a logical, orderly flow, helps guide you to making the right command choices, and is reasonably fast and efficient. Since this describes a minority of all GUIs, I still live on the command line a lot. The CLI has three advantages: it's faster for many operations, it's scriptable, and it is many times more flexible. Linux's Unix heritage means you can string together commands in endless ways so they do exactly what you want.


Here is a collection of some of my favorite finding-things command line incantations.


In graphical file managers like Dolphin and Nautilus you can right-click on a folder and click Properties to see how big it is. But even on my quad-core super-duper system it takes time, and for me it's faster to type the du or df commands than to open a file manager, navigate to a directory, and then pointy-clicky. How big is my home directory?

$ du -hs ~
748G    /home/carla

How much space is left on my hard drive or drives? This particular incantation is one of my favorites because it uses egrep to exclude temporary directories, and shows the filesystem types:

$ df -hT | egrep -i "file|^/"
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sda2      ext4   51G  3.6G   32G  11% /
/dev/sda3      ext4  136G  2.3G  127G   2% /home
/dev/sda1      ext3  244G  114G   70G  63% /home/carla/photoshare
/dev/sdb2      ext3   54G  5.8G   45G  12% /home/carla/music

What files were changed on this day, in the current directory?

$ ls -lrt | awk '{print $6" "$7" "$9 }' | grep 'May 22'
May 22 file_a.txt
May 22 file_b.txt

Using a simple grep search displays complete file information:

$ ls -lrt | grep 'May 22'
-rw-r--r-- 1 carla carla 383244 May 22 20:21 file_a.txt
-rw-r--r-- 1 carla carla 395709 May 22 20:23 file_b.txt

Or all files from a past year:


ls -lR | grep 2006


Run complex commands one section at a time to see how they work; for example, start with ls -lrt, then ls -lrt | awk '{print $6" "$7" "$9 }', and so on. To avoid hassles with upper- and lower-case filenames, use grep -i for a case-insensitive search.


Want to sort files by creation date? You can't in Linux, but you can in FreeBSD. Want to specify a different directory? Use ls -lrt directoryname.
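On FreeBSD the incantation would look something like the one below; a sketch I haven't verified, relying on FreeBSD ls's -U flag, which sorts by file creation time:

ls -ltU directoryname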


Which files were changed in the last three minutes? This is a quick, slick way to see what changed after making changes to your system:


find / -mmin -3


You can specify a time range as well; for example, what changed in the current directory between three and six minutes ago?


find . -mmin +3 -mmin -6


The dot means current directory.


Need to track down disk space hogs? This is probably one of the top ten tasks even in this era of terabyte hard drives. This lists the top five largest directories or files in the named directory, including the top level directory:

$ du -a directoryname | sort -nr | head -n 5
119216208  .
55389884   ./photos
40650788   ./Photos
37020884   ./photos/2007
20188284   ./carla

Omit the -a option to list only directories.


It is well worth getting acquainted with the find command because it can do everything except make good beer. This nifty incantation finds the five biggest files on your system, and sorts them from largest to smallest, in bytes:

# find / -type f -printf '%s %p\n' |sort -nr| head -5
1351655936 /home/carla/sda1/carla/.VirtualBox/Machines/ubuntu-hoary/Snapshots/{671041dd-700c-4506-68a8-7edfcd0e3c58}.vdi
1332959240 /home/carla/sda1/carla/51mix.wav
1061154816 /proc/kcore
962682880 /home/carla/sda1/Photos/2007-sept-montana/video_ts/vts_01_4.vob
962682880 /home/carla/sda1/photos/2007/2007-sept-montana/video_ts/vts_01_4.vob

You really don't need to include the /proc pseudo-filesystem, since it occupies no disk space. Use the wholename and prune options to exclude it:


find / -wholename '/proc' -prune -o -type f -printf '%s %p\n' |sort -nr| head -5


There is a potential gotcha, and that is that find will recurse into all mounted filesystems, including remote filesystems. If you don't want it to do this, then add the -xdev option:


find / -xdev -wholename '/proc' -prune -o -type f -printf '%s %p\n' |sort -nr| head -5


Another potential gotcha with -xdev is that find will only search the filesystem the command is run from, and no other filesystem mounts, not even local ones. So if your filesystem is spread over multiple partitions or hard drives on one computer, and you want to search all of them, don't use -xdev. I'm sure there is a clever way to distinguish between local and remote filesystems, and when I figure it out I'll share it.
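One plausible approach, offered as an untested sketch rather than a definitive answer: prune network filesystem types explicitly with find's -fstype test, adding any other remote types you happen to mount (smbfs, sshfs, and so on):

find / \( -fstype nfs -o -fstype cifs \) -prune -o -type f -printf '%s %p\n' | sort -nr | head -5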


Now let's string together a splendid find incantation to convert those large indigestible blobs of bytes into a nice readable format:

# find / -type f -print0| xargs -0 ls -s | sort -rn | awk '{size=$1/1024; printf("%dMb %s\n", size,$2);}' | head -5
1290Mb /home/carla/sda1/carla/.VirtualBox/Machines/ubuntu-hoary/Snapshots/{671041dd-700c-4506-68a8-7edfcd0e3c58}.vdi
1272Mb /home/carla/sda1/carla/51mix.wav
918Mb /home/carla/sda1/Photos/2007-sept-montana/video_ts/vts_01_4.vob
918Mb /home/carla/sda1/photos/2007/2007-sept-montana/video_ts/vts_01_4.vob
918Mb /home/carla/sda1/Photos/2007-sept-montana/video_ts/vts_01_1.vob

Yes, I know, you can do many of these things in graphical search applications. To me they are slow and clunky, and it's a lot faster to replay searches from my Bash history, or copy them from my cheat sheet. I even have some aliased in Bash; for example, I use that last long find incantation a lot, so I have this entry aliased to find5 in my .bashrc:


alias find5='find / -wholename /proc -prune -o -wholename /sys -prune -o -type f -print0 | xargs -0 ls -s | sort -rn | awk '\''{size=$1/1024; printf("%dMb %s\n", size,$2);}'\'' | head -5'


In this example I have excluded both the /proc and the /sys directories.


The locate command is very fast because it searches a database of all of your filenames rather than the filesystem itself. That database needs to be updated periodically, and many distros do this automatically; to update it manually, simply run the updatedb command as root. locate and grep are powerful together. For example, find all .jpg files with 1024 in their names, such as wallpapers that are 1024 pixels wide:


locate "*.jpg" | grep 1024


Search for image files in three different formats for an application:


locate claws-mail | grep -iE "(jpg|gif|ico)"


Well here we are at the end already! Thanks for reading, and please consult the fine man pages for these commands to learn what the different options mean.


View the original article here

Read On