...making Linux just a little more fun!
Kat Tanaka Okopnik [kat at linuxgazette.net]
I seem to recall that a while back, we were discussing the intricacies of typing Indian languages. While checking my gmail account, Google informed me of their latest gee-whiz awesome:
http://gmailblog.blogspot.com/2009/03/typing-in-indian-languages.html
--
Kat Tanaka Okopnik
Linux Gazette Mailbag Editor
[email protected]
[ Thread continues here (2 messages/1.37kB) ]
J. Bakshi [j.bakshi at unlimitedmail.org]
Hello,
I have a working configuration at .icewm/toolbar to take screenshots:
----------------------------------------
prog "screenshot" /usr/share/pixmaps/khelpcenter-16.xpm scrot -s -q 100 -e 'feh $f'
------------------------------------------------
I have put the same configuration in a shell script, which also has the u+x permission. I manually executed it and it worked as expected. Now I have created a rule in .icewm/keys like:
---------------------------------
# Super is the Microsoft key on the keyboard
key "Super+g" grun
key "Super+Power" gmrun
key "Super+s" /home/joy/call
----------------------------------
But this key binding does not work, even though the toolbar configuration and the script itself work fine. My other key combinations also work; only the screenshot one fails. What might be the problem here?
Thanks
PS: Kindly CC me
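[ One possibility worth testing: scrot -s grabs the pointer (and keyboard) for its interactive selection, and when launched from a hotkey it can fire while the window manager still holds the keyboard grab, so the grab fails silently. Adding a short delay at the top of the /home/joy/call script, before the scrot line, is a common workaround (the half-second value is a guess):

sleep 0.5    # give the WM time to release the keyboard grab before scrot -s runs

-- Ed. ]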
[ Thread continues here (20 messages/31.85kB) ]
J. Bakshi [j.bakshi at unlimitedmail.org]
Hello,
Hope you all are well.
Recently my attention has turned to udev rules. I had never bothered with them, but while googling for a way to automount my pendrive at a pre-defined mount point, I found udev suggested as the solution. I admit that I don't understand it properly, but my intention to mount my pendrive at a pre-defined mount point pushed me to do some experimenting. I first collected the required information with *udevinfo -a -p $(udevinfo -q path -n /dev/sda)*:
looking at device '/block/sda':
    KERNEL=="sda"
    SUBSYSTEM=="block"
    DRIVER==""
    ATTR{range}=="16"
    ATTR{removable}=="1"
    ATTR{size}=="15687680"
    ATTR{stat}==" 77 421 2292 1436 0 0 0 0 0 956 1436"
    ATTR{capability}=="13"

looking at parent device '/devices/pci0000:00/0000:00:10.3/usb4/4-5/4-5:1.0/host7/target7:0:0/7:0:0:0':
    KERNELS=="7:0:0:0"
    SUBSYSTEMS=="scsi"
    DRIVERS=="sd"
    ATTRS{device_blocked}=="0"
    ATTRS{type}=="0"
    ATTRS{scsi_level}=="3"
    ATTRS{vendor}=="JetFlash"
    ATTRS{model}=="Transcend 8GB "
    ATTRS{rev}=="8.07"
<snip>
lots of other stuff
</snip>
Then I constructed my rule as

10-usb.rules
--------------------------------
SUBSYSTEM=="block", KERNEL=="sda", ATTRS{vendor}=="JetFlash", ATTRS{model}=="Transcend 8GB ", ACTION=="add", RUN+="mount -t reiserfs /dev/sda2 /mnt/pen"
-------------------------------------
/mnt/pen already exists. From /var/log/syslog, I can confirm that 10-usb.rules has been read without any error:
---------------------------------------
debian udevd[2874]: parse_file: reading '/etc/udev/rules.d/10-usb.rules' as rules file
-----------------------------------------------
but after inserting my pendrive, it is not auto-mounted at /mnt/pen. I have no clue whether my rule is wrong or something else is preventing the auto-mount. My box is Debian Lenny.
Could any one please enlighten me ?
Thanks,
PS: please CC to me
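[ A couple of things worth checking: udev's RUN+= takes a program path, and a path that isn't absolute is looked up in udev's own helper directory rather than in $PATH, so a bare "mount" may never execute. Also, the rule matches the whole disk (KERNEL=="sda") while mounting /dev/sda2, which can fire before the partition node exists. A sketch of a revised rule, under those assumptions:

SUBSYSTEM=="block", KERNEL=="sda2", ACTION=="add", ATTRS{vendor}=="JetFlash", RUN+="/bin/mount -t reiserfs /dev/sda2 /mnt/pen"

Device names can also change between insertions (sda today, sdb tomorrow), which is why matching on the vendor/model attributes, as done here, is a good habit. -- Ed. ]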
[ Thread continues here (3 messages/7.29kB) ]
Robos [robos at muon.de]
On 30.03.2009 at 4:54, [email protected] wrote:
Hi List and Arild, mind if I chime in?
> Ben I have been experimenting.
> First off; I am quite familiar with cut and paste in Windows.
> However this Linux webmail does not seem to allow me to open two windows
> or emails simultaneously. Every time I try to write you a reply, the
> webmail vanishes as soon as I open the second window. When I try to
> backtrack by clicking the back arrow, the first window has vanished. There
> is no sign of the message I began in Drafts. Scrolling jumps me right out
> of webmail and I have to log back in again and start over.
> Apparently webmail is not designed to work like this. At least not on what
> my ISP uses.
Something is fishy there. But let's forget that then and try (yet another) way of connecting computers.
On the ubuntu machine:
1) open a terminal. If you know how, fine, else, press Alt-F2, a box should appear, type in "xterm" there and hit the enter button.
2) inside the terminal, run this command
/sbin/ifconfig
It should spit out some lines, starting with something like:

eth0  yada yada ....  inet addr:<IP-Address>

The IP-Address should be something like 192.168.0.1 or 10.0.0.1 or so.
3) note the IP-Address down on a piece of paper
4) run this command:
netstat -tlpn

Again, some lines get spit out. Look for a line where it says 22 in the 4th column, under "Local Address". If you see such a line, fine; else, run this command:
sudo aptitude install openssh-server

Enter your normal user password when it prompts for a password.
On your windows machine:
1) download putty, from here: http://the.earth.li/~sgtatham/putty/latest/x86/putty.exe
2) run the program
3) In the line where it says "Host name" enter the written-down IP-Address
4) Either hit Enter or click "Open" at the bottom of the window
5) a black window opens
6) when you get prompted for a user ("login as:"), enter your user name on the ubuntu box, hit enter, and do the same with the password.
You can now copy-paste stuff from windows to linux by doing this:
1) On the windows machine, in the webmail-thing: copy a line
2) bring the black putty window to the front and push the right mouse button in there. It should paste the copied stuff.
Mind you, you cannot run xoscope like this, because it doesn't work on a command line (like the one you have in that putty window). But Ben's command (dpkg ...) works in there.

By the way: you should be able to copy and paste in ubuntu just like in windows if you use the "long" way: mark the text you want to copy, press the right mouse button on it, and select "copy" from the menu. Pasting in (most) terminals works the same way: press the right mouse button in the window where you normally type, and select "paste".
Hope this helps,
regards
Udo Puetz
[ In reference to "Gems from the Mailbag" in LG#161 ]
Chris Bannister [mockingbird at earthlight.co.nz]
On Tue, Mar 31, 2009 at 09:43:41AM -0300, Deividson Okopnik wrote:
> 2009/3/31 Ben Okopnik <[email protected]>:
> > On Mon, Mar 30, 2009 at 11:12:46PM -0700, Rick Moen wrote:
> >
> > Thanks for saying that, Rick - I was just telling Kat much the same
> > thing while reading it. We'll preserve it, in standard LG fashion, for
> > the education of the young.
> >
> > Is there enough time for it to figure on LG?
>
> That's so cool it should be turned into an article
The taxi driver says, "Where do you want to go today?"
-- Chris.
======
I contend that we are both atheists. I just believe in one fewer god
than you do. When you understand why you dismiss all the other possible
gods, you will understand why I dismiss yours. -- Stephen F Roberts
Amit Saha [amitsaha.in at gmail.com]
Hello all,
Perhaps this is too simple to be a 2c-tip, but still a tip.
The following command (run by hand or in a script) starts up an HTTP server that makes all the files and directories under the directory from which it is started available over the network:
python -m SimpleHTTPServer &
By default, it starts the server on port 8000, but that can be changed by passing a port number:

python -m SimpleHTTPServer 9090

which starts the server on port 9090. It can then be accessed via a browser or any HTTP client:

firefox http://127.0.0.1:9090/
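For example, to share a particular directory with another machine on the LAN (the paths and the server address here are hypothetical):

cd ~/public                        # serve only this directory
python -m SimpleHTTPServer 9090 &

# from the other machine:
wget http://192.168.0.15:9090/somefile.tar.gz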
More on the Python module at http://docs.python.org/library/simplehttpserver.html
Thanks, Amit
--
http://amitksaha.blogspot.com
http://amitsaha.in.googlepages.com/
*Bangalore Open Java Users Group*: http://www.bojug.in
"Recursion is the basic iteration mechanism in Scheme" --- http://c2.com/cgi/wiki?TailRecursion
[ Thread continues here (7 messages/8.60kB) ]
By Deividson Luiz Okopnik and Howard Dyckoff
Please submit your News Bytes items in plain text; other formats may be rejected without reading. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to [email protected]. Deividson can be contacted via Twitter here: http://twitter.com/deivid_okop
In the days leading up to Linux Collaboration Summit, the Linux Foundation (LF) announced it will host the industry's open Linux-based mobile project, Moblin. The announcement had been planned for the Summit but leaked early in a NY Times blog. http://bits.blogs.nytimes.com/2009/04/01/intel-hands-over-the-keys-to-its-linux-operating-system/.
Created in 2007, Moblin is an open source project that builds a Linux-based software stack optimized for mobile devices, including netbooks, mobile Internet devices (MIDs), in-vehicle infotainment systems, and other embedded devices. Moblin is a technology framework that delivers a visually rich Internet and media experience on low-power devices.
In the current recessionary economic climate, these markets are among the fastest-growing in computing, and Linux is increasingly considered the OS of choice for vendors who require better margins, faster time to market, and custom branding. Started by Intel, the project has targeted the Intel Atom processor. Under the Linux Foundation, more processor families may be added, including the popular ARM processor. This would make Moblin more strategic and less of an Intel-only play. However, most of the lead developers remain Intel employees.
The first developer meeting for the Moblin project under the Linux Foundation was the Collaboration Summit in April. Moblin 2, currently under development, was a focus there.
At the Summit, Intel revealed that the Moblin platform will be capable of a 5 second boot-up and that there are future plans for a goal of 2 seconds.
Fifteen operating system vendors have committed to distribute Moblin-based products, including Asianux, Canonical, DeviceVM, gOS, MontaVista, Novell, and Wind River.
For more information, please visit http://www.moblin.org.
The openSUSE Build Service will be added to the Linux Developer Network (LDN), according to the openSUSE Project and the Linux Foundation at the Linux Collaboration Summit in April.
The Linux Foundation will be providing an interface to the openSUSE Build Service via the Linux Developer Network site, allowing the creation of packages for all major Linux distributions via LDN. The build service enables developers to create packages for CentOS, Debian, Fedora, Mandriva, Red Hat Enterprise Linux, and Ubuntu, in addition to openSUSE and SUSE Linux Enterprise. The addition of the openSUSE Build Service to the LDN complements LDN's popular AppChecker application, which enables developers to create portable applications for Linux.
The openSUSE project is also releasing version 1.6 of the build service, which allows compiling packages for the ARM platform, used in embedded devices. Cross-architecture build support means that developers can create RPM or Debian packages for openSUSE, Ubuntu, Debian, and Fedora. This work was contributed by 5e DataSoft GmbH, working as part of the openSUSE community to add support for embedded devices based on ARM.
Joe 'Zonker' Brockmeier, openSUSE community manager, said, "This is the culmination of years of work by the openSUSE Project. The openSUSE Build Service has always been intended as a tool that would accelerate the general adoption of Linux. It's gratifying to see the build service becoming part of the Linux Developer Network and being embraced by the larger community."
Jurgen Geck, chief technology officer at Open-Xchange, said, "The openSUSE Build Service enables us to concurrently build Open-Xchange for all of the leading Linux platforms - making the process extremely efficient and guaranteeing a final product that is broadly compatible. The service is free, and its underlying software infrastructure is released under the GPL, so there is no lock-in."
The latest release of the build service also includes support for building openSUSE appliances, live CDs, installable USB images, Xen images, and VMware images. Developers can now create their own custom openSUSE distribution using the build service.
The Linux Foundation is a nonprofit consortium dedicated to fostering the growth of Linux. For more information, please visit http://www.linux-foundation.org.
A twenty-something graphic designer won a trip to Tokyo for a pro-Linux video as part of the "We're Linux" video contest sponsored by the Linux Foundation.
Amitay Tweeto, a 25-year-old graphic designer from Israel, beat out 90 contest entrants to win the grand prize for his video "What Does It Mean To Be Free?" Tweeto will receive a trip to Tokyo, Japan, to participate in the Linux Foundation's Japanese Linux Symposium in October 2009.
The "We're Linux" video contest started in December and encouraged Linux enthusiasts to create one-minute videos showcasing what Linux means to them and to get new users to try it. The contest attracted a wide variety of submissions and drew more than 100,000 combined views of the entries.
Two runners-up were also recognized. The winning videos can be viewed on the Linux Foundation video site. A combination of community votes and a panel of judges determined the winners.
The Linux Professional Institute (LPI), the world's premier Linux certification organization, announced the release of new versions of the popular LPIC-1 and LPIC-2 certification exams.
These new exams are available worldwide in English, German and Japanese through the Prometric and VUE testing networks. Chinese, French, Portuguese and Spanish versions of the LPIC-1 exams will be available through LPI Master Affiliates at special events and training partner exam labs.
G. Matthew Rice, Director of Product Development, who led the revision effort, expressed gratitude to the many volunteer IT professionals around the world who participated in the exam development process and provided input into a new Job Task Analysis, revised objectives and new items for the exams. "This global effort has meant that our exams are much more sensitive to non-English exam candidates and include a greater amount of questions around localization, internationalization and accessibility issues. For those involved in our program since the beginning this is an accomplishment of particular significance on our 10th Anniversary," said Mr. Rice.
Key exam changes include the following:

- New content
- LPIC level re-focus/consistent focus
- Improved objective weighting, numbering, and other format changes to assist courseware developers and students in exam preparation
- Minimized content duplication between exam levels
For detailed information on the LPIC program please see: http://www.lpi.org/eng/certification/the_lpic_program.
For more information on changes to LPIC-1 and LPIC-2 exams please see LPI's exam development wiki at: https://group.lpi.org/publicwiki/bin/view/Examdev/L1And2ChangeSummary.
For an executive summary of LPIC-1 and LPIC-2 changes please see: http://www.lpi.org/eng/content/download/1158/8034/file/REVISEDLPIC1&2.pdf.
The newest version of the Linux kernel, 2.6.29, includes new functionality and new file systems. A WiMAX network stack is also included for point-to-point connections.
The new kernel features btrfs, the B-tree FS originally from Oracle, and SquashFS, a read-only file system. Partly a response to Sun's ZFS, btrfs is built for massive, enterprise-level applications, with support for files of up to 16 exabytes in size and up to 2^64 files in a single volume. The new file system comes with capabilities for snapshots, object-level mirroring and striping, and internal compression.
SquashFS consumes fewer resources and is often used by Live CD versions of Debian, Ubuntu, Fedora, and other distros, as well as by some embedded systems.
Kernel 2.6.29 adds kernel-based mode setting (KMS), which lets the kernel control the graphics hardware and improves display initialization by setting the screen resolution earlier in the boot process. KMS can also allow an X server to run without root privileges.
Kernel 2.6.30 is expected to have support for the power-saving features in modern WiFi chip sets and additional fastboot code.
Kernel 2.6.29 and Kernel 2.6.30-RC2 are available at http://www.kernel.org.
At the RSA Conference in San Francisco, the Trusted Computing Group (TCG) announced a certification program to ensure that implementations of the TCG specifications are complete and consistent.
The program initially will focus on implementations of the TPM, which is the core of the TCG's security architecture for PCs and other computing devices. The TPM is used in millions of PCs, servers and embedded systems to secure passwords, digital keys and certificates used to protect data, email and networks.
For more details on this program, please visit TCG's website area on TCG certfication (http://www.trustedcomputinggroup.org/certification).
Also at the RSA Conference, TCG announced it has formed two subgroups to help foster further adoption of self-encryption drives and network security based on the Trusted Network Connect framework.
The group also made available the final Storage Architecture Core Specification. The specification was previously introduced as a draft to the storage industry and has now been finalized. This specification provides details about how to implement and utilize trust and security services on storage devices. To review the specification or get more information, go to https://www.trustedcomputinggroup.org.
In another major consolidation in the computer industry, one affecting several open source projects, Sun Microsystems and Oracle Corporation announced in April a definitive agreement under which Oracle will acquire Sun common stock for $9.50 per share in cash. The transaction is valued at approximately $7.4 billion, or $5.6 billion net of Sun's cash and debt.
This will leave Oracle as the steward of many open source projects, including Java, MySQL, OpenOffice, OpenSolaris, and the GlassFish application server. Sun has championed its ZFS next-generation filesystem, while Oracle has helped foster the btrfs project for Linux. More layoffs will follow the merger's completion, and many of the affected Sun employees are key contributors to these projects. There is also overlap in the Xen-based hypervisors and management consoles developed by Oracle and Sun, OVM and xVM respectively.

Oracle expects to see long-term strategic customer advantages in owning two key Sun software assets: Java and Solaris. Java is one of the computer industry's best-known brands and most widely deployed technologies, and it is the most important software Oracle has ever acquired. Oracle Fusion Middleware, Oracle's fastest-growing business, is built on top of Sun's Java language and software. Oracle claims it can now ensure continued innovation and investment in Java technology for the benefit of customers and the Java community.
The Sun Solaris operating system has historically been the leading platform for the Oracle database, Oracle's largest business. With the acquisition of Sun, Oracle can optimize the Oracle database for some of the unique, high-end features of Solaris. Oracle also remains committed to Linux and other open platforms.
But the fate of Sun's extensive server and storage business remains unclear. Industry pundits speculate that Oracle may experiment with fully integrated sales of servers and software, watching how key partners like Dell and HP respond, or shop the hardware groups around to buyers like Fujitsu. IBM probably would have made a stronger commitment to Sun's higher-end hardware.
Oracle CEO Larry Ellison, in a conference call following the announcement, said that acquiring Sun could enable Oracle to develop fully integrated systems. Since Oracle had partnered with HP late last year to produce and cross-sell the high-end Database Machine and Exadata Storage Server, Oracle may now want to sell more integrated hardware and software systems.
Oracle believes it can run Sun's businesses at "higher margins" and net $1-2 billion annually.
"The acquisition of Sun transforms the IT industry, combining best-in-class enterprise software and mission-critical computing systems," said Oracle CEO Larry Ellison. "Oracle will be the only company that can engineer an integrated system - applications to disk - where all the pieces fit and work together so customers do not have to do it themselves. Our customers benefit as their systems integration costs go down while system performance, reliability and security go up."
"Oracle and Sun have been industry pioneers and close partners for more than 20 years," said Sun Chairman Scott McNealy. "This combination is a natural evolution of our relationship and will be an industry-defining event."
Oracle had been maintaining its own Java application server as well as promoting the former BEA WebLogic Server. Sun brings with it GlassFish, in both open source and commercial flavors. Will Oracle maintain this as a third Java server option? GlassFish is the reference platform for Java EE applications. More importantly, will the ownership of Java by so large a software company as Oracle begin to deter other ISVs from committing new resources to Java application development?
This acquisition makes Oracle bigger, but will it be better? Is Oracle trying to preserve the revenue it has made running on Sun hardware by becoming the provider of Sun hardware in perpetuity? Sun's effort to find a buyer had left several CIOs feeling uncertain about their Sun investments. Moving to Linux on x86 hardware probably also meant moving their databases to MySQL or Postgres as Oracle replacements. Now Oracle becomes a one-stop shop and the proverbial 'one throat to choke'.
Oracle has been moving the traditional user conferences of its acquired products into the huge tent of Oracle OpenWorld. Will JavaOne and the MySQL user conference experience the same fate? Oracle OpenWorld already has over 40,000 participants. Adding JavaOne could push that to over 50,000, and may exceed the meeting capacity of downtown San Francisco.
Purchasing MySQL may have earned Sun some community cred, but it netted only some $38 million from the investment in 2008. Does Oracle expect to increase this significantly, or merely up-sell customers on the features of Oracle?
The Board of Directors of Sun Microsystems has unanimously approved the transaction. It is anticipated to close this summer, subject to Sun stockholder approval, certain regulatory approvals and customary closing conditions.
In April, Canonical announced that Ubuntu 9.04 Netbook Remix was free to download along with the simultaneous releases of Ubuntu 9.04 Desktop Edition and Ubuntu 9.04 Server Edition.
Users can now download the complete Ubuntu Netbook Remix to a USB flash drive directly from Ubuntu.com. Users can then install and run Ubuntu Netbook Remix on a wide range of the most popular netbook machines available in the market today.
Ubuntu 9.04 Netbook Remix has been fully tested for use on a range of popular netbook models.
Ubuntu 9.04 Desktop Edition is also available, and delivers faster boot times, some as short as 25 seconds. Enhanced suspend-and-resume features also give users more time between charges, along with immediate access after hibernation. Intelligent switching between WiFi and 3G environments has been broadened to support more wireless devices and 3G cards, resulting in a smoother experience for most users.
Ubuntu 9.04 also features OpenOffice.org 3.0, a complete office suite that is compatible with Microsoft Office.
Ubuntu 9.04 Server Edition enhancements include improved virtualization with the latest KVM features, clustering support in Samba file server and easier mail server setup with out-of-the-box Dovecot-Postfix integration.
Canonical has also worked to extend the range of enabled servers for Ubuntu 9.04, with 45 of the most popular mid-range servers from IBM, Dell, Sun, and HP tested in the Canonical labs.
Ubuntu 9.04 Server Edition will also preview Ubuntu Enterprise Cloud (UEC). Ubuntu is the first commercially-supported distribution to enable businesses to build cloud environments inside their firewalls. With Ubuntu 9.04 Server Edition, organisations can explore the benefits of cloud computing without the data or security issues associated with moving data to an external cloud provider. Following a successful beta program last year, Ubuntu Server Edition 9.04 will also be fully available on Amazon Elastic Compute Cloud (EC2).
Ubuntu Server includes a range of new and updated features, many of them fully supported, to boost efficiencies for system administrators running large systems in production environments.
Ubuntu 9.04 Server, Desktop and Netbook Remix can be accessed in a number of ways:
Visit http://www.ubuntu.com/getubuntu for a free download;
Visit http://shop.canonical.com to purchase a CD, flash drive or DVD;
Visit http://shipit.canonical.com to request a free CD.
In March, Novell announced the general availability of SUSE Linux Enterprise 11, a mission-critical Linux platform with complete support from Novell and its global partner ecosystem.
SUSE Linux Enterprise 11 contains major enhancements to SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop (SLES and SLED) and delivers two new extensions: SUSE Linux Enterprise Mono Extension, which enables customers to run fully supported Microsoft .NET-based applications on Linux, and SUSE Linux Enterprise High Availability Extension, a clustering product that ensures uptime for mission-critical applications.
SUSE Linux Enterprise 11 runs on the leading hardware platforms and will be certified and supported for Amazon's Elastic Compute Cloud (EC2). In addition, SUSE Linux Enterprise 11 (SLE 11) has been optimized to run at near-native performance on all major hypervisors, including VMware ESX, Microsoft Hyper-V and Xen.
Novell is delivering SUSE Linux Enterprise JeOS (Just enough Operating System) and a suite of tools that enables ISVs to assemble a virtual appliance with just the pieces of SLE necessary to support their specific application. Uniquely, appliances that pass Novell's supportability algorithm will receive technical support for their custom JeOS configuration of SLE 11.
For deploying Linux in the cloud, SLE 11 will be certified and supported in Amazon Elastic Compute Cloud (EC2). Currently, customers deploying Linux desktops have been able to obtain SUSE Linux Enterprise Desktop preloaded from leading hardware vendors, including Dell and HP.
Novell also announced the availability of Novell ZENworks Linux Management 7.3, which extends policy-driven automation to SLE 11. Managing both desktop and server systems, ZENworks Linux Management makes it easy to deploy, manage and maintain Linux resources with advanced policies for desktops and servers. Buyers of SLE 11 will also be able to download licensed copies of the Likewise application for joining Active Directory domains in April or May.
Novell has engineered SLE 11 to work seamlessly with Microsoft Windows in the areas of cross-platform virtualization, systems management, identity/directory federation, document compatibility, Moonlight (Microsoft Silverlight on Linux) and desktop accessibility for the disabled. In addition to new support for Silverlight, SLE 11 features the ability to play Windows multimedia file formats and the latest version of OpenOffice.org Novell Edition that supports a range of Microsoft Office file formats.
SUSE Linux Enterprise Mono Extension is a new product that provides the first commercial support for the open source Mono project's application platform, enabling enterprises to seamlessly run .NET applications on Linux without having to recompile. SUSE Linux Enterprise Mono Extension allows organizations to consolidate their .NET applications onto Linux, dramatically saving costs. Novell is also offering the SUSE Linux Enterprise Mono Extension for customers performing mainframe-based workload consolidation.
SLE 11 now supports the swap-over-NFS (Network File System) protocol to leverage remote storage for local server needs and avoid costly application downtime.
SLED 11 and SLES 11, SUSE Linux Enterprise Mono Extension and Novell ZENworks Linux Management 7.3 are available now. SUSE Linux Enterprise JeOS will be available in April and SUSE Linux Enterprise High Availability Extension will be available in the second quarter of this year. Later in 2009, Novell plans to release updates to SUSE Linux Enterprise Point of Service, SUSE Linux Enterprise Real Time Extension and SUSE Linux Enterprise Thin Client.
Novell is also featuring SLE 11 as part of its around-the-world data center evolution tour. The first events will be in Boston and Irvine, Calif., and will also showcase solutions from PlateSpin Workload Management. For more information and to register for the tour, visit http://www.novell.com/events/tours/datacenter.
Opinions in the Blogosphere on SLED 11 and SLES 11 run both hot and cold. See:
http://blogs.computerworld.com/novells_marriage_of_linux_and_windows
http://blogs.zdnet.com/perlow/?p=9716
http://content.zdnet.com/2346-17924_22-280506.html
After an engineering freeze and RC updates in April, the Fedora community planned to release the preview for Fedora 11 at the end of April. GA for Fedora 11 is expected at the end of May.
The F11-Beta-x86_64-Live-KDE.iso was re-issued via BitTorrent as well as to the mirrors; the original image had accidentally been composed with 32-bit packages instead of 64-bit packages. There was also a correction to the checksum on the mirrors.
More information on Fedora and the upcoming release is available at http://fedoraproject.org.
MEPIS LLC has released SimplyMEPIS 8.0.06, an update to the community edition of MEPIS 8.0. SimplyMEPIS 8.0 utilizes a Debian Lenny stable foundation enhanced with a Long Term Support kernel, key package upgrades, and the MEPIS Assistant applications to create an up-to-date, ready to use desktop computer system.
The updated components on the SimplyMEPIS ISOs include recent updates from the Debian Lenny pool and also Linux kernel 2.6.27.21, Firefox 3.0.9, jbidwatcher 2.0.1, and gutenprint 5.2.3. In addition, minor tweaks have been applied to the MEPIS Installer and the MEPIS utilities.
Recently the MEPIS package pool has received new updates for Thunderbird 2.0.0.19, shorewall 4.2.6, tightvncserver 1.3.9, openswan 2.6.20, libvirt 0.6.2 virtinst 0.400.3, virt-manager 0.7.0, qemu 0.10.2 and webmin 1.460.
Founded in 2002, MEPIS LLC develops and maintains MEPIS Linux as a foundation that allows MEPIS business partners to build and deploy virtualized data center, secure server and desktop solutions.
ISO images of MEPIS community releases are published to the 'released' subdirectory at the MEPIS Subscriber's Site and at MEPIS public mirrors, and more information (and the download links) can be found on the project's page, at http://www.mepis.org/
In March, Protecode launched Release 2.0 of its flagship software governance products to enable the safe adoption of open source software and the life-time management of enterprise code portfolios. The Enterprise IP Analyzer is a software solution that analyzes and accurately identifies all code in any directory, producing customizable reports on the licensing and copyright obligations, as well as other attributes of the binary or source code.
The Developer IP Assistant, available as an Eclipse plug-in, is the industry's first preventive tool for automated real-time software IP management. Release 2.0 provides enhancements to its IP reports and increases the depth and breadth of the pedigree discovery process in enterprise code portfolios.
Release 2.0 includes a number of new and enhanced features.
For more information, see: http://www.protecode.com/enterprise-ip-analyzer.php.
The key announcements at the April MySQL User conference included MySQL 5.4, a new version designed to deliver significant performance and scalability improvements to MySQL applications, and MySQL Cluster 7.0, a new release of its high-availability open source database software for real-time, mission-critical applications. New features include significantly enhanced performance and scalability; support for popular LDAP directories; and simplified cluster back-up and maintenance. Information on MySQL Cluster 7.0 - including downloads, evaluation guides, and performance benchmarks - is available now at http://www.mysql.com/cluster.
A preview version of MySQL 5.4 is available now for download at http://www.mysql.com/5.4.
MySQL 5.4 includes performance and scalability improvements enabling the InnoDB storage engine to scale up to 16-way x86 servers and 64-way CMT servers. MySQL 5.4 also includes new subquery optimizations and JOIN improvements, resulting in 90% better response times for certain queries. These performance and scalability gains are transparent and don't require any additional application or SQL coding to take advantage of them.
In the conference's opening keynote, Karen Tegan Padir, vice president of Sun's MySQL and Software Infrastructure Group, addressed the MySQL community:
"Without any modifications to your applications, MySQL 5.4 will transparently increase the performance and scalability of your applications, to enable them to scale under more demanding user and data processing loads. MySQL 5.4 is also better suited for scale-up deployments on SMP systems. Please download today's preview version and send us your feedback - we want this to be the fastest, highest-quality release of MySQL ever."
"Our initial tests of MySQL 5.4 show our application performance is up to 40% faster right out-of-the-box," said Phil Hildebrand, manager of Database & Deployments at thePlatform (www.theplatform.com). "We'll continue to follow this release closely for additional improvements."
Based on community feedback, an estimated release date for the GA version will be announced later this year. The preview version of MySQL 5.4 is currently available for download at http://www.mysql.com/5.4 for 64-bit versions of the Linux and Solaris 10 Operating Systems.
MySQL Cluster combines the world's most popular open source database with a fault tolerant "shared nothing" architecture, enabling organizations to deploy real-time mission-critical database applications reaching 99.999% ("five nines") availability. MySQL Cluster 7.0 can deliver predictable, millisecond response times while servicing tens of thousands of transactions per second. Support for in-memory and disk based data, automatic data partitioning with load balancing and the ability to add nodes to a running cluster with zero downtime allows almost unlimited database scalability to handle the most unpredictable workloads.
MySQL Cluster 7.0 features a number of new carrier-grade enhancements.
MySQL Cluster 7.0 is scheduled to be generally available this quarter under the GPL open source license for a range of popular operating systems, including Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Solaris 10, and Mac OS X.
Talkback: Discuss this article with The Answer Gang
Deividson was born in União da Vitória, PR, Brazil, on 14/04/1984. He became interested in computing when he was still a kid, and started to code when he was 12 years old. He is a graduate in Information Systems and is finishing his specialization in Networks and Web Development. He codes in several languages, including C/C++/C#, PHP, Visual Basic, Object Pascal and others.
Deividson works in Porto União's Town Hall as a Computer Technician, and specializes in Web and Desktop system development, and Database/Network Maintenance.
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
Howard maintains the Technology-Events blog at
blogspot.com from which he contributes the Events listing for Linux
Gazette. Visit the blog to preview some of the next month's NewsBytes
Events.
By Bob Hepple
... make your bash(1) scripts as pretty as your C code, but somehow there's never time. If it's just for your own use, then maybe you don't care - until you forget what foobar does, and you try foobar -h. Uh oh, yes, now I remember: -h stands for "halt". Whoops.
This article explains a simple way to spruce up your scripts so that they act predictably in the Unix way concerning options and arguments. But first of all, a little discussion of the problems the bash(1) developer faces.
When it does matter, when other people are going to be using your script, or when you can't rely on your own memory (don't worry, it'll happen to you too one day), it's a pain to code responses in bash(1) to all the possible inputs that people can make. Other people can be pretty inventive in breaking things with syntax you never imagined.
Then there's the matter of standards: Did you even know that there's a GNU standard for this kind of stuff?
For example, does your script support long options (--verbose) as well as short ones (-V)? Long options are great for making scripts self-documenting, short ones for power users (and those of us still typing on glass teletypes).

Is your script flexible about spacing around option arguments? e.g., -n 123 should be equivalent to -n123 and to --number 123 and --number=123.

Can long options be given as the smallest unique substring, e.g., --numb as a shorthand for --number?

Can you fold short options together, e.g., -t -s -u as -tsu?

Does -h, --help consistently display help (to stdout, thank you very much, so that the user can pipe it into a pager program like less(1))?

How about -v, --verbose or -V, --version? ... and so on. These are all things that users (OK, Unix-savvy users) expect - so adhering to these conventions makes everyone's life easier.
But it's really quite tedious and surprisingly hard to get all this right when you write your own bash(1) code, and it's not surprising that so few scripts end up being user-hardened.
Where it really hurts is when you try to maintain that bash(1) code. Even with getopt(1) this can be a pain (actually, the syntax for getopt(1) itself is pretty arcane), because the getopt(1) command structure requires you to repeat the option letters and strings in three places: in the option string passed to getopt(1) itself, in the case statement that processes the parsed options, and in the help text.
This repetition introduces wide margins for bugs to creep into the code.
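For the curious, here's a minimal sketch of that classic getopt(1) idiom (the option names are invented for illustration). Note how "h" and "n" each appear in the getopt call, in the case statement, and in the usage string - the three places that must be kept in sync by hand:

#!/bin/bash
PROG=$(basename $0)
# parse -h/--help and -n/--number (which takes an argument):
TEMP=$(getopt -o hn: --long help,number: -n "$PROG" -- "$@") || exit 1
eval set -- "$TEMP"
while true; do
    case "$1" in
        -h|--help)   echo "Usage: $PROG [-h|--help] [-n|--number NUM] args..."; exit 0 ;;
        -n|--number) NUMBER="$2"; shift 2 ;;
        --)          shift; break ;;
    esac
done
echo "NUMBER=$NUMBER, args=$@"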
In compiled languages, there are wrapper functions to put around getopt(3) that help considerably in reducing the labour and opportunity for error, in writing this sort of code. They include GNU's own argp(3) and Red Hat's popt(3).
In python(1) you can use OptionParse.
For bash(1) scripts, there hasn't been anything since getoptx(1) died out - but, in all my own scripts for the past few years, I've been using a library that I wrote: process-getopt(1). It's a wrapper around getopt(1) that makes life considerably easier for bash(1) scripters, maintainers, and users.
As an example, here's a tiny script that uses it to process its command line:
#!/bin/bash
PROG=$(basename $0)
VERSION="1"
USAGE="A tiny example"

# call process-getopt functions to define some options:
source ./process-getopt

SLOT_func() { [ "$1" ] && SLOT="yes"; }    # callback for SLOT option
add_opt SLOT "boolean option" s "" slot

TOKEN_func() { [ "$1" ] && TOKEN="$2"; }   # callback for TOKEN option
add_opt TOKEN "this option takes a value" t n token number

add_std_opts    # define the standard options --help etc.

# The next 4 lines call the callbacks and remove the options from the command line:
TEMP=$(call_getopt "$@") || exit 1
eval set -- "$TEMP"
process_opts "$@"
shift "$?"      # remove the options from the command line

# The hard lifting is done - $@ just contains our arguments:
echo "SLOT=$SLOT"
echo "TOKEN=$TOKEN"
echo "args=$@"
... and you're done. Here's the sort of output you get without any further coding:
$ tiny --help
Usage: tiny [-shVvq-] [--slot --help --version --verbose --quiet] [-t,--token= ]
A tiny example

Options:
    -s, --slot            boolean option
    -t n, --token=number  this option takes a value
    -h, --help            print this help and exit
    -V, --version         print version and exit
    -v, --verbose         do it verbosely
    -q, --quiet           do it quietly (negates -v)
    --                    explicitly ends the options
Here's an example of using the options and arguments on the command line:
$ tiny -s --token="my token" arg1 arg2
SLOT=yes
TOKEN=my token
args=arg1 arg2
process-getopt(1) works with bash-2.04 and later and lives at:
http://sourceforge.net/projects/process-getopt
It's pretty easy to convert your existing scripts to use process-getopt(1): Follow the samples and manuals here:
http://bhepple.freeshell.org/oddmuse/wiki.cgi/process-getopt
Here's a direct link to the manual:
http://bhepple.freeshell.org/scripts/process-getopt.pdf
Enjoy!
Talkback: Discuss this article with The Answer Gang
Bob Hepple is the youngest grumpy old man at Promptu Corp on the Gold Coast in Australia and earned his UNIX stripes at Hewlett-Packard in 1981. Since then he's worked in Asia and Australia for various UNIX vendors and crypto companies - but always in UNIX and GNU/Linux.
Originally a Geordie land-surveyor, he now thinks he's dinky-di after 30 years of Oz - but apparently the pommie accent gives the game away.
In the beginning, there was compress. People used it on their data files, and it was good. Their files became smaller, and more data could be crammed onto disc platters than ever before. The joys of LZW compression were enjoyed by many.
In 1992, something better came along: gzip. It used a different compression algorithm (LZ77 plus Huffman coding) that provided even smaller files. As a bonus, it was free of any pesky patent-encumbered algorithms. The masses were even happier, as they could cram even more data into their systems, and no royalty payments for patents were required.
In 1996, Julian Seward released bzip2, which used a combination of the Burrows-Wheeler transform and other compression techniques to achieve compression performance even better than gzip's. It required more CPU power and more memory, but, with the ever-escalating capabilities of computers, this became less and less of an issue over time.
For many years, gzip and bzip2 were the de facto compression standards in the world of free software, with gzip being used on time-sensitive compression tasks, and bzip2 being used when maximum file compression was desired.
However, in the year 2000, something new came along. Igor Pavlov released a program called 7-zip, which featured a new algorithm called LZMA. This algorithm provided very high compression ratios, although it did require major RAM and CPU time.
Unfortunately, there were two problems that made 7-zip less than ideal for Linux/BSD/Unix users. The first is that it was written for Microsoft Windows. Eeek! This was thankfully addressed in 2004, with the release of a cross-platform port called p7zip. A second problem is that 7-zip (and p7zip) used a file format called .7z. This is a multi-file archive format similar in functionality to .zip files. Unfortunately, with its Windows-based roots, the .7z file format made no provision for Unix-style permissions, user/group information, access control lists, or other such information. These limitations are show-stoppers for people doing backups on multi-user systems.
Then in 2004, Igor Pavlov released the LZMA SDK (Software Development Kit). Though intended for application writers, this development kit also contained a little gem of a command-line utility called lzma_alone. This program could be used much like gzip and bzip2, to create .lzma files. When combined with tar, this provided excellent file compression with proper Unix compatibility.
Less than a year after the release of the LZMA SDK, Lasse Collin released the LZMA Utils. This was initially a set of wrapper scripts around lzma_alone that provided lzma (with command-line options very similar to those of gzip and bzip2) instead of the less common p7zip-style options used by lzma_alone. Later lzma releases were entirely in C. Then, in 2009, Lasse Collin released the XZ Utils, xz being the main utility. This new utility continues to use LZMA compression, but, instead of producing raw LZMA data streams, it wraps the resulting data stream in a well-defined file format containing various magic bytes, stream flags, and cyclic redundancy checks. Thus was born the .xz file format.
In 2008, Antonio Diaz released a similar utility called lzip. Like xz, it uses LZMA compression, but, instead of creating .xz files, it creates .lz files. This format is different in detail, but has many of the same features as .xz files, such as magic bytes, cyclic redundancy checks, etc. Additionally, lzip can create multi-member files, and can split output into multiple volumes.
As of this writing, there are now four command-line utilities (and three file formats) that use LZMA, providing excellent file compression results: lzma_alone by Igor Pavlov, lzma and xz by Lasse Collin, and lzip by Antonio Diaz. Does this mean we're in for a VHS/Betamax-style format war? It's hard to say. (Fortunately, you're not limited to using just one. These are utilities, not VCRs. There's plenty of room for all of them on your hard drive. I have all four on mine.)
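If you'd like to try them side by side, here's a quick sketch (the file names are made up; -k keeps the original file, and -9 requests maximum compression in both xz and lzip):

tar -cf myproject.tar myproject/
xz -9 -k myproject.tar      # produces myproject.tar.xz
lzip -9 -k myproject.tar    # produces myproject.tar.lz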
I myself prefer lzma_alone, as it's maintained by the person who actually invented the LZMA algorithm and understands it best. However, the file format is minimal, and xz and lzip offer significant advantages with their magic bytes and data integrity checks. It's also difficult to build lzma_alone, and it has no manpage. The XZ Utils are the easiest to build (featuring a modern autotools-based configure script), but they currently lack manpages for the main xz and lzma utilities. Lzip falls in between: it requires some manual hacking to get the compiler flags you want, but it does come with a nice manpage.
At some point, one of these may become the predominant way of using LZMA compression, but, as of today, you can expect to see all three file formats out there "in the wild". I know I have.
How do these utilities compare in compression performance? It turns out there's little difference. Here's a table of results, showing how lzma_alone, xz, lzip, bzip2, gzip, and compress perform on the source tarball from ghostscript-8.64. (I skipped Lasse Collin's lzma, since it's now just a symlink to xz.) Exact versions were lzma_alone-4.65, xz-4.999.8beta, lzip-1.5, bzip2-1.0.5, gzip-1.3.12, and ncompress-4.2.4.2.
gs.tar       65003520 bytes  (original file)
gs.tar.lzma  12751330 bytes  (159.05s to compress, 1.48s to decompress)
gs.tar.xz    12768488 bytes  (155.17s to compress, 1.54s to decompress)
gs.tar.lz    12804165 bytes  (161.12s to compress, 1.97s to decompress)
gs.tar.bz2   16921504 bytes  ( 14.72s to compress, 3.45s to decompress)
gs.tar.gz    19336239 bytes  (  7.31s to compress, 0.63s to decompress)
gs.tar.Z     29467629 bytes  (  2.39s to compress, 0.78s to decompress)
Compression results on all three LZMA-based utilities were quite similar, with lzma_alone doing the best by a whisker. All three did much better than bzip2, gzip, and compress, though taking much longer. Lzip decompression was about 30% slower than the other two LZMA-based utilities, but it's still markedly faster than Bzip2.
How can you take advantage of these new utilities? Well, if you're lucky, your distribution will have one or more of these available as pre-compiled packages. I run Debian (Lenny) 5.0, which has lzma_alone and an earlier version of Lasse Collin's LZMA Utils (which contains lzma, but not xz) available. For those not provided by your distribution, you'll have to download the source code and compile it yourself. Here are links to the three major programs:
http://www.7-zip.org/sdk.html (for lzma_alone)
http://tukaani.org/xz/ (for xz and lzma)
http://www.nongnu.org/lzip/lzip.html (for lzip)
For those who wish to build lzma_alone, I offer this tarball: lzma_alone_patches.tar.bz2, which contains some minimal patches, build instructions, and a manpage. To use it, you'll still need to download the original LZMA SDK from the Web site mentioned above. As for the XZ Utils and Lzip, they are quite straightforward to build and install.
How can you convert existing tarballs from one file compression scheme to another? With Unix pipes, of course. The following examples show how:
gzip -c -d source.tar.gz | lzma_alone e -si source.tar.lzma
bzip2 -c -d source.tar.bz2 | lzma -c > source.tar.lzma
gzip -c -d source.tar.gz | xz -c > source.tar.xz
bzip2 -c -d source.tar.bz2 | lzip -c > source.tar.lz
And how can you decompress these incredibly compact new tarballs?
lzma_alone d source.tar.lzma -so | tar -xvf -
tar --use-compress-program=lzma -xvf source.tar.lzma
tar --use-compress-program=xz -xvf source.tar.xz
tar --use-compress-program=lzip -xvf source.tar.lz
For those who have many tarballs to convert, you might consider downloading and installing the littleutils package. This package contains three scripts (to-lzma, to-xz, and to-lzip) that will convert multiple gzip- and bzip2-compressed files into .lzma, .xz, or .lz format, respectively. The -k option is particularly useful, as it will delete the original file only if the new one is smaller. Otherwise the original file will be preserved. To convert an entire directory of tarballs to .lzma format, simply type the following:
to-lzma -k *.tar.gz *.tar.bz2
After that shameless plug for my own software, I'll conclude this article by urging people to start using at least one of these LZMA-based compression utilities, particularly if you distribute compressed tarballs. LZMA-compressed files take less time to download, and less time to decompress (at least compared to bzip2). Even in a world of broadband Internet connections, multi-gigahertz processors, and cavernous hard drives, these utilities will save time and space.
Talkback: Discuss this article with The Answer Gang
Brian Lindholm is a Virginia Tech graduate and middle-aged mechanical engineer who started programming in BASIC on a TRS-80 Model I (way back in 1980). In the late eighties, he moved to Pascal and C on an IBM PC-compatible.
Over the years, Brian became increasingly disgruntled with the instability and expense of the various Microsoft operating systems. In particular, he hated not being in full control of his system. MOST fortunately for him, however, he had a college roommate who ran Linux (way back in the Linux 0.9 and Slackware 1.0 days). That introduction was all he needed.
Over the years, he's slowly learned more and more, and now manages to keep his Debian system running happy and stable (even through four major upgrades: 2.2 to 3.0 to 3.1 to 4.0 to 5.0). [A point of note: His Debian system has NEVER crashed on its own. EVER. Only power failures, attempts to boot off the wrong partition, errant hits of the reset button, a cracked DVD, and a particularly flaky IDE Zip drive ever managed to take it down.] He loves VIM and has found Perl amazingly useful at work.
In his non-Linux life, Brian helps design power generation equipment (big power plant stuff) for a living, occasionally records live music for people, reads too much science fiction, and gets out on the Appalachian Trail as often as he can.
By Joey Prestia
In information technology, security is never the result of "just one thing"; in other words, there is no panacea for digital security. The best security comes from layering multiple protections: iptables firewalling, Mandatory Access Controls such as SELinux, and Discretionary Access Controls such as permissions and rights. Finally, consider adding TCP Wrappers to complement your firewall rules.
[ As security guru Bruce Schneier often points out, "security is not a product; it's a process". The largest part of security is not even related to the tools you use; it's the awareness of the continual necessity of staying informed, updated, and at the top of the game. The tools are one of the means to that end. -- Ben ]
TCP Wrappers work in the manner of a host-based Access Control List. They are used to filter network access to Internet Protocol (IP) services running on Linux, Unix, or BSD, allowing host or network addresses to be used as indicators to filter and implement a layer of access control. They additionally extend the capabilities of xinetd-controlled daemons. Using this technique, connection attempts can be logged and restricted, and messages can be returned to the client, adding an extra layer of security to your environment. TCP Wrappers also allow run-time reconfiguration without restarting or reloading the services they protect.
Connections that are permitted are simply allowed; those that are denied will fail. Some services will report a particular error message, as do SSH and vsftpd. Here is an example of an exchange using TCP Wrappers to protect services.
An example of SSH being disallowed from the client's perspective:
[root@alien ~] ssh [email protected]
ssh_exchange_identification: Connection closed by remote host
An example of FTP being disallowed:
[root@alien ~] ftp 192.168.0.15
Connected to 192.168.0.15.
421 Service not available.
When a connection is attempted to a service using TCP Wrappers, the following occurs (the order of these steps is important, and rules are processed line-by-line):

1. /etc/hosts.allow is checked first; if a daemon/client pair matches a rule, the connection is allowed.
2. /etc/hosts.deny is checked next; if a daemon/client pair matches a rule, the connection is denied.
3. If no match is found in either file, the connection is allowed.
To uncover what processes/daemons use TCP Wrappers, do the following:
strings -f <program_name> | grep hosts_access
An example of output from this command:
[root@alien ~] strings -f /usr/sbin/* | grep hosts_access
/usr/sbin/in.tftpd: hosts_access
/usr/sbin/sshd: hosts_access
/usr/sbin/stunnel: hosts_access
/usr/sbin/stunnel: See hosts_access(5) manual for details
/usr/sbin/tcpd: hosts_access_verbose
/usr/sbin/vsftpd: hosts_access
/usr/sbin/xinetd: hosts_access

[root@alien ~] strings -f /sbin/* | grep hosts_access
/sbin/portmap: hosts_access_verbose
When using TCP Wrappers, bear certain things in mind. First, order matters. Second, the search stops at the first match. Any changes to /etc/hosts.allow or /etc/hosts.deny take effect immediately, without having to restart any services. As with iptables, order is crucial in these rules. Let's take a look at the format for setting rules. The basic format of entries in /etc/hosts.allow and /etc/hosts.deny is:
daemon_list : client_list : shell_command

daemon_list - a list of one or more daemon process names or wildcards.
client_list - a list of one or more host names, host addresses, patterns, or wildcards that will be matched against the client host name or address.
Examples of /etc/hosts.allow and /etc/hosts.deny rules:
/etc/hosts.allow
#
# hosts.allow   This file describes the names of the hosts
#               allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
#
# My new rule below
sshd : 192.168.0.14
/etc/hosts.deny
#
# hosts.deny    This file describes the names of the hosts
#               *not* allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
#
# The portmap line is redundant, but is left to remind you that
# the new secure portmap uses hosts.deny and hosts.allow. In particular,
# you should know that NFS uses portmap!
#
# My new rule below
ALL : ALL
Observe the results of my new rules:
From station14.example.com (192.168.0.14)
Last login: Tue Apr 7 06:45:09 2009 from station14
[root@station15 ~]#
From alien.example.com (192.168.0.247)
[root@alien ~]# ssh 192.168.0.15
ssh_exchange_identification: Connection closed by remote host
[root@alien ~]#
(WARNING: Always make sure both /etc/hosts.allow and /etc/hosts.deny end with a newline: in the vi editor, go to the end of the last line in insert mode and hit the carriage return (<enter>) key. Failure to perform this step may cause unwanted consequences with TCP Wrappers.)
Here are some rule examples. Which file you put a rule in determines how it works, unless you add an 'allow' or a 'deny' at the end of the rule.
sshd : .example.com
vsftpd : 192.168.
ALL : 192.168.1.0/255.255.255.0
sshd : station1.example.com : allow
sshd : station15.example.com : deny
vsftpd : ALL EXCEPT *.hacker.org
The 'twist' directive is used to replace the service with a selected command. It is commonly used to set up honeypots. Another use for it is to send messages to connecting clients. The 'twist' command must be used at the end of a rule line. Here is an example of using 'twist' in /etc/hosts.deny to send a message to a host that has abused FTP services, via the echo command:
Example using twist in /etc/hosts.deny
vsftpd : station6.example.com \
   : twist /bin/echo "Service suspended for abuse!"
The 'spawn' directive causes a child process to be launched. This can be very handy when used to generate special access log files. You can also run custom scripts in the background with this directive that will be unseen by the user. In this example, we will create a timestamp in a custom log so we can monitor FTP connections:
Example in /etc/hosts.allow
vsftpd : .example.com \
   : spawn /bin/echo $(/bin/date) access granted to %h >> /var/log/ftp_access.log
These are the character expansions you can take advantage of in either /etc/hosts.allow or /etc/hosts.deny:
% EXPANSIONS

The following expansions are available within shell commands:

%a   The client host address
%A   The server's host address
%c   Client information: user@host, user@address, a host name, or just an address
%d   The daemon process name
%h   The client host name or address, if the host name is unavailable
%H   The server host name or address, if the host name is unavailable
%n   The client host name (or "unknown" or "paranoid")
%N   The server host name (or "unknown" or "paranoid")
%p   The daemon process id
%s   Server information: daemon@host, daemon@address, or just a daemon name
%u   The client user name (or "unknown")
%%   Expands to a single "%" character. (Characters in % expansions that may confuse the shell are replaced by underscores.)
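Putting these together, here's a hypothetical /etc/hosts.deny entry (the log file name is made up) that records each refused SSH connection before dropping it:

sshd : ALL \
   : spawn /bin/echo "$(/bin/date) %d denied connection from %c" >> /var/log/tcpwrap_denied.log

Combined with a rule in /etc/hosts.allow listing your trusted hosts for sshd, this quietly logs everyone else who knocks on the door.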
TCP Wrappers make a great complement to your current security measures. Remember: always thoroughly test any security implementation before moving to a production platform!
Talkback: Discuss this article with The Answer Gang
Joey was born in Phoenix and started programming at the age of fourteen on a Timex Sinclair 1000, driven by hopes that he might be able to do something with this early model computer. He soon became proficient in the BASIC and Assembly programming languages. Joey became a programmer in 1990 and added COBOL, Fortran, and Pascal to his repertoire of programming languages. Since then, he has become obsessed with just about every aspect of computer science. He became enlightened and discovered Red Hat Linux in 2002, when someone gave him Red Hat version six. This started off a new passion centered around Linux. Currently, Joey is completing his degree in Linux Networking and working on campus for the college's Red Hat Academy in Arizona. He is also on the staff of the Linux Gazette as the Mirror Coordinator.
More XKCD cartoons can be found here.
Talkback: Discuss this article with The Answer Gang
I'm just this guy, you know? I'm a CNU graduate with a degree in physics. Before starting xkcd, I worked on robots at NASA's Langley Research Center in Virginia. As of June 2007 I live in Massachusetts. In my spare time I climb things, open strange doors, and go to goth clubs dressed as a frat guy so I can stand around and look terribly uncomfortable. At frat parties I do the same thing, but the other way around.