Sunday, December 31, 2006

memes that damage debian

The Debian project is afflicted with several damaging memes. One that is causing problems at the moment is the idea that life should be fair. Unfortunately life is inherently unfair: it's not fair that those of us who were born in first-world countries (the majority of Debian developers) have so many more opportunities than most people who are born in "developing" countries, and things just continue to be unfair as you go through life. Unfair things will happen to you; deal with it and do what is necessary to have a productive life!

When one developer has regular long-term disputes with many other developers, the conclusion is that the one person who can't get along is at fault. We can debate whether one or two significant disputes are a criterion for this or whether having a dozen developers disagreeing with them is the final point. But the fact is that if there is a large group of people who work together well and an individual who can't work with any of them, then there is only one realistic option: the individual needs to go and find some people that they can work with - they can resign or be expelled. The fact that something slightly unfair might have happened a year ago is no reason for pandering to an anti-social developer. The fact that expelling a developer for being anti-social is unfair to them is no reason for damaging the productivity of all Debian developers.

Another problematic meme is the idea that we have to tolerate everyone - even those who are intolerant (known as the Limp Liberal meme). When someone has no tolerance for others (e.g. being racist or practicing sexual discrimination) then they have no place in a community such as Debian. They need to be removed sooner rather than later. All Debian developers know the problems caused by deferring such expulsion.

The final damaging meme that I have observed is you can't force a volunteer to do any work. On its own that statement is OK, but the interpretation commonly used in Debian is you can't take their job away from them either. The most common example of this is when a developer is not maintaining a package and someone else does an NMU (non-maintainer upload) to fix a bug (usually a severe one); the developer then flames the person who did this. It seems to be believed that a Debian developer owns their packages and has a right to prevent other people from working on them. This attitude also extends to all other aspects of Debian: there are many positions of responsibility in Debian that are not being adequately performed and for which volunteers are offering to help out but being refused.

The idea of the GPL is that when a program is not being developed adequately it can be taken over by another person. However when that program is in a Debian package the developer who owns it can refuse to allow this.

Saturday, December 30, 2006

more about vista security

While reading the discussion of Vista security on Bruce Schneier's blog it occurred to me that comparing the issues of DRM that face MS with the issues faced by SE Linux developers provides some benefits.

SE Linux is designed to enable the owner of a computer to effectively enforce security policies to protect their system integrity and the confidentiality of their data. Some of the SE Linux users (military users) use all the available features, but most people only use the targeted policy which provides a sub-set of the system integrity and data confidentiality protections that are available to give a greater ease of use.

Fedora and Red Hat Enterprise Linux ship with the SE Linux targeted policy enabled by default. This policy is not something that most users notice. The main program that is commonly used which might have an issue with the default SE Linux policy is Apache. Fedora and RHEL systems that do not run Apache (which is most of them) can run SE Linux with almost no costs.

It seems clear to me that there is no good reason for disabling SE Linux by default. There are reasons for running a particular daemon in the unconfined_t domain via the FOO_disable_trans boolean; e.g. to run Apache without restrictions you would type the following commands:

setsebool -P httpd_disable_trans 1
/etc/init.d/httpd restart

In spite of the SE Linux targeted policy being so easy to use, and despite the fact that it prevents certain daemon compromises from allowing immediate control of the system and also stops some kernel exploits from working, there are still some people who turn it off when installing Fedora and RHEL systems and advise others to do so.

Given that some people have such great fear of using a technology that is specifically designed for their own benefit, I find it difficult to imagine that any users will be inclined to accept the MS DRM technology that is specifically designed to go against their interests.

ESR claims that the 64bit transition is the critical period for Linux to move on the desktop. While he makes many interesting observations I don't find his argument convincing. Firstly, current P4 CPUs with PAE can handle significantly more than 4G of RAM (64G is possible now). Secondly, it's quite possible to run a 64bit kernel with 32bit applications; this means that you can use a 64bit kernel to access >64G of RAM with each 32bit application getting direct access to something slightly less than 4G of virtual memory. As ESR's point seems based on total system memory the 4G per application doesn't seem to be an issue. As an aside, the only applications I've used which could benefit from >4G are database servers (in the broadest interpretation - consider an LDAP server as a database) and JVM environments. Again it would not be difficult to ship an OS with mostly 32bit code that has a 64bit kernel, JVM, and database servers.

I hope that Vista will be a good opportunity for mass transition to Linux. Vista offers little that users desire, many things that will hurt them, and is expensive too!

With Vista you pay more for the OS and more for the hardware (Vista has the usual excessive Windows hardware requirements plus the extra hardware for TPM encryption) without providing anything substantial in return.

What I want to see next is support for Security Enhanced X to protect desktop environments against hostile X clients. This will make a significant difference to the security of Linux desktop environments and provide another significant benefit for choosing Linux over Windows. While MS is spending their effort in making their OS act against the best interests of the users we will keep making Linux enforce the access controls that users need to protect their systems. Hopefully Linux users will choose to use SE-X, but if they don't they are given the freedom to make that choice - unlike the poor sods who use Windows.

Friday, December 29, 2006

email disclaimers

Andre Pang blogs about the annoyance of email disclaimers. For a while I had a .sig indicating that it was a condition of sending email to me that the sender agreed that any legalistic terms in their .sig did not apply to me.

220 smtp.sws.net.au ESMTP Postfix - by sending email to this server you agree that any legalistic sig in your message does not apply to anyone who receives the message through this service.

Now I have changed my Postfix greeting to the above. Anyone who sends me mail agrees that their .sig does not apply to me. Suggestions for improvements to the above text are welcome.

source dump blog

Inspired by Julien Goodwin's post I created a new blog for myself named Source Dump. Source is different to other blog content in that updates to fix bugs may be required (generally I believe that ideally blog posts should not be edited once published, and in the rare cases where editing is necessary all such edits should be appended to the end), and in that it may be longer than is suitable for a Planet feed. Finally, Planet and other aggregators may mess up source code. If the source is only visible through my blog then I can be reasonably sure that it is usable by everyone who sees it.

So when I have source code to publish that is related to blog postings or mailing list postings I now have a place for it (I haven't yet made a posting).

I will not be submitting my Source Dump blog for syndication anywhere, but if anyone wants to syndicate it they are welcome to do so. I don't desire that it not be syndicated, I merely recommend that it not be syndicated for the convenience of readers due to the potential large size of postings and the potential for postings to be broken by aggregation.

music for children

Adam Rosi-Kessel made an interesting post about They Might Be Giants producing children's music because their original fan base are now old enough to have children.

From casual inspection of the crowds at events such as Linux Conf AU it seems to me that many serious Linux people are also at the right age to have young children, and several blogs that are syndicated on Linux Planets provide evidence of this. Therefore it seems that there is a market for Linux related children's music.

Many aspiring artists complain about the difficulty of establishing a reputation. I think that if someone was to release OGG and FLAC recordings of a children's version of the Free Software Song under a Creative Commons license then they would get some immediate publicity through the blog space and Linux conferences which could then be used to drive commercial sales of children's music.

While on the topic, it would be good to have a set of children's songs and nursery rhymes to teach children from a young age about the community standards that we share in the Free Software community. There is no shortage of propaganda that opposes our community standards, the idea that sharing all music and software is a crime is being widely promoted to children.

Tuesday, December 26, 2006

google reader

From a suggestion on my previous blog entry I decided to test out google reader.

The first problem was that it caused Konqueror to SEGV in etch, I filed a bug report and switched to Firefox.

Next, to add my feeds I had to either export them in OPML format or add them one at a time; there is no support for pasting in a list of URLs. If I was writing an RSS syndication program I would also make it parse the config files of some of the common programs; parsing a Planet config file is pretty easy.
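As an illustration of how easy that parsing would be: a Planet config file uses each feed URL as an INI section header, so extracting the feed list needs only a one-line sed script. The sample config below is made up, but follows the format Planet uses:

```shell
# Write a sample Planet-style config; each feed appears as a
# section whose name is the feed URL.
cat > /tmp/planet-sample.ini <<'EOF'
[Planet]
name = Example Planet

[http://example.org/blog/feed.rss]
name = Example Blogger

[http://blogs.example.net/feed.atom]
name = Another Blogger
EOF

# Print only the section headers that look like URLs - the feeds.
sed -n 's/^\[\(http[s]*:\/\/[^]]*\)\]$/\1/p' /tmp/planet-sample.ini
```

This prints the two feed URLs and skips the [Planet] section, which is exactly the list an importer would need.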

I added a feed for a friend whose server seems to be down. While doing so I tried to add another feed; google reader accepted the command to add the second feed but didn't actually do so - it was fortunate that I was pasting it in, not typing it...

The killer issue is that it seems to be impossible to merge feeds. I want to read both Planet Linux Australia and Planet Debian, and there are some people who are on both planets (e.g. me). So it makes no sense to do anything other than display both of them in the same view.

At this time it seems that google reader is unsuitable for my use. However it is a fairly slick system and I imagine that it would work quite well for people who have different needs to me. If you want to read the blogs of a few friends then it probably works really well. It just seems not to work well for a set of meshed communities (Debian developers and Linux users in Australia for example).


Please let me know if I somehow missed some configuration options to make google reader do what I want.

planet - resource use

I just noticed that /usr/bin/planetplanet is using about 120M of RAM. This isn't currently a problem as I'm running it on a machine with 256M of RAM, however I would like to run my web server on a 96M Xen instance. 120M for planetplanet is probably going to cause bad performance on a web server with 96M of physical RAM allocated.
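For anyone wanting to check this on their own system, the resident set size of a process can be read with ps. The process name is an assumption about how planetplanet appears in the process list, and this sketch measures the current shell just so it can run anywhere:

```shell
# Report the resident set size (RSS, in kilobytes) of a process.
# On a real system you would find the PID with something like
# "pgrep -f planetplanet" instead of using the current shell.
pid=$$
rss_kb=$(ps -o rss= -p "$pid")
echo "PID $pid is using ${rss_kb} kB of resident memory"
```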

This is a serious problem for me as the Xen server in question can't be upgraded any more (the motherboard has as much RAM as it can handle).

Are there any other free syndication programs that use less memory?

Are there any good free syndication services that I could use instead of running my own planet?

Monday, December 25, 2006

DOSing Windows Vista

Chris Samual writes a good summary of Peter Gutmann's analysis of the cost of Vista (in terms of DRM).

The following paragraph in the article however seemed more interesting to me:
Once a weakness is found in a particular driver or device, that driver will have its signature revoked by Microsoft, which means that it will cease to function (details on this are a bit vague here, presumably some minimum functionality like generic 640x480 VGA support will still be available in order for the system to boot). This means that a report of a compromise of a particular driver or device will cause all support for that device worldwide to be turned off until a fix can be found.

Now just imagine that you want to cause widespread disruption - a DOS (Denial Of Service) attack against Windows users. What better option than to cause most of them to have hardware that's not acceptable to the OS? I expect that there will be many instances of security holes in drivers and hardware being concealed by MS because they can't afford the PR problems resulting from making millions of machines cease functioning. But just imagine that someone finds hardware vulnerabilities in a couple of common graphics drivers or pieces of hardware and publicly releases exploits shortly before a major holiday. If it's public enough (e.g. posted to a few mailing lists and faxed to some major newspapers) then MS would be forced to invoke the DRM measures or lose face in a significant way. Just imagine the results of stopping 1/3 of machines working just before Christmas!

Of course after the first few incidents people will learn. It shouldn't be difficult to configure a firewall to prevent all access to MS servers so that they can't revoke access; after all, our PCs are important enough to us that we don't want some jerk in Redmond just turning them off. Of course disabling connections to MS would also disable security updates - but we all know that usability is more important than security to the vast majority of users (witness the number of people who happily keep using a machine that they know to be infected with a virus or trojan).
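One crude way to do that blocking, without touching a firewall at all, is a hosts-file entry that points the update servers at an unreachable address. The hostnames below are purely illustrative, not an actual list of the servers involved:

```
# hosts file fragment - hostnames are examples only
0.0.0.0    update.example-redmond.com
0.0.0.0    download.example-redmond.com
```

On Windows the equivalent file lives under the system directory rather than /etc, and a gateway-level DNS block would cover a whole network at once.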

If this happens and firewalling MS servers becomes a common action, I wonder if MS will attempt the typical malware techniques of using servers in other countries with random port numbers to get past firewalls. Maybe Windows updates could be spread virally between PCs, this method would allow infecting machines that aren't connected to the net via laptops.

Finally, I recommend that people who are interested in such things read Eastern Standard Tribe by Cory Doctorow, he has some interesting ideas about people DOSing corporations that they work for which seem surprisingly similar to what MS is doing to itself. I'll post about my own observations of corporations DOSing themselves in the near future.

Sunday, December 24, 2006

MythTV

I've just been trying to set up a MythTV system, I've had the hardware for this for a while but until my TV broke I hadn't found the time to work on it.

My planned hardware was a P3 system for both frontend and backend. However the Debian MythTV packages take over 250M of memory for each of the frontend and backend programs (more than 500M for Myth programs alone); combine that with some memory use for MySQL, for the X server, and for the rest of a functional Linux system, and a machine with 512M of RAM does not perform well.

Unfortunately it seems that most (if not all) desktop P3 systems only support 512M of RAM (at least none of my collection of P3 machines does). P3 server systems support considerably more RAM (I used to have a SMP P3 server with 1G of RAM), but a server class machine is not what you want in your lounge room. P4 systems support more RAM but also take more than twice the electricity of P3 systems. The extra electricity use is a waste for a machine that will probably run 24*7, and also requires extra cooling (and therefore probably louder fans, which is bad for a machine used for playing audio).

Given that I desire the capabilities of a P4 in terms of RAM support without excessive electricity use, my options are one of the recent Intel Core CPUs or AMD64 (both of which apparently use less electricity - not sure whether they would use as little as a P3) or a CPU designed for low power use (maybe a Via). Either option costs more than I prefer to spend (I want to use a second-hand machine valued at $129).

Of course if I could get Myth to use less memory then that would allow me to run both parts of MythTV on the one machine. Please offer any advice you can think of in regard to minimising memory use for Myth on Debian.

Given that I am unlikely to be able to optimise memory use as much as I desire it seems that I will have to go to a split model and try using an existing Pentium-D server machine as the backend and the P3 as a frontend. The up-side of this is that I can turn off the frontend machine when it's not being used (the server runs 24*7 anyway). I will need to get more RAM for the Pentium-D machine, but I had wanted to do that anyway.

The next thing I will have to do is to write SE Linux policy for MythTV and find out why /usr/lib/libmp3lame.so.0.0.0 needs both execmod and execstack (and either fix the library or write policy to permit that access). I'm happy to have a MythTV server program running on a stand-alone machine with unrestricted access to all resources, but when it's going to run on a machine that is more important to me I need to lock it down.
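For reference, whether a library requests an executable stack can be read from its ELF program headers: a normal library's GNU_STACK entry is flagged RW, and RWE is what triggers the execstack requirement. This sketch checks /bin/sh only because it is present everywhere; on the system in question the file to check would be /usr/lib/libmp3lame.so.0.0.0 (the execstack tool from the prelink package can also query and clear the flag, if installed):

```shell
# Show the GNU_STACK program header of an ELF object; the Flg
# column reads RW for a normal stack and RWE for an executable one.
check_stack() {
    readelf -lW "$1" | grep GNU_STACK
}

# /bin/sh stands in for the library being investigated.
check_stack /bin/sh
```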

Now to the design of MythTV. It's unfortunate that they don't support better multitasking options. One feature I would like to see is the ability to play MP3s in the background and then have the music pause whenever TV is selected. When I watch a live show (such as the news) I would like to listen to MP3s before it starts and then at the end of the show (and maybe during commercial breaks) have the MP3 playing resume where it left off. This is exactly the traditional lounge-room functionality we are all used to, when a commercial starts you mute the TV and un-pause the CD player! The advantage of MythTV is that you could do both with a single button - or even automatically with advert recognition!

The method of selecting MP3 files to play is also a little cumbersome, and the option to add a song after the current one in the play list doesn't seem to work. Also it's a pity that there is no convenient option to sort the music list by artist, genre, or song name - there are grouping options but they are all separate, and I would like to change the sorting via a single key press.

The startup sequence of MythTV regenerates some images (which takes a few seconds), it seems that this is something that could be cached for a faster startup. Also when I add new MP3 files to the store I have to manually request a re-scan. It's a pity that it can't just check the size and time-stamp of the directories which contain MP3 files and do an automatic re-scan if they change.

Saturday, December 23, 2006

installing Debian Etch

A few days ago I installed Debian/Etch on my Thinkpad. One of the reasons for converting from Fedora to Debian is that I need to run Xen and Fedora doesn't support non-PAE machines with Xen. Ironically it's hardware supplied to me by Red Hat (a Thinkpad T41p) that lacks PAE support and forces me to switch to Debian. I thought about just buying a new dual-core 64bit laptop, but that seems a bit extravagant as my current machine works well for everything else.

Feeling adventurous I decided to use the graphical mode of the installer. I found it a little confusing: at each stage you can double-click on an item or click on the continue button to cause the action to be performed. The partitioning section was a little unclear too, but given that it has more features than any other partitioning system I've seen, I wasn't too worried (options of creating a degraded RAID array and of inserting a LUKS encryption layer at any level are really nice). The option to take a screen-shot at any time was also a handy feature (I haven't yet inspected the PNG files to see what they look like).

Another nice feature was the way that the GUI restarts after a crash. While it was annoying that the GUI started crashing on me (and would have prevented a less experienced user from completing the install) the fact that it didn't entirely abort meant that I could work around the problem.

I have not yet filed any bug reports against the installer because I have not done a repeatable install (there is a limit to how much testing I will do on my most important machine). In the next few days I plan to do a few tests of the graphical installer on test hardware for the operations that are important to me and file appropriate bug reports. I encourage others to do the same, the graphical mode of the installer and the new encryption and RAID features are significant improvements to Debian and we want them to work well.

I have realised that it won't be possible to get SE Linux as good as I desire before the Etch release, even if the release is delayed again. I'm not sure how many fixes can go in after the release (I hope that we could move to a model similar to RHEL - but doubt that it will happen). So I now plan to maintain my own repository of Etch SE Linux packages and for other packages which need changes to make them work in the best possible manner with SE Linux. I will append something like ".se1" to the version of the packages in question, this means that they will be replaced if a security update is released for the official package. Apart from the SE Linux policy packages (for which any security updates will surely involve me) the changes I am going to make will not be major and will be of less importance than a security update.

I will also add other modified and new packages to my repository that increase the general security of Etch. Apart from SE Linux all the changes I intend to host will be minimal cost issues (IE they won't break things or increase the difficulty of sys-admin tasks), and the SE Linux related changes will not break anything on non-SE systems. So someone who wants general security improvements without using SE Linux might still find my repository useful.

another visual migraine

This morning while travelling to work by tram I had another visual migraine. It was a little worse than last time, not only did everything I focussed on appear to shimmer, but things went a bit grey at my peripheral vision. I had a headache as well although it was very mild (not the typical migraine headache).

It was convenient that the vision problems almost exactly matched the time of my tram journey so that it didn't cause me to waste much time. One visual migraine every three months is something that won't inconvenience me much. I just hope that I don't get other migraine symptoms in future.

Friday, December 22, 2006

encryption speed - Debian vs Fedora

I'm in the process of converting my Fedora/rawhide laptop to Debian.

On Fedora the AES encrypted filesystems deliver about 38MB/s read speed according to dd. On Debian the speed is 2.4MB/s when running Xen and 2.7MB/s when not running Xen. The tests were done on the same block device.
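For anyone wanting to reproduce this kind of measurement, the dd test looks roughly like the following. The device name is an example, and the sketch reads a scratch file instead so that it can run without an encrypted volume (for a fair disk test the file would need to be larger than RAM, or the cache dropped first):

```shell
# Create a 16MB scratch file, then time reading it back; dd
# reports the transfer rate on stderr when it finishes.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 2>/dev/null
dd if=/tmp/ddtest of=/dev/null bs=1M
rm -f /tmp/ddtest

# Against the real encrypted device it would be something like:
# dd if=/dev/mapper/crypt_root of=/dev/null bs=1M count=256
```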

Debian uses a SMP kernel (there are no non-SMP kernels in Debian), but I don't expect this to give an order of magnitude performance drop. Both systems use i686 optimised kernels.

Update: As suggested I replaced the aes module with the aes_586 module. Unfortunately it made no apparent difference.

Update2: As suggested by a comment I checked the drive settings with hdparm and discovered that my hard drive was not using DMA. After I configured the initramfs to load the piix driver first it all started working correctly. Thanks for all the suggestions, I'll post some benchmarks of encryption performance in a future blog entry.

Thursday, December 21, 2006

hybrid Lexus is best luxury car

The Lexus GS 450 hybrid petrol/electric car has been given the award for Australia's best luxury car!

The judging for this contest rated fuel efficiency as being of low importance, because luxury car owners traditionally aren't very concerned about such things. The Lexus won because of its quiet engine (can't beat an electric motor at low speed), high performance (a 3.5L petrol engine that outperforms most 4L engines because of the electric motor assistance), safety, security, and other factors.

There has been an idea that hybrid cars are only for people who want to protect the environment at all costs. The result of this contest proves that idea to be false. The Lexus won by simply being a better luxury car, the features that benefit the environment also give a smoother and quieter ride and higher performance - which are factors that are very important to that market segment! Also it wasn't even a close contest, the nearest rival achieved an aggregate score of 9% less (a significant difference as there was a mere 2.5% difference in score between the 2nd place and 5th place).

This of course shouldn't be any surprise. The high torque that electric motors can provide at low speed is well known - it's the reason for Diesel-electric hybrid power systems in locomotives. It was only a matter of time before similar technology was introduced for cars for exactly the same reasons. The next development will be hybrid Diesel-electric trucks.

Tuesday, December 19, 2006

interesting things

/tmp /mnt/bind bind bind 0 0

Today I discovered that the above syntax works in the /etc/fstab file. This enables a bind mount of /tmp to /mnt/bind, which effectively makes /mnt/bind another name for /tmp. The same result can be achieved by the following command; last time I tried (quite some time ago) it didn't seem to work in /etc/fstab, but now it works in both SUSE and Debian.

mount --bind /tmp /mnt/bind

Also I recently discovered that 0.0.0.0 is an alias for 127.0.0.1. So for almost any command that takes an IP address you can use either address with equal results (apart from commands which interpret the string and consider 0.0.0.0 to be invalid). I can't think of any benefit to using this, and challenge the readers to post a comment (or make their own blog post if they so wish) demonstrating its utility.

AMD developer Center

This morning I received an email from the AMD Developer Center advising me that I need to fill out their NDA if I want access to their development machines.

I have a vague recollection that when AMD64 was first released I was very keen to get access to such hardware and had applied to AMD for access to their machines.

Of course now the second-hand market is full of AMD64 machines and I've got one in my server room so it's not as useful as it once was. I don't even know why AMD would still run a developer center given that everyone who wants AMD64 machines can cheaply buy as many as they want and organizations such as Sourceforge and Debian provide access to such machines for their members.

While I appreciate what AMD is doing, it probably would be best if companies could adopt a standard timeout for electronic correspondence. If someone doesn't follow up for X months then you should assume that they are not interested.

Sunday, December 17, 2006

comment-less blogs

Are comment-less blogs missing the spirit of blogging?

It seems to me that the most significant development about blogging is the idea that anyone can write. Prior to blogs, news-papers were the only method of writing topical articles for a mass audience. To be able to write for a news-paper you had to be employed there or get a guest writing spot (I'm not sure how you achieve this, but examples are common).

Anyone can start a blog, and if there is a community that you are part of which has a planet then it's not difficult to get your blog syndicated and have some reasonable readership. Even the most popular planets have fewer readers than most small papers, but that combined with the ease of forwarding articles gives a decent readership.

It seems to me that the major characteristic that separates a blog from an online newspaper is the low entry requirements, anyone can create one.

Every news-paper that is remotely worth reading has a letters column to publish feedback from readers. Of course it's heavily moderated and getting even 50% of your letters published is something to be proud of. But it does create a limited forum to discuss the articles that are published.

It seems to me that creating a blog and denying the readers the ability to comment on it is in some ways making the blog less open than a news-paper column. When such blogs are aggregated in a community planet feed it seems that they go against the community spirit. It also drives people to make one-line blog posts in response, which I regard as a bad thing.

The comments on my blog are generally of a high quality, I've had a few anonymous flame comments - but you have to learn to deal with a few flames if you are going to use the net, and people who are afraid to publish their real name to a flame don't deserve much attention. I've had one comment which might have been an attempt to advertise a product (so I deleted it just to be safe). But apart from that the comments are generally very good. I've learned quite a few useful things from blog comments, sometimes I mention having technical problems in blog posts and blog comments provide the solution. Other times they suggest topics for further writing.

There are facilities for moderated blog comments that some people use. If you have a really popular blog then it's probably a good idea to moderate the comments to avoid spam, but I'm not that popular yet and most people who blog will never be so popular. At this time blog moderation would be more trouble for me than it's worth.

In conclusion I believe that the web should be about interactive communication in all areas, it should provide a level playing field where the participation of all individuals is limited only by time and ability. Refusing comments on blogs is a small step away from that goal.

what defines a well operating planet?


At OSDC Mary Gardiner gave a talk titled The Planet Feed Reader: Better Living Through Gravity. During the course of the presentation she expressed the opinion that short dialog based blog entries are a sign of a well running planet.

Certainly if blog posts respond to each other then there is a community interaction, and if that is what you desire from a planet then it can be considered a good thing. Mary seemed focussed on planets for internal use rather than for people outside the community, which makes the interaction more important.

However I believe that planets are not a direct substitute for mailing lists. On a mailing list you can reply to a message agreeing with it and expect that the same people who saw the original message will see your reply. Blogs however are each syndicated separately, so a blog post in response to someone else's blog should be readable on its own. A one line post saying "John is right" provides little value to people who don't know who John is, especially if you don't provide a link to John's post that you agree with.

On Planet Debian there have been a few contentious issues discussed where multiple people posted one-line blog entries. I believe that the effective way to communicate their opinions would either be to write a short essay (maybe 2-3 paragraphs) explaining their opinion and the reasons for it, or if they have no new insight to contribute then they should summarise the discussion.

I believe that a planet such as Planet Debian or Planet Linux Australia should not only be a forum for people who are in the community but also an introduction to the community for people who are outside. AOL posts don't help in this regard.

One final thing to note is that blogs already do have a feature for allowing "me too" responses, it's the blog comment facility...

PS Above is a picture of day 59 of the beard; it was taken on the 5th of December (I've been a little slack with beard pictures).

Saturday, December 16, 2006

quantum evolution

On several occasions in discussions about life, friends have mentioned the theory that quantum mechanics dictates the way our cells work. In the past I was not convinced. However this site http://www.surrey.ac.uk/qe/Outline.htm has a well-written description of the theory which I find quite compelling.

Friday, December 15, 2006

some questions about disk encryption

On a mailing list some questions were asked about disk encryption, so I decided to blog the answers for the benefit of others:

What type of encryption would be the strongest - the uncrackable, if you will? I'm not interested in DES as this is a US govt recommendation - IDEA seems good, but what kernel module implements this?


The US government (which incidentally employs some of the best cryptologists in the world) recommends encryption methods for data that is important to US interests (US military and banking operations for starters). Why wouldn't you want to follow those recommendations? Do you think that they are putting back-doors in their own systems? Incidentally, the current US government recommendation is AES, not DES, and the Linux kernel implements it in the aes module - it's the cipher that cryptsetup uses by default.

If they were putting in back-doors do you think that they would use them (and potentially reveal their methods) for something as unimportant as your data?

I think that if the US military wanted to apply a serious effort to breaking the encryption on your data then you would have an assortment of other things to worry about, most of which would be more important to you than the integrity of your data.
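As for which kernel module implements a given cipher: the kernel crypto API registers every available algorithm, and you can inspect the list on a running system. A quick sketch (the device name is just an example, and these commands need root):

```shell
# List the cipher algorithms registered with the kernel crypto API
grep name /proc/crypto

# Load the AES cipher module if it is not already present
modprobe aes

# Tell cryptsetup which cipher to use when creating a volume
# (/dev/sdb1 is a placeholder for your partition)
cryptsetup -c aes-cbc-essiv:sha256 create cryptvol /dev/sdb1
```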

I've read some good things about keeping a USB key for system boot so that anything on the computer itself is unreadable without the key - but that's just a physical object. I'd like the system to ask for a passphrase as well as needing the USB key.

I believe that can be done with LUKS, however it seemed broken the last time I experimented with it, so I've stuck with the older mode of operation of cryptsetup.
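One way to get both factors is to keep a keyfile on the USB stick that is itself encrypted with a passphrase; unlocking the volume then requires both the physical stick and the passphrase. A sketch using gpg and LUKS (all paths and device names here are examples, not a tested recipe):

```shell
# Generate a random key and store it on the USB stick,
# symmetrically encrypted with a passphrase
dd if=/dev/urandom bs=64 count=1 | \
  gpg --symmetric --cipher-algo AES256 --output /media/usbkey/root.key.gpg

# Enrol the decrypted key in a LUKS key slot
# (cryptsetup prompts for an existing LUKS passphrase first)
gpg --decrypt /media/usbkey/root.key.gpg | \
  cryptsetup luksAddKey /dev/sda2 -

# At boot, decrypt the keyfile from the stick to unlock the volume -
# this needs both the stick and the gpg passphrase
gpg --decrypt /media/usbkey/root.key.gpg | \
  cryptsetup --key-file=- luksOpen /dev/sda2 cryptroot
```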

What kind of overhead does something like this entail? Will my system crawl because of the constant IO load of the disk?

My laptop has a Pentium-M 1.7GHz and a typical laptop drive. The ratio of CPU power to hard drive speed is reasonable. For most operations I don't notice the overhead of encryption, the only problem is when performing CPU intensive IO operations (such as bzip compression of large files). When an application and the kernel both want to use a lot of CPU time then things can get slow.

More recent machines have a much higher ratio of CPU power to disk IO, as CPU technology has been advancing much faster than disk technology. A high-end desktop system might have 2-3x the IO capacity of my machine, but a single core would have 2-3x the compute power of my laptop's CPU, and any system you might desire nowadays has at least two cores. Single-core machines are still on sale and still work well for many people - I am still deploying Pentium-3 machines in new installations - but the machines that make people drool are all dual-core laptops and desktop systems with one or two dual-core CPUs (with quad-core CPUs on sale soon).

If you want to encrypt data on a P3 system with a RAID array (EG a P3 server) then you should expect some performance loss. But for a typical modern desktop system you shouldn't expect to notice any overhead.
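If you want a rough idea of whether a particular machine's CPU can keep up with its disks, you can compare software cipher throughput against raw disk throughput. A sketch using standard tools (hdparm needs root, and /dev/sda is an example device):

```shell
# Software AES throughput of the CPU, reported per block size
openssl speed aes-256-cbc

# Raw sequential read speed of the disk
hdparm -t /dev/sda
```

If the openssl figures are several times the hdparm figure, the encryption overhead is unlikely to be noticeable in normal use.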

Thursday, December 14, 2006

IDE hard drives

I just lent two 80G IDE drives to a friend, and he repaid me with 160G drives. Generally I don't mind people repaying hardware loans with better gear (much better than repaying with the same gear after a long delay and depreciation), but this concerns me.

My friend gave me the 160G drives because he can't purchase new 80G drives any more; his supplier stocks nothing smaller than 160G. I have some very reliable machines that won't support 160G drives and that I don't want to discard - I'm not even sure that they would boot with such drives! Now I'm going to have to stock-pile 40G disks.

The machines I am most concerned about are my Cobalt machines. They are nice little servers that are quiet and use only 20W of electricity!

It's a pity that there aren't any cheap flash storage devices that connect to an IDE bus. If I could get my Cobalt machines running with flash storage they would be even quieter and more energy efficient, while not being at risk of mechanical damage, and I doubt that flash storage will exceed 40G of capacity for a while.

Update: I've set a new personal record for rapid comments on a blog entry, all telling me that it is possible to get CF to IDE adapters. Thanks for the information, I appreciate it and will consider it for some machines. The problem however is that the price of a CF to IDE adapter plus a CF card of suitable size is moderately high (more than the cheaper hard drives), while typical CF capacities are only just usable for a mainstream Linux distribution.

These factors combine to make CF-IDE devices an option for only certain corner cases, not really an option to replace all the hard drives in machines that matter to me. I will probably use it for at least one of my Cobalt machines though.

Update2: Julien just informed me of the new Samsung flash-based laptop drives that will have capacities up to 16G (or 32G according to other web sites). I'm now trying to discover where to buy them.