Sunday, December 31, 2006

memes that damage debian

The Debian project is afflicted with several damaging memes. One that is causing problems at the moment is the idea that life should be fair. Unfortunately life is inherently unfair. It's not fair that those of us who were born in first-world countries (the majority of Debian developers) have so many more opportunities than most people born in "developing" countries, and things just continue to be unfair as you go through life. Unfair things will happen to you; deal with them and do what is necessary to have a productive life!

When one developer has regular long-term disputes with many other developers, the conclusion is that the one person who can't get along is at fault. We can debate whether one or two significant disputes are sufficient criteria for this, or whether having a dozen developers disagreeing with them is the deciding point. But the fact is that if there is a large group of people who work together well and an individual who can't work with any of them, then there is only one realistic option: the individual needs to go and find some people that they can work with - they can resign or be expelled. The fact that something slightly unfair might have happened a year ago is no reason for pandering to an anti-social developer. The fact that expelling a developer for being anti-social is unfair to them is no reason for damaging the productivity of all Debian developers.

Another problematic meme is the idea that we have to tolerate everyone - even those who are intolerant (known as the Limp Liberal meme). When someone has no tolerance for others (EG being racist or practicing sexual discrimination) then they have no place in a community such as Debian. They need to be removed sooner rather than later. All Debian developers know the problems caused by deferring such expulsion.

The final damaging meme that I have observed is you can't force a volunteer to do any work. On its own that statement is OK, but the interpretation commonly used in Debian is you can't take their job away from them either. The most common example of this is when a developer is not maintaining a package and someone else does an NMU (non-maintainer upload) to fix a bug (usually a severe one), and the developer then flames the person who did it. It seems to be believed that a Debian developer owns their packages and has a right to prevent other people from working on them. This attitude also extends to all other aspects of Debian: there are many positions of responsibility in Debian that are not being adequately performed, and for which volunteers are offering to help out but being refused.

The idea of the GPL is that when a program is not being developed adequately it can be taken over by another person. However when that program is in a Debian package the developer who owns it can refuse to allow this.

Saturday, December 30, 2006

more about vista security

While reading the discussion of Vista security on Bruce Schneier's blog it occurred to me that comparing the issues of DRM that face MS with the issues faced by SE Linux developers provides some benefits.

SE Linux is designed to enable the owner of a computer to effectively enforce security policies to protect their system integrity and the confidentiality of their data. Some SE Linux users (military users) use all the available features, but most people only use the targeted policy, which provides a sub-set of the available system integrity and data confidentiality protections, for greater ease of use.

Fedora and Red Hat Enterprise Linux ship with the SE Linux targeted policy enabled by default. This policy is not something that most users notice. The main program that is commonly used which might have an issue with the default SE Linux policy is Apache. Fedora and RHEL systems that do not run Apache (which is most of them) can run SE Linux with almost no costs.

It seems clear to me that there is no good reason for disabling SE Linux by default. There are reasons for running a particular daemon in the unconfined_t domain via the FOO_disable_trans boolean. EG to run Apache without restrictions you would type the following commands:

setsebool -P httpd_disable_trans 1
/etc/init.d/httpd restart

In spite of the SE Linux targeted policy being so easy to use, and the fact that it prevents certain daemon compromises from allowing immediate control of the system and also stops some kernel exploits from working, there are still some people who turn it off when installing Fedora and RHEL systems and advise others to do so.

Given that some people are so afraid of using a technology that is specifically designed for their own benefit, I find it difficult to imagine that any users will be inclined to accept the MS DRM technology that is specifically designed to go against their interests.

ESR claims that the 64bit transition is the critical period for Linux to move on the desktop. While he makes many interesting observations I don't find his argument convincing. Firstly, current P4 CPUs with PAE can handle significantly more than 4G of RAM (64G is possible now). Secondly, it's quite possible to run a 64bit kernel with 32bit applications; this means that you can use a 64bit kernel to access more than 64G of RAM, with each 32bit application getting direct access to something slightly less than 4G of virtual memory. As ESR's point seems based on total system memory, the 4G per application limit doesn't seem to be an issue. As an aside, the only applications I've used which could benefit from more than 4G are database servers (in the broadest interpretation - consider an LDAP server as a database) and JVM environments. Again, it would not be difficult to ship an OS with mostly 32bit code that has a 64bit kernel, JVM, and database servers.

I hope that Vista will be a good opportunity for mass transition to Linux. Vista offers little that users desire, many things that will hurt them, and is expensive too!

With Vista you pay more for the OS and more for the hardware (Vista has the usual excessive Windows hardware requirements plus the extra hardware for TPM encryption) without providing anything substantial in return.

What I want to see next is support for Security Enhanced X to protect desktop environments against hostile X clients. This will make a significant difference to the security of Linux desktop environments and provide another significant benefit for choosing Linux over Windows. While MS is spending their effort in making their OS act against the best interests of the users we will keep making Linux enforce the access controls that users need to protect their systems. Hopefully Linux users will choose to use SE-X, but if they don't they are given the freedom to make that choice - unlike the poor sods who use Windows.

Friday, December 29, 2006

email disclaimers

Andre Pang blogs about the annoyance of email disclaimers. For a while I had a .sig stating that, as a condition of sending email to me, the sender agreed that any legalistic terms in their .sig were inapplicable to me.

220 smtp.sws.net.au ESMTP Postfix - by sending email to this server you agree that any legalistic sig in your message does not apply to anyone who receives the message through this service.

Now I have changed my Postfix greeting to the above. Anyone who sends me mail agrees that their .sig does not apply to me. Suggestions for improvements to the above text are welcome.
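For anyone who wants to do the same, the greeting is controlled by the smtpd_banner parameter - a sketch of the relevant part of main.cf (Postfix expects the banner to start with $myhostname, as the SMTP RFCs require the hostname to come first):

```
# /etc/postfix/main.cf - the SMTP greeting is set via smtpd_banner
# ($myhostname expands to the server's name; continuation lines
# start with whitespace)
smtpd_banner = $myhostname ESMTP Postfix - by sending email to this
    server you agree that any legalistic sig in your message does
    not apply to anyone who receives the message through this service
```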

source dump blog

Inspired by Julien Goodwin's post I created a new blog for myself named Source Dump. Source differs from other blog content in two ways: updates to fix bugs may be required (generally I believe that blog posts should not be edited once published, and in the rare cases where editing is necessary all such edits should be appended to the end), and it may be longer than is suitable for a Planet feed. Finally, Planet and other aggregators may mess up source code. If the source is only visible through my blog then I can be reasonably sure that it is usable by everyone who sees it.

So when I have source code to publish that is related to blog postings or mailing list postings I now have a place for it (I haven't yet made a posting).

I will not be submitting my Source Dump blog for syndication anywhere, but if anyone wants to syndicate it they are welcome to do so. I don't desire that it not be syndicated, I merely recommend that it not be syndicated for the convenience of readers due to the potential large size of postings and the potential for postings to be broken by aggregation.

music for children

Adam Rosi-Kessel made an interesting post about They Might Be Giants producing children's music because their original fan base are now old enough to have children.

From casual inspection of the crowds at events such as Linux Conf AU it seems to me that many serious Linux people are also at the right age to have young children, and several blogs that are syndicated on Linux Planets provide evidence of this. Therefore it seems that there is a market for Linux related children's music.

Many aspiring artists complain about the difficulty of establishing a reputation. I think that if someone was to release OGG and FLAC recordings of a children's version of the Free Software Song under a Creative Commons license then they would get some immediate publicity through the blog space and Linux conferences which could then be used to drive commercial sales of children's music.

While on the topic, it would be good to have a set of children's songs and nursery rhymes to teach children from a young age about the community standards that we share in the Free Software community. There is no shortage of propaganda that opposes our community standards, the idea that sharing all music and software is a crime is being widely promoted to children.

Tuesday, December 26, 2006

google reader

From a suggestion on my previous blog entry I decided to test out google reader.

The first problem was that it caused Konqueror to SEGV in etch; I filed a bug report and switched to Firefox.

Next, to add my feeds I had to either export them in OPML format or add them one at a time; there is no support for pasting in a list of URLs. If I was writing an RSS syndication program I would also make it parse the config files of some of the common programs - parsing a Planet config file is pretty easy.

I added a feed for a friend whose server seems to be down. While doing so I tried to add another feed; google reader accepted the command to add the second feed but didn't actually do so - it was fortunate that I was pasting it in, not typing it...

The killer issue is that it seems to be impossible to merge feeds. I want to read both Planet Linux Australia and Planet Debian, and there are some people who are on both planets (EG me). So it makes no sense to do anything other than display both of them in the same view.
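Merging feed lists is conceptually trivial, which makes the limitation more surprising. A sketch of what I mean (Python, with feeds represented as lists of URLs - obviously not google reader's code):

```python
def merge_feeds(*feed_lists):
    """Merge several lists of feed URLs into one view.

    Duplicates are dropped while preserving the order in which feeds
    first appear, so a feed that is on both planets shows up once.
    """
    seen = set()
    merged = []
    for feeds in feed_lists:
        for url in feeds:
            if url not in seen:
                seen.add(url)
                merged.append(url)
    return merged
```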

At this time it seems that google reader is unsuitable for my use. However it is a fairly slick system and I imagine that it would work quite well for people who have different needs to me. If you want to read the blogs of a few friends then it probably works really well. It just seems not to work well for a set of meshed communities (Debian developers and Linux users in Australia for example).


Please let me know if I somehow missed some configuration options to make google reader do what I want.

planet - resource use

I just noticed that /usr/bin/planetplanet is using about 120M of RAM. This isn't currently a problem as I'm running it on a machine with 256M of RAM, however I would like to run my web server on a 96M Xen instance. 120M for planetplanet is probably going to cause bad performance on a web server with 96M of physical RAM allocated.
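For anyone who wants to check this on their own system, a sketch of how I measure it (assuming the procps version of ps; planetplanet is the process name on my machine):

```shell
# rss_kb NAME - print the total resident set size (in KB) of all
# processes whose command name matches NAME (prints 0 if none run)
rss_kb() {
  ps -o rss= -C "$1" | awk '{ sum += $1 } END { print sum + 0 }'
}

rss_kb planetplanet
```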

This is a serious problem for me as the Xen server in question can't be upgraded any more (the motherboard has as much RAM as it can handle).

Are there any other free syndication programs that use less memory?

Are there any good free syndication services that I could use instead of running my own planet?

Monday, December 25, 2006

DOSing Windows Vista

Chris Samuel writes a good summary of Peter Gutmann's analysis of the cost of Vista (in terms of DRM).

The following paragraph in the article however seemed more interesting to me:
Once a weakness is found in a particular driver or device, that driver will have its signature revoked by Microsoft, which means that it will cease to function (details on this are a bit vague here, presumably some minimum functionality like generic 640x480 VGA support will still be available in order for the system to boot). This means that a report of a compromise of a particular driver or device will cause all support for that device worldwide to be turned off until a fix can be found.

Now just imagine that you want to cause widespread disruption - a DOS (Denial Of Service) attack against Windows users. What better option than to cause most of them to have hardware that's not acceptable to the OS? I expect that there will be many instances of security holes in drivers and hardware being concealed by MS because they can't afford the PR problems resulting from making millions of machines cease functioning. But just imagine that someone finds hardware vulnerabilities in a couple of common graphics drivers or pieces of hardware and publicly releases exploits shortly before a major holiday. If it's public enough (EG posted to a few mailing lists and faxed to some major newspapers) then MS would be forced to invoke the DRM measures or lose face in a significant way. Just imagine the results of stopping 1/3 of machines working just before Christmas!

Of course after the first few incidents people will learn. It shouldn't be difficult to configure a firewall to prevent all access to MS servers so that they can't revoke access; after all, our PCs are important enough to us that we don't want some jerk in Redmond just turning them off. Of course disabling connections to MS would also disable security updates - but we all know that usability is more important than security to the vast majority of users (witness the number of people who happily keep using a machine that they know to be infected with a virus or trojan).

If this happens and firewalling MS servers becomes a common action, I wonder if MS will attempt the typical malware techniques of using servers in other countries with random port numbers to get past firewalls. Maybe Windows updates could be spread virally between PCs; this method would allow infecting machines that aren't connected to the net, via laptops.

Finally, I recommend that people who are interested in such things read Eastern Standard Tribe by Cory Doctorow, he has some interesting ideas about people DOSing corporations that they work for which seem surprisingly similar to what MS is doing to itself. I'll post about my own observations of corporations DOSing themselves in the near future.

Sunday, December 24, 2006

MythTV

I've just been trying to set up a MythTV system, I've had the hardware for this for a while but until my TV broke I hadn't found the time to work on it.

My planned hardware was a P3 system for both frontend and backend. However the Debian MythTV packages take over 250M of memory for each of the frontend and backend programs (more than 500M for Myth programs alone); combine that with some memory use for MySQL, the X server, and the rest of a functional Linux system, and a machine with 512M of RAM does not perform well. Unfortunately it seems that most (if not all) desktop P3 systems only support 512M of RAM (at least none of my collection of P3 machines supports more). P3 server systems support considerably more RAM (I used to have an SMP P3 server with 1G of RAM), but a server class machine is not what you want in your lounge room. P4 systems support more RAM but also use more than twice the electricity of P3 systems. The extra electricity use is a waste for a machine that will probably run 24*7, and also requires extra cooling (and therefore probably louder fans, which is bad for a machine used for playing audio). Given that I desire the capabilities of a P4 in terms of RAM support without excessive electricity use, my options are one of the recent Intel Core CPUs or an AMD64 (both of which apparently use less electricity - not sure whether they would use as little as a P3) or a CPU designed for low power use (maybe a Via). Either option costs more than I prefer to spend (I want to use a second-hand machine valued at $129).

Of course if I could get Myth to use less memory then that would allow me to run both parts of MythTV on the one machine. Please offer any advice you can think of in regard to minimising memory use for Myth on Debian.

Given that I am unlikely to be able to optimise memory use as much as I desire it seems that I will have to go to a split model and try using an existing Pentium-D server machine as the backend and the P3 as a frontend. The up-side of this is that I can turn off the frontend machine when it's not being used (the server runs 24*7 anyway). I will need to get more RAM for the Pentium-D machine, but I had wanted to do that anyway.

The next thing I will have to do is to write SE Linux policy for MythTV and find out why /usr/lib/libmp3lame.so.0.0.0 needs both execmod and execstack (and either fix the library or write policy to permit that access). I'm happy to have a MythTV server program running on a stand-alone machine with unrestricted access to all resources, but when it's going to run on a machine that is more important to me I need to lock it down.

Now to the design of MythTV. It's unfortunate that they don't support better multitasking options. One feature I would like to see is the ability to play MP3s in the background and then have the music pause whenever TV is selected. When I watch a live show (such as the news) I would like to listen to MP3s before it starts and then at the end of the show (and maybe during commercial breaks) have the MP3 playing resume where it left off. This is exactly the traditional lounge-room functionality we are all used to, when a commercial starts you mute the TV and un-pause the CD player! The advantage of MythTV is that you could do both with a single button - or even automatically with advert recognition!

The method of selecting MP3 files to play is also a little cumbersome, and the option to add a song after the current one in the play list doesn't seem to work. Also it's a pity that there is no convenient option to sort the music list by artist, genre, or song name - there are grouping options but they are all separate, and I would like to change the sorting via a single key press.

The startup sequence of MythTV regenerates some images (which takes a few seconds), it seems that this is something that could be cached for a faster startup. Also when I add new MP3 files to the store I have to manually request a re-scan. It's a pity that it can't just check the size and time-stamp of the directories which contain MP3 files and do an automatic re-scan if they change.

Saturday, December 23, 2006

installing Debian Etch

A few days ago I installed Debian/Etch on my Thinkpad. One of the reasons for converting from Fedora to Debian is that I need to run Xen and Fedora doesn't support non-PAE machines with Xen. Ironically it's hardware supplied to me by Red Hat (Thinkpad T41p) that lacks PAE support and forces me to switch to Debian. I thought about just buying a new dual-core 64bit laptop, but that seems a bit extravagant as my current machine works well for everything else.

Feeling adventurous I decided to use the graphical mode of the installer. I found it a little confusing, at each stage you can double-click on an item or click on the continue button to cause the action to be performed. The partitioning section was a little unclear too, but given that it has more features than any other partitioning system I've seen I wasn't too worried (options of creating a degraded RAID array and for inserting a LUKS encryption layer at any level are really nice). The option to take a screen-shot at any time was also a handy feature (I haven't yet inspected the PNG files to see what they look like).

Another nice feature was the way that the GUI restarts after a crash. While it was annoying that the GUI started crashing on me (and would have prevented a less experienced user from completing the install) the fact that it didn't entirely abort meant that I could work around the problem.

I have not yet filed any bug reports against the installer because I have not done a repeatable install (there is a limit to how much testing I will do on my most important machine). In the next few days I plan to do a few tests of the graphical installer on test hardware for the operations that are important to me and file appropriate bug reports. I encourage others to do the same, the graphical mode of the installer and the new encryption and RAID features are significant improvements to Debian and we want them to work well.

I have realised that it won't be possible to get SE Linux as good as I desire before the Etch release, even if the release is delayed again. I'm not sure how many fixes can go in after the release (I hope that we could move to a model similar to RHEL - but doubt that it will happen). So I now plan to maintain my own repository of Etch SE Linux packages and for other packages which need changes to make them work in the best possible manner with SE Linux. I will append something like ".se1" to the version of the packages in question, this means that they will be replaced if a security update is released for the official package. Apart from the SE Linux policy packages (for which any security updates will surely involve me) the changes I am going to make will not be major and will be of less importance than a security update.

I will also add other modified and new packages to my repository that increase the general security of Etch. Apart from SE Linux all the changes I intend to host will be minimal cost issues (IE they won't break things or increase the difficulty of sys-admin tasks), and the SE Linux related changes will not break anything on non-SE systems. So someone who wants general security improvements without using SE Linux might still find my repository useful.

another visual migraine

This morning while travelling to work by tram I had another visual migraine. It was a little worse than last time: not only did everything I focussed on appear to shimmer, but things went a bit grey in my peripheral vision. I had a headache as well, although it was very mild (not the typical migraine headache).

It was convenient that the vision problems almost exactly matched the time of my tram journey so that it didn't cause me to waste much time. One visual migraine every three months is something that won't inconvenience me much. I just hope that I don't get other migraine symptoms in future.

Friday, December 22, 2006

encryption speed - Debian vs Fedora

I'm in the process of converting my Fedora/rawhide laptop to Debian.

On Fedora the AES encrypted filesystems deliver about 38MB/s read speed according to dd. On Debian the speed is 2.4MB/s when running Xen and 2.7MB/s when not running Xen. The tests were done on the same block device.
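For reference, the test was of this form (a sketch - the device name is an example of a dm-crypt mapping, not necessarily what your system uses; for a cold-cache figure you would drop the page cache first):

```shell
# read_speed DEVICE - read 64 MiB from DEVICE with dd and print the
# final throughput line (GNU dd reports it on stderr)
read_speed() {
  dd if="$1" of=/dev/null bs=1M count=64 2>&1 | tail -n 1
}

# EG read_speed /dev/mapper/crypthome for the encrypted device;
# /dev/zero gives a sanity-check upper bound for the machine
read_speed /dev/zero
```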

Debian uses an SMP kernel (there are no non-SMP kernels in Debian), but I don't expect this to give an order of magnitude performance drop. Both systems use i686 optimised kernels.

Update: As suggested I replaced the aes module with the aes_586 module. Unfortunately it made no apparent difference.

Update2: As suggested by a comment I checked the drive settings with hdparm and discovered that my hard drive was not using DMA. After I configured the initramfs to load the piix driver first it all started working correctly. Thanks for all the suggestions, I'll post some benchmarks of encryption performance in a future blog entry.

Thursday, December 21, 2006

hybrid Lexus is best luxury car

The Lexus GS 450 hybrid petrol/electric car has been given the award for Australia's best luxury car!

The judging for this contest rated fuel efficiency as being of low importance, because luxury car owners traditionally aren't very concerned about such things. The Lexus won because of its quiet engine (you can't beat an electric motor at low speed), high performance (a 3.5L petrol engine that outperforms most 4L engines because of the electric motor assistance), safety, security, and other factors.

There has been an idea that hybrid cars are only for people who want to protect the environment at all costs. The result of this contest proves that idea to be false. The Lexus won by simply being a better luxury car; the features that benefit the environment also give a smoother and quieter ride and higher performance - factors that are very important to that market segment! Also it wasn't even a close contest: the nearest rival achieved an aggregate score 9% lower (a significant difference, as there was a mere 2.5% difference in score between 2nd place and 5th place).

This of course shouldn't be any surprise. The high torque that electric motors can provide at low speed is well known - it's the reason for Diesel-electric hybrid power systems in locomotives. It was only a matter of time before similar technology was introduced for cars for exactly the same reasons. The next development will be hybrid Diesel-electric trucks.

Tuesday, December 19, 2006

interesting things

/tmp /mnt/bind bind bind 0 0

Today I discovered that the above syntax works in the /etc/fstab file. This enables a bind mount of /tmp to /mnt/bind, which effectively makes /mnt/bind another name for the same directory tree as /tmp. The same result can be achieved by the following command, but last time I tried (quite some time ago) it didn't seem to work in /etc/fstab - now it works in both SUSE and Debian.

mount --bind /tmp /mnt/bind

Also I recently discovered that 0.0.0.0 is an alias for 127.0.0.1. So for almost any command that takes an IP address you can use either address with equal results (apart from commands which interpret the string and consider 0.0.0.0 to be invalid). I can't think of any benefit to using this, and challenge the readers to post a comment (or make their own blog post if they so wish) demonstrating its utility.
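A quick demonstration of the aliasing (a Python sketch; this relies on Linux treating a connect to 0.0.0.0 as a connect to the local host, so don't expect it to be portable):

```python
import socket

# Listen on the loopback address only, on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# On Linux, connecting to 0.0.0.0 reaches the loopback listener.
client = socket.socket()
client.connect(("0.0.0.0", port))
conn, addr = server.accept()
print(addr[0])  # the peer connects from 127.0.0.1

client.close()
conn.close()
server.close()
```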

AMD developer Center

This morning I received an email from the AMD Developer Center advising me that I need to fill out their NDA if I want access to their development machines.

I have a vague recollection that when AMD64 was first released I was very keen to get access to such hardware and had applied to AMD for access to their machines.

Of course now the second-hand market is full of AMD64 machines and I've got one in my server room so it's not as useful as it once was. I don't even know why AMD would still run a developer center given that everyone who wants AMD64 machines can cheaply buy as many as they want and organizations such as Sourceforge and Debian provide access to such machines for their members.

While I appreciate what AMD is doing, it probably would be best if companies could adopt a standard timeout for electronic correspondence. If someone doesn't follow up for X months then you should assume that they are not interested.

Sunday, December 17, 2006

comment-less blogs

Are comment-less blogs missing the spirit of blogging?

It seems to me that the most significant development of blogging is the idea that anyone can write. Prior to blogs, news-papers were the only method of writing topical articles for a mass audience. To be able to write for a news-paper you had to be employed there or get a guest writing spot (I'm not sure how you achieve this, but examples are common).

Anyone can start a blog, and if there is a community that you are part of which has a planet then it's not difficult to get your blog syndicated and have some reasonable readership. Even the most popular planets have fewer readers than most small papers, but that combined with the ease of forwarding articles gives a decent readership.

It seems to me that the major characteristic that separates a blog from an online newspaper is the low entry requirements: anyone can create one.

Every news-paper that is remotely worth reading has a letters column to publish feedback from readers. Of course it's heavily moderated and getting even 50% of your letters published is something to be proud of. But it does create a limited forum to discuss the articles that are published.

It seems to me that creating a blog and denying the readers the ability to comment on it is in some ways making the blog less open than a news-paper column. When such blogs are aggregated in a community planet feed it seems that they go against the community spirit. It also drives people to make one-line blog posts in response, which I regard as a bad thing.

The comments on my blog are generally of a high quality. I've had a few anonymous flame comments - but you have to learn to deal with a few flames if you are going to use the net, and people who are afraid to publish their real name with a flame don't deserve much attention. I've had one comment which might have been an attempt to advertise a product (so I deleted it just to be safe). But apart from that the comments are generally very good. I've learned quite a few useful things from blog comments; sometimes I mention having technical problems in blog posts and blog comments provide the solution. Other times they suggest topics for further writing.

There are facilities for moderated blog comments that some people use. If you have a really popular blog then it's probably a good idea to moderate the comments to avoid spam, but I'm not that popular yet and most people who blog will never be so popular. At this time blog moderation would be more trouble for me than it's worth.

In conclusion I believe that the web should be about interactive communication in all areas, it should provide a level playing field where the participation of all individuals is limited only by time and ability. Refusing comments on blogs is a small step away from that goal.

what defines a well operating planet?


At OSDC Mary Gardiner gave a talk titled The Planet Feed Reader: Better Living Through Gravity. During the course of the presentation she expressed the opinion that short dialog based blog entries are a sign of a well running planet.

Certainly if blog posts respond to each other then there is community interaction, and if that is what you desire from a planet then it can be considered a good thing. Mary seemed focussed on planets for internal use rather than for people outside the community, which makes the interaction more important.

However I believe that planets are not a direct substitute for mailing lists. On a mailing list you can reply to a message agreeing with it and expect that the same people who saw the original message will see your reply. Blogs however are each syndicated separately, so a blog post in response to someone else's blog should be readable on its own. A one line post saying "John is right" provides little value to people who don't know who John is, especially if you don't provide a link to John's post that you agree with.

On Planet Debian there have been a few contentious issues discussed where multiple people posted one-line blog entries. I believe that the effective way to communicate their opinions would either be to write a short essay (maybe 2-3 paragraphs) explaining their opinion and the reasons for it, or if they have no new insight to contribute then they should summarise the discussion.

I believe that a planet such as Planet Debian or Planet Linux Australia should not only be a forum for people who are in the community but also an introduction to the community for people who are outside. AOL posts don't help in this regard.

One final thing to note is that blogs already do have a feature for allowing "me too" responses, it's the blog comment facility...

PS Above is a picture of day 59 of the beard, it was taken on the 5th of December (I've been a little slack with beard pictures).

Saturday, December 16, 2006

quantum evolution

On several occasions in discussions about life and related topics, friends have mentioned the theory that quantum mechanics dictates the way our cells work. In the past I was not convinced. However this site http://www.surrey.ac.uk/qe/Outline.htm has a well-written and compelling description of the theory.

Friday, December 15, 2006

some questions about disk encryption

On a mailing list some questions were asked about disk encryption; I decided to blog the answers for the benefit of others:

What type of encryption would be the strongest? the uncrackable if you will? im not interested in DES as this is a US govt recommendation - IDEA seems good but what kernel module implements this?


The US government (which incidentally employs some of the best cryptologists in the world) recommends encryption methods for data that is important to US interests (US military and banking operations for starters). Why wouldn't you want to follow those recommendations? Do you think that they are putting back-doors in their own systems?

If they were putting in back-doors do you think that they would use them (and potentially reveal their methods) for something as unimportant as your data?

I think that if the US military wanted to apply a serious effort to breaking the encryption on your data then you would have an assortment of other things to worry about, most of which would be more important to you than the integrity of your data.

I've read some good things about keeping a usb key for system boot so that anything on the computer itself is unreadable without the key - but thats simply just a physical object - I'd like both the system to ask for the passphrase for the key as well as needing the usb key

I believe that can be done with LUKS; however it seemed broken last time I experimented with it, so I've stuck with the older mode of cryptsetup operation.
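For reference, here is a sketch of how the USB-key setup can be done with LUKS key slots (the device and mount-point names are examples only, and luksFormat destroys all data on the target device). Note that LUKS key slots are alternatives rather than combined factors - any one slot unlocks the volume - so requiring both the USB key and a passphrase would need something extra, such as encrypting the keyfile itself with gpg:

```shell
# Format the partition with a passphrase in key slot 0
# (WARNING: destroys all data on /dev/hda2):
cryptsetup luksFormat /dev/hda2

# Create a random keyfile on the mounted USB stick and add it as a
# second key slot - either the passphrase or the keyfile will unlock:
dd if=/dev/urandom of=/mnt/usb/keyfile bs=512 count=1
cryptsetup luksAddKey /dev/hda2 /mnt/usb/keyfile

# At boot, open the volume with the keyfile (cryptsetup prompts for
# a passphrase if the keyfile is unavailable or omitted):
cryptsetup luksOpen --key-file /mnt/usb/keyfile /dev/hda2 cryptroot
```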

What kind of overheads does something like this entangle? - will my system crawl because of the constant IO load of the disk?

My laptop has a Pentium-M 1.7GHz and a typical laptop drive. The ratio of CPU power to hard drive speed is reasonable. For most operations I don't notice the overhead of encryption, the only problem is when performing CPU intensive IO operations (such as bzip compression of large files). When an application and the kernel both want to use a lot of CPU time then things can get slow.

More recent machines have a much higher ratio of CPU power to disk IO, as CPU technology has been advancing much faster than disk technology. A high-end desktop system might have 2-3x the IO capacity of my machine, but a single core would have 2-3x the compute power of my laptop's CPU, and for any system you might desire nowadays two cores is the minimum. Single-core machines are still on sale and still work well for many people - I am still deploying Pentium-3 machines in new installations - but for machines that make people drool it's all dual-core in laptops and one or two dual-core CPUs in desktop systems (with quad-core CPUs on sale soon).

If you want to encrypt data on a P3 system with a RAID array (EG a P3 server) then you should expect some performance loss. But for a typical modern desktop system you shouldn't expect to notice any overhead.
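To get a rough idea of whether the CPU or the disk will be the bottleneck on a particular machine, you can compare how fast the CPU encrypts against how fast the disk reads. A quick sketch (dm-crypt uses the kernel crypto API rather than openssl, so this userspace test only approximates the cost, and hdparm needs root):

```shell
# How fast can the CPU encrypt? Push 100MB of zeros through AES-256:
time dd if=/dev/zero bs=1M count=100 2>/dev/null | \
    openssl enc -aes-256-cbc -pass pass:benchmark -salt > /dev/null

# Compare against raw disk read speed (skipped if not root):
hdparm -t /dev/hda || true
```

If the encryption throughput is well above the disk throughput then you shouldn't notice any overhead in normal use.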

Thursday, December 14, 2006

IDE hard drives

I just lent two 80G IDE drives to a friend, and he re-paid me with 160G drives. Generally I don't mind people repaying hardware loans with better gear (much better than repaying with the same gear after a long delay and depreciation), but this concerns me.

My friend gave me the 160G drives because he can't purchase new 80G drives any more, his supplier has nothing smaller than 160G. I have some very reliable machines that I don't want to discard which won't support 160G drives - I'm not even sure that they would boot with them! Now I'm going to have to stock-pile 40G disks.

The machines I am most concerned about are my Cobalt machines. They are nice little servers that are quiet and use only 20W of electricity!

It's a pity that there aren't any cheap flash storage devices that connect to an IDE bus. If I could get my Cobalt machines running with flash storage they would be even more quiet and energy efficient while not being at risk of mechanical damage, and I doubt that flash storage will exceed 40G of capacity for a while.

Update: I've set a new personal record for rapid comments on a blog entry, all telling me that it is possible to get CF to IDE adapters. Thanks for the information, I appreciate it and will consider it for some machines. The problem however is that the price of a CF to IDE adapter plus the cost of a CF card of suitable size is moderately high (more than the cheaper hard drives), while CF capacity generally is only just usable for a mainstream Linux distribution.

These factors combine to make CF-IDE devices an option for only certain corner cases, not really an option to replace all the hard drives in machines that matter to me. I will probably use it for at least one of my Cobalt machines though.

Update2: Julien just informed me of the new Samsung flash-based laptop drives that will have capacities up to 16G (or 32G according to other web sites). I'm now trying to discover where to buy them.

Sunday, December 10, 2006

The Squirrel and the Grasshopper

There's a story going around the neo-con blogs titled "The Squirrel and the Grasshopper". It was forwarded to me by a business associate with the claim that it's "right on the money". It's strange that someone could be considered to be "right on the money" for Australia when essentially the same text is posted in the UK, New Zealand, and Sweden (from a 30 second google search - I'm sure that the neo-cons in other countries have posted it too).

The following are the neo-con ideas promoted by the story in question:

  1. To mis-represent a local main-stream political party that is known for representing workers (the ALP in the case of Australia) as being extremist and associated with Greenpeace (an organization that is out of favor at the moment and disliked by many people who vote for main-stream parties).
  2. To spread the "liberal press" lie that wing-nuts like to believe. Any analysis of the press will show that most multi-national media organizations are quite biased towards the right-wing groups.
  3. Making false claims about the legal system to drive support for recent fascistic legal changes. In the case of Australia this means allowing employers to lay off employees and immediately re-hire them at a lower rate, allowing employees to be laid off if factory equipment breaks down, and for almost any reason you can imagine. Driving the idea that the judges are incompetent and therefore imposing legislation to remove judicial discretion is an important step in removing civil rights.
  4. Claims that the government is communist and takes the property from the middle-classes and gives it to unworthy people. In fact the opposite is true (for Australia at least). Large companies and wealthy individuals are routinely given community property. The toll roads are the best example of this, the government closes public roads that can be used as an alternative to a toll road, and then politicians get paid off after they leave office. Far from taking money from people who work (as the neo-con propaganda claims) the government allows big corporations to do so with impunity. The Australian government (as many governments in first-world countries) has been becoming increasingly fascistic recently.
  5. The claim that asylum seekers are terrorists. If the government wanted to stop terrorism then it would cease involvement with those parts of the world. However only plebeians (people like us) are likely to be hurt by terrorism, so the government has little motivation to stop it - it's good for winning elections! By joining the invasion of Iraq the Australian government helped al Qaeda establish new training bases while also giving al Qaeda (and related organizations) a reason to target Australia. Also whenever a war is started people will be forced to leave their homes and seek asylum elsewhere. If you don't want asylum seekers seeking entry to your country then you don't want to mess up other countries and force people to flee.
  6. Support for the "war on drugs". That war has been at best a stalemate and generally a loss for a century now. The experimental approach of legal supply of hard drugs to addicts seems to have more promise. Incidentally the breaches in border security that are established for the purpose of drug smuggling are available for any illegal purpose that pays enough - if al Qaeda wanted to smuggle weapons into a first-world country they would probably get drug dealers to do it for them. I'll blog more on this topic in future.

There you have it, The Squirrel and the Grasshopper covered all of the neo-con propaganda bases apart from the pro-Christian angle.

For the benefit of anyone who is thinking of forwarding on a "parable" in future, the first thing you might want to do is a google search on it. Search for comments and also search for who is promoting it. If a message you are considering forwarding is being promoted by people who are obviously racist or who discriminate against people on the basis of religion then you might consider whether you want to associate yourself with them by forwarding the message.

Saturday, December 09, 2006

Debian SE Linux policy bug

Save the following policy as local.te and then run these commands to make semodule work correctly and to allow restorecon to access the console on boot:

checkmodule -m -o local.mod local.te
semodule_package -o local.pp -m local.mod
semodule -u local.pp

module local 1.0;

require {
class chr_file { read write };
class fd use;
type restorecon_t;
type tmpfs_t;
type initrc_t;
type semanage_t;
role system_r;
};

allow restorecon_t tmpfs_t:chr_file { read write };
allow semanage_t initrc_t:fd use;

Friday, December 08, 2006

SE Linux on Debian in 5 minutes

Following on from my 5 minute OSDC talk yesterday on 5 security improvements needed in Linux distributions, I gave a 5 minute talk on installing SE Linux on Debian etch. To display the notes I formatted them into 24-line pages and used less at a virtual console to display them. The ultra-light laptop I was using has only 64M of RAM, which isn't enough for a modern X environment, and I couldn't be bothered getting something like Familiar going on it.

After base install you install the policy and the selinux-basics package:

# apt-get install selinux-basics selinux-policy-refpolicy-targeted
The following extra packages will be installed:
checkpolicy libsemanage1 mime-support policycoreutils python python-minimal
python-selinux python-semanage python-support python2.4 python2.4-minimal
selinux-utils
Suggested packages:
python-doc python-tk python-profiler python2.4-doc logcheck syslog-summary
The following NEW packages will be installed:
checkpolicy libsemanage1 mime-support policycoreutils python python-minimal
python-selinux python-semanage python-support python2.4 python2.4-minimal
selinux-basics selinux-policy-refpolicy-targeted selinux-utils
0 upgraded, 14 newly installed, 0 to remove and 0 not upgraded.
Need to get 6362kB of archives.
After unpacking 41.5MB of additional disk space will be used.
Do you want to continue [Y/n]?

The package install process also configures the policy for the machine. The next step is to label the filesystems; this took 26 seconds on my Celeron 500MHz laptop with 20,000 files on an old IDE disk. The time is proportional to the number of files and is often bottlenecked on the CPU. A more common install might have five times as many files on a machine with a five times faster CPU, so 30 seconds is probably a typical labelling time. See the following:

# fixfiles relabel

Files in the /tmp directory may be labeled incorrectly, this command
can remove all files in /tmp. If you choose to remove files from /tmp,
a reboot will be required after completion.

Do you wish to clean out the /tmp directory [N]? y
Cleaning out /tmp
/sbin/setfiles: labeling files under /
matchpathcon_filespec_eval: hash table stats: 14599 elements, 14245/65536 buckets used, longest chain length 2
/sbin/setfiles: labeling files under /boot
matchpathcon_filespec_eval: hash table stats: 19 elements, 19/65536 buckets used, longest chain length 1
/sbin/setfiles: Done.

The next step is to edit /boot/grub/menu.lst to enable SE Linux, enable auditing, and put it in enforcing mode:

title   Debian GNU/Linux, kernel 2.6.17-2-686
root (hd0,1)
kernel /vmlinuz-2.6.17-2-686 root=/dev/x selinux=1 audit=1 ro enforcing=1
initrd /initrd.img-2.6.17-2-686

Then reboot.

After rebooting view the context of your shell, note that the login shell will have a domain of unconfined_t when the targeted policy is used:
# id -Z
system_u:system_r:unconfined_t

Now let's view all processes that are confined:
# ps axZ |grep -v unconfined_t|grep -v kernel_t|grep -v initrc_t
LABEL PID TTY STAT TIME COMMAND
system_u:system_r:init_t 1 ? Ss 0:02 init [2]
system_u:system_r:udev_t 1999 ? S.s 0:01 udevd --daemon
system_u:system_r:syslogd_t 3306 ? Ss 0:00 /sbin/syslogd
system_u:system_r:klogd_t 3312 ? Ss 0:00 /sbin/klogd -x
system_u:system_r:apmd_t 3372 ? Ss 0:00 /usr/sbin/acpid -c /etc
system_u:system_r:gpm_t 3376 ? Ss 0:00 /usr/sbin/gpm -m /dev/i
system_u:system_r:crond_t 3402 ? Ss 0:00 /usr/sbin/cron
system_u:system_r:local_login_t 3423 tty1 Ss 0:00 /bin/login --
system_u:system_r:local_login_t 3424 tty2 Ss 0:00 /bin/login --
system_u:system_r:getty_t 3425 tty3 Ss+ 0:00 /sbin/getty 38400 tty3
system_u:system_r:getty_t 3426 tty4 Ss+ 0:00 /sbin/getty 38400 tty4
system_u:system_r:getty_t 3429 tty5 Ss+ 0:00 /sbin/getty 38400 tty5
system_u:system_r:getty_t 3430 tty6 Ss+ 0:00 /sbin/getty 38400 tty6
system_u:system_r:dhcpc_t 3672 ? S.s 0:00 dhclient3 -pf /var/run/

The initial install of the policy inserts modules to match the installed software. If you install new software later then you need to add the corresponding modules with the semodule command:

# semodule -i /usr/share/selinux/refpolicy-targeted/apache.pp
security: 3 users, 7 roles, 824 types, 67 bools
security: 58 classes, 11813 rules
audit(1165532434.664:21): policy loaded auid=4294967295
# semodule -i /usr/share/selinux/refpolicy-targeted/bind.pp
security: 3 users, 7 roles, 836 types, 68 bools
security: 58 classes, 12240 rules
audit(1165532467.874:22): policy loaded auid=4294967295

Note that the security and audit messages come from the kernel via printk; they are displayed on a console login, but you need to view the system log if you are logged in via ssh or running an xterm. Now you have to relabel the files that are related to the new policy:

# restorecon -R -v /etc /usr/sbin /var/run /var/log
restorecon reset /etc/bind context system_u:object_r:etc_t->system_u:object_r:named_zone_t
restorecon reset /etc/bind/named.conf context system_u:object_r:etc_t->system_u:object_r:named_conf_t
[...]
restorecon reset /etc/apache2 context system_u:object_r:etc_t->system_u:object_r:httpd_config_t
restorecon reset /etc/apache2/httpd.conf context system_u:object_r:etc_runtime_t->system_u:object_r:httpd_config_t
[...]
restorecon reset /usr/sbin/named context system_u:object_r:sbin_t->system_u:object_r:named_exec_t
restorecon reset /usr/sbin/apache2 context system_u:object_r:sbin_t->system_u:object_r:httpd_exec_t
restorecon reset /usr/sbin/rndc context system_u:object_r:sbin_t->system_u:object_r:ndc_exec_t
restorecon reset /usr/sbin/named-checkconf context system_u:object_r:sbin_t->system_u:object_r:named_checkconf_exec_t
[...]
restorecon reset /var/run/bind context system_u:object_r:var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/bind/run context system_u:object_r:var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/bind/run/named.pid context system_u:object_r:initrc_var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/motd context system_u:object_r:initrc_var_run_t->system_u:object_r:var_run_t
restorecon reset /var/run/apache2 context system_u:object_r:var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/run/apache2/cgisock.3558 context system_u:object_r:var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/run/apache2.pid context system_u:object_r:initrc_var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/log/apache2 context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t
restorecon reset /var/log/apache2/error.log context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t
restorecon reset /var/log/apache2/access.log context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t

The -v option causes restorecon to give verbose output about its operations. You usually won't need it in real use, but it's useful for illustrating what restorecon does.

Now you have to restart the daemons:

# killall -9 apache2
# /etc/init.d/apache2 start
Starting web server (apache2)....
# /etc/init.d/bind9 restart
Stopping domain name service...: bind.
Starting domain name service...: bind.

Apache and BIND now run in confined domains, see the following ps output:

system_u:system_r:httpd_t   3833 ?     Ss     0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t 3834 ? S 0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t 3839 ? Sl 0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t 3841 ? Sl 0:00 /usr/sbin/apache2 -k start
system_u:system_r:named_t 3917 ? Ssl 0:00 /usr/sbin/named -u bind

It's not particularly difficult. I covered the actual install of SE Linux in about 1.5 minutes. I had considered just ending my talk there on a note of "it's so easy I don't need 5 minutes to talk about it" but decided that it was best to cover something that you need to do once it's installed.

If you want to know more about SE Linux then ask on the mailing list (see http://www.nsa.gov/selinux for subscription details), or ask on #selinux on freenode.

Thursday, December 07, 2006

some advice for job seekers

A member of the free software community recently sent me their CV and asked for assistance in getting a job. Some of my suggestions are globally applicable so I'm blogging them.

Firstly I recommend that a job seeker doesn't publish their CV on the net in an obvious place. Often you want to give different versions to different people, and you don't necessarily want everyone to know about the work you do. I can't imagine any situation in which a potential employer might view a CV on the net if it's available but not ask for one if it isn't there. If you are intensively looking for work (IE you are currently between jobs) then I recommend having a copy of your CV in a hidden URL on your site. This means that if you happen to meet a potential employer you can give them a URL so that they can get your CV quickly, but the general public can't view it. A final problem with publishing your CV is that it may cause disputes with former colleagues (EG if you describe yourself as the most skilled programmer in the team then a former colleague who believes themself to be more skillful might disagree).

Next, don't put your picture on your CV. In some jurisdictions it's apparently illegal for a hiring officer to consider your appearance. If there are many CVs put forward for the position then it may be easier to just discard yours because of this. There is absolutely no benefit to having the picture, unless of course you are applying for a job as an actor. Incidentally I've considered applying for work as a movie extra. The amount of effort involved is often minimal (EG pretend to drink beer in the back of a bar scene) and the pay is reasonable. It seems like a good thing to do when between computer contracts.

I write my CV in HTML and refuse to convert it. If a recruiting agent can't manage to use IE to print my CV then they are not competent enough to represent me. If a hiring manager can't manage to view my CV with IE then I don't want to report to them. However I recommend against using HTML features that make a document act in any way unlike a word-processor file. There should be no frames or CSS files so there is only one file to email, and the text should be all on one page so the PGDN and PGUP keys can scroll through all the content. Tables, bold, and italic are good, fonts are a minor risk. Colors are bad.

Recruiting agents will often demand that your CV be targeted for the position that you are applying for. I often had complaints such as "I see only sys-admin skills not programming". To solve this I wrote my CV in M4 and used a Makefile to generate multiple versions at the same time. If a recruiter wants a version of my CV emphasising C programming and using Linux then I've already got one ready!
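As an illustrative sketch of that approach (the file names and skill lines here are invented), conditional blocks in the M4 source select content, and separate m4 invocations - in practice driven by Makefile rules - build each targeted version:

```shell
# cv.m4 - sections of the CV guarded by M4 ifdef macros:
cat > cv.m4 <<'EOF'
<h2>Skills</h2>
<ul>
ifdef(`PROGRAMMING', `<li>C and shell programming on Linux</li>')
ifdef(`SYSADMIN', `<li>Administration of Debian and Red Hat servers</li>')
</ul>
EOF

# Build two targeted versions from the one source file:
m4 -DPROGRAMMING cv.m4 > cv-programming.html
m4 -DSYSADMIN cv.m4 > cv-sysadmin.html
```

Each output file then contains only the lines relevant to the position being applied for.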

These are just a few thoughts on the topic based on a CV that I just saw. I may write more articles about getting jobs in the computer industry if there is interest.

OSDC

Yesterday I gave a presentation at OSDC in Melbourne about my Postal mail server benchmark suite. The paper was about my new benchmark program BHM for testing the performance of mail relay systems and some of the things I learned by running it. I will put the paper on my Postal site in the near future and also I'll release a new version of Postal with the code in question very soon.

Today at OSDC I gave a 5 minute talk on 5 things that need to be improved in the security of Linux distributions.

  1. The fact that unprivileged programs often inherit the controlling tty of privileged programs which permits them to use the TIOCSTI ioctl to insert characters in the keyboard buffer. I noted that with runuser and a subtle change to su things have been significantly improved in this regard in Fedora, but other distributions need work (and Fedora can go further in this regard).
  2. A polyinstantiated /tmp should be an option that is easy to configure for a novice sys-admin. There have been too many attacks on data confidentiality and system integrity based on file name race conditions in /tmp, this needs to be fixed and must be fixable by novice sys-admins.
  3. The capability system needs to be extended. 31 capabilities is not enough and the significant number of operations that are permitted by CAP_SYS_ADMIN leads to granting excessive privilege to programs.
  4. The use of Xen on servers such that a domU is always used for applications should become common. Then if a compromise is suspected there will be better options for investigation.
  5. SE Linux needs to be used more, particularly the strict policy and MCS. Use of the strict policy often reveals security flaws in other programs.
I'll blog about each of these in detail at some future time.
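On point 2, one existing mechanism for polyinstantiated directories is pam_namespace, which can give each user a private instance of /tmp. A sketch of the configuration follows (the conventional file is /etc/security/namespace.conf; check your distribution's PAM documentation before relying on this):

```
# /etc/security/namespace.conf
# polydir    instance-prefix   method   users exempt from polyinstantiation
/tmp         /tmp-inst/        user     root,adm
/var/tmp     /var/tmp-inst/    user     root,adm
```

The PAM configuration for login services also needs a line such as "session required pam_namespace.so" for this to take effect.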

xen

I'm currently working on a little Debian Xen server, and I encountered a few problems that aren't documented.

The first problem I found was that serial ports don't work with a default Xen setup (as documented in a previous blog entry). However the solution to this turns out to be putting xencons=off on the kernel command-line for the dom0 kernel. This allows the dom0 kernel to see all the serial hardware. If I had wanted to use the serial ports from a domU then something else would need to be done, but as I have no need for this I didn't investigate the matter any further. Thanks to Brian May for discovering this for me.
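For reference, xencons=off goes on the module line that loads the dom0 Linux kernel, not on the line that loads the Xen hypervisor itself. A sketch of the relevant menu.lst stanza (the kernel versions and device names are examples only):

```
title  Xen / Debian GNU/Linux dom0
root   (hd0,0)
kernel /xen-3.0-i386.gz
module /vmlinuz-2.6.17-2-xen-686 root=/dev/hda1 ro xencons=off
module /initrd.img-2.6.17-2-xen-686
```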

Next, in the early test releases of etch the udev hotplug interface isn't enabled, so I had to add kernel.hotplug=/sbin/udevsend to the /etc/sysctl.conf file. A newer version of udev appears to fix this without modifying the kernel.hotplug setting though. The symptom was the error "Error: Device 768 (vbd) could not be connected. Hotplug scripts not working", where 768 is the device number for /dev/hda (the same message would occur with number 769 if you use /dev/hda1). I chose to use unpartitioned virtual disks such as /dev/hda and /dev/hdb in Xen because there is no benefit in partitions (my Xen instances are not doing enough to need more filesystems than there are virtual IDE disks) and because I don't want the fake partition table thing that Xen apparently does.

Wednesday, November 29, 2006

Hans Reiser

According to this article in the San Francisco Chronicle Hans Reiser pled "not guilty" to the charge of murdering his wife. This isn't particularly exciting news as all previous indications were that he was going to do so.

However one noteworthy fact from the article is that they are setting up an education fund for his children. Regardless of whether Hans is convicted or not, his children will still be in a bad situation and in need of assistance. While there are plenty of other worthy charities needing donations, if you are considering donating towards a Linux related cause then you might want to consider the children of a kernel coder.

Monday, November 27, 2006

when you can't get along with other developers

Many years ago I was involved in a free software development project with write access to the source tree. For reasons that are not relevant to this post (and which I hope all the participants would regard as trivial after so much time has passed) I had a disagreement with one of the more senior developers. This disagreement continued to the stage where I was threatened with expulsion from the project.

At that time I was faced with a decision. I could have tried to fight the process, and I might have succeeded and kept my position in the project. But doing so would have wasted a lot of time for many people, and might have caused enough productivity loss to outweigh my contributions to the project for the immediate future. That didn't seem very productive.

So I requested that my write access to the source tree be removed as I was going to leave the project and unused accounts are a security risk.

I never looked back. I worked on a number of other projects after that time (of which SE Linux is one) and the results of those projects were good for everyone. If I had stayed in a project where things weren't working out then it would have involved many flames, distraction from productive work for everyone, and generally not much good.

The reason I mention this now (after many years have passed) is because in another project I see someone else faced with the same choice I made who is making the wrong decision. The people who are on the same private mailing list as me will all know who I am referring to. The individual in question is apparently suffering health problems as a result of stress, caused by their inability to deal with a situation where they can't get along with other people.

My advice to that person was to leave gracefully and find something else to work on. If you don't get along with people and make a big fuss about it then they will only be more glad when they finally get rid of you. Running flame-wars over a period of 6 months to try and get accepted by a team that you don't get along with will not do any good, but it will convince observers that removing you is a good idea.

Sunday, November 26, 2006

supporting an election campaign

Yesterday I handed out "how to vote" cards for the Greens at the state election. It did seem a significant waste to have so much paper produced. Slightly more than half the voters who visited my polling booth took cards from all parties, which was obviously of little use. There is some useful information to be gained from reading the cards from all parties, but nothing that you can analyse during the short period spent waiting in line. I expect that most people decide who to vote for before they get anywhere near the polling booth and just accept the cards because they feel it may be rude to reject them. Ironically, some people who didn't like the Greens refused to accept a card from me and told me they didn't want it, apparently under the impression that refusing would offend me. I'd rather save the trees and not give cards to people who aren't going to use them...

I spoke to a representative of the Family First party who tried to convince me that the Greens should be against homosexuality because the Greens are "against unnatural things". He also claimed that people who choose not to have children (being gay is apparently choosing not to have children) are selfish - unless of course they are a celibate priest. He also managed to offend a supporter of the ALP in two different ways, which led to an amusing heated debate, and then he left before I could have any more fun. For the reference of other Family First people, I've pasted in the dictionary definitions of "homo" and "hetero"; when used as prefixes those Greek-derived words mean "like attracting like" and "opposites attract". An example of such usage is the term "homo-charged electrets" used in electronics.

From The Collaborative International Dictionary of English v.0.48 [gcide]:
Hetero- \Het"er*o-\ [Gr. "e`teros other.]
A combining form signifying other, other than usual,
different; as, heteroclite, heterodox, heterogamous.
[1913 Webster]

From The Collaborative International Dictionary of English v.0.48 [gcide]:
Homo- \Ho"mo-\
A combining form from Gr. "omo`s, one and the same, common,
joint.
[1913 Webster]

From Bouvier's Law Dictionary, Revised 6th Ed (1856) [bouvier]:
HOMO. This Latin word, in its most enlarged sense, includes both man and
woman. 2 Inst. 45. Vide Man.


The ALP (usually known as Labor) supporters had unfortunately believed the lies of their own apparatchiks. They were convinced that the Greens were directing preferences to the Liberal party, even though in most districts the Greens actually directed preferences to the ALP! The only exceptions were a small number of districts with split preferences (favoring neither the Liberals nor the ALP). It continually amazes me that they were attacking us even while our preferences were helping the ALP! Once I showed the ALP supporters the cards I was distributing they became quite friendly; as the Greens had a very low chance of winning the lower house seat for the district in question, the preferences would go to the ALP.

It was interesting to talk to a Liberal supporter. He supports the workplace reforms implemented by the Federal (Liberal) government because he was hired for his current job only because his employer can easily get rid of him if business has a down-turn. It is hard to argue with someone who only got a job because of the policy in question, but I did point out that continuity of employment is a major factor when applying for a mortgage. I recently bought a house and had a significant amount of hassle from the banks due to the fact that I work as a contractor. I had previously enquired about borrowing twice as much money while in my last permanent position and had far fewer problems with the banks.

I mentioned some of the other bad things the Liberal government has done (such as invading Iraq for no good cause), but the Liberal supporter was too sensible to comment on any of the issues where he would only lose. This however left him with not very much to say.

Most of the work of handing out the cards was quite boring and very tiring. Fortunately a friend decided to visit and help out so there were three people handing out Greens cards instead of the scheduled two which made it easier work. The ALP apparently had four people which seems to be an optimal number as there were voters arriving from two directions and no matter where they came from at least two ALP supporters would be able to intercept them.

Surprisingly the work was easier at the busiest times. When the queue stretched out into the street I could stroll along the queue and give the cards to the voters. When the queue disappeared later in the day the voters were walking past at high speed and I had to move quickly to reach them.

Now it's time to start planning for the next Federal election.

Thursday, November 23, 2006

Linux support by politicians

In two days time we are having a state election in Victoria (Australia). For this election there is only one party with policies that are positive towards free software: the Australian Greens. The policy documents include an IT policy (note that the IT policy is at a link that may change, while the policy documents page is a permanent link).

The Greens IT policy has three sections under its goals. One is about open standards (ensuring that government data is in documented file formats usable by all, with no need to purchase software) and another is about Open Source, which directly advocates the use of free software by government agencies. The principles part of the document is also very positive towards free software and explains why it's beneficial for Australia.

Any Greens representatives elected on the weekend have to abide by the party policy; that means they must advocate the use of open standards and Open Source in government and vote accordingly when any legislation related to computers is being considered!

Some of the members of the Greens are also members of the free software community. We were able to explain to the other party members the benefits for Australia and for social justice in the use of free software, and thus we reached agreement on a policy that suits people who use free software - not to benefit such people, but because of the benefits to society from the use of free software.

I think it would be good if members of the free software community in other countries would also join their local Green party and promote similar policies. While there is no direct connection between the Green parties in different countries, the aims are very similar and therefore the arguments that persuaded Green members in Australia can be expected to work reasonably well in other countries (I am happy to provide advice in this regard via private mail if requested).

Also it would be good if other parties could be persuaded to have similar policies. If you want to help the free software community but for some reason you don't support the Greens then please join a party that matches your views and advocate an IT policy that promotes free software.

Currently people who want to vote for free software in the Victorian election have no option other than to vote for the Greens. As a member of the Greens I am happy to document this as a reason to vote Green. But as a member of the free software community I would like to see other parties adopt policies that promote free software.

The Greens' adoption of a policy that promotes free software was largely driven by the issue of social justice. We believe that every Australian citizen has the right to access all public government data. If government data is available only in proprietary formats then access is only granted to people who can afford the latest software ($800 for a full copy of MS Office) and the hardware to run it ($600 at least). We believe that unemployed people who receive free Linux computers from Computerbank should be able to access government data. We also believe that when FOI laws apply in 30 years' time all current data should still be accessible; there's no chance that whatever version of Office is being sold in 30 years' time will read current MS file formats, and there's no guarantee that MS will even be in business then. File formats for which there are authoritative open-source programs will still be readable in 30 years' time and beyond.

Wednesday, November 22, 2006

nuclear power in Australia

From Crikey: If a government wanted to figure out how best to defend the country, it wouldn’t hold an inquiry into the air force. It would hold an inquiry into … defence. So if a government wanted to figure out how to plan for responsible energy consumption in an age of climate change you’d assume it would hold an inquiry into energy consumption. Instead, the Australian government holds an inquiry into … nuclear energy.

The above really says it all. The Liberal government has decided that they want to get nuclear reactors regardless of what the citizens want. Surprisingly the Switkowski report was not very positive towards nuclear power. It concluded that producing 1/3 of Australia's electricity requirements would require 25 nuclear power plants, and that they would have to be built close to population centers, mainly on the east coast. I guess that means about 8 reactors for Melbourne and about 10 for Sydney! It has been suggested that the federal government could force nuclear power on the states even if the state governments don't want it!

For those reactors to be economically viable a carbon tax is required (this means taxing all energy sources on the amount of carbon that they release into the atmosphere). The Liberal government has been opposing such a tax, but now the report they commissioned recommends it.

The Victorian branch of the Liberal party seems to support such things. I have been walking past the office of Ted Baillieu (the leader of the Victorian Liberal party) on my way to work. He has a sign in his office window opposing wind power so I guess he'll be supporting nuclear power.

It's something to keep in mind at the election on Saturday. I'll be handing out how to vote cards for the Greens.

Thursday, November 16, 2006

biometrics and passwords

In a comment on my post more about securing an office someone suggested using biometrics. The positive aspect of biometrics is that they can't be lost; no-one is going to accidentally leave a finger or an eye in their car while they go to a party, while other authentication devices are regularly lost in such a manner.

The down-side is that having your finger or eye stolen would be a lot less pleasant than having a USB device, swipe-card, key, or other security device stolen. I think that it's good to have an option of surrendering your key when under threat (for the person who might be attacked at least).

Rumor has it that some biometric sensors look for signs of life (EG temperature and pulse), but I believe that these could be faked with a suitable amount of effort. A finger attached to a mini heart/lung machine should make it possible to pass the temperature and pulse checks (although I don't think that I have access to any data that is important enough to justify such effort on the part of an attacker).

One thing that biometrics could be useful for is screen-blankers. It would be good to have a screen-blanker for your computer that operates when you go to get a coffee. For a period of 10 minutes after leaving, a biometric method could be used to re-enable access; after that time a different method would have to be used. This gives the convenience of biometrics when you need it most (the many short trips away from your computer that you make during the day) but removes the benefit for an attacker who might consider removing part of your body.

Also I am not convinced of the general security of biometrics. There are claims that you can make a fake finger based on a fingerprint which can fool a biometric sensor. If those claims are correct then a biometric sensor would still work for a coffee break (presumably you are not far away and will be back soon, and other people are in the area). The coffee-break security is usually to prevent casual snooping, such as colleagues who want to see what is on your screen but won't actually do anything invasive to get it. Another benefit of biometrics for a screen saver is that although I trust people in the same office as me (whichever office that may be) not to try anything when they might get caught, I still don't want them shoulder-surfing my password. Replacing the trivial authentication cases with a fingerprint reader would prevent that.
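
The proposed policy is simple enough to sketch. This is a hypothetical illustration of the decision logic only - the function and constant names are mine, not from any real screen-blanker:

```python
GRACE_PERIOD = 10 * 60  # seconds during which a fingerprint alone unlocks

def unlock_allowed(lock_time, now, method):
    """Decide whether an unlock attempt succeeds under the proposed policy.

    Within the grace period a fingerprint (or a password) works;
    after that only the password is accepted.
    """
    if method == "password":
        return True
    if method == "fingerprint":
        return (now - lock_time) <= GRACE_PERIOD
    return False

# A short coffee break: the fingerprint works.
assert unlock_allowed(0, 5 * 60, "fingerprint")
# An hour later: the fingerprint is refused, the password still works.
assert not unlock_allowed(0, 60 * 60, "fingerprint")
assert unlock_allowed(0, 60 * 60, "password")
```

The point of the time limit is that an attacker who steals (part of) your body gains at most a 10-minute window, which they would be unlikely to have anyway.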

In the KDE 1.x days I had a shell script launched when the lid closed on my laptop which would lock the screen (the screen-saver ran in the background and a signal could make it lock the screen). This meant that I could merely close the lid of my laptop to lock the screen; this is fast and easy, and is not immediately recognised as locking the screen. Some people get offended if you lock your laptop screen in their presence, as they think you should trust them enough to leave your most secret data open to them (generally people who aren't serious about computers - I'm sure the same people would happily lock their diary if I was ever in the same room as it). Being able to lock the screen in a non-obvious way is a security benefit.
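
For what it's worth, the equivalent of that old script can be done with an acpid event rule. This is a sketch only, assuming acpid and xscreensaver are in use - the file name, user name, and display number are placeholders that will differ per system:

```
# /etc/acpi/events/lid-lock -- acpid event rule (sketch).
# "someuser" and ":0" are placeholders for your login and X display.
event=button/lid.*
action=su someuser -c "DISPLAY=:0 xscreensaver-command -lock"
```

Desktop environments that handle lid events themselves may need this disabled to avoid the screen being locked twice.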

Regarding the comment about using a USB device to store passwords, there are two problems with this. One is that all passwords will be available all the time, meaning a program that is permitted to access password A would also be given access to password B. The other is that the passwords can be accessed easily. The ideal solution is to have an encryption device that uses public key cryptography and stores the private keys on the device with no way of removing them. It would also permit the user to authorise each transaction.

I would like to see a USB device that stores multiple GPG keys and implements the GPG algorithm (with no way for anyone with less resources than the NSA to extract the keys). The device would have a display and a couple of buttons. When it is accessed it would display messages such as "signing attempt on key 1" and allow me to press a button to authorise or reject that operation.

This means that if I insert the key to sign an email I won't have a background trojan start issuing sign and decrypt commands. The only viable attack remaining is the case where I want to sign a message, my message is sent to /dev/null, and a message from an attacker is signed instead. The non-arrival of my original message would hopefully alert me to this problem. I am not aware of any hardware which supports these functions.

Also I have just received a couple of RSA SecurID tokens as a sample. An RSA representative phoned me to ask about my use of the tokens, I said that I am an independent consultant and I have been having trouble getting my clients to accept my recommendations to use such devices and that I want to implement them on a test network so that I can give more detailed advice to my clients and hopefully get them to improve their security. For some reason the RSA rep found that funny, but I got my sample hardware so it's fine.

Wednesday, November 15, 2006

economics of a computer store (why they don't stock what you want)


In some mailing list discussions recently some people demonstrated a lack of knowledge of the economics of running a shop. Having run a shop for a few years (an Internet Cafe) I have some practical knowledge of this. I will focus on small businesses in this article, but the same economic principles apply to large corporations too.

When running a shop the main problem you have is managing stock. There are two ways of getting stock. One is to have wholesalers give it to you for a period in which you can try to sell it, paying for it only when it's sold; this is probably quite rare (I don't know of an example of it being done - and probably no retailer wants to talk about it in case they lose it). Often retailers consider themselves privileged if they are permitted to pay for hardware one month after they receive it! The more common way of getting stock is simply to buy it and hope you can sell it in a reasonable period of time (often the wholesaler will offer to buy the stock back at a 10% discount if you can't sell it).

To buy stock you need money. This can come from money that has accrued in the business account (if things are going really well) or from a mortgage taken out by the business owner if things aren't going so well. For small businesses things usually don't go so well, so the money used to buy stock is borrowed at an interest rate of about 7% or 8% (I'm using numbers based on the current economic conditions in Australia; different numbers apply to different countries and different times, but the same principles apply). The ideal situation is when there is money in the company bank account to cover the purchase of all stock; then the cost of owning stock is that you miss out on the 5.5% interest that the money would earn in a term deposit.

Almost all stock has a use-by date of some form. Some items have a very short expiry (EG milk used to make hot chocolate in an Internet cafe), some have a moderate expiry date (computer systems become almost unsellable in about 18 months and lose value steadily month after month), but in the computer industry nothing has a long expiry date.

Let's assume for the sake of discussion that you want to run a small computer store that is open to passing trade (this means that you must have stock for an immediate sale). Let's assume that all items of computer hardware lose half their value over the period of 20 months at a steady rate of 2.5% of the original price per month (I think that most computer hardware loses value faster than that, but it's just an assumption to illustrate the point).

The next major issue is the profit margin on each sale. If you can make a 20% profit on a sale then an item that has lost 10% of its value while gathering dust in your store will still be profitable. However the profit margins on computer sales are very small, due to a small number of major manufacturers (Intel, AMD, nVidia, ATI, Seagate, and WD) having almost cartel positions in their markets and there being little to differentiate the stores apart from price. I have been told that 3% profit is typical for retail computer hardware sales by small companies nowadays! Now if the stock loses 2.5% of its value per month, you pay 0.5% interest per month, and you make a 3% profit, then an item that remains in stock for a month loses you money. So on average (by value) your stock needs to spend significantly less than a month in your store. Cheap items such as low-quality cases and PSUs can stay in stock for a while. More expensive items such as new CPUs and the motherboards to house them must be moved quickly.
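
The arithmetic above can be sketched as follows. The numbers are the illustrative assumptions from this post (2.5% monthly depreciation, roughly 0.5% monthly interest, 3% margin), not real market data:

```python
DEPRECIATION = 0.025   # fraction of original price lost per month in stock
INTEREST = 0.005       # monthly cost of the money tied up in stock
MARGIN = 0.03          # gross profit fraction on a sale

def profit_after(months_in_stock):
    """Net profit fraction on an item that sat in stock for the given time."""
    return MARGIN - months_in_stock * (DEPRECIATION + INTEREST)

# Months on the shelf before a sale becomes a loss.
break_even = MARGIN / (DEPRECIATION + INTEREST)

assert abs(break_even - 1.0) < 1e-9   # exactly one month with these numbers
assert profit_after(0.5) > 0          # sold within a fortnight: still profitable
assert profit_after(2) < 0            # two months on the shelf: a loss
```

With a 20% margin instead, the same formula gives a break-even of over six months, which is why high-margin retail can afford slow-moving stock and computer retail cannot.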

What's the first thing that you do to reduce stock? You can keep stocks low, but there is a limit to how low you can go without losing sales. The next thing to do is to not stock items that customers won't often buy or items where there is a similar item that you can stock as a substitute. The classic example of this is hard drives, a customer will want a certain capacity for a certain price - if their preferred brand is not in stock they will almost always take a different brand if it has the same capacity at the same price. Stores often advertise prices on multiple brands of hard drive in each capacity, but often only try to keep one brand in stock.

Of course this is a problem for the more fussy buyer. If you want to buy two identical parts from the same store on different days you might discover that they don't have the stock on the second day and that they instead offer you something equivalent. Not only do retailers have issues with managing their investment in stock but wholesalers have the same problem. So if a retailer runs out of WD drives and discovers that their preferred wholesaler has also run out of WD drives then they just buy a different brand - most customers don't care anyway.

There are some companies I deal with that have a business model based on services. One of them sells hardware to customers at cost, but charges them for the time spent assembling it, transporting it, etc. The potential for a 3% profit on the hardware isn't worth pursuing; they prefer to just charge for work and also save themselves the sales effort. Another company I know operates almost exclusively on the basis of ordering parts when customers request them (but still makes a small profit margin on the sales); this means that the customer can be invoiced as soon as the hardware arrives. The down-side to this is that wholesalers have the same stock issues and sometimes there are excessive delays before the wholesaler can deliver the hardware.

Dell is the real winner out of this. As they operate by mail-order they don't need to have the stock immediately available, they have a few days to deliver it which gets them time to arrange the supply. They can also have a central warehouse per region which reduces the stock requirements again. A 3% profit on items that rapidly decrease in value makes it almost impossible to sustain a small business. But an organization such as Dell can sustain a successful business at that level.

Of course the down-side for the end-user is that Dell doesn't want to have too many models as that just makes it more complex for the sales channel. Also they have deals with major suppliers which presumably give them deep discounts in exchange for not selling rival products (this is why some brands of parts are conspicuously absent from Dell systems).

10 years ago there used to be a small computer store in every shopping area. Now in Australia there are a few large stores (which often only have a small section devoted to computers) and mail-order. There seems to be much less choice in computer hardware than there was, but it is much cheaper.


PS I've attached a picture of day 39 of the beard.

Saturday, November 11, 2006

more about securing an office

My post about securing an office received many comments, so many that I had to write another blog entry to respond to them and also add some other things I didn't think of before.

One suggestion was to use pam_usb to store passwords on a USB device. It sounds like it's worth considering, but really we need public key encryption. I don't want to have a USB device full of keys; I want a USB device that runs GPG and can decrypt data on demand - the data it decrypts could be a key to unlock an entire filesystem. One thing to note is that USB 2.0 has a bandwidth of 30MB/s while the IDE hard drive in my Thinkpad can sustain 38MB/s reads (at the start of the disk - it would be slower near the end). This means that I would approximately halve the throughput on large IOs by sending all the data to a USB device for encryption or decryption. Given that such bulk IO is rare this is acceptable. There are a number of devices on the market that support public-key encryption; I would be surprised if any of them can deliver the performance required to encrypt all the data on a hard drive, but this will happen eventually.

Bill made a really good point about Firewire. I had considered mentioning it in my post but refrained due to a lack of knowledge of the technology (it's something that I would disable on my own machines, but in the past I couldn't recommend that others disable it without more information). Could someone please describe precisely which 1394 (AKA Firewire) modules should be disabled for a secure system? If you don't need Firewire then it's probably best to just disable it entirely.
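
As a starting point, blacklisting the modules in modprobe's configuration looks like the fragment below. The module names are my assumption and vary between kernel versions (the older ieee1394 stack and the newer one use different names), so check what `lsmod` actually shows on your system:

```
# /etc/modprobe.d/blacklist-firewire -- sketch only; module names
# differ between kernel versions.  sbp2/firewire-sbp2 are the storage
# drivers that give an attached Firewire device DMA access.
blacklist ohci1394
blacklist sbp2
blacklist raw1394
blacklist firewire-ohci
blacklist firewire-sbp2
```

The DMA concern is the important part: a device on the Firewire bus can read and write system memory, which defeats screen locking and disk encryption alike.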

To enable encryption in Fedora Core 6 you need something like the following in /etc/crypttab:

home_crypt /dev/hdaX /path/to/key
swap /dev/hdaX /dev/random swap
Debian uses the same format for /etc/crypttab.

The Peregrine blog entry in response to my entry made some really good points. I wasn't aware of what SUSE had done as I haven't done much with SUSE in the past. I'm currently being paid to do some SUSE work so I will learn more about what SUSE offers, but given the SUSE/MS deal I'm unlikely to use it when I don't have to. Before anyone asks, I don't work for SUSE and given what they have just done I will have to reject any offer of employment that might come from them.

I had forgotten about rsh and telnet. Surely those protocols are dead now? I use telnet as a convenient TCP server test tool (netcat isn't installed on all machines) and never use rsh. But Lamont was correct to mention them as there may be some people still doing such things.

The Peregrine blog made an interesting point about Kerberised NFS being encrypted, I wasn't aware of this and I will have to investigate it. I haven't used Kerberos in the past because most networks I work on don't have a central list of accounts, and often there is no one trusted host.

I strongly disagree with the comment about iSCSI and AoE: "Neither protocol provides security mechanisms, which is a good thing. If they did, the additional overhead would affect their performance". Lack of security mechanisms allows replay attacks. For example, if an attacker compromises a non-root account on a machine that uses such a protocol for its root filesystem, the victim might change their password but the attacker could change the data back to its original values even if it was encrypted.

Encryption needs to have sequence numbers embedded to be effective; this is well known - the current dmcrypt code (used by cryptsetup) encrypts each block with the block ID number so that blocks can not be re-arranged by someone who can't decrypt them (a weakness of some earlier disk encryption systems). When block encryption is extended to a network storage system I believe that the block ID number needs to be used as well as a sequence ID number to prevent reordering of requests.

CPU performance has been increasing more rapidly than hard drive performance for a long time. Some fairly expensive SAN hardware is limited to 40MB/s (I won't name the vendor here, but please note that it's not a company that I have worked for); while there is faster SAN hardware out there I think it's reasonable to consider 40MB/s as adequate IO performance. A quick test indicates that the 1.7GHz Pentium-M CPU in my Thinkpad can decrypt data at a rate of 23MB/s. So to get reasonable speed with encryption from a SAN you might require a CPU twice as fast as the one in my Thinkpad for every client (which means most desktop machines sold in the last two years and probably all new laptops now other than the OLPC machine). You would also require a significant amount of CPU power at the server if multiple clients were to sustain such speeds. This might be justification for making encryption optional or for having faster (and therefore less effective) algorithms as an option.
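
To illustrate why binding the block number into the encryption prevents sector rearrangement, here is a toy sketch. This is emphatically not dmcrypt's real cipher - just a keystream derived from the key and the sector number, so that a ciphertext sector only decrypts correctly at its original position:

```python
import hashlib

def crypt_sector(key, sector, data):
    """Toy illustration only -- NOT dmcrypt's actual algorithm.
    XOR the data with a keystream derived from the key and the sector
    number; encrypting and decrypting are the same operation."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(
            key + sector.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = b"example key"
plain7 = b"data stored in sector seven"
c7 = crypt_sector(key, 7, plain7)

# Decrypting at the right position recovers the data...
assert crypt_sector(key, 7, c7) == plain7
# ...but the same ciphertext moved to another sector decrypts to garbage.
assert crypt_sector(key, 8, c7) != plain7
```

Note that this only stops an attacker rearranging sectors at rest; replaying an old (validly encrypted) sector over the network is exactly the attack that needs a sequence number as well.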

I believe that the lack of built-in security in the AoE and iSCSI protocols introduces a significant weakness to the security of the system which can't be fully addressed. The CPU requirements for such encryption can be met with current hardware even when using a strong algorithm such as AES. There are iSCSI accelerator cards being developed; such cards could also have built-in encryption if there was a standard algorithm. This would allow good performance on both the client and the server without requiring the main CPU.

Finally the Peregrine blog entry recommended Counterpane. Bruce Schneier is possibly the most widely respected computer security expert, and everything he does is good. I didn't mention his company in my previous post because that post was aimed at people on a strict budget; I didn't bother mentioning any item that requires much money, and I don't expect Counterpane to be cheap.

Simon noted that developing a clear threat model is the first step. This is absolutely correct; however most organizations don't have any real idea of their threats. When advising such organizations I usually invent a few possible ways that someone with the same resources and knowledge as me might attack them and ask whether such threats seem reasonable. Generally they agree that such things should be prevented and I give further advice based on that. It's not ideal, but advising clients who don't know what they want will never give an ideal result.

One thing that I forgot to mention is the fact that good security relies on something you have as well as something you know. For logging in it's ideal to use a hardware security token. RSA sells tokens that display a pseudo-random number every minute; the server knows the algorithm used to generate the numbers and can verify that the number entered was generated in the last minute or two. Such tokens are sold at low prices to large corporations (I can't quote prices, but one of my clients had prices that made them affordable for securing home networks); I will have to discover what their prices are for small companies and individuals (I have applied to evaluate the RSA hardware). Another option is a GPG smart-card; I already have a GPG card and just need to get a reader (this has been on my to-do list for a while). The GPG card has the advantage of being based on free software.
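
RSA's actual algorithm is proprietary, but the general idea of a time-based token can be sketched with an HMAC over the current time interval. This is an illustration of the concept, not SecurID; note how the server accepts codes from the adjacent intervals to allow for clock drift:

```python
import hmac, hashlib

INTERVAL = 60  # a new code every minute

def token(secret, t):
    """Six-digit code for the minute containing time t (HMAC sketch,
    not RSA's proprietary SecurID algorithm)."""
    counter = int(t // INTERVAL).to_bytes(8, "big")
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    return "%06d" % (int.from_bytes(digest[:4], "big") % 1000000)

def verify(secret, code, t):
    """Accept codes from the current interval or one either side."""
    return any(token(secret, t + d * INTERVAL) == code
               for d in (-1, 0, 1))

secret = b"shared secret"
now = 1000000
assert verify(secret, token(secret, now), now)
assert verify(secret, token(secret, now - INTERVAL), now)  # slight clock drift OK
assert not verify(secret, token(secret, now - 5 * INTERVAL), now)  # stale code refused
```

Because the code is derived from a shared secret and the time, the server needs no communication with the token, only a reasonably accurate clock.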

One thing I have believed for some time is that Debian should issue such tokens to all developers; I'm sure that purchasing ~1200 tokens would get a good price for Debian, and the security benefits are worth it. The use of such tokens might have prevented the Debian server crack of 2003 or the Debian server crack of 2006. The Free Software Foundation Fellowship of Europe issues GPG cards to its members; incidentally the FSFE is a worthy organisation that I am considering joining.

Friday, November 10, 2006

flash for main storage

I was in a discussion about flash on a closed mailing list, so I'll post my comments here.

I believe that flash will soon be suitable for main storage on most desktop and laptop machines (which means replacing the vast majority of the hard drive market). Flash survives mechanical wear much better than hard drives (flash storage in a camera will usually survive the destruction of the camera), it produces less heat and less noise, and it has better seek times. It is more expensive, although the price is coming down, and the main problem now is the limited number of writes that can be made.

Flash is widely regarded as being slow for bulk IO (benchmark results I have seen approach 10MB/s - while 60MB/s is common for cheap desktop IDE disks). I am not sure how much of this is inherent to flash technology and how much is due to the interface used to access the flash. I often work with Gig-E networks, but at home I only have 100baseT, so I have little need for more than 10MB/s IO rates there.

It is generally regarded that a sector of flash storage wears out at between 10,000 and 1,000,000 writes depending on how recent the hardware is and who you talk to (some vendors are more optimistic than others regarding the usable life of their devices).

Let's assume that you have a 32G flash module running JFFS2 with an average of 2G free (30G of long-term data that doesn't change and 2G of space that is used for new files). Let's assume that the most pessimistic prediction of flash endurance, 10,000 writes, happens to be correct. If 10,000 writes can be made to each part of that 2G of space, that means 20T of data can be written! If we assume that the machine will be obsolete in 5 years then that allows an average of just over 10G of data written per day (20,000/365/5=10.9). On my laptop iostat reports the following after 5 days of uptime:


Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
hda 1.94 9.94 20.33 4614118 9439808
I believe that this means an average of 20 blocks were written per second over the last 5 days with a block size of 4K (page size), which means 6.6G per day. Clearly something is wrong with my laptop, as there should not be so many writes, but even so I wouldn't expect flash storage to wear out within 5 years. Incidentally I do a lot of travelling and generally find that I'm lucky if a laptop hard drive lasts three years, so I could expect flash to last longer than a hard drive for my laptop use.
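
The endurance arithmetic above, as a quick sanity check. It assumes the wear-levelling spreads writes perfectly across the free space, which is the best case JFFS2 aims for:

```python
FREE_SPACE_GB = 2          # pool of space taking new writes
WRITE_CYCLES = 10000       # pessimistic per-sector endurance
LIFETIME_YEARS = 5         # until the machine is obsolete

# Total data that can be written before the free space wears out.
total_writable_gb = FREE_SPACE_GB * WRITE_CYCLES      # 20,000 GB = 20 TB
# Sustainable daily write budget over the machine's life.
per_day_gb = total_writable_gb / (LIFETIME_YEARS * 365)

assert total_writable_gb == 20000
assert abs(per_day_gb - 10.96) < 0.01   # just over 10G per day
```

The 6.6G/day measured above fits inside that 10.9G/day budget, and with a 100,000-cycle part the budget would be ten times larger again.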

When flash fails I believe that only a small part of the data will be lost, which is better than a hard drive failure, which often loses everything!

Also there is nothing preventing you from creating a RAID-1 of flash devices. Last time I checked the JFFS2 kernel code didn't support such things but that could be fixed if there was suitable hardware.

Note that JFFS2 is vastly preferable to Ext3 or similar filesystems on a flash device. Flash needs wear-levelling (spreading the write load over all parts of the device) for sane operation. JFFS2 has this built in to the filesystem, while Ext3 etc are designed to repeatedly write the same parts of the disk. This means that to use Ext3 you need a mapping layer that does wear-levelling, which causes inefficiency. Also JFFS2 has compression built in (the same method as gzip). This is good for smaller flash devices (EG the 32M storage that was common in iPaQs), and also reduces the wear on larger storage.

The biggest problem for me in using flash at the moment is the lack of support for XATTRs (needed for SE Linux) in JFFS2. KaiGai Kohei has been working on this; it's been a while since I checked on the progress so I'm not sure whether it has got into the kernel.org repository yet.

Another problem with flash is that it is totally unsuitable for use as a swap device. This means that you need to have so much RAM that swap is not needed. Fortunately desktop machines with 2G of RAM are becoming common.