Thursday, August 31, 2006

first significant project goes live

One advantage of not being a permanent employee is that I am free to do paid work for other people. This not only provides a greater income but also a wider scope of work.

I've just completed my first significant project since leaving Red Hat. The Inumbers project provides an email address for every mobile phone. If you know someone's mobile phone number but don't have an email address then you can send email to NNN@inumbers.com where NNN is the international format mobile phone number. The recipient will receive an SMS advising them how to sign up and collect the email.

It was fun work: I had to learn how to implement SRS (which I had been meaning to do for a few years), write scripts to interface with a bulk SMS service, and do a few other things that were new to me.

Wednesday, August 30, 2006

SRS development

I've been working on a mail forwarding system which required me to implement SRS so that people who use SPF can be customers of the service (as I use SPF on my domain it's fairly important to me). Reading the web pages before actually trying to implement it, things seemed quite easy. All over the web you will see instructions to just set up an /etc/aliases file that pipes mail through the srs utility.

The problem is that none of the srs utility programs actually support piped mail. It seems that the early design idea was to support piped mail but no-one actually implemented it that way. So you can call the srs utility to discover what the munged (signed with a cryptographically secure hash) originator of the email should be, but you have to do the actual email forwarding via something else.
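
To illustrate the address munging, here is a rough sketch of computing an SRS0 address in shell (illustrative only: real SRS uses a base32 timestamp and a truncated base64 HMAC, so use libsrs2 rather than this; the secret and addresses are examples):

SECRET=example-secret
LOCAL=russell DOMAIN=coker.com.au FWDDOMAIN=inumbers.com
TS=$(date +%j)   # stand-in for the real SRS timestamp encoding
HASH=$(printf '%s' "$TS$DOMAIN$LOCAL" | openssl dgst -sha1 -hmac "$SECRET" | awk '{print $NF}' | cut -c1-4)
echo "SRS0=$HASH=$TS=$DOMAIN=$LOCAL@$FWDDOMAIN"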

This wasn't so much of a problem for me as I use my own custom maildrop agent to forward the mail instead of using /etc/aliases (Postfix doesn't support what I want to do with /etc/aliases - dynamically changing the email routing as you receive it isn't something that Postfix handles internally).

However I still have one problem: sometimes I get two or three copies of the Received-SPF header when Postfix performs its checks.

In my main.cf file I have a smtpd_recipient_restrictions configuration directive that contains check_policy_service unix:private/spfpolicy and the Postfix master.cf file has the following:


spfpolicy unix - n n - - spawn user=USER argv=/PATH/spf-policy.pl
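
For reference, the relevant part of main.cf looks something like this (a sketch; the restrictions other than the policy check are typical examples, not my exact configuration):

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service unix:private/spfpolicy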


Does anyone have any ideas why I would get multiple SPF checks and therefore multiple email header lines such as:

Received-SPF: none (smtp.sws.net.au: domain of SRS0=MUyCQ6=CO=coker.com.au=russell@inumbers.com does not designate permitted sender hosts)
Received-SPF: none (smtp.sws.net.au: domain of SRS0=MUyCQ6=CO=coker.com.au=russell@inumbers.com does not designate permitted sender hosts)
[some other headers]
Received-SPF: pass (inumbers: domain of russell@coker.com.au designates 61.95.69.6 as permitted sender)
Received-SPF: pass (inumbers: domain of russell@coker.com.au designates 61.95.69.6 as permitted sender)
Received-SPF: pass (inumbers: domain of russell@coker.com.au designates 61.95.69.6 as permitted sender)


The email went through one mail router and then hit the destination machine, but somehow got five SPF checks along the way. The pair of identical headers had no other headers between them, and neither did the set of three, so multiple checks were performed without any forwarding. It seems that a single port 25 connection is producing two or three checks. Both machines run Postfix with SPF checking that is essentially identical (apart from being slightly different versions, Debian/unstable and RHEL4).

Any advice on how to fix this would be appreciated.

which blog and syndication server to use?

I'm currently working for a company that in the past has not embraced new technology. One of my colleagues recently installed a wiki which did a lot of good in terms of organizing the internal documentation.

The next step is to install some blogging software. What I want is for every sys-admin to run a blog of what they are doing, with an aggregation of all the team's blogs for when anyone wants to see a complete list of what's been done recently. The security does not have to be particularly high as it's an internal service (probably everyone will use the same account). The ability to store draft posts would be really handy, but apart from that none of the advanced features are really needed.

Also it would be handy to be able to tag posts. For example if userA did some work on the mail server they would tag it with SMTP and then at some future time it would be possible to view all posts with the SMTP tag.

I've done a Google search on this topic and there are many pages comparing blog software, but all the comparisons seem to be based on Internet use: they talk about which versions of RSS are supported, etc. I don't need much of that; an ancient version of RSS will do as long as there is a single syndication program that can support it. Performance doesn't have to be great either, as I'm looking at less than a dozen people posting and reading, and a fairly big Opteron server with a decent RAID array.

For the minimal requirements I could probably write blog and syndication programs as CGI-BIN scripts in a couple of days. They wouldn't support RSS or XML, but that's no big deal. However I expect that if I use some existing software that someone recommends in a blog comment it will be faster to install and have some possibility of future upgrades.

Monday, August 28, 2006

combining two domains in SE Linux

To get the maximum value out of my writing, when I am asked a question of general interest in private mail I will blog my reply (without in any way identifying the person or giving any specifics of their work). I hope that not only will this benefit the general readers, but that the person who originally asked the question may also benefit from reading blog comments.

The question is "I wonder whether I can define a domain which is a union of two existing domain, that is, define a new domain X, which has all the privilege domain Y and Z has got".

There is no way to say in one line of policy "let foo_t do everything that bar_t and baz_t can do" (for reasons I will explain later). However you can easily define a domain to have the privileges that two other domains have.

If you have bar.te and baz.te then a start is:
grep -h '^allow' bar.te baz.te | sed -e 's/bar/foo/g' -e 's/baz/foo/g' >> foo.te
(the -h option stops grep prefixing each line with the file name, and the /g modifier replaces every occurrence on a line)
Then you need to just define foo_t in the file foo.te and define an entry-point type and a suitable domain_auto_trans() rule to enter the domain.
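
For example, the skeleton might look like this (a sketch in the macro-based policy style of the time; user_t as the parent domain is an example, not taken from any real policy):

# declare the new domain and its entry-point type
type foo_t, domain;
type foo_exec_t, file_type, exec_type;
# enter foo_t when a program labeled foo_exec_t is executed from user_t
domain_auto_trans(user_t, foo_exec_t, foo_t)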

There are other macros that allow operations that don't fit easily into a grep command, but they aren't difficult to manage.

The only tricky area is if you have the following:
domain_auto_trans(bar_t, shell_exec_t, whatever1_t)
domain_auto_trans(baz_t, shell_exec_t, whatever2_t)

As every domain_auto_trans() rule needs a single default target type, those two lines conflict, so you will need to decide which one you want in the merged domain. This is the reason why you can't just automatically merge two domains. The same applies to file_type_auto_trans() rules and, in some situations, to booleans.

Sunday, August 27, 2006

Linux on the Desktop

I started using Linux in 1993. I initially used it only in text-mode as I didn't have enough RAM to run XFree86 on my Linux machine. I ran text-mode Linux server machines from 1993 to 1998. In 1998 I purchased my first laptop and installed Linux with KDE on it. I chose KDE because it had the greatest similarity to OS/2 which I had used as my desktop OS prior to that time. At the same time I purchased an identical laptop for my sister and gave her an identical configuration of Linux and KDE.

Running a Linux laptop in 1998 was a lot harder for a non-technical person than it is today. There was little compatibility with MS file formats and few options for support for Internet connections and third-party hardware and software (most things worked but you needed to know what to do). One advantage of using Linux in this regard is that the remote support options have always been good; I was able to fix my sister's laptop no matter which country she was in and which country I was in. Her laptop kept working for more than 5 years without the need for a reinstall (try that on Windows).

It was when VMWare first became available (maybe 2000) that I converted my parents to using Linux. At first they complained a bit about it being different and found VMWare less convenient than the OS/2 DOS box for running their old DOS programs. But eventually they broke their dependence on DOS programs and things ran fairly smoothly. There were occasions when they complained about not having perceived benefits of Windows (such as the supposed ability to plug in random pieces of hardware and have everything work perfectly). The fact that using OS/2 and then Linux has given them 14 years of computer use with no viruses and no trojans tends to get overlooked.

In recent times the only problem that my parents have experienced was when they bought a random cheap printer without asking my advice. The printer in question turned out not to work with Fedora Core 4, but when Fedora Core 5 came out the printer worked. Waiting 6 months for a printer upgrade isn't really a serious problem (the old printer, which had worked for 6+ years, was still going strong).

My parents and my sister now have second-hand P3 desktop machines running Fedora. P3 CPUs dissipate significantly less heat than P4 and Athlon CPUs, which reduces the risk of hard drives dying when machines are left on in unairconditioned rooms as well as saving money on electricity. For the typical home user who doesn't play 3D games there is no real need for a CPU that's more powerful than a 1GHz P3. This of course means that there is less need for me to reinstall on newer hardware, which also means more reliability.

I always find it strange when people claim that Linux isn't ready for the desktop. I provide all the support for three non-technical users of Linux on the desktop and it really doesn't take much work because things just work. Corporate desktops are even easier, in a company you install what people need for their work and don't permit them to do anything different.

It seems to me that Linux has been ready for the desktop since 1998.

Saturday, August 26, 2006

common mistakes in presentations

I attend many presentations and have seen many that were of lower quality than they should have been. Some things are difficult to change (for example I have difficulty speaking slowly). But there are some things that are easy to change that many people seem to get wrong, and I will list some that stand out to me.

Unreadable presentation notes. You have to use a reasonably large font for the text to be readable by most people in the room. This means probably a maximum of about 16 lines of text on the screen. I have attended some presentations where I couldn't read the text from the middle of the room!

Too many slides. On a few occasions I have heard people boasting about how many slides they were going to use. An average of more than one slide per minute does not mean that you have given a good talk; it may mean the exact opposite. One of my recent talks had 8 slides of main content plus an introductory slide while waiting for people to arrive and a Q/A slide with my email address and some URLs for the end. The speaking slot was 30 minutes, giving an average of a slide every 3-4 minutes.

Paging through slides too quickly. If you have 60 slides for a one hour talk then you have no possibility of going through them at a reasonable speed (see above). Even with a reasonable number of slides you may go through some of them too quickly. On one occasion a presentation included a slide with text that was too small to read; I tried to count the lines of text but only got to 30 before the presenter went to the next slide.

Using slides as reading material for after the lecture. Sure, it can be useful for people to review your notes after the lecture, and it's generally better to give them the notes than to have them so busy writing notes that they miss something you say. But if you want to have something verbose and detailed that can't be spoken about in the lecture then the thing to do is to write a paper for the delegates to read. Serious conferences publish papers (minimum length is generally 4 solid A4 pages) which are presented by a talk of 30 to 60 minutes. That way people get a talk as an introduction and some serious reference material if they want to know more, and people who miss the talk can read the paper and get much of the value. It is not possible for slides to take the place of a paper.

Bad diagrams. Diagrams should be really simple (see the paragraph about readable text). It is OK to have diagrams that don't stand alone and need to be described, a lecture is primarily about talking not showing pictures.

When simplifying diagrams make sure that they still represent what actually happens. Simplifying diagrams such that they don't match what you are talking about doesn't help.

Animations. The only thing that is animated at the front of the room should be the person giving the presentation. Otherwise just do the entire thing in Flash, publish it on the web, and don't bother giving a talk.

Staged content, particularly when used as a surprise. Having a line of text appear with every click of the mouse forces the audience to stay with you every step of the way. This may work for primary school students but does not work for an intelligent audience. Give them a screen full at a time and let them read it in any order that they like. This is worst when someone tries to surprise the audience with a punchy line at the end of every paragraph. Surprising the audience once per talk is difficult; trying to do it every paragraph is just annoying.

One final tip that is less serious but not obvious enough to go without a mention. Use black text on a white background; this gives good contrast that can be seen regardless of color-blindness, and with the bright background the room is lit up even if all the lights are off. The audience wants to see you and sometimes this is only possible by projector light. Also the more light that comes out of the projector the less heat builds up inside; it can really mess up a presentation if the projector overheats.

Friday, August 25, 2006

more security foolishness

Dutch police arrested 12 people for acting suspiciously on a flight to India. A passenger said "They were not paying attention to what the flight attendants were saying". I don't pay attention to the flight attendants either. When you fly more than 10 times a year you learn how to do up your seat-belt and when it's appropriate to use your laptop, so once you know where the emergency exits are you can read a book or talk to other passengers. The 12 people who were arrested were apparently exchanging mobile phones - strange, passengers have never been asked not to do that.

The 12 people have since been released. The cost of canceling flights due to security scares is significant for the airlines. The fear that this induces in the public (both of terrorism and of stupid police) makes them less likely to fly, which hurts the airline industry even more as well as hurting the tourism industry.

The US is more dependent on air travel than any other country due to a severe lack of public transport. Australia is also very dependent on air travel due to large distances and no land connection to any other country. The UK also seems to have more of a need for air travel than other EU countries.

If exchanging mobile phones can interfere with air travel then people who dislike the US and the other countries in the coalition of the willing/stupid can cause serious economic damage by trivial things such as exchanging phones in-flight or writing BOB on a sick bag without any risk to themselves.

The war on terror is already as good as lost. William S. Lind's blog is a good source of information on some of the ways that the US is losing. It's a pity that the Australian and UK governments are determined to take their countries down with the US.

2006 Open Source Symposium

Today (well yesterday as of 30 minutes ago) I spoke at the Open Source Symposium in Melbourne. This is an event sponsored by Red Hat. The first day was the business day and the second day was the Red Hat developers day.

I attended both days and spoke on the second day (today). My talk was about designing and implementing a secure system on Red Hat Enterprise Linux 4 (the Inumbers system for gatewaying SMS to email, which is in beta at the time of writing). I covered designing systems for least privilege via a set of cooperating processes under different UIDs, secure coding principles, and SE Linux policy design. My presentation notes are HERE (in OpenOffice 2.0 format).

The talk seemed to be well received, so I'll probably offer variations of it at other venues in the near future. I'm thinking of making a half-day workshop out of it.

While at the symposium one of the SGI guys mentioned that an XFS expert was in Melbourne temporarily. I suggested that such experts should be encouraged to give a talk about their work when they are in town. As a result I arranged a venue for a talk on XFS in about 4 hours, which gave LUV members about 24 hours notice. I wasn't able to attend the meeting due to prior commitments, so I'm not sure how it went.

Wednesday, August 23, 2006

fair trade is the Linux way

I have recently purchased a large quantity of fair trade chocolate. Fair trade means that the people who produce the products are paid a fair price, which enables them to send their children to school, pay for adequate health-care, etc. Paying a small price premium on products such as coffee and chocolate usually makes no notable difference to the living expenses of someone in a first-world country such as Australia, but can make a huge difference to the standard of living of the people who produce the products. Also fair-trade products are generally of a very high quality; you are paying for the best quality as well as the best conditions for the workers.

I will share this chocolate at the next LUV meeting, hopefully the people who attend will agree that the chocolate is both of a high quality as well as being good in principle and that they will want to buy it too.

The Fair Trade chocolate I bought cost $6.95 per 100g. I went to Safeway (local bulk food store with low prices) to get prices on other chocolate to compare. Lindt (cheaper Swiss chocolate) costs $3.09 per 100g and has a special of $2.54. The Lindt and the Fair Trade chocolate are both 70% cocoa, but the Fair Trade chocolate is significantly smoother, has a slightly better aroma, and a better after-taste. So the Fair Trade chocolate costs slightly more than twice as much as Lindt, but I believe that it has a quality to match the price. Then I compared the price of a cheap chocolate: Cadbury Old Gold is also 70% cocoa and costs $4.29 for 220g, which makes it between 3.5 and 4.4 times cheaper than the Fair Trade chocolate. But if you like chocolate then Cadbury products probably aren't on the shopping list anyway. I believe that the Fair Trade chocolate I bought can be justified on the basis of flavor alone, without regard to the ethical issues.

All Linux users know what it's like to have their quality of life restricted by an oppressive monopoly. We are fortunate in that it only affects us in small ways, not in our ability to purchase adequate food and health care. As we oppose software monopolies that hurt us in the computer industry we must also oppose monopolies in the food industry that hurt people in third-world countries. The fair trade programs are the best way I know of doing that. Hopefully after tasting the chocolate many LUV members will want to buy it too.

Tuesday, August 22, 2006

outsourcing - bad for corporations but good for the world

There is ongoing discussion about whether outsourcing is good or bad. The general assumptions seem to be that it is bad for people who work in the computer industry (more competition for jobs and thus lower pay) and good for employers (more work done for less money).

I am not convinced that employers can get any benefit from outsourcing. The problem is that the pay rates for computer work are roughly proportional to the logarithm of the productivity of the person (a rough estimate; it's certainly not linear). Therefore if you get an employee on twice the base salary you might expect ten times the productivity, and an employee on three times the base salary could be expected to deliver one hundred times the productivity (roughly, productivity of 10^(salary multiple - 1)). These numbers may sound incredible to someone who has not done any technical work in the computer industry, but actually aren't that exciting to people who regularly do the work. Someone who knows nothing may perform a repetitive task manually and waste a lot of time, someone who knows a little will write a program to automate it, and someone who knows a lot will write a program to automate it that won't crash...

Programmers in Indian outsourcing companies are paid reasonably well by Indian standards, but they know that it's possible to do a lot better. So all the best Indian programmers end up either migrating to a first-world country or running their own outsourcing company (there are a lot of great Indian programmers out there, but they aren't working in sweat-shops). The Indians who actually end up doing the coding are not the most skilled Indian programmers.

It might be better to hire cheap Indian programmers of average skill than cheap first-world programmers of average skill. But hiring a single skilled programmer (from any country) rather than a team of average programmers will be a significant benefit (both in terms of price and productivity). In addition to this there are the communication problems that you experience with different time zones (the idea that one team can solve a problem while the team on another continent is asleep is a myth) and with different cultures.

I am not convinced that outsourcing does any real harm to good programmers in first-world countries. If someone does computer work strictly 9-5 and never does it for fun then they are not a serious programmer. People who aren't serious about computers will probably be just as happy working in another industry if they get the same pay. Moving a few of the average computer programmer positions to India isn't going to hurt anyone, especially as the industry is continually growing and therefore there is little risk of any given programmer being forced out of the industry. The people who are serious about computers (the ones who program for fun and would do it even if they weren't paid to do so) are the most skilled programmers, they will always be able to find jobs. Will outsourcing reduce the income for such people? Maybe, but earning 5* the average income instead of 6* shouldn't hurt them much.

The final question is whether outsourcing is a good thing. I think it is good, even though it's bad for first-world companies and not particularly good for programmers in first-world countries. Outsourcing benefits developing countries by injecting money into their economies and driving the development of a modern communications infrastructure (telephones, mobile phones, fast Internet access, reliable couriers, etc). I believe that the good which is being done in India by outsourcing greatly exceeds the damage done to companies that use outsourcing services. Therefore I want this to continue and I want to see outsourcing in other developing countries too. There is already a trend towards outsourcing to eastern-European countries such as Russia; this is a good thing and I hope that it will continue.

Monday, August 21, 2006

terrorist "weakest link"

In the game show The Weakest Link competitors get voted off, usually not because they are weak but because the other contestants consider them to be a threat. It's mildly amusing as a TV game show but not funny at all when carried out on an airline.

Recently a flight from Malaga to Manchester was delayed because two passengers were considered suspicious by other passengers (either 6 or 7 passengers refused to get on the plane because of this). The two men were thought to be speaking Arabic (as if there was anyone on the plane who would recognise Arabic when they heard it), and attracted attention because they were wearing coats and looking at their watches. They had been searched twice and found to be clean, but a bunch of idiots on a plane thought they knew better and demanded that the passengers in question be removed.

Lessons to be learned from this for travelling to/from coalition of the willing countries:

  1. Avoid the urge to check your watch when your flight is being delayed unless you are white. Non-white people who do what white people do in this situation are considered to be terrorists.
  2. When travelling to a cold place (such as Manchester) you want to have a coat to wear when getting off the plane. The airline staff won't allow you enough hand-luggage space to store a coat so you will want to wear it when getting on the plane. This is fine if you are white, but if not white just deal with the fact that you will shiver when disembarking.
  3. Learn to speak English for your travels. If you speak another language you will be considered to be a terrorist.
  4. Whatever country you visit, stick to major cities as much as possible. Smaller cities have more racists and nationalistically bigoted people, there probably wouldn't have been a problem on a flight to London.

Also just avoid the coalition of the willing countries in your travels as much as possible. There are far fewer problems in this regard when the government doesn't depend on terrorism hysteria to justify going to war on the basis of lies.

Sunday, August 20, 2006

car-pooling

I am constantly amazed at the apparent lack of interest in car-pooling when travelling between LUV meetings and the restaurant where we have dinner. After the last meeting I was one of the first five people to arrive at the restaurant and we had arrived in three separate cars. For the most luxurious travel you can have four people to a car and a standard sedan class vehicle can legally and safely carry five people. So an extra seven people could have been comfortably driven to the restaurant and an extra ten people could have been safely and legally driven to the restaurant. But instead most people were waiting in the cold at the tram stop.

Things are quite different in Europe. There was one occasion when after an LSM (Libre Software Meeting) conference in Bordeaux we got 8 people in a Mazda 323, now that's what I call car-pooling! NB This is dangerous and illegal, so I can't recommend doing it.

run an insecure system and get raped

After a recent mailing list discussion about computer security I'm going to be quoted in someone's .sig so I think that I need to write a blog entry.

Here is an article about a 2001 case of a man who was arrested for pedophilia and spent 9 days in prison: http://www.xatrix.org/article.php?s=3549 .

This article on The Register has links to a few other articles and describes how a man has been found guilty due to the apparent actions of a hostile program on his machine (and served 20 days jail time).

Rumor has it that pedophiles are really disliked in prison and that they are often attacked by other prisoners. Even spending a few days in prison as a pedophile could be enough to get raped.

Run the latest version of the OS for your PC with all security patches. If you buy a second-hand machine reformat and reinstall as the first thing that you do just in case the last owner had kiddy porn (even though they may not have known of it).

Saturday, August 19, 2006

laptop security on planes

There has been a lot of discussion recently about how to take laptops on planes following the supposed terror threat in the UK, which has been debunked by The Register and other organizations. There is an interesting eWeek article about this that contains the quote "The built-in locks don't yet meet TSA specifications because they cannot be opened using the TSA master key" when reviewing a laptop case. Creating a master key is not that difficult and is explained in this PDF file. Theft by baggage handlers is quite a common occurrence (see this google search for details).

So baggage handlers can easily reverse-engineer the TSA master key, steal laptops from baggage, smuggle drugs, and put bombs in baggage if they are so inclined.

There have been a number of cases of laptops containing sensitive financial, medical, and military data being stolen. Now someone who wants to steal data merely needs to work as a baggage handler and copy the hard drives of laptops before loading them. Data is more valuable if no-one knows that it has been stolen.

It would be ironic if an airline employee had their laptop hard drive copied and sensitive information about airport security was lost because of this.

Thursday, August 17, 2006

more on anti-spam

In response to my last entry about anti-spam measures and the difficulty of blocking SPAM at the SMTP protocol level I received a few replies. Brian May pointed out that the exiscan-acl feature of Exim allows such blocking, and Johannes Berg referred me to his web site http://johannes.sipsolutions.net/Projects for information on how he implemented Exim SPAM blocking at the SMTP level.
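
For reference, an Exim ACL using the content-scanning support looks something like the following (a sketch based on the exiscan documentation, not a tested configuration; it belongs in the DATA-time ACL so the message can be rejected before it is accepted):

# in acl_smtp_data: scan the message with SpamAssassin as user "nobody"
# and reject at SMTP time if it scores as spam
deny message = Message rejected as spam (score $spam_score)
     spam    = nobody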

It seems that this is not possible in Postfix at this time. The only way I know of to do this in Postfix would be to have an SMTP proxy in front of the Postfix server that implements the anti-SPAM features. I have considered doing this in the past but have not had enough time.

Also a comment on my blog criticises SORBS for blocking Tor (an anonymous Internet system). As I don't want to receive anonymous email, and none of the companies I work for want to receive it either, this is something I consider a feature not a bug!

Wednesday, August 16, 2006

blocking spam

There are two critical things that any anti-spam system must do: it must not lose email and it must not cause damage to the rest of the net.

To avoid losing email every message must be either accepted for delivery or the sender must be notified.

To avoid causing damage to the rest of the net spam should not be bounced to innocent third parties. To accept mail, process it, and then bounce messages that appear to be spam will result in spam being bounced to innocent third parties.

The only exception to these two conditions is virus email, which can be positively identified as bad and therefore can be silently discarded. For any other category of unwanted mail there is always a possibility of a false-positive, and therefore the sender should be notified if the mail will not be accepted.

Therefore the only acceptable method of dealing with spam is to reject it at the SMTP protocol level. Currently I am not aware of any software that supports Bayesian filtering while the message is being received so that it can be rejected if it appears to be spam. It would be possible to do this (I could write the code myself if I had enough spare time) but AFAIK no-one has done it.

The most popular methods of recognising SPAM before it is accepted are through DNSBL lists (DNS based lists of IP addresses known to send SPAM), RHSBL lists (DNS based lists identifying domains that are known to be run by spammers), and Gray-listing (giving a transient error condition in the expectation that many spammers won't try again).

Gray-listing is not effective enough to be used on its own, therefore DNSBL and RHSBL systems are required for a usable email system. Quickly reviewing the logs of some of my clients' mail servers suggests that the DNSBL dnsbl.sorbs.net alone is stopping an average of 20 SPAMs per user per day! The SORBS system is designed to block open relays, machines that send mail to spam-trap addresses, and some other categories of obviously inappropriate use. The number of false-positives is very small. On average I add about one white-list entry per month, which isn't much for the email of a dozen small companies. For every white-list entry I have added I have known that the sender has had a SPAM problem; I have not had to add a white-list entry because of a DNSBL making a mistake, just because people want to receive mail from a system that also sends SPAM.
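
For Postfix users, using a DNSBL is a single entry in main.cf; a sketch (the restrictions other than the DNSBL check are typical examples):

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client dnsbl.sorbs.net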

I was prompted to write about anti-spam measures by an ill-informed and unsubstantiated comment on my blog regarding DNSBL services.

If anyone wants to comment on this please feel free. But keep in mind that I have a lot of experience running mail servers including large ISPs with more than a million customers. The advice I give in terms of anti-spam measures concerns techniques that I have successfully used on ISPs of all sizes and that I have found to work well even when both ends use them. Make sure that you substantiate any comments you make and explain them clearly. Saying that something is stupid is not going to impress me when I've seen it work for over a million users.

Tuesday, August 15, 2006

a newbie question about SE Linux and anti-spam measures

An anti-spam measure that is used by a very small number of people is that of verifying the sender address by connecting to the sending mail server. For example when I send mail from russell@coker.com.au the receiving machine will connect to my mail server and see whether it accepts mail addressed to russell@coker.com.au and will reject my mail if that isn't the case.
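
In Postfix this technique is available as sender address verification; a minimal sketch of the main.cf side (the cache map location is an example):

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_unverified_sender
address_verify_map = btree:/var/lib/postfix/verify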

The problem with this is that if I try to send mail to someone whose mail server is listed as a SPAM source then their efforts to verify my email address will fail, and my message to them will bounce with a confusing error message. This means that if either of the two mail servers involved in the communication is listed in a DNSBL or RHSBL service then all communication will be impossible. There will not be an option for one person to say "please phone me on this number if you can't send me an email".

This happened recently when someone from Italy asked me a question about SE Linux. So I will answer here (maybe they read my blog). In any case the answer might be of general interest:

Firstly I have to note that I have a B.Sc degree and no post-graduate qualifications, so it is not accurate to address me as Dr. Coker.

The question was: "Let's imagine a user acquires root rights. Especially on Fedora Core, which modifies the su command to map it to the sysadm_r role, couldn't he/she simply disable SELinux, delete logs, and so on?"

If a user obtains ultimate privileges then they can do all things including deleting logs etc.

One thing to note is that there is no need for any process other than kernel threads to have ultimate privileges. It would be useful in some situations to make log files append-only for all processes, and the SE Linux policy language supports this.
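
As an illustration, a rule like the following (a sketch; real policy for syslogd grants more than this) allows a daemon to add to log files but not rewrite or truncate them:

# append access without write access: existing log data can't be modified
allow syslogd_t var_log_t:file { getattr append };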

The nearest any release policy comes to implementing such things is the separation between sysadm_r and secadm_r in the MLS policy in recent versions of Fedora.

Also note that it is possible to configure a SE Linux policy that does not permit any process to request that a new policy be loaded, the policy files to be changed on disk, or the use of programs such as debugfs. Using SE Linux to enforce a policy that cannot be bypassed by anything less than booting from installation media is quite easy to achieve.

One idea that I had was to have GPG implemented in the system BIOS and have GPG checks performed on the kernel before it's loaded (to verify that the kernel has not been modified). The kernel could be passed a decryption key for the root filesystem by the BIOS, and SE Linux would be enabled as soon as the root filesystem was mounted. Thus nothing less than disassembling the BIOS would allow a hostile person to access the data on the disk. This is all possible with technology that has been common for many years. I almost convinced a BIOS author to implement this in about 2002.

Monday, August 14, 2006

invasive vs inconvenient security

The recent news from the UK gives us an example of invasive security. Preventing passengers carrying on any hand luggage (even wallets) and frisking all of them is the type of treatment you expect for criminals and visitors to maximum security prisons. It's not what you expect for people who are involved in routine (or what used to be routine) travel.

The security measures offered by SE Linux are sometimes described as invasive. I don't believe that this is an accurate description. I admit that sometimes minor tweaks are required (such as setting the correct context of a file). But for most users (corporate users and typical home users) the distribution takes care of all this for them. A default Fedora install should just work for the typical home user and a default Red Hat Enterprise Linux install should just work for the corporate user.

The main reason that it's so easy to use is that the default domain for user sessions and for daemons that are not specifically configured in the security policy is unconfined_t. This means that programs for which there is no policy and programs run from a user session do not have SE Linux access controls. The default configuration of SE Linux only restricts programs that are known to be at risk.

The most common case of SE Linux access controls causing inconvenience is the policy for Apache (the daemon with the most configuration options). There is a set of configuration options (known as booleans) that can be used to determine which aspects of Apache will be confined; generally it only takes a few minutes to determine and specify the correct settings to support the desired operation.
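
For example (a sketch; these boolean names are from Fedora policy of this era, and getsebool -a lists the ones on a given system), permitting Apache to run CGI scripts and serve home directories is just:

setsebool -P httpd_enable_cgi 1
setsebool -P httpd_enable_homedirs 1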

Next time you are being frisked at a UK or US airport and are facing the prospect of a long flight with books and all other forms of entertainment banned keep in mind that airlines have invasive security and should be avoided if possible. SE Linux offers security that is at most a minor inconvenience (usually not even noticed) and should be embraced.

Sunday, August 13, 2006

the waste of closed lists

As I mentioned in my first post the amount of effort I'm prepared to invest in posting to a small group of people is limited. I don't think that I am the only person with this opinion.

I also believe that the number of people who refuse to post to open lists is quite small, and that on many lists they aren't the people who contribute much. I believe that they are outweighed in both number and contributions by the people who want open lists and who are unwilling to spend a large effort on posting to a closed list.

When posting to an open list you have to be concerned about your online reputation. Some lists are closed because they have NSFW content that members don't want known by their colleagues and managers; I guess that this makes sense for some lists.

IMHO the only good reason for closed lists is for discussion of truly sensitive information. This ranges from security problems in software that have not yet been fixed to medical and psychiatric problems. There are many lists which should not be publicly archived, but for general discussion of computers there is no such motivation.

For a list with a primarily technical focus on answering basic questions, secrecy does no good; it merely protects people who want to post off-topic messages and create pointless arguments about issues that they don't understand.

My solution to some of these problems is to use this blog to comment on such things. I expect that my solution will also be adopted by other people on some of the closed lists that I use.

Also it has occurred to me that blogging about issues may improve the quality of list discussion. If instead of responding to a message in point-form you write an article about the general issue then it may reduce the level of personal dispute. I think it would be difficult to have a flame-war by blog.

Finally while on the topic I have to mention that I don't believe in anonymous posting to technical forums. Any content that is worth having should come with someone's name attached. IRC nicks etc are OK, but the person writing the content should be identifiable.

big and cheap USB flash devices

It's often the case with technology that serious changes occur at a particular price or performance point in development. Something sees little use until it reaches a certain combination of low price and high performance, and then everyone demands it.

I believe that USB flash devices are going to be used for many interesting things starting about now. The reason is that 2G flash devices are now on sale for under $100. To be more precise, 1G costs $45AU and 2G costs $85AU.

http://www.coker.com.au/hardware/usb.html

The above page on my web site has some background information on the performance of USB devices and the things that people are trying to do with them (including MS attempting to use them as cache).

One thing that has not been done much is to use USB for the main storage of a system. The OLPC machines have been designed to use only flash for storage as has the Familiar distribution for iPaQ PDAs (and probably several other Linux distributions of which I am not aware). But there are many other machines that could potentially use it. Firewall and router machines would work well. With 2G of storage you could even have a basic install of a workstation!

Some of the advantages of Flash for storage are that it uses small amounts of electricity, has no moving parts (can be dropped without damage), and has very low random access times. These are good things for firewalls and similar embedded devices.

An independent advantage of USB Flash is that it can be moved between machines with ease. Instead of moving a flash disk with your data files you can move a flash disk with your complete OS and applications!

The next thing I would like to do with USB devices is to install systems. Currently a CentOS or Red Hat Enterprise Linux install is just over 2G (I might be able to make a cut-down version that fits on a 2G flash device) and Fedora Core is over 3G. As Flash capacity goes up in powers of two I expect that 4G flash devices will soon appear on the market and I will be able to do automated installs from Flash. This will be really convenient for my SE Linux hands-on training sessions, as I like to have a quick way of re-installing a machine for when a student breaks it badly. I tell the students "play with things, experiment, break things now when no-one cares so that you can avoid breaking things at work".
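
As a sketch of how such boot media could be prepared (the device name is an example and this assumes a BIOS that boots from USB; the kernel, initrd, and syslinux.cfg come from the distribution's installer):

mkfs.vfat /dev/sda1   # assuming the flash device appears as /dev/sda
syslinux /dev/sda1    # install the SYSLINUX boot loader
mount /dev/sda1 /mnt
cp vmlinuz initrd.img syslinux.cfg /mnt
umount /mnt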

The final thing I would like to see is PCs shipped with the ability to boot from all manner of Flash devices (not just USB). I recently bought myself a new computer and it has a built-in reader for four different types of Flash modules for cameras etc. Unfortunately it is one of the few recent machines I've seen that won't boot from USB Flash (the BIOS supported it but it didn't work for unknown reasons). Hopefully the vendors will soon make machines that can boot from CF and other flash formats (the more format choices we have the better the prices will be).

Saturday, August 12, 2006

wasted votes

In a mailing list to which I subscribe there is currently a discussion on US politics with the inevitable discussion of wasted votes. As I don't want to waste my writing on this topic on a closed list I'm posting to my blog.

There is ongoing discussion on the topic of wasted votes. As a matter of principle, if a vote is considered to be wasted then that is a failure of the electoral system.

Having representatives for regions makes some sense in that a regional representative will have more interest in the region than a central government with no attachment to the region. I expect that representatives of regions were initially used because it was not feasible for people to vote for people that weren't geographically local. Now there is no real requirement for geographical locality (only a very small fraction of the voters get to meet the person they are voting for anyway) but having a representative for a region still makes sense.

The requirement for a regional representative means that if you live in a region mostly filled with people who disagree with you then your vote won't change much. For example I live in a strong Labor region so the REAL fight for the lower house seat (both state and federal) occurs in the Labor party room.

My vote for the senate counts, as that is done on a state-wide basis. So of the two votes I cast in an election, one can be considered not to be wasted.

For the US system, the electoral college was developed in a time when it was impossible for the majority of voters to assess the presidential candidates, and it solved the requirements of those times reasonably well. Today it is quite easy to add up all the votes and use either a simple majority or the "Australian ballot".

Currently there is some controversy over the actions of Senator Joe Lieberman who lost the support of his party and then immediately declared that he would stand as an independent candidate. I believe that this illustrates a failure of the electoral system. It should be possible to have multiple candidates from each party on the list. In the Australian system it is possible to do that, but as they are in random order on the voting cards no-one would be sure of which candidate of the winning party would get the seat unless there were actual reasons for preferring one candidate over another (which sadly often isn't the case). This is good for voters (the minority of voters who care enough about internal party policies to prefer one party candidate over another should make the decision) but not good for the candidates who want a better chance of winning without actually demonstrating that they can represent their voters better than other candidates.

The Australian government system has nothing equivalent to the US presidential election; the prime minister is voted in by the members of parliament, so there is little chance of getting multiple candidates from one party contesting one position. For the US presidential election I think that the best thing to do would be to have an "Australian ballot" and permit multiple candidates from each party. For example you could have Bush and Cheney running as candidates for president, with each promising to make the other their VP if elected. With the Australian ballot you could put Bush and Cheney as the last two preferences on your ticket and the relative order you gave them would still count.

I think that with the US presidential and state governor elections there is enough knowledge of the candidates among the voters to make it worth-while for each of the major parties to run multiple candidates.

One of many advantages of having multiple candidates is that you might have real debates. If the main candidates from the two big parties have a set of strict rules for their debate that prevents any surprise then the people who are the less likely candidates from those parties (and who therefore have less to lose) could go for a no-holds-barred debate with a selection of random members of the public asking questions.

Of course none of this is likely to happen. Any serious change would have the potential to adversely affect at least one of the major parties, and any improvement would necessarily have a negative impact on most of the current politicians. Votes ARE being wasted, and most politicians seem to like it that way.