Sunday, January 28, 2007

ssh tunneling of email

On a Debian mailing list someone claimed that it was inconvenient to use ssh tunneling for sending and receiving email due to the issue of broken connections.

On my source-dump blog I have posted an entry with xinetd configuration for doing this in a reliable manner.
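The basic idea is to have xinetd spawn a fresh ssh process for each incoming connection, so a dead tunnel never gets stuck in the way. Below is a minimal sketch of the sending side (not the exact configuration from my source-dump entry); the port number, local user, and remote account are hypothetical, and key-based ssh authentication is assumed:

service smtp-tunnel
{
    type        = UNLISTED
    port        = 2525
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = tunnel
    bind        = 127.0.0.1
    # each connection runs a fresh ssh that pipes the socket to the
    # mail server's SMTP port via netcat
    server      = /usr/bin/ssh
    server_args = -q -T tunnel@mailserver nc localhost 25
}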

Thursday, January 25, 2007

presentation laptops

I suggested in a previous blog entry that conferences should provide computers that speakers can use for their presentations. The reason for this is that getting one computer working with the beamer in each room is an easy task, while getting the laptop of every speaker to work is much more difficult.

It seems that my idea has been rejected by almost everyone who read it, so I'll document some tips for getting a laptop working.

 SZ:    Pixels          Physical       Refresh
*0   1400 x 1050   ( 474mm x 356mm )  *50
 4    640 x 480    ( 474mm x 356mm )   50
 5    800 x 600    ( 474mm x 356mm )   50
 6   1024 x 768    ( 474mm x 356mm )   50
 8   1280 x 960    ( 474mm x 356mm )   50
 9   1280 x 1024   ( 474mm x 356mm )   50

Firstly there is the command xrandr which can be used to change the resolution without logging out. Above are the most useful lines produced by running xrandr with no options on my Thinkpad T41p. The left column is the index to the list of resolutions. For example I run xrandr -s 9 to use mode 1280x1024 and xrandr -s 0 to use mode 1400x1050. This takes much less time than editing an X config file!

The next thing to note is that my Thinkpad has a refresh rate of 50Hz while most beamers apparently expect at least 60Hz. This explains why I have had ongoing problems in getting my Thinkpad to work correctly for presentations for the entire time that I have owned it. If you own such a Thinkpad then I recommend bringing another laptop for your presentation, on the assumption that the display possibly won't work at all and probably won't work properly! I had developed this habit anyway after repeated problems in getting my Thinkpad working (on a number of occasions in several countries). It's good to now know the reason for this (thanks Keith).

When setting the resolution there are often tweaks that can be used. For example in my talk for the Debian Miniconf of LCA 2007 I used mode 800x600 (I think - Keith set it up and I didn't look closely after verifying that things basically worked). Even though the beamer didn't have good support for a low refresh rate it worked when the resolution was low enough. Fortunately the xrandr program allows changing resolution fast enough that all 13 resolutions could be attempted in about a minute.

The support for better display detection and configuration is steadily improving. Hopefully this year the problems will be solved (which means that for the Debian and RHEL releases in 2008 the problem will be solved).

A possible work-around is to use Xephyr (the replacement for Xnest). In a previous blog entry I described how to get Xephyr going for use by Xen images. It seems to be a common symptom of display synchronization problems that the edges of the screen will be clipped. The most common work-around for this is to not use the full-screen mode of OpenOffice - which means that instead of having a small amount of text clipped there is a large amount of OpenOffice menus etc on the screen. As Xephyr accepts any resolution it should not be difficult to arrange for it to use 98% of the screen space and then run the presentation full-screen in the Xephyr window. This will be particularly useful for programs such as MagicPoint (my favorite presentation program) which don't support a windowed mode of operation.
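For example, something like the following (untested, with a hypothetical file name) would use most of a 1400x1050 panel while leaving a safety margin at the edges:

# run Xephyr slightly smaller than the panel so clipped edges don't
# lose any of the presentation
Xephyr :2 -screen 1370x1020 &
# run MagicPoint full-screen inside the nested server
DISPLAY=:2 mgp slides.mgp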

If you have any other suggestions on how to solve or work around display problems with laptops then please leave comments.

Wednesday, January 24, 2007

university degrees

Recently someone asked me for advice on what they can do to improve their career without getting a degree.

I have performed a quick poll of some people I know and found that for experienced people there seems to be little need for a degree. People who have extensive experience but no degree report no problems in finding work, and employers don't report any reluctance to hire someone who has the skills but no degree.

One thing that a degree is very good for is making a career jump. This is most notable when you get your first professional job: school results and references from part-time work don't help much, and a degree is a massive benefit. But if you have proven your abilities in the field then most employers will be more interested in checking references and the interview process than in qualifications. If you are only interested in getting a job that is one level above where you are at the moment then lacking a degree should not be a problem.

Another possibility for someone who lacks a degree is certification, such as that provided by the Linux Professional Institute (LPI) and the Red Hat Certified Engineer (RHCE) program. One advantage of the RHCE certification is that it is based on fixing misconfigured Linux systems - no theoretical questions, just the type of work that real sys-admins do for their job. This means that people who do badly in traditional exams can be expected to do well, and it also means that the RHCE certification accurately depicts real skills in fixing problems (and it should therefore be more valuable to employers). The LPI exams can be taken by anyone, but to sit for an RHCE exam you have to be sponsored by an employer.

There are ways of getting career benefits without strictly going upwards. One way of doing this is to move to a region where the pay scales are different. Some years ago I moved from Melbourne, Australia to London to increase my salary. In London I did work that was a lot less challenging and was paid considerably better for doing so. One thing I discovered is that in London Australians were widely admired for working really hard. I don't think that Australians work harder than British people on average, but people who will move to the other side of the world to advance their career are generally prepared to work hard!

If you spend some time working in another region and then decide to return home you will probably find that employers are more interested in hiring you for what you have learned there. Whether you actually learn things that are of value to potential employers when working in another country is debatable; it probably depends on the individual. But when applying for a job you want to make the most of every opportunity that is available - if someone wants to hire you for the special skills you learned in another country then that's OK. ;)

Another possibility is moving to a different industry sector. Some industries have career bottlenecks at different levels. If there is no possibility of moving upwards in the area where you work then getting a job with the same skill requirements in a different industry might open up more opportunities. An example of this is working as a sys-admin in a medium sized company that is not IT based. If you are the only sys-admin in the company then there is no possibility of promotion, moving from such a company to an ISP (or other IT based company) would then give the possibility of becoming a senior sys-admin, team leader, or even the manager of the ops team (if management is your thing).

A final option that few people consider is becoming a contractor. Contractors tend to earn significantly more than permanent employees when they do the same work (so becoming a contractor provides a significant immediate benefit), and as the duration of contracts is usually short there is less attention paid to degrees etc (what does it matter if the contractor will only be there for three months?). Of course most contracts last significantly longer than the initial term; some contractors end up working in the same position for 10 years or more!

There are some down-sides to being a contractor. One is that they get less interesting work (offering someone a choice of projects if they become a permanent employee, or the project that is deemed to be least interesting if they insist on being a contractor, is not uncommon). Another down-side is the way that contractors are used. The ideal way of running a company is to have mostly permanent employees and to use contractors for special skills, short-term projects, and for emergencies when permanent employees can't be hired. When a company has almost no permanent employees it usually means that something is going badly wrong. This means that if you select a random contract role there is a good chance that it will be one where things are going badly wrong. The money from contracting is good, but it can be depressing when projects fail.

Friday, January 19, 2007

licence for lecture notes

While attending LCA it occurred to me that the lecture notes from all the talks that I have given lack a copyright notice. So I now retrospectively license my lecture notes in the manner that probably matches what everyone was already doing. The Creative Commons web site has a form to allow you to easily choose a license. I have chosen the license below; it applies to all lecture notes currently on my web site and all that I publish in future unless they contain special notice of different license conditions.

Update: From now on I am releasing all lecture notes under a non-commercial share-alike license. I had previously not given a specific license to the content on my blog - now I am specifically licensing it under a non-commercial share-alike license. This means (among other things) that you may not put my content on a web page that contains Google AdWords or any other similar advertising.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.

Thursday, January 18, 2007

new release of postal

Today I have released a significant new version of my mail server benchmark Postal! The list of changes is below:

  • Added new program bhm to listen on port 25 and send mail to /dev/null. This allows testing mail relay systems.
  • Fixed a minor bug in reporting when compiled without SSL.
  • Made postal write the date header in correct RFC2822 format.
  • Removed the name-expansion feature; it confused many people and is not needed now that desktop machines typically have 1G of RAM. Now postal and rabid can have the same user-list file.
  • Moved postal-list into the bin directory.
  • Changed the thread stack size to 32K (used to be the default of 10M) to save virtual memory size (not that this makes much difference to anything other than the maximum number of threads on i386).
  • Added a minimum message size option to Postal (now you can use fixed sizes).
  • Added a Postal option to specify a list of sender addresses separately to the list of recipient addresses.
  • Removed some unnecessary error messages.
  • Handle EINTR to allow ^Z and "bg" from the command line. I probably don't handle all cases, but now that I agree that failure to handle ^Z is an error I expect bug reports.
  • Made the test programs display output on the minute, previously they displayed once per minute (EG 11:10:35) while now it will be 11:10:00. This also means that the first minute reported will have something less than 60 seconds of data - this does not matter as a mail server takes longer than that to get up to speed.
  • Added support for GNUTLS and made the Debian package build with it. Note that BHM doesn't yet work correctly with TLS.
  • Made the programs exit cleanly.
Thanks to Inumbers for sponsoring the development of Postal.

I presented a paper on mail server performance at OSDC 2006 that was based on the now-released version of Postal.

I've been replying to a number of email messages in my Postal backlog, some dating back to 2001. Some of the people had changed email address during that time so I'll answer their questions in my blog instead.

In file included from /usr/include/openssl/ssl.h:179:
/usr/include/openssl/kssl.h:72:18: krb5.h: No such file or directory

One problem reported by a couple of people is getting the above error when compiling on an older Red Hat release (RHL or RHEL3). Running ./configure --disable-ssl should work around that problem (at the cost of losing SSL support). As RHEL3 is still in support I plan to fix this bug eventually.
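On those Red Hat releases the Kerberos headers typically live under /usr/kerberos/include rather than on the default include path, so pointing the compiler at them may be an alternative work-around that keeps SSL support. This is an untested sketch:

# keep SSL support by telling configure where krb5.h lives
CPPFLAGS=-I/usr/kerberos/include ./configure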

There was a question about how to get detailed information on what Postal does. All three programs support the options -z and -Z to log details of what they do. This isn't convenient if you only want a small portion of the data but can be used to obtain any information you desire.

One user reported that options such as -p5 were not accepted. Apparently their system had a broken implementation of the getopt(3) library call used to parse command-line parameters.

Wednesday, January 17, 2007

political compass

It appears that some people don't understand what right-wing means in terms of politics, apart from using it as a general term of abuse.

I recommend visiting the site http://www.politicalcompass.org/ to see your own political beliefs (as determined by a short questionnaire) graphed against some famous people. The unique aspect of the Political Compass is that they separate economic and authoritarian values. Stalinism is listed as extreme authoritarian-left and Thatcherism as medium authoritarian-right. Nelson Mandela and the Dalai Lama are listed as liberal-left.

I score -6.5 on the economic left/right index and -6.46 on the social libertarian/authoritarian index, which means that I am fairly strongly liberal-left. Previously the Political Compass site would graph results against famous people, but they have since removed the combined graph feature and the scale from the separate graphs. Thus I can't determine whether their analysis of the politics of Nelson Mandela and the Dalai Lama indicates that one of those men has beliefs that more closely match mine than the other. I guess that this is because the famous politicians did not take part in the survey and an analysis of their published material was used to assess their beliefs, which would lead to less accuracy.

The Wikipedia page on Right-Wing Politics provides some useful background information. Apparently before the French revolution the nobility sat on the right of the president's chair in the Estates General. The tradition of politically conservative representatives sitting on the right of the chamber started there. I believe that such a seating order is still used in France, while in the rest of the world the terms left and right are used independently of seating order.

Right-wing political views need not be associated with intolerance. If other Debian developers decide to publish their political score as determined by the Political Compass quiz then I'm sure that we'll find that most political beliefs are represented, and I'm sure that most people will discover that someone who they like has political ideas that differ significantly from their own.

lifetime failures (LF)

This morning at LCA Andrew Tanenbaum gave a talk about Minix 3 and his work on creating reliable software.

He cited examples of consumer electronics devices such as TVs that supposedly don't crash. However in the past I have power-cycled TVs after they didn't behave as desired (not sure if it was a software crash - but that seems like a reasonable possibility) and I have had a DVD player crash when dealing with damaged disks.

It seems to me that there are two reasons that TV and DVD failures aren't regarded as a serious problem. One is that there is hardly any state in such devices, and most of that is not often changed (long-term state such as frequencies used for station tuning is almost never written and therefore unlikely to be lost on a crash). The other is that the reboot time is reasonably short (generally less than two seconds). So when (not if) a TV or DVD player crashes the result is a service interruption of two seconds plus the time taken to get to the power point and no loss of important data. If this sort of thing happens less than once a month then it's likely that it won't register as a failure with someone who is used to rebooting their PC once a day!

Another example that was cited was cars. I have been wondering whether there are any crash situations for a car electronic system that could result in the engine stalling. Maybe sometimes when I try to start my car and it stalls it's really doing a warm-boot of the engine control system.

Later in his talk Andrew produced the results of killing some Minix system processes, which showed minimal interruption to service (killing an Ethernet device driver every two seconds decreased network performance by about 10%). He also described how some service state is stored so that it can be used if the service is restarted after a crash. Although he didn't explicitly mention it in his talk, it seems that he has followed the minimal data loss plus fast recovery features that we are used to seeing in TVs and DVD players.

The design of Minix also has some good features for security. When a process issues a read request it will grant the filesystem driver access to the memory region that contains the read buffer - and nothing else. It seems likely that many types of kernel security bug that would compromise a system such as Linux would not be a serious problem on a microkernel system such as Minix or the HURD. Compromising a driver for a filesystem that is mounted nosuid and nodev would not allow any direct attacks on applications.

Every delegate of LCA was given a CD with Minix 3; I'll have to install it on one of my machines and play with it. I may put a public-access Minix machine online at some time if there is interest.

Tuesday, January 16, 2007

Some ideas for running a conference

Firstly, for smooth running of the presentations it would be ideal if laptops were provided for displaying all presentations (obviously this wouldn't work for live software demos, but it would work well for slide-show presentations). Such laptops need to be tested with the presentation files that will be used for the talks (or pre-release versions that are produced in the same formats). It's a common problem that the laptops owned by the speakers have problems connecting to the projectors used at the conference, which can waste time and give a low quality display. Another common problem is that laptops owned by the conference often have different versions of the software used for the slides, which renders them differently; the classic example of this is OpenOffice 1.x and 2.x, which render presentations differently such that using the wrong one results in some text being off-screen.

The easy solution to this is for the conference organizers to provide laptops that have multiple boot options for different distributions. Any laptop manufactured in the last 8 years will have enough disk space for the latest release of Debian and the last few Fedora releases. As such machines won't be on a public network there's no need to apply security updates, and therefore a machine can be used at conferences in successive years. A 400MHz laptop with 384M of RAM is quite adequate for this purpose while also being old enough that it will sell cheaply.

A slightly better solution would be to have laptops running Xen. It's not difficult to set up Xephyr in full-screen mode to connect to a Xen image, and you could have several Xen instances running with NFS file sharing so that the speaker could quickly test several distributions to determine which one gives the best display of their notes. This would also allow speakers to bring their own Xen images. A sketch of the Xephyr part is below.
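One way to do the Xephyr part is XDMCP; this is an untested sketch and the guest name is hypothetical:

# full-screen nested X server showing the login screen of a Xen guest
# that runs a display manager with XDMCP enabled
Xephyr :3 -fullscreen -query xenguest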

This is especially important if you want to run lightning talks; when only 5 minutes are allocated for a talk you can't afford to waste 2 minutes in setting up a presentation!

In other news Dean Wilson gave my talk yesterday a positive review.

Monday, January 15, 2007

LCA talk

This afternoon I gave a talk at the Debian mini-conf of LCA on security improvements that are needed in Debian, the notes are online here.

The talk didn't go quite as well as I had desired; I ended up covering most of the material in about half the allotted time, and I could tell that the talk was too technical for many audience members (perhaps 1/4 of the audience lost interest). But the people who were interested asked good questions (and used the remainder of the time). Some of the people who are involved in serious Debian coding were interested (and I'll file a bug report based on information from one of them after making this post).

I believe that I was quite successful in my main aim of giving Debian developers ideas for improving the security of Debian. My second aim of educating the users about options that are available now (with some inconvenience) and will be available shortly in a more convenient form was partially successful.

The main content of my talk was based on the lightning talk I gave for OSDC, but was more detailed.

After my talk I spoke to Crispin Cowan from Novell about some of these issues. He agrees with me about the need for more capabilities which I take as a very positive sign.

top 10 girl geeks

We have a list of 10 (famous) girl geeks from CNET and one from someone else.

The CNET list has Ada Byron, Grace Hopper, Mary Shelley, and Marie Curie. Mary Shelley isn't someone who I'd have listed, but it does seem appropriate now that I think about it. Marie Curie is one of the top geeks of all time (killing yourself through science experiments has to score bonus geek points). I hope that there are better alternatives to items 4, 7, 9, and 10 on the CNET list.

The list from someone else has 9 women I've never heard of. If we are going to ignore historical figures (as done in the second list) but want to list women who are actually famous then the list seems to be short. If we were to make a list of women who are known globally (which would mean excluding women who are locally famous in Australia, or in Debian, for example) then the list would be even shorter. The only really famous female geek that I can think of is Pamela from Groklaw.

The process of listing the top female geeks might have been started as an attempt to give a positive list of the contributions made by women. Unfortunately it seems to highlight the fact that women are lacking from leadership positions. There seem to be no current women who are in positions comparable to Linus, Alan, RMS, ESR, Andrew Tanenbaum, or Rusty (note that I produced a list of 6 famous male geeks with little thought or effort).

Kirrily has written an interesting article on potential ways of changing this.

Thursday, January 11, 2007

ps and security

A post by Scott James Remnant describes how to hide command-line options from ps output. It's handy to know that, but the post made one significant implication that I strongly disagree with. It said of command-line parameters: "perhaps they contain sensitive information". If the parameters contain sensitive information then merely hiding them after the fact is not what you want to do, as it exposes a race condition!

One option is for the process to receive its sensitive data via a pipe (either piped from another process or from a named pipe that has restrictive permissions). Another option is to use SE Linux to control which processes may see the command-line options for the program in question. A sketch of the pipe approach is below.
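For example, something like the following keeps a secret out of the argument list entirely; the program name and its option are hypothetical, this is just a sketch of the technique:

# the secret goes over a pipe attached to stdin, so it never appears
# in /proc/PID/cmdline or ps output
printf '%s' "$SECRET" | myprog --password-fd=0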

In any case removing the data shortly after hostile parties have had a chance to see it is not the solution.

Apart from that it's a great post by Scott.

Wednesday, January 10, 2007

some random Linux tips

  • echo 1 > /proc/sys/vm/block_dump

    The above command sets a sysctl to cause the kernel to log all disk writes. Below is a sample of the output from it. Beware that there is a lot of data.

    Jan 10 09:05:53 aeon kernel: kjournald(1048): WRITE block XXX152 on dm-6
    Jan 10 09:05:53 aeon kernel: kjournald(1048): WRITE block XXX160 on dm-6
    Jan 10 09:05:53 aeon kernel: kjournald(1048): WRITE block XXX168 on dm-6
    Jan 10 09:05:54 aeon kernel: kpowersave(5671): READ block XXX384 on dm-7
    Jan 10 09:05:54 aeon kernel: kpowersave(5671): READ block XXX400 on dm-7
    Jan 10 09:05:54 aeon kernel: kpowersave(5671): READ block XXX408 on dm-7
    Jan 10 09:05:54 aeon kernel: bash(5803): dirtied inode XXXXXX1943 (block_dump) on proc

  • Prefixing a bash command with ' ' will prevent a ! operator from running it. For example if you had just entered the command " ls -al /" then "!l" would not repeat it but would instead match the preceding command that started with an 'l'. On SLES-10 a preceding space also makes the command not appear in the history, while on Debian/etch it does (both run Bash 3.1) - presumably a difference in the default HISTCONTROL setting, which determines whether space-prefixed commands are saved.

  • LD_PRELOAD=/lib/libmemusage.so ls > /dev/null

    The above LD_PRELOAD will cause a dump to stderr of data about all memory allocations performed by the program in question. Below is a sample of the output.

    Memory usage summary: heap total: 28543, heap peak: 20135, stack peak: 9844
             total calls   total memory   failed calls
     malloc|          85          28543              0
    realloc|          11              0              0  (in place: 11, dec: 11)
     calloc|           0              0              0
       free|          21          12107
    Histogram for block sizes:
        0-15         29  30% ==================================================
       16-31          5   5% ========
       32-47         10  10% =================
       48-63         14  14% ========================
       64-79          4   4% ======
       80-95          1   1% =
       96-111        20  20% ==================================
      112-127         2   2% ===
      208-223         1   1% =
      352-367         4   4% ======
      384-399         1   1% =
      480-495         1   1% =
    1536-1551         1   1% =
    4096-4111         1   1% =
    4112-4127         1   1% =
    12800-12815       1   1% =

Monday, January 08, 2007

cooling

Recently there has been some really hot weather in Melbourne that made me search for alternate methods of cooling.

The first and easiest method I discovered is to keep a 2L bottle of water in my car. After it's been parked in the sun on a hot day I pour the water over the windows. The energy required to evaporate water is 2500 Joules per gram, so the 500ml that probably evaporates from my car (I guess that 1.5L is spilt on the ground) would remove 1.25MJ of energy from my car. This makes a significant difference to the effectiveness of the air-conditioning (the glass windows being the largest hot mass that can easily conduct heat into the cabin).

It would be good if car designers could incorporate this feature. Every car has a system to spray water on the wind-screen to wash it, if that could be activated without the wipers then it would cool the car significantly. Hatch-back cars have the same on the rear window, and it would not be difficult at the design stage to implement the same for the side windows too.

The next thing I have experimented with is storing some ice in a room that can't be reached by my home air-conditioning system. Melting ice absorbs 333 Joules per gram. An adult who is not doing any physical activity will produce about 100W of heat, that is 360KJ per hour. Melting a kilo of ice per hour will absorb 333KJ per hour, and if the amount of energy absorbed as the melt-water approaches room temperature is factored in then a kilo of ice comes close to absorbing the heat energy of an adult at rest. Therefore 10Kg of ice stored in your bedroom will prevent you from heating it with your body heat during the course of a night.

In some quick testing I found that 10Kg of ice in three medium sized containers would make a small room up to two degrees cooler than the rest of the house. The ice buckets also have water condense on them; in a future experiment I will measure the amount of condensation and try to estimate the decrease in humidity. Lower humidity makes a room feel cooler as sweat will evaporate more easily. Ice costs me $3 per 5Kg bag, so for $6 I can make a hot night significantly more bearable. In a typical year there are about 20 unbearably hot nights in Melbourne, so for $120 I can make one room cooler on the worst nights of summer without the annoying noise of an air-conditioner (the choice of not sleeping due to heat or not sleeping due to noise sucks).

The density of dry air at 0C and a pressure of 101.325 kPa is 1.293 g/L.

A small bedroom might have an area of 3M*3M and be 2.5M high giving a volume of 22.5M^3 == 22,500L. 22,500 * 1.293 = 29092.500g of air.

One Joule can raise the temperature of one gram of cool dry air by 1C.

Therefore when a kilo of ice melts it would be able to cool the air in such a room by more than 10 degrees C! The results I observe are much smaller than that, obviously the walls, floor, ceiling, and furnishings in the room also have some thermal energy, and as the insulation is not perfect some heat will get in from other rooms and from outside the house.
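A quick back-of-envelope check of that number (treating the specific heat of air as roughly 1 J per gram per degree, as above):

# 333KJ to melt a kilo of ice, divided by the heat capacity of the air
echo 'scale=1; 333000 / (22500 * 1.293)' | bc
# prints 11.4 (degrees C)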

If you have something important to do the next day then spending $6 or $12 on ice the night before is probably a good investment. It might even be possible to get your employer to pay for it, I'm sure that paying for ice would provide better benefits in employee productivity than many things that companies spend money on.

Sunday, January 07, 2007

Xephyr

As part of my work on Xen I've been playing with Xephyr (a replacement for Xnest). My plan is to use Xen instances for running different versions of desktop environments. You can't just ssh -X to a Xen image and run things. One problem is that some programs such as Firefox do strange things to try to ensure that you only have one instance running. Another problem is with security: the X11 security extensions don't seem to do much good. A quick test indicates that a ssh -X session can't copy the window contents of a ssh -Y session, but can copy the contents of all windows run in the KDE environment. So this extension to X (and the matching ssh support) seems to do little good.

One thing I want to do is to have a Xen image for running Firefox with risky extensions such as Flash, and keep it separate from my main desktop for security and manageability.

Xephyr :1 -auth ~/.Xauth-Xephyr -reset -terminate -screen 1280x1024

My plan is to use a command such as the above to run the virtual screen. That means to have a screen resolution of 1280x1024, to terminate the X server when the last client exits (both the -reset and the -terminate options are required for this), to be display :1 and listen with TCP (the default), and to use an authority file named ~/.Xauth-Xephyr.

xauth generate :1 .

The first problem is how to generate the auth file; the xauth utility is documented as doing this via the above command, but that really connects to a running X server and copies the auth data from it.

The solution (as pointed out to me by Dr. Brian May) is to be found in the startx script, which solves the same problem. The way to do it is to use the add :1 . $COOKIE command in xauth to create the auth file used by the X server, and to generate the cookie with the mcookie program.

In ~/.ssh/config:
Host server
SendEnv DISPLAY

In /etc/ssh/sshd_config:
AcceptEnv DISPLAY

The next requirement is to tell the remote machine (which incidentally doesn't need to be a Xen virtual machine, it can be any untrusted host that contains X applications you want to run) which display to use. The first thing to do is to ssh to the machine in question and run the xauth program to add the same cookie as is used for the X server. Then the DISPLAY environment variable can be sent across the link by putting the above settings in the ~/.ssh/config file at the client end (where server is the name of the host we will connect to via SSH) and having the line AcceptEnv DISPLAY in the sshd_config file on the server to accept the DISPLAY environment variable. It would have been a little easier to configure if I had added the auth entry to the main ~/.Xauthority file and used the command DISPLAY=:1 ssh -X server; this would be the desired configuration when operating over an untrusted network. But when talking to a local Xen instance it gives better performance to not encrypt the X data.

The following script will generate an xauth entry, run a 1280x1024 resolution Xephyr session, and connect to the root account on machine server and run the twm window manager. Xephyr will exit when all X applications end. Note that you probably want to use passwordless authentication on the server as typing a password twice to start the session would be a drag.

#!/bin/sh

# generate a random cookie for this session
COOKIE=`mcookie`
FILE=~/.Xauth-Xephyr
rm -f $FILE
# register the cookie on the remote machine and in our own auth file
ssh root@server "echo \"add 10.1.0.1:1 . $COOKIE\" | xauth"
echo "add :1 . $COOKIE" | xauth -f $FILE
# start the nested X server - it exits when the last client disconnects
Xephyr :1 -auth $FILE -reset -terminate -screen 1280x1024 $* &
# run the window manager on the remote machine against our display
DISPLAY=10.1.0.1:1 ssh root@server twm
wait

Saturday, January 06, 2007

document storage

I have been asked for advice about long-term storage of documents. I decided to blog about it because my thoughts may be useful to others, and because if I get something wrong then surely people will correct me. ;)

Many organizations are looking at using computers for storing all documents. This gives savings on the costs of storing paper - the promise of the paperless office is being fulfilled. There are some potential issues about whether a signature in a PDF file that's scanned from paper is valid - but I guess it's the same as a signature on a FAX and everyone seems to accept FAXed contracts.

The technical problem is how to reliably store data long-term. The problem is that all modern methods of storing data will degrade over time. Anything less than engraving a message in stone, gold, or platinum and burying it will have some data loss eventually.

If the documents that need to be archived have no special requirements and if you have a good backup system in place (testing backup media, off-site storage in case of disaster, multiple sets of hardware that can read the backup media in case of hardware failure, etc) then you might be able to just store the documents on a server and include them in the backup plan. The regular backups should cater for replacing media over the long term. If however there is a significant amount of data or the data has confidentiality requirements that preclude having it all online all the time then you need a separate infrastructure for such storage.

Regular backup systems have to deal with files being deleted from storage and files that have their contents changed. For a document archiving system no file will ever be changed once it has been created and no file will be deleted. This allows some simplifications to the backup strategy. For example if you have multiple terabytes of documents backed up by tape and stored off-site you could use CD-ROMs or other media for storing recent changes. It would be very easy for an employee to grab a couple of CDs before rushing out of a burning building, but grabbing a set of tapes (or the correct tape from a large set) may not be possible.

It would be possible to use a tape library system as the primary storage for documents. If a large organization had implemented this a few years ago then that might have been a good option. Nowadays storage is getting increasingly large and cheap: terabytes are available in desktop PCs and hundreds of terabytes are available for server storage. So having the primary document store on a server with a decent amount of space and then making tape backups for storage in secure locations seems viable.

One thing to note about such document storage is that having everything on a server allows a much larger amount of data to be accessed and copied more easily than on paper. Sorting through a billion paper documents and copying the thousand most useful ones would be a difficult task for someone who was involved in industrial espionage. Finding the most useful files when they are indexed on a server should be quite easy and copying a few thousand is also easy (one thousand scanned documents of medium size should fit on a USB memory stick - much smaller than a few reams of copied documents).

Finally, documents have to be archived in publicly documented file formats that can be easily read in the future. The PDF specification is well known and there are multiple programs that can display data in such files; another good option for scanned documents is JPEG. Proprietary formats such as MS-Word should never be used - you never know whether you will be able to read them in four years, let alone the seven years for which many documents must be retained or the 20-30 years that some documents must be retained.

Friday, January 05, 2007

core files

The issue of core file management has come up for discussion again in the SE Linux list.

I believe that there are two essential security requirements for managing core files: one is that the complete security context of the crashing process is stored (to the greatest possible extent), and the other is that processes with different security contexts be prevented from discovering that a process dumped core (when attacking a daemon it would be helpful to know when you made one of its processes dump core).

The core file will have the same UID and GID as the process that crashed. It's impossible to maintain the complete security context of the crashing process in this manner as Unix permissions support multiple supplementary groups and Unix filesystems only support one GID. So the supplementary groups are lost.

There is also a sysctl kernel.core_pattern which specifies the name of the core file. This supports a number of modifiers, EG the value "core.%p.%u.%g" would give a file named "core.PID.UID.GID". It would be good to have a modification to the kernel code in question to allow the SE Linux context to be included in this (maybe %z).

To preserve the SE Linux context of the crashing process with current kernel code we need to have a unique type for each process that dumps core, this merely requires that each domain have an automatic transition rule for creating files in the directory chosen for core dumps. In the default configuration we have core files dumped in the current directory of the process. This may be /tmp or some other common location which allows an attacker to discover which process is dumping core (due to the directory being world readable) and in the case of SE Linux there may be multiple domains that are permitted to create files in /tmp with the same context which gets in the way of using such a common directory for core files.

The traditional Unix functionality is to have core files dumped in the current directory, and obviously we can't break this by default. But for systems where security is desired I believe that the correct thing to do is to use a directory such as /var/core for core files. This can be easily achieved by creating the directory as mode 1733 (so that any user can create core files but no-one other than the sys-admin can read them) and then setting the core_pattern sysctl to specify that all core files go in that directory. The next improvement is to have a poly-instantiated directory for /var/core such that each login user has their own version. That way the user in question could see the core files created by their own processes while system core files and core files for other users would be in different directories. Poly-instantiation is easier to implement for core files than it is for /tmp (and the other directories for which it is desirable) because there is much less access to such a directory. When things operate correctly core files are not generated, and users never need to access each other's core files directly (they are mode 0600 so this isn't possible anyway).
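The /var/core part of this can be set up today with a few commands; a minimal sketch (the %e.%p.%u.%g naming is just one reasonable choice):

# create a directory that anyone can write to but only root can list
mkdir -p /var/core
chmod 1733 /var/core
# name core files after the executable, PID, UID, and GID
echo 'kernel.core_pattern = /var/core/core.%e.%p.%u.%g' >> /etc/sysctl.conf
sysctl -p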

This area will require a moderate amount of coding before it works in the ideal manner; in this post I only aim to briefly describe the issues.

Thursday, January 04, 2007

monitors for developers

Michael Davies recently blogged that all developers should have big screens. This idea has been around for a while; the most publicity for it came from a Microsoft Research study showing that for certain tasks a 50% performance increase could be gained from a larger monitor.

If you consider that a good software developer will get paid about $100K, and it's widely regarded that in a corporate environment the entire cost of a worker (including management, office space, etc) is double their base salary, then each developer costs about $200K per annum. Therefore larger and more monitors could potentially give a benefit in excess of $100K per annum (we assume that the value provided by the developer is greater than their salary - apart from during the dot-com boom that's usually the case).

It's quite obvious that you can get a really great monitor configuration for significantly less than $100K and that it will remain quite current for significantly more than a year (monitor technology advances comparatively slowly so a good monitor should last at least four years).

Some time ago I researched this matter for a client. I convinced all the managers in my area, and I convinced a bunch of colleagues to buy bigger monitors for their homes (and bought one myself), but unfortunately senior management saw it as a waste of money. I was trying to convince them that people who were being paid >$100,000 should each be assigned a $400 monitor. Sadly they believed that spending the equivalent of less than a day's wages per employee was not justified.

If I was in a management position I would allocate an amount of money for each developer to spend on hardware or conferences at their own discretion. I would make that amount of money a percentage of the salary of each employee, and I would also allow them to assign some of their share to a colleague if they had a good reason for it (EG if a new hire needed something expensive that would exceed their budget for the first year). I think that people who are good programmers are in the best position to judge what can best be done to improve their own productivity, and that allowing them to make their own choices is good for morale.

On a more technical level I have a problem with getting a big monitor. I do most of my work on a laptop because I travel a lot. I don't travel as much as I did while living in Europe but still do a lot of coding in many strange places. I started writing my Postal benchmark in the hotel restaurant of a Bastion hotel in Utrecht during one of the worst recorded storms in Europe (the restaurant had huge windows and it was inspirational for coding). I wrote the first version of my ZCAV benchmark in Denver airport while waiting for a friend.

What I need is a good way of moving open windows from my laptop to a big external display and then back again, without logging out before moving my machine. With a Macintosh this is quite possible (I'm using an OS X machine while working for a client and the only thing that has impressed me is the monitor support). With Linux things aren't so easy; it's supposed to be possible but I haven't heard any unqualified success stories yet.

I guess I could try setting up XDMCP between my laptop and a desktop machine with some big displays and logout before moving.

Any suggestions?

Wednesday, January 03, 2007

Windows Vista

There's a blog about Windows Vista at the Free Software Foundation site. There's not much content yet apart from RSS links, but it should have some potential in future.

I am not planning on tracking Vista in detail (not enough time), but if you want to track such things then the FSF site should be useful.

Xen shared storage

disk = [ 'phy:/dev/vg/xen1,hda,w', 'phy:/dev/vg/xen1-swap,hdb,w', 'phy:/dev/vg/xen1-drbd,hdc,w', 'phy:/dev/vg/san,hdd,w!' ]

For some work that I am doing I am trying to simulate a cluster that uses fiber channel SAN storage (among other things). The above is the disk line I'm using for one of my cluster nodes: hda and hdb are the root and swap disks for a cluster node, hdc is a DRBD store (DRBD allows a RAID-1 to be run across the cluster nodes via TCP), and hdd is a SAN volume. The important thing to note is the "w!" mode for the device, which means write access is granted even in situations where Xen thinks it's unwise (IE it's being used by another Xen node or is mounted on the dom0). I've briefly tested this by making a filesystem on /dev/hdd on one node, copying data to it, then unmounting it and mounting it on another node to read the data.
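The test amounts to something like the following, run from inside the domUs (a sketch, with /mnt as an arbitrary mount point):

# on node 1: create a filesystem on the shared volume and put data on it
mke2fs -j /dev/hdd
mount /dev/hdd /mnt && cp -a /etc /mnt && umount /mnt
# later, on node 2: mount the same volume and verify the data is there
mount /dev/hdd /mnt && ls /mnt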

There are some filesystems that support having multiple nodes mounting the same device at the same time, these include CXFS, GFS, and probably some others. It would be possible to run one of those filesystems across nodes of a Xen cluster. However that isn't my aim at this time. I merely want to have one active node mount the filesystem while the others are on standby.

One thing that needs to be solved for Xen clusters is fencing. When a node of a cluster is misbehaving it needs to be denied access to the hardware, in case it recovers some hours later and starts writing to a device that is now being used by another node. AFAIK the only way of doing this is via the xm destroy command, which probably means having a cluster node ssh to the dom0 and then run a setuid program (or similar) that calls xm destroy.
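A sketch of that idea using sudo rather than a custom setuid program; the fence account, the sudoers line, and the node name are all hypothetical:

# on the dom0, a sudoers entry such as:
#   fence ALL = NOPASSWD: /usr/sbin/xm destroy *
# then a cluster node can fence a misbehaving peer with:
ssh fence@dom0 sudo xm destroy xen2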

Tuesday, January 02, 2007

multiple ethernet devices in Xen

It seems that no-one has documented what needs to be done to correctly run multiple Ethernet devices (with one always being eth0 and the other always being eth1) in a Linux Xen configuration (or if it is documented then google wouldn't find it for me).

vif = [ 'mac=00:16:3e:00:01:01', 'mac=00:16:3e:00:02:01, bridge=xenbr1' ]

Firstly I use a vif line such as the above in the Xen configuration. This means that there is one Ethernet device with the hardware address 00:16:3e:00:01:01 and another with the address 00:16:3e:00:02:01. I just updated this section: the 00:16:3e prefix has officially been allocated to the Xen project for virtual machines, therefore on your Xen installation you can do whatever you like with MAC addresses in that range without risk of competing with real hardware. The Xen code uses random MAC addresses in that range if you let it.

I have two bridge devices, xenbr0 and xenbr1. I only need to specify one as Xen can figure the other out.

Now when my domUs boot they assign Ethernet device names from the range eth0 to eth8. If there is only one virtual Ethernet device then it is always eth0 and things are easy, but for multiple devices I need to rename the interfaces.

eth0 mac 00:16:3e:00:01:01
eth1 mac 00:16:3e:00:02:01

This is done through the ifrename program (package name ifrename in Debian). I create a file named /etc/iftab with the above contents and then early in the boot process (before the interfaces are brought up) the devices will be renamed.

In the Red Hat model you edit the files such as /etc/sysconfig/networking/devices/ifcfg-eth0 and change the line that starts with HWADDR to cause a device rename on boot.
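A hedged sketch of what such a file might contain (the exact path and keys vary between Red Hat releases, so treat this as illustrative):

# /etc/sysconfig/networking/devices/ifcfg-eth0
DEVICE=eth0
HWADDR=00:16:3e:00:01:01
ONBOOT=yes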

Update: the original version of this post used MAC addresses with a prefix of 00:00:00, the officially allocated prefix for Xen is 00:16:3e which I now use. Thanks to the person who commented about this.

Monday, January 01, 2007

installing Xen domU on Debian Etch

I have just been installing a Xen domU on Debian Etch. I'll blog about installing dom0 later when I have a test system that I can re-install on (my production Xen machines have the dom0 set up already). The following documents a basic Xen domU (virtual machine) installation that has an IP address in the 10.0.0.0/8 private network address space and masquerades outbound network data. It is as general as possible.

lvcreate -n xen1 -L 2G /dev/vg

Firstly use the above command to create a block device for the domU, this can be a regular file but a LVM block device gives better performance. The above command is for a LV named xen1 on an LVM Volume Group named vg.

mke2fs -j /dev/vg/xen1

Then create the filesystem with the above command.

mount /dev/vg/xen1 /mnt/tmp
mount -o loop /tmp/debian-testing-i386-netinst.iso /mnt/cd
cd /mnt/tmp
debootstrap etch . file:///mnt/cd/
chroot . bin/bash
vi /etc/apt/sources.list
vi /etc/hosts /etc/hostname
apt-get update
apt-get install libc6-xen linux-image-xen-686 openssh-server
apt-get dist-upgrade

Then perform the basic Debian install with the above commands. Make sure that you change to the correct directory before running the debootstrap command. The /etc/hosts and /etc/hostname files need to be edited to have the correct contents for the Xen image (the default is an empty /etc/hosts and /etc/hostname has the name of the parent machine). The file /etc/apt/sources.list needs to have the appropriate configuration for the version of Debian you use and for your preferred mirror. libc6-xen is needed to stop a large number of kernel warning messages on boot. It's a little bit of work before you get the virtual machine working on the network so it's best to do these commands (and other package installs) before the following steps. After the above type exit to leave the chroot and run umount /mnt/tmp.

lvcreate -n xen1-swap -L 128M /dev/vg
mkswap /dev/vg/xen1-swap

Create a swap device with the above commands.

auto xenbr0
iface xenbr0 inet static
pre-up brctl addbr xenbr0
post-down brctl delbr xenbr0
post-up iptables -t nat -F
post-up iptables -t nat -A POSTROUTING -o eth0 -s 10.1.0.0/24 -j MASQUERADE
address 10.1.0.1
netmask 255.255.255.0
bridge_fd 0
bridge_hello 0
bridge_stp off

Add the above to /etc/network/interfaces and use the command ifup xenbr0 to enable it. Note that this masquerades all outbound data from the machine that has a source address in the 10.1.0.0/24 range.

net.ipv4.conf.default.forwarding=1

Put the above in /etc/sysctl.conf, run sysctl -p and echo 1 > /proc/sys/net/ipv4/conf/all/forwarding to enable it.

cp /boot/initrd.img-2.6.18-3-xen-686 /boot/xen-initrd-18-3.gz

Set up an initial initrd (actually initramfs) for the domU with a command such as the above. Once the Xen domU is working you can create the initrd from within it which gives a smaller image.

kernel = "/boot/vmlinuz-2.6.18-3-xen-686"
ramdisk = "/boot/xen-initrd-18-3.gz"
memory = 64
name = "xen1"
vif = [ '' ]
disk = [ 'phy:/dev/vg/xen1,hda,w', 'phy:/dev/vg/xen1-swap,hdb,w' ]
root = "/dev/hda ro"
extra = "2 selinux=1 enforcing=0"

The above is a sample Xen config file that can go in /etc/xen/xen1. Note that this will discover an appropriate bridge device by default, if you only plan to have one bridge then it's quite safe, if you want multiple bridges then things will be a little more complex. Also note that there are two block devices created as /dev/hda and /dev/hdb, obviously if we wanted to have a dozen block devices then we would want to make them separate partitions with a virtual partition table. But in most cases a domU will be a simple install and won't need more than two block devices.

xm create -c xen1

Now start the Xen domU with the above command. The -c option means to take the Xen console (use ^] to detach). After that you can login as root at the Xen console with no password, now is a good time to set the password.

Run the command apt-get install udev; this could not be done in the chroot before as it might mess up the dom0 environment. Edit /etc/inittab and disable the gettys on tty2 to tty6. I don't know if it's possible to use them (the default and only option for Xen console commands is tty1), and in any case you would not want 6; saving a few getty processes will save some memory.
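A one-liner for the inittab edit; this assumes the standard Etch inittab layout where the getty lines start with the tty number, so check the file before trusting it:

# comment out the gettys for tty2 to tty6
sed -i 's/^[2-6]:/#&/' /etc/inittab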

Now you should have a basically functional Xen domU. Of course a pre-requisite for this is having a machine with a working dom0 installation. But the dom0 part is easier (and I will document it in a future blog post).

free software liaison?

In my previous work as a sys-admin I have worked for a number of companies that depend heavily on free software. If you use a commercially supported distribution such as Red Hat Enterprise Linux then you get high quality technical support (much higher than you would expect from closed-source companies), but this still doesn't provide as much as you might desire, as it is reactive support (once you notice a problem you report it). Red Hat has a Technical Account Manager (TAM) offering that provides a higher level of support, and there is also a Global Professional Services (GPS) organization that can provide customised versions of the software. But the TAM and GPS offerings are mostly aimed at the larger customers (they are quite expensive).

It seems to me that a viable option for companies with smaller budgets is to have an employee dedicated to enhancing free software and getting changes accepted upstream. For a company that has a team of 5+ sys-admins the cost of a developer dedicated to such software development tasks should be saved many times over by the greater productivity of the sys-admins and the greater reliability of the servers.

This is not to criticise commercial offerings such as Red Hat's TAM and GPS services, a dedicated free software developer could work with the Red Hat TAM and GPS people thus allowing the company to get the most value for money from the Red Hat consultants.

If using a free distribution such as Debian the case for a dedicated liaison with the free software community is even stronger, as there is no formal support organization that compares to the Red Hat support (there are a variety of small companies that provide commercial support, but I am not aware of a 24*7 help desk or anything similar). If you have someone employed full-time as a free software developer then they can provide most of your support. It would probably make sense for a company that has mission critical servers running Debian to employ a Debian developer; a large number of Debian developers already work as sys-admins and finding one who is looking for a new job should not be difficult. There are more companies that would benefit from having DDs as employees than there are DDs, but this isn't an obstacle to hiring them as most hiring managers don't realise the technical issues involved.

This is not to say that a company which can't hire a DD should use a different distribution, merely that their operations will not be as efficient as they might be.