Dæmon News, April 2000

THIS MONTH’S FEATURES

From the Editor: The Future of BSD? by Brett Taylor
The future's so bright...

Managing websites using Unix: Part 4 by Nik Clayton
The long awaited continuation...

No FreeBSD BOF at ApacheCon 2000 by Rob Arnold
An on the spot report from ApacheCon 2000.

Merger Interview by Chris Coleman
Feedback from BSDI and NetBSD on the latest merger everyone’s talking about.

Daily Daemon News:
April ezine is out! No fooling!
FreeBSD Diary site seized!
SSH Communications Security Announces SSH Secure Shell 2.1
The open-source pretenders?
NetBSD ported to MIPS-based Cobalt machines
Source Wars Week 12

REGULAR COLUMNS

Newbies by Jonathon McKitrick

New User to FreeBSD? Perhaps these tips will help point you in the right direction.

Answer Man by Gary Kline and David Leonard
It seems no matter how many FAQs there are, some questions continue to get asked. In this issue Gary and David answer some more questions that seem to frequently poke their way into the lists.

The Dæmon’s Advocate by Greg Lehey
In this issue of Daemon’s Advocate, Greg gives us insight on the (new) SIS feature in Windows 2000. He also digs into an advocate’s thoughts on what’s being labeled as the biggest news of the year.

Copyright © 1998-2000 DæmonNews. All Rights Reserved.

The Future of BSD?

by Brett Taylor [email protected]

In the past year there have certainly been a number of big developments that have had a major influence (if only in the public perception) on BSD. Apple has released Darwin as Open Source, and it’s based on code taken from FreeBSD and NetBSD. Their new operating system, OS X, has this same BSD layer inside. Certainly that’s going to increase the BSD user base, even if most of its users don’t know about it.

Of course one of the biggest events was the announcement of the merger of BSDI and Walnut Creek (read more about this in Greg Lehey’s Daemon’s Advocate). As Greg notes, there was a lot of grumbling concerning what this would mean for FreeBSD since Walnut Creek has been the primary financial backer for the project, but also how this would affect NetBSD and OpenBSD. At this point I think it’s far too early to make any serious predictions about what effects this is going to generate in the BSD world.

What can we say then about the future? Is it bright and rosy? If you look at most any Slashdot item that mentions BSD you’ll see strong opinions from some (note I said some) users that BSD should just roll over as Linux has already won. I don’t know what they’ve won, but that’s a separate issue I guess. I don’t think it’s good to have any one platform completely dominate the market. Many of these same people are Microsoft bashers because they feel Microsoft is completely controlling the market and is evil, and yet they want Linux to completely control the computer world (maybe Linus is a benevolent dictator). Regardless, I think that when someone like Apple steps in and uses BSD code it says something about the quality of the BSD codebase. Yes, it certainly made life easier for them since they were coming from NeXT (which was BSD-based), but if the code was poor they would have certainly gone some other way.

I like to think of myself as an optimist. I think with the merger and Apple’s use of BSD code, we’ll see more native applications ported to the BSDs (Apple, if you're listening, I'd like a QuickTime player - I'd even pay for it). I look forward to seeing the coming merge of code between the BSD/OS and FreeBSD codebases, but I won’t be personally tracking -current unless I get another play machine to do it on - I’ll watch from afar for now. :-) I'm also looking forward to playing around with Darwin and Mac OS X (when it comes out) on my new iBook, which should be here shortly. So let’s look to the future with an optimistic eye - worry about the problems when they show up!

Just a couple quick announcements:

You’ll notice that we have a new Newbies’ Corner columnist on board with this issue. Jonathon McKitrick is a new FreeBSD user, and if you are on the -questions mail list you’ve certainly seen his name. Being a new user, Jonathon can hopefully provide some new-user insight that we old timers have forgotten (‘‘when I was a youngster we had to learn graphics programming on an Apple IIe using machine code’’). Welcome on board, Jonathon.

Finally, a warm welcome back to Nik Clayton with his series on managing websites using CVS and make. I, along with many other readers who’ve written us, have been waiting for the next article for far too long. If anyone can lend Nik a hand with household chores to free up some more writing time for him, that would be great.

Author maintains all copyrights on this article. Images and layout Copyright © 1998-2000 Dæmon News. All Rights Reserved.

Managing websites using Unix: Part Four

Copyright © Nik Clayton [email protected]

This is the fourth in a series of articles explaining how to use the tools provided by Unix and clones (such as the free BSD implementations, and the various different Linux distributions) to manage the contents of a website, such as the free webspace that ISPs often give to their customers.

There is nothing about the techniques described here (and in future articles) that limits them to small, personal websites. The author has successfully used these approaches to manage sites with thousands of pages and half a dozen active webmasters.

1. Isn’t this a little bit late?

When I first started this series, I expected to be able to put out an article a month, for a total of six articles. Things were fine for the first three articles, and then my workload in real life went just a little bit mad. That’s why this article is roughly nine months overdue. Sorry about that.

My thanks to the various people who have e-mailed me over the past nine months or so with praise for the previous articles. You all acted as a very strong incentive to continue, and I hope you find this article, and the rest in the series, very useful.

To refresh your memory, the previous articles were:

Article 1: An introduction to CVS
Article 2: An introduction to make(1) and a simple make install
Article 3: A more complete make install

2. Introduction

If you have followed the previous articles you should now have a framework you can use to install the files in your work area (which you have been adding to your CVS repository as you go) in to the staging area. This framework includes a number of simple Makefiles, which do little more than list the names of the files which must be installed, and a more complex web.mk, which is included in to the smaller files, and contains most of the logic.
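As a reminder, reconstructed from the earlier articles (so treat the exact contents as illustrative rather than definitive), one of those simple leaf Makefiles does little more than this:

```make
# Hypothetical per-directory Makefile: just name the files to
# install, then pull in the shared logic from web.mk.
HTML=	index.html about.html

.include "web.mk"
```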

This article explains how you can use make(1) to help automate converting files from one format to another. In particular, it describes how you can convert image files from JPEG format to GIF format automatically, and shows how this technique can also be used to store binary files safely in the CVS repository.

Note: Do not forget the CVS commands from the previous articles. You should use them to add and commit the files you write while following this article.

Important: The sample Makefiles in this article are written for Berkeley Make, the default on the BSD systems. Some of these examples are not compatible with other make(1)s, such as GNU Make. Download a copy of Berkeley Make from http://www.quick.com.au/ftp/pub/sjg/help/bmake.html.

3. Pre-requisites

1. The sh(1) scripting language. While not essential, this will make it easier for you to follow some of the examples in this article.

2. The netpbm[1] utilities. These are a suite of command line programs to convert image files from one format to another, and they will be used in many of the examples. FreeBSD users can download them from graphics/netpbm in the ports collection. If you are using another OS then the original installation files can be found at ftp://ftp.x.org/R5contrib/netpbm-1mar1994.tar.gz, and your OS might have ‘‘pre-packaged’’ versions of these programs available.

You can use other image conversion tools if you prefer, but you will have to adjust the examples as necessary.

3. The Independent JPEG Group’s JPEG software. This is a set of libraries and some command line tools to help manipulate JPEG images. FreeBSD users can download them from graphics/jpeg in the ports collection. If you are using another OS then the original installation files can be found at ftp://ftp.uu.net/graphics/jpeg/jpegsrc.v6b.tar.gz, and your OS might have ‘‘pre-packaged’’ versions of these programs available.

4. The problem: format conversion

You have created your website, and stored the contents in a CVS repository. In addition, you have created the appropriate framework to allow you to install your website in to the staging directory, which lets you preview your site and check that everything is working correctly.

However, your website contains duplicate files, stored in different formats. Perhaps these are images, stored in GIF and JPEG format, or documents that you make available uncompressed, gzip’d, and bzip’d, or perhaps sound files in different audio formats, or... the list is endless. You could store all these duplicate files in your CVS repository. That is certainly the simplest and fastest solution, and if disk space is not an issue, is worth pursuing. For the majority of us disk space is an issue, and any approach that can be used to reduce it is a good one.

To describe the problem more directly: you have one or more files that you want to make available in many different formats. Each of these distribution formats can be mechanically generated from the master format. When you install your website in to the staging area, you want to ensure that if any changes have been made to the file in its master format, that the copies stored in the distribution formats are updated.

That last sentence is a big clue. Whenever you have a problem that involves regenerating one or more files whenever another file is changed you should start thinking about whether or not make(1) can help you solve the problem.
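As a minimal, website-independent sketch of that idea (all file names here are made up for the demonstration), a single make(1) rule regenerates a distribution file only when its master has changed:

```shell
# Write a master file and a one-rule makefile; the \t in the printf
# string becomes the TAB that make requires before a command.
printf 'hello\n' > master.txt
printf 'dist.txt: master.txt\n\ttr a-z A-Z < master.txt > dist.txt\n' > regen.mk
make -f regen.mk dist.txt   # first run: generates dist.txt from master.txt
make -f regen.mk dist.txt   # second run: nothing to do, master.txt unchanged
```

Touching master.txt and re-running make triggers the rule again; that timestamp comparison is exactly the behaviour the rest of this article builds on.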

4.1. A practical example

Many, many[2] websites come complete with a picture of their creator staring at you out of the top left corner of the screen, daring you to follow any of their links.

As you should know, including very large images on a web page is frowned upon because it generally greatly increases the amount of time the page takes to download. Therefore it is very good practice to include a small thumbnail copy of the main image, and make this thumbnail a link to the larger picture. Typically, the small thumbnail is stored in GIF format, while the larger image is stored as a JPEG.
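The markup for such a thumbnail is simply an image wrapped in a link; a sketch using the file names from the example below (the alt text is illustrative):

```html
<!-- small GIF thumbnail linking to the full-size JPEG -->
<a href="nik.jpeg"><img src="nik.gif" alt="Photograph (click for full size)"></a>
```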

In this example, the larger JPEG file is the master format and the smaller GIF file is the distribution format. This is because you get much better results scaling a large image down to a small size than you do scaling a small image up.

So, what you should do is store the JPEG file in the repository. Our Makefile will contain rules to generate a GIF file from the JPEG.

You probably have a suitable image that you can use to follow the rest of these examples. However, if you do not (and you want a quick laugh) feel free to use this JPEG picture of me. Plus, when you’ve finished with this article, it makes a handy target for a dart board.

You should be in your work area for the rest of the examples in this article, with a copy of the files you have been editing from the previous articles checked out and ready to go. If you have been following the other examples then your CVS repository is located in ~/www/cvs-rep, and the working area is ~/www/mywebsite.

% cd ~/www
% cvs -d ~/www/cvs-rep checkout mywebsite
% cd mywebsite
% ls
Makefile	about.html	index.html	unix/	web.mk

Your directory contents might differ slightly, but that is what you should have if you have been following these articles. Now copy nik.jpeg (or another JPEG file of your choice) in to this directory.

4.2. Scaling the image by hand

Before writing any code to automatically scale this image it is important to test the programs and methods at the command line. This is much easier to debug, and it is easier to experiment with additional effects that you might want to add.

The steps involved in converting the JPEG image to a smaller GIF file are:

1. Decompress the JPEG file to an intermediate image file in the PBM image format.

2. Scale this intermediate image so that it is smaller than the original JPEG file.

3. Reduce the number of colours in the scaled down file--JPEG images can contain many thousands of colours, while GIF images are restricted to just 256. In practice, for a web page a palette of 32 or 64 colours will look best for most people.

4. Convert this scaled down image (which is still in PBM format) to GIF format.

These steps map nicely onto the commands that you need to run in order to carry out this task. The programs to do this are part of the netpbm and JPEG utilities that you should have installed earlier. If you have other command line software that you are familiar with then you can use that instead. But the examples are going to assume you are using these tools.

The commands to produce a GIF file that contains 64 colours and is 25% of the original JPEG size are:

% djpeg -pnm nik.jpeg > nik.pnm
% pnmscale .25 < nik.pnm > jkh-small.pnm
% ppmquant -fs 64 < jkh-small.pnm > jkh-smallcol.ppm
% ppmtogif < jkh-smallcol.ppm > nik.gif

Note: The .pnm and .ppm extensions are not too important. PNM is a Portable Anymap, PPM is a Portable Pixmap. However, as you can see, ppmquant can read .pnm files.

Those commands are straightforward. The .25 in the pnmscale command line is the scaling factor, in this case 25%, while -fs specifies Floyd-Steinberg error diffusion, which can give better results for some files. This is a more complex operation, and can take longer, so you can remove that option if you like.

Notice that all these commands use STDIN and STDOUT for their input and output, which need to be redirected to files as necessary. This means that you could string all these commands together in a pipeline, and avoid the need for temporary files.

% djpeg -pnm nik.jpeg | pnmscale .25 | \
    ppmquant -fs 64 | ppmtogif > nik.gif

Note: That example is broken over two lines, as indicated by the backslash, but you can enter it on one line if you prefer.

4.3. Makefile framework

Putting these steps in to a Makefile is relatively straightforward, but there are a few traps for the unwary.

To begin with, you can just translate these instructions directly to a target. The target will have the name of the GIF image, and the dependency for the target is the JPEG image. This example also introduces a new make(1) variable.

nik.gif: nik.jpeg
	djpeg -pnm ${.ALLSRC} | pnmscale .25 | \
	    ppmquant -fs 64 | ppmtogif > ${.TARGET}

.ALLSRC is set to the entire contents of the dependency list. In this case that is nik.jpeg.

If you create a file containing those lines (call it image.mk) you will be able to rebuild the GIF image from the JPEG automatically.

% make -f image.mk nik.gif
djpeg -pnm nik.jpeg | pnmscale .25 | ppmquant -fs 64 | ppmtogif > nik.gif
ppmquant: making histogram...
ppmquant: 7454 colors found
ppmquant: choosing 64 colors...
ppmquant: mapping image to new colors...
ppmtogif: computing colormap...
ppmtogif: 64 colors found
% make -f image.mk nik.gif
% touch nik.jpeg
% make -f image.mk nik.gif
djpeg -pnm nik.jpeg | pnmscale .25 | ppmquant -fs 64 | ppmtogif > nik.gif
ppmquant: making histogram...
ppmquant: 7454 colors found
ppmquant: choosing 64 colors...
ppmquant: mapping image to new colors...
ppmtogif: computing colormap...
ppmtogif: 64 colors found

As you have just confirmed, nik.gif is only created if it does not exist, or if it does exist and nik.jpeg was modified more recently.

Now that you can generate the GIF from the JPEG image, we need a mechanism to ensure that the images are installed when you run make install. Remember that the install target is written in web.mk, and requires some variables (specifically, DESTDIR, HTML, and possibly SUBDIR) to have been set first. Also, remember the special _SUBDIRUSE target, which handles recursing down into subdirectories as necessary.

If you have been following these articles then your install target in web.mk should look like this.

install: _SUBDIRUSE
	@[ -d ${DESTDIR} ] || ${MKDIR} -p ${DESTDIR}
	@for htmlfile in ${HTML}; do \
		${CP} -f $$htmlfile ${DESTDIR}/$$htmlfile; \
		${CHMOD} 444 ${DESTDIR}/$$htmlfile; \
	done

There is not too much wrong with this, and it will only take some minor changes to get it to work with the images.

The first question to ask is ‘‘How will install determine which files to install?’’ This is easy to decide--we already have one variable, HTML, which contains a list of all the HTML files to install, so we can create two new variables, JPEG and GIF, and set these to contain the names of all the JPEG and GIF files to install.

All that needs to be done is include these variables in the main for loop. For completeness’ sake, we will change the name of the loop variable (currently htmlfile) to reflect this.

Change the install target in web.mk to:

install: _SUBDIRUSE
	@[ -d ${DESTDIR} ] || ${MKDIR} -p ${DESTDIR}
	@for file in ${HTML} ${JPEG} ${GIF}; do \
		${CP} -f $$file ${DESTDIR}/$$file; \
		${CHMOD} 444 ${DESTDIR}/$$file; \
	done

The second question to ask is ‘‘How can we guarantee that the .GIF file will exist when we need to install it?’’ A moment’s thought should leave you realising that if you list the GIF files as dependencies for the install target, then make(1) will ensure that they exist (by running other rules, as necessary) before it runs the install target. Since the GIF files will all be listed in the GIF variable, you can just list the variable on the dependency line.

So alter the definition and dependency line of install to:

install: ${GIF} _SUBDIRUSE

Go back to image.mk, and flesh it out with the following code:

#
# image.mk
#
# Show how to install HTML, JPEG, and GIF files
#

HTML=	index.html
JPEG=	nik.jpeg
GIF=	nik.gif

DESTDIR=/tmp/images

nik.gif: nik.jpeg
	djpeg -pnm ${.ALLSRC} | pnmscale .25 | \
	    ppmquant -fs 64 | ppmtogif > ${.TARGET}

.include "web.mk"

For the purposes of this test, the files will be installed in /tmp/images, although you can change that if you like.

Make sure that index.html and nik.jpeg exist in the current directory, and that you have removed nik.gif. If you now try to use this Makefile to install your site, you should see the following:

% make -f image.mk install
djpeg -pnm nik.jpeg | pnmscale .25 | ppmquant -fs 64 | ppmtogif > nik.gif
ppmquant: making histogram...
ppmquant: 7454 colors found
ppmquant: choosing 64 colors...
ppmquant: mapping image to new colors...
ppmtogif: computing colormap...
ppmtogif: 64 colors found
[ -d /tmp/images ] || /bin/mkdir -p /tmp/images

If you recall, the various cp(1) commands are preceded by an @ sign in web.mk, which is why they do not appear here.

5. Extending to support many GIF files (suffix rules)

As you may have noticed with the last example, it is fine when all you have is one GIF file to create from one JPEG file, but what if you have a directory full of JPEG files and you want to ensure that they are all converted to GIF files?

Your first thought (particularly if you read the previous article recently) might have been to move the rule that generates GIF files from JPEG files into its own target, and to list that target, with .USE, as a dependency. You could then list all your GIF files with this rule as a dependency, and let make(1) work its magic.

Something like this, perhaps?

#
# Using .USE to convert images
#

JPEG=	foo.jpeg bar.jpeg baz.jpeg
GIF=	foo.gif bar.gif baz.gif

foo.gif: foo.jpeg _GIFJPEGUSE

bar.gif: bar.jpeg _GIFJPEGUSE

baz.gif: baz.jpeg _GIFJPEGUSE

_GIFJPEGUSE: .USE
	djpeg -pnm ${.ALLSRC} | pnmscale .25 | \
	    ppmquant -fs 64 | ppmtogif > ${.TARGET}

That is not a bad idea. It is reusing code, which is good, and it is applying lessons learnt from the previous articles, which is also good. However, it is not the best approach. For example, see how you have to list each GIF file separately, and explicitly specify the JPEG file it depends on. It would be nice if make(1) could work out the dependencies for you.

Fortunately, make(1) can, using what are called ‘‘suffix rules’’. A suffix rule is a body of code that you enter into your Makefile that tells make(1) how to convert files with a filename that ends in one suffix (such as .jpeg) to files that end in another suffix (such as .gif). Any time that make(1) needs to create a file it looks to see if there is an explicit rule detailing how to create that file. For example, so far there has been an explicit rule detailing how to create nik.gif from nik.jpeg.

If an explicit rule does not exist, make(1) starts looking at its list of suffix rules, to see if there are any instructions on how to create the file from another file with the same name, but a different suffix. If there are, make(1) uses that rule.

In our situation, we want make(1) to determine that if there is not an explicit rule to create a GIF file, then it should check to see if there are any JPEG files with the same base name[3], and if one does exist, to use that to create the GIF file.

Rather than try and build up a suitable suffix rule bit by bit, it is far simpler to show you the completed suffix rule, and then pick it apart. So, here is the complete suffix rule that tells make(1) how to convert our JPEG images to GIFs.

.SUFFIXES: .jpeg .gif

.jpeg.gif:
	djpeg -pnm ${.ALLSRC} | pnmscale .25 | \
	    ppmquant -fs 64 | ppmtogif > ${.TARGET}

It really is that simple.

.SUFFIXES: .jpeg .gif is how you tell make(1) that it has to know about these suffixes. It is not enough simply to list the rules in the Makefile; if the special .SUFFIXES target does not have the two suffixes as dependencies then nothing will happen. Notice how you have to include the dot as part of the suffix.

The rule with the strange looking target is the suffix rule. The body of the rule is exactly as we have used before, but the rule target is a little strange. The target consists of the two suffixes (complete with the two dots, and with no spaces between them). The first suffix should be the name of the source suffix (i.e., the master format) and the second suffix should be the distribution format, as shown here.
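If you want to watch the mechanism work without needing netpbm installed, the same pattern applies to any conversion. Here is a self-contained sketch using text files and tr(1) in place of images (all names are illustrative; $< and $@ are the portable spellings of ${.ALLSRC} and ${.TARGET} for a single-source rule, accepted by both Berkeley and GNU make):

```shell
# A suffix rule "converting" .src masters into .out distribution files.
printf '.SUFFIXES: .src .out\n.src.out:\n\ttr a-z A-Z < $< > $@\nall: a.out b.out\n' > suffix.mk
printf 'one\n' > a.src
printf 'two\n' > b.src
make -f suffix.mk all   # both .out files are inferred from the one rule
```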

This is enough to make a slightly simpler image.mk, and a slightly more complicated web.mk. You can remove the explicit rule for nik.gif in image.mk, so you are left with:

#
# image.mk
#
# Show how to install HTML, JPEG, and GIF files
#

HTML=	index.html
JPEG=	nik.jpeg
GIF=	nik.gif

DESTDIR=/tmp/images

.include "web.mk"

Then add the new suffix rule to web.mk, so you have:

#
# web.mk
#

CP=	/bin/cp
CHMOD=	/bin/chmod
MKDIR=	/bin/mkdir

.SUFFIXES: .jpeg .gif

.jpeg.gif:
	djpeg -pnm ${.ALLSRC} | pnmscale .25 | \
	    ppmquant -fs 64 | ppmtogif > ${.TARGET}

install: ${GIF} _SUBDIRUSE
	@[ -d ${DESTDIR} ] || ${MKDIR} -p ${DESTDIR}
	@for file in ${HTML} ${JPEG} ${GIF}; do \
		${CP} -f $$file ${DESTDIR}/$$file; \
		${CHMOD} 444 ${DESTDIR}/$$file; \
	done

As before, make sure nik.gif has been removed, and re-run the install.

% make -f image.mk install
djpeg -pnm nik.jpeg | pnmscale .25 | ppmquant -fs 64 | ppmtogif > nik.gif
ppmquant: making histogram...
ppmquant: 7454 colors found
ppmquant: choosing 64 colors...
ppmquant: mapping image to new colors...
ppmtogif: computing colormap...
ppmtogif: 64 colors found
[ -d /tmp/images ] || /bin/mkdir -p /tmp/images

As you can see, there is no visible change. And because this code is now in web.mk, you can re-use it in any subdirectories of your website which contain JPEG images that you want to convert to GIFs.

Extending your Makefile to support converting multiple JPEG files to GIFs is now very easy.

Either put some more JPEG files in your working directory, or make a few copies of nik.jpeg.

% cp nik.jpeg jkh1.jpeg
% cp nik.jpeg jkh2.jpeg

Then, update the JPEG= and GIF= lines in image.mk appropriately.

JPEG=	nik.jpeg jkh1.jpeg jkh2.jpeg
GIF=	nik.gif jkh1.gif jkh2.gif

Finally, re-run the last make.

% make -f image.mk
djpeg -pnm jkh1.jpeg | pnmscale .25 | ppmquant -fs 64 | ppmtogif > jkh1.gif
ppmquant: making histogram...
ppmquant: 7454 colors found
ppmquant: choosing 64 colors...
ppmquant: mapping image to new colors...
ppmtogif: computing colormap...
ppmtogif: 64 colors found
djpeg -pnm jkh2.jpeg | pnmscale .25 | ppmquant -fs 64 | ppmtogif > jkh2.gif
ppmquant: making histogram...
ppmquant: 7454 colors found
ppmquant: choosing 64 colors...
ppmquant: mapping image to new colors...
ppmtogif: computing colormap...
ppmtogif: 64 colors found

As you can see, the jkh1.gif and jkh2.gif files have been generated from their respective JPEG masters. Using these suffix rules, you can now store the JPEG files directly in the CVS repository, and only generate the GIF files when you build or install a copy of the web site. This cuts down on the repository size, at the expense of making builds take a little bit longer.
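A small addition worth considering at this point (my suggestion, not something from the article's web.mk): a clean target that removes the generated GIF files from the work area, so you never cvs add them by mistake. A minimal sketch, written here as a tiny standalone makefile:

```shell
# Hypothetical clean target: deletes generated distribution files,
# leaving only the JPEG masters that belong in the repository.
printf 'GIF=\tnik.gif jkh1.gif jkh2.gif\nclean:\n\trm -f ${GIF}\n' > clean.mk
touch nik.gif jkh1.gif jkh2.gif   # stand-ins for generated files
make -f clean.mk clean
```

In the real web.mk you would reuse the existing ${GIF} variable rather than redefining it.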

6. Storing binary files in the repository, and how suffix rules can help

6.1. Binary files and the repository; possible problems

You have probably started to think about how you are going to store your binary files (the JPEG images) in your CVS repository. This is a tricky subject, and one worth spending some time on.

Obviously, binary files are different from ordinary text files. For one thing, you cannot perform meaningful diffs on them. You could not look at the binary differences between two JPEG files, and tell what the actual differences to the image were. This slightly reduces CVS’ usefulness.

It is also not possible to use CVS revision strings in binary files. I haven’t mentioned these yet, but suffice to say that if CVS encounters special keywords in a file it is checking out, such as $Header$ or $Id$, it replaces the keyword with information about the file, such as its path, or the date and time it was checked out. This is useful information to have, particularly when you want to quickly see what revisions of files you have checked out.

Despite this, storing binary files directly in the repository does have one big advantage. It’s very easy to do, and you don’t need to change any of your Makefiles to support it.

So, knowing that you should definitely store either the binary files, or a mechanism for regenerating the binary files, in the repository, how do you go about doing that?

6.2. The simple way, -kb

The easiest way to bring binary files in to your CVS repository is to use -kb when you cvs add the file. This tells CVS that this is a binary file, and it turns off a few pieces of CVS functionality for this file in the process. For example, if CVS knows that a file is a binary file, it will not try and merge in any changes that have been made to that file if you do a cvs update.

To add nik.jpeg to your repository as a binary file (should you decide to do such a thing) you would run:

% cvs add -kb nik.jpeg

As I say, this is simple, easy, and does not give me the chance to write any more about suffix rules. So I will ignore it for the time being. Don’t run that example.

6.3. The more interesting way, with uuencode(1)/uudecode(1)

Suppose, however, that you prefer all the files in the repository to be text files (even if they should contain binary data). What do you do?

This is already a solved problem, thanks to the pervasiveness of e-mail. E-mail over the Internet is not safe for binary files. There are too many systems out there with differing ideas of how many bits in each character are significant to make it feasible to safely e-mail binary files without first encoding them. This encoding process converts the binary files in to files that only use alphanumeric characters, making them safe for e-mail. Of course the size of a file that has been encoded in this way can be considerably larger than the original binary file, but that’s the tradeoff you have to make.

There are several possible methods you can use to encode files, but the grand-daddy of programs to do this is uuencode(1) (like uucp(1), I think the uu means "Unix to Unix", but I could be wrong). You use uuencode(1) to convert binary files to their text only equivalent, and you use uudecode(1) to convert the encoded text file back to the binary file. These programs are part of FreeBSD, and are almost certainly a part of whatever other Unix-like OS you are using.

Using uuencode(1) is quite simple. Suppose you want to encode nik.jpeg so that it becomes a text file, and store the results in nik.uue.

% uuencode nik.jpeg nik.jpeg > nik.uue

If you investigate the nik.uue file, and work through the uuencode(1) manual page you will see how the conversion is done.

The command line above is relatively simple. However, you might be wondering why nik.jpeg had to be specified twice on the command line.

The first occurrence tells uuencode(1) the name of the file to convert. The second occurrence tells uuencode(1) what this file should be called inside the encoded file. You can use this parameter to encode a file such that when it is decoded a different file is generated.

If you were to run:

% uuencode nik.jpeg foo.jpeg > nik.uue
% uudecode nik.uue

you would see that the decoded file has been called foo.jpeg, because that is what was specified when encoding the file.

To tell you the truth, I’ve never had a need for this feature, and when running uuencode(1) you just get in to the habit of typing the filename you are encoding twice.

This is all very well, but how do we use it with make(1)?

If you are thinking ahead, you should be able to see that if we have a .uue file, we can now construct a series of commands to convert that file first to its original JPEG format, and then use the rules we have already created to convert the JPEG file to a GIF file.

A simple approach might look like this.

#
# Simple use of uudecode
#

# This is the default target
nik.gif: nik.jpeg
	djpeg -pnm ${.ALLSRC} | pnmscale .25 | \
	    ppmquant -fs 64 | ppmtogif > ${.TARGET}

nik.jpeg: nik.uue
	uudecode nik.uue

Put that code in a file called uue.mk (don’t forget about using TABs and not spaces in the rule body).

Generate the nik.uue file if you have not already done so, then make sure that you have deleted nik.jpeg and nik.gif. Then run make(1) on uue.mk.

% uuencode nik.jpeg nik.jpeg > nik.uue
% rm nik.jpeg nik.gif
% make -f uue.mk
uudecode nik.uue
djpeg -pnm nik.jpeg | pnmscale .25 | ppmquant -fs 64 | ppmtogif > nik.gif
ppmquant: making histogram...
ppmquant: 7454 colors found
ppmquant: choosing 64 colors...
ppmquant: mapping image to new colors...
ppmtogif: computing colormap...
ppmtogif: 64 colors found

make(1) has had to do a little more work this time. nik.gif is the default target, and that file doesn’t exist. make(1) can see that nik.jpeg must exist first, in order to create nik.gif, but that file does not exist either. But make(1) can see that it can generate nik.jpeg from nik.uue, which does exist. So make(1) first runs the body of the nik.jpeg target, which generates nik.jpeg, which can then be used to generate nik.gif. You did not have to specify the direct relationship between nik.gif and nik.uue; make(1) inferred that from the rules you gave it. If you recall, we covered this back in the second article.

We can use this information to update web.mk to teach it about .uue files, and create a suffix rule that will try and generate a .jpeg file from a .uue file as necessary.

The changes are very simple. First, you have to replace the .SUFFIXES: line with this one.

.SUFFIXES: .uue .jpeg .gif

As you can see, all this does is tell make(1) about a new suffix it can use. Then you have to add a suffix rule that tells make(1) how to go from .uue to .jpeg.

.uue.jpeg:
	uudecode ${.ALLSRC}
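Taken together, the transformation chain in web.mk now looks something like this sketch (the .jpeg-to-.gif rule is the one we built earlier in the series; only these fragments are shown):

```make
.SUFFIXES: .uue .jpeg .gif

# Recover a JPEG file from its uuencoded form
.uue.jpeg:
	uudecode ${.ALLSRC}

# Convert a JPEG file to a quarter-size, 64-colour GIF
.jpeg.gif:
	djpeg -pnm ${.ALLSRC} | pnmscale .25 | \
		ppmquant -fs 64 | ppmtogif > ${.TARGET}
```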

Finally, you have to update the dependency line on the install target in web.mk.

install: ${GIF} ${JPEG} _SUBDIRUSE

You might be wondering why you now need to explicitly list ${JPEG}.

Previously, you didn’t, because it was assumed that all the JPEG files were part of the repository, and would therefore exist when you checked out a copy. However, now, that’s not the case, as the JPEG files have to be created from the uuencoded files.

If (and only if) every JPEG file is also used to generate a GIF file then there’s no problem, and you would not need to add ${JPEG} as a dependency. This is because if all the GIF files are dependencies, then the build process for the GIF files will ensure that the JPEG files will exist as well--they have to, otherwise you can not create the GIF files from them.

However, if you have some JPEG files that are only used as JPEG files, and are not listed as dependencies for any GIF files (for example, a foo.jpeg that was not used to generate foo.gif, but that still had to be installed) the JPEG file would not be created.

By listing ${JPEG} as an explicit dependency, make(1) will ensure that all the JPEG files exist before attempting to install them, rather than just the ones used to create GIF files.

Were you expecting to add another variable to Makefile and web.mk, perhaps something like this?

UUE= nik.uue

There’s absolutely no need. At the moment, the variables we have defined are used to determine which files should be installed. Because the .uue files are never installed, and because make(1) can work out the name of a .uue file based on the name of the .jpeg file it is required for, you do not need to explicitly list the UUE files.

6.4. A possible problem, and how to work around it

Suppose you have one or more GIF files that are not generated from JPEG files, but that have to be stored in the repository. You might think that you can uuencode(1) these files, and create another suffix rule that describes how to create a GIF file from a UUE file. Something like this perhaps?

1. Create foo.uue, containing a uuencoded copy of foo.gif.

% uuencode foo.gif foo.gif > foo.uue

2. Remove foo.gif.

% rm foo.gif

3. Add another suffix rule to web.mk describing how to create .gif files from .uue files.

.uue.gif:
	uudecode ${.ALLSRC}

Note: You do not have to update the .SUFFIXES: line, as it already contains both .uue and .gif.

4. Add foo.gif to the GIF variable in Makefile, without adding a corresponding foo.jpeg entry to JPEG.

GIF= nik.gif foo.gif

If you try that, you’re in for a shock, because it doesn’t work. You will see output like this.

% make
uudecode nik.uue
uudecode foo.uue
[ -d ../mystage ] || /bin/mkdir -p ../mystage
cp: nik.gif: No such file or directory
*** Error code 1

Stop.

You’ve been a victim of your own cleverness[4]. What’s happened is that make(1) has tried to create nik.gif. Before you added the .uue-to-.gif suffix rule, the only way make(1) could do this was by first creating the JPEG file from the uuencoded file, and then converting the JPEG file to a GIF file - everything worked.

However, now make(1) has a new rule that describes how GIF files can be generated directly from uuencoded files. make(1) applies this rule, and assumes that nik.gif can be generated directly from nik.uue, without having to go through the intervening JPEG stage.

This is obviously wrong, but make(1) doesn’t notice until it tries to install nik.gif, which doesn’t exist, because decoding nik.uue gives nik.jpeg instead.

You can work around this problem, but it requires a bit of a kludge.

1. Rename nik.uue and foo.uue so that their destination formats are encoded in the suffix. Something like this:

% mv nik.uue nik.jpeg-uue
% mv foo.uue foo.gif-uue

2. Update the .SUFFIXES: line in web.mk to use these two new suffixes instead.

.SUFFIXES: .jpeg-uue .gif-uue .jpeg .gif

3. Update the rules to use the new suffixes.

.jpeg-uue.jpeg:
	uudecode ${.ALLSRC}

.gif-uue.gif:
	uudecode ${.ALLSRC}

You can then remove the generated files, and re-run make(1).

# rm *.gif *.jpeg
# make
uudecode nik.jpeg-uue
djpeg -pnm nik.jpeg | pnmscale .25 | ppmquant -fs 64 | ppmtogif > nik.gif
ppmquant: making histogram...
ppmquant: 7454 colors found
ppmquant: choosing 64 colors...
ppmquant: mapping image to new colors...
ppmtogif: computing colormap...
ppmtogif: 64 colors found
uudecode foo.gif-uue
[ -d ../mystage ] || /bin/mkdir -p ../mystage

So first nik.jpeg-uue is decoded and then converted to nik.gif, and then foo.gif-uue is simply decoded to produce foo.gif.

As I say, it’s a bit of a kludge, but it works.
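If you have more than a couple of .uue files to rename, note that the destination name is recorded on each file’s "begin" line, so the renaming can be scripted. This loop is my own sketch (it assumes one file per .uue archive), not part of the article’s makefiles:

```shell
# Rename each .uue file to the base.ext-uue convention, taking the
# destination extension from the "begin <mode> <filename>" header.
for f in *.uue; do
    [ -e "$f" ] || continue                  # no .uue files at all
    base=${f%.uue}                           # nik.uue  -> nik
    dest=$(awk 'NR == 1 { print $3 }' "$f")  # e.g. nik.jpeg
    ext=${dest##*.}                          # jpeg
    mv "$f" "$base.$ext-uue"                 # nik.uue -> nik.jpeg-uue
done
```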

6.5. Extending this to other file formats

You can use this approach for any binary data you might want to store in the repository, regardless of the format. The only steps you have to follow each time you add a new format are:

1. Choose a new suffix for the encoded format. These last examples have used dest-format-uue, but you are free to choose your own.

2. Update the .SUFFIXES: line, so that make(1) knows about the format.

3. Create a new suffix rule that describes the steps necessary to decode the format.

There is a risk you might end up with a lot of suffix rules all with the same body. If this happens, you can always use the .USE directive to keep the body of the rule somewhere central. For example:

.SUFFIXES: .jpeg-uue .gif-uue .au-uue .wav-uue .jpeg .gif .au .wav

.jpeg-uue.jpeg: _UUEUSE

.gif-uue.gif: _UUEUSE

.au-uue.au: _UUEUSE

.wav-uue.wav: _UUEUSE

...

_UUEUSE: .USE
	uudecode ${.ALLSRC}

7. In the next article...

... we shall be looking at methods for copying your web site from the staging area (which, if you recall, is your local copy of the web site for testing purposes) to the live site area. Hopefully that will be next month, but I’m not going to promise anything...

Notes

[1] PBM means Portable Bitmap.
[2] Far too many, if you ask me.
[3] The base name is the file name without the extension.
[4] OK, I confess. When I was writing this article I did exactly the same thing, and spent a few minutes scratching my head when it didn’t work.

Author maintains all copyrights on this article. Images and layout Copyright © 1998-2000 Dæmon News. All Rights Reserved.

FreeBSD BOF at ApacheCon 2000 by Rob Arnold

Yesterday, March 8, was the first day of ApacheCon 2000 in Orlando, Florida. The Apache Software Foundation is holding this conference to help the public get the most out of the Apache web server and other projects. The sessions were mostly technical, but as with many conferences, birds-of-a-feather sessions (BOFs) gave opportunities to meet with colleagues with similar interests and share experiences.

The FreeBSD BOF was held at the end of the day yesterday, and it was a great showing of support by veteran FreeBSD admins, with several people in attendance who were new to FreeBSD. The room scheduled for the BOF was occupied, so we all (about 45 strong) took advantage of the beautiful Orlando weather and convened outside. The discussion was lively, and covered a range of topics from the Walnut Creek/BSDI merger to security, to the benefits of version 4. Those present showed a lot of enthusiasm for FreeBSD, and there were several business cards handed out by seasoned FreeBSDers to new users or those considering FreeBSD.

Many of those who attended were also Linux admins--their support for FreeBSD was a persuasive force for the small segment of BOF attendees who were interested in FreeBSD but still trying to decide whether it was the right fit for their needs. Web hosting companies put in a strong showing (of course!), and there were even a couple of lurkers from one *very large* web hosting business which uses FreeBSD and Apache.

Some typical comments from this session were:

"I’m looking for ways to convince my management to embrace FreeBSD."
"I use several operating systems, and I prefer FreeBSD."
"I have FreeBSD boxes with over 500 days uptime."
"The networking code was what made me choose FreeBSD over Linux."
"I just installed FreeBSD a week ago, and I want to learn how to secure it."
"I’m a web hoster that uses Linux, but I’ve heard a lot about FreeBSD and I want to find out more."
"This has been the most informative session of the day."

Later that evening, I had a chance to talk to the two mysterious lurkers. I was wearing a FreeBSD polo shirt, and a web hosting provider who uses Linux had noticed my shirt and was engaging me in a thoughtful discussion of the merits of FreeBSD for web hosters. The individuals from the very large web hosting company joined the discussion. They revealed a number of the tricks of their trade, including how to run Apache for high-volume web sites.

As the conversation went on, it only got more interesting. It seems that in addition to making several hacks to Apache to improve performance, this company also makes several substantial kernel modifications to FreeBSD. They spent a good deal of time answering questions for me and my new Linux friend. They lent an enormous degree of credibility to my main points, which were: 1) FreeBSD is a stable platform, with superior networking support, and 2) the FreeBSD community is a valuable resource.

As the conference continues, there are bound to be more great opportunities to spread the word about FreeBSD. With so many supporters in attendance, I’m sure that by the end of the week, many Linux admins and undecided individuals will have some good encounters with knowledgeable, friendly FreeBSD supporters.

Author maintains all copyrights on this article. Images and layout Copyright © 1998-2000 Dæmon News. All Rights Reserved.

Merger Interview

Chris Coleman

The BSDI / Walnut Creek merger is big news for the BSD community, and it’s no surprise that everyone is talking about it. The merger has raised quite a few questions from people within the BSD community wondering how it will affect them and their favorite operating system developers.

We decided to ask them. We had previously heard from FreeBSD in the Slashdot.org interview, and most of our original information was taken directly from FreeBSD, so we decided to get our information from the rest of the BSD community. What follows is really three interviews compiled together.

Daemon News interviewed The NetBSD Foundation (TNF), Theo de Raadt of OpenBSD, and BSDI via e-mail. A follow-up phone conversation with Paul Borman and Kevin Rose of BSDI wrapped up the interview session.

Theo de Raadt had very little to say concerning the merger and indicated that its effect on OpenBSD’s goals would be minimal.

The biggest news from the interview is the announcement of a proposed BSD common ABI (Application Binary Interface) originating from BSDI. A common ABI will go a long way toward getting strong commercial application support for BSD.

BSDI Interview

Daemon News: The first information we received made it appear that BSD/OS was going away, merely becoming a value-added patch to FreeBSD. What is the future of BSD/OS? Will developers keep working on it, as opposed to focusing on FreeBSD 5.0?

BSDI: BSDI remains committed to making BSD/OS the preferred operating system for a wide segment of the commercial market, based on its reliability, supportability and feature richness.

Both BSD/OS and FreeBSD have strong customer bases. We will combine forces where appropriate. For instance, we are going to produce a common ABI (Application Binary Interface) for BSD. NetBSD and OpenBSD are invited to participate.

Daemon News: Will the new merger offer consumer support to home/non-commercial users, ala ?

BSDI: Yes. We are offering support to our FreeBSD users as well as our BSD/OS users. The new matrix of support options will be closely modeled on BSDI’s current industry-leading support matrix.

Daemon News: Another issue for new/home users - how do you see the installation process changing from what is available now (I am familiar w/ FreeBSD but have never installed BSD/OS)?

BSDI: Probably no great change, at least not at first. BSD/OS and FreeBSD have different needs in this area. For instance, FreeBSD has an excellent mechanism to install via FTP over the net. BSD/OS, as a licensed product, is installed from a local or remote CD-ROM. Both the BSD/OS engineering team and the FreeBSD engineering team at BSDI will see more of each other’s installation systems and, as engineers tend to do, will share ideas.

BSD/OS 4.1 already has taken one idea from FreeBSD, virtual consoles that enable the installer to spy on the installation process. We added it as a result of an engineer installing FreeBSD and saying "hey, cool."

BSDI has always focused on making its installation process intuitive and easy to use, and that will continue to be our focus.

Daemon News: MacOS X has a BSD layer in it, based on FreeBSD 3.4. How will this merger affect this collaboration with Apple?

BSDI: The merger will have a positive effect on our relationships with Apple and other organizations. We will have more resources to devote to these efforts, so we will have more to offer our partners. We intend to accelerate our co-development and co-marketing programs.

Daemon News: It was mentioned in the /. interview that the main focus will still be the server side, but work will be done on the desktop front. Does this mean more supported porting projects ala Applixware? Can you elaborate more (if possible) on this?

BSDI: BSD has had, and will continue to have, a strong showing in the server and embedded market places. Many developers run BSD on their desktops and notebooks.

Both BSD/OS and FreeBSD run Linux applications, so you’ll see many Linux productivity apps running on BSDI.

We are also talking with ISVs about native desktop productivity apps, easier UIs, and games. The common ABI we are planning will help grow the market for these applications on BSD.

With partner innovations, our rich and rapidly expanding community of BSD enthusiasts, and the open source momentum, we expect to continue seeing BSD in the Internet infrastructure, appliances, and developer workstations. And we expect our popularity to extend to productivity desktops, too.

Daemon News: The new BSDI marketing strategy seems to be to promote "BSD" in the same way that "Linux" is promoted to the media. Since you hold the BSD trademark, will the other BSDs run into any problems from you trying to market the same way or trying to cash in on the "BSD" marketing that you have done?

BSDI: BSDI wants to promote BSD. The other BSDs are a part of that. We don’t want to impede the NetBSD and OpenBSD projects in any way. We want to see all of the BSD-based operating systems be successful but of course, we probably would recommend BSD/OS or FreeBSD first. :-)

Daemon News: How does BSDI plan to promote BSD as a community?

BSDI: Walnut Creek has already been doing this with limited resources and a FreeBSD focus. BSDI now plans to increase BSD promotion on all fronts. This interview is part of that, getting BSD more into the press. We expect to see more books about BSD. We will host the annual BSDcon later this year where all BSD users, vendors, and developers can get together and share the work they have been doing. And this is not just for BSD/OS and FreeBSD, this is for NetBSD and OpenBSD as well.

Daemon News: How does BSDI plan to market the BSD/OS and FreeBSD? Will they be using channel markets or will they try to do it all themselves? Will BSDI become a retailer? How will you create pull for a product line that has gone black (no advertising) for more than a year?

BSDI: I am an engineer and I focus on developing BSD. I have asked Kevin Rose, BSDI’s director of Marketing to answer this question. He said:

BSDI is the Internet experts’ choice. Over 90% of service providers use and love our products. 110 of the 120 largest backbone network vendors worldwide trust their business to BSDI. According to Dave Trowbridge at survey.com, BSDI use in mainstream corporate IT will grow between 100% and 500% in web, e-mail, e-commerce, security, multimedia, and communications applications.

BSDI offers BSD/OS and the BSDI Internet Super Server through direct, VAR, and OEM channels. Expect to see us offering FreeBSD here too. We are also exploring how to sell BSD/OS and BSDI Internet Super Server through FreeBSD’s direct, retail and VAR channels.

We expect to continue to rapidly build all of these channels.

This year, watch for us at Spring Comdex in Chicago, Usenix in San Diego, and LinuxWorld in San Jose. Then watch for us at Fall Comdex in Las Vegas, the LISA show in New Orleans, and LinuxWorld in New York next January.

We’ll have our own major event, BSDCon in Monterey Oct 18-20. We expect to triple attendance over last year.

Daemon News: Will BSDI provide hardware in the future?

BSDI: BSDI is already working with hardware vendors to provide its customers with a solution that they know will work. We will continue to build those relationships. We are going to offer BSD certification and branding of hardware so people will know what hardware to buy. Of course, BSDI listens to its customers. If our customers want hardware direct from BSDI then we will have to consider what we would need to do to make it happen.

Daemon News: Will the two CVS trees merge in one big tree (eventually with a private BSDI branch holding encumbered code)? How is that going to happen?

BSDI: BSDI’s main goal is to provide a quality solution to all of its customers. Part of what BSD/OS offers is the fact that we have control over what is shipped in the product. The FreeBSD project, which we help support, is independent of BSDI and as such we do not control what or when they ship. The systems do have different release schedules.

What you will more likely see is that for common code, one of the two groups will be the primary developer and the other group will import stable snapshots of the code into their own tree at a time which is convenient for them.

If we do have a shared tree (in addition to our main trees) it would probably be in the area of ports or contrib software. We have a lot of learning to do here and we simply don’t know how we are going to do some of this stuff yet.

In the end, we intend to do what will serve our customer bases, both for BSD/OS and FreeBSD.

NetBSD Interview:

This follow-up information was gathered during a phone call with BSDI. The original interview was done via email.

Daemon News: I have a couple of questions that I would like to ask. They are included. I will be taking all the information and writing it up for an article to appear on Daemon News.

NetBSD: We read with interest the recent announcement that BSDI are to buy out Walnut Creek software, owners of the FreeBSD distribution. We hope that this will benefit the open source movement in the long-term, and would encourage BSDI to move away from proprietary software, and embrace the open source movement, so that the whole community can benefit.

Daemon News: BSDI has announced that they will be targeting new platforms to port to, how will this affect The NetBSD Foundation (TNF)?

NetBSD: We have not seen that announcement. All the press releases that have come from the BSDI camp explicitly only mention BSDI and FreeBSD, even when they make broad references such as ‘BSD systems’. We hope that BSDI will pick up our source base for those platforms and contribute back code changes thereby adding benefit to the open source community as a whole.

Follow-Up:

BSDI indicated that they had in fact already used NetBSD as the foundation for their SPARC port, and were planning on using as much of the NetBSD code as possible to speed up development. They also felt that they were now finally in a position to commit changes back to the community, and commented that it would make their lives easier since they wouldn’t have to track so many changes to the source tree.

Daemon News: What do you feel will be the biggest gain for TNF from this merger?

NetBSD: This is up to BSDI. It could be negative gain if BSDI succeeds with its current campaign to convince the masses that "BSD = BSDI + FreeBSD". It would be zero gain if BSDI decides to just pick up changes from NetBSD and contribute nothing back. It could be positive if BSDI contributes code changes back to the community.

It would also be beneficial to the project if BSDI:

1. made money/equipment donations to The NetBSD Foundation;
2. hired NetBSD developers to work on specific projects on a short-term basis; or
3. hired NetBSD developers to enhance and maintain NetBSD ports to other platforms and subsystems.

We believe that there are tremendous gains to be made for BSDI in moving towards a more open source-friendly stance, especially after its takeover of Walnut Creek software, which manufactures CD-ROMs that go far beyond the FreeBSD operating system.

Follow-up:

BSDI’s new marketing strategy is to market BSD as a community and that includes NetBSD and OpenBSD. BSDI said that the entire community would benefit from the increased marketing effort.

Daemon News: What would it take for TNF to merge with BSDI and provide the porting efforts to the other platforms?

NetBSD: It is too early to say. We are sure you have noticed that BSDI has not made any announcements about code merging, about features taken from each OS, or about releasing to FreeBSD some proprietary subsystems (lockd/statd comes to mind). They also have not made any announcements about the policy/criteria for keeping code proprietary or releasing it to FreeBSD.

From unofficial contacts we’ve understood that the developers want to keep most of the code open (except of course hardware drivers that have been developed under NDA), but the marketing people still believe that there is a competitive advantage from keeping parts of the source proprietary.

We think that an official clarification from BSDI is long overdue. The beauty of the BSD license is that it lets you do anything with the source: release it and gain the benefits of open-source (bug fixes and enhancements from the community), or keep the source proprietary. It is up to BSDI to make the choice.

Follow-up:

BSDI mentioned that the announcements didn’t include NetBSD and OpenBSD because they felt they couldn’t speak directly for them; however, it was not their intention to exclude them.

BSDI will keep BSD/OS a closed source operating system and plans on using FreeBSD as a filter to convert closed source to open source. According to BSDI, the source code to BSD/OS has already been distributed to FreeBSD core team developers. Only when it gets committed to the FreeBSD source tree will it become Open Source.

However, BSDI has no plans to hold anything back, with the exception of sections developed under NDAs that prohibit release.

Daemon News: Do you think many of the current TNF developers will become part of the multi-platform developer group for BSDI/FreeBSD?

NetBSD: It is reasonable to believe that if BSDI adopts the NetBSD code base for the new architectures they plan to support, they will attract NetBSD developers.

Follow-up:

BSDI plans to hire developers to work on just such coding projects, so if you are qualified, you should introduce yourself.

Daemon News: Do you think that the TNF project goals will change any because of this merger?

NetBSD: No, we don’t see how this will affect the goals of the project. The NetBSD Project will continue providing a 100% open source operating system, with leading-edge technology and support for a huge range of hardware platforms. FreeBSD and BSDI have benefited from using NetBSD code in the past, and we expect that they will continue to do so in the future.

Daemon News: How does TNF plan to work towards a more unified BSD community?

NetBSD: By:

1. keeping its source open;
2. making every effort to coordinate and exchange ideas and source code between BSD systems; and
3. not engaging in flamewars.

Daemon News: What BSDI code do you most look forward to importing into NetBSD?

NetBSD: It is hard to say without looking at the code first.

Author maintains all copyrights on this article. Images and layout Copyright © 1998-2000 Dæmon News. All Rights Reserved.

New user tips by Jonathan McKitrick

So, you finally decided to give FreeBSD a try, eh? Well, there are probably three possibilities. In order of likelihood, you are either:

1. a relatively experienced Linux or Unix user who has decided to try or switch to FreeBSD for any number of reasons;
2. a new Linux user who for whatever reason decided to try FreeBSD; or
3. someone who just decided to give FreeBSD a try because of all the good things you have heard!

Whatever the case may be, switching to a new OS is exciting and sometimes a little intimidating. We may proceed tentatively, afraid to ‘‘break’’ something, or unsure whether what worked in some other OS will work on this one. In this article, as a new FreeBSD user myself, I would like to share some of the little tips that helped me learn faster and more fearlessly. To any seasoned veterans who may be reading this, all of these tips are trivial and rudimentary. But taken together, they helped me build my own little world where I wasn’t afraid to try new things and where I learned faster than I ever did on Linux.

The first cardinal rule is not to be afraid of breaking anything. Back up your important data often, especially if you tinker with important settings often, but the best way to learn is by doing. However, some old-school die-hards insist that this means using only the most basic of tools and learning them by RTFM’ing, trial and error, and experience. While this may have its merits, there may be a better way for users with less Unix experience, like myself. I started out using Midnight Commander exclusively. I used it for everything from learning my way around the filesystems to editing files. I learned from MC about chmod and chown, and other important commands. Then, when I felt ready, I was able to try them at my own pace. Now, I only use MC for mass file relocation and other bulk tasks. Eventually, when I master pattern matching, I expect to give it up almost completely. Is it an expert’s tool? No. Did it serve its purpose? I believe so. It helped me learn the more powerful tools in a carefully controlled way at my own pace.

This brings me to my next point: choose a GOOD Unix editor and master it. Whether that be vi, emacs, joe, or some other editor, learn it well. There is a reason why these editors are so popular. It is worth the time to find out why. These tools reward a modest investment of time with a great deal of power and efficiency. Such editors are universal, and will help you configure your system much more quickly than easier but less powerful or less available tools.

Another valuable tool: learn aliases and use them copiously. I use aliases for everything. I have an alias that loads vim with my current .profile, and one that loads vim with my current kernel configuration. I even have an alias for displaying all my aliases. One good idea for newbies is to alias rm to ’rm -i’ to make really sure you want to remove all those files. The same concept applies to mv. Use aliases for your ppp commands and for your cvsup commands. Unix rewards effort: you get more done with fewer keystrokes and less work.
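For the record, here are a few aliases of the kind described, in Bourne-shell syntax for ~/.profile (the names and the paths are my own illustrations, not the author’s actual setup):

```shell
# Safety nets: ask before destroying or overwriting files.
alias rm='rm -i'
alias mv='mv -i'

# Jump straight into frequently edited files (paths are illustrative).
alias kernconf='vi /sys/i386/conf/MYKERNEL'
alias editprof='vi $HOME/.profile'

# And an alias for displaying all the aliases:
alias aliases='alias'
```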

As I mentioned at the outset, and any veteran BSD user will tell you, FreeBSD is a very robust system. It is difficult to break anything beyond repair, though it can be done. Part of the reason for this is that the FreeBSD filesystem is cohesive and logically arranged. To help build a newbie-friendly system, we should follow the same model in our own personal file organization, as this will help us find things more easily and back up our data and system configuration more efficiently.

I decided to follow a basic rule I learned on the mailing lists, and that was to leave the root directory and settings untouched, and use my user account along with su -m to perform necessary administration tasks. I set up an ’admin’ directory, where I store a symlink to my kernel configuration for easy editing. I also keep my cvsup configuration files here. Of course, I also keep my user account data in my home directory as well. But I found that by keeping the administrative data in a separate directory, I am able to perform backups more easily. All I need to do is backup /etc, /home, and /usr/local/sbin, where I keep some administration scripts.
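Concretely, the layout I'm describing takes only a couple of commands to set up; the kernel-config path below is an illustration (yours will differ):

```shell
# Create the admin directory and link the kernel configuration into it,
# so it can be edited without cd'ing into /sys.
admin="$HOME/admin"
kernconf="/sys/i386/conf/MYKERNEL"   # illustrative path

mkdir -p "$admin"
ln -sf "$kernconf" "$admin/kernel-config"

ls -l "$admin"
```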

Another simple trick that has taken the fear out of re-installing for me is to regularly generate a list of all installed packages. I output the result of pkg_info to a text file that I include when I make my regular backups.

Does all of this work? Yes, it does. When I decided to re-install my system from scratch, this is the procedure I followed, and it worked like a charm. None of these ideas are earth shattering or revolutionary, but all together, hopefully they can help some new users to be a little more fearless in learning the wonderful OS we call BSD.

Author maintains all copyrights on this article. Images and layout Copyright © 1998-2000 Dæmon News. All Rights Reserved.

Answer Man by Gary Kline and David Leonard

This time we bring you several more of the questions that pop up very often on the mailing lists and BSD newsgroups--oh, and the answers with commentary in some depth.

More coming soon to this website direct to you. And next column, maybe a pleasant surprise!

Is there an easy way to uuencode and mail gzip’d tarballs?
My power has gone out unexpectedly in recent days. How can I automate my BSD system to auto-sync my disks, say, every few seconds... just in case?
What do the directory abbreviations smm, psd, usd mean?
How do I enforce automatic logout after a specified amount of time?
How do I print out man pages so I can go in a corner and read them?
How can I set up my console screen to do 80x50? I’m tired of seeing the standard 80x24 lines.

Q: Is there an easier way to uuencode and mail gzip’d tarballs than:

$ uuencode saveconfig.gz saveconfig.gz > saveconfig.uu
$ mail -s saveconfig.uu [email protected] < saveconfig.uu
?

A: Sure, and one that fits your example case might be to use the power of pipes with redirection. Here is a one-liner:

% uuencode saveconfig.uu < ./saveconfig.gz | mail -s saving [email protected]

There are examples of using uu{en,de}code in the uuencode(1) man page. Say that you want to send a friend your sample project code at a distant site and haven’t got an ftp path to reach him, or just want a one-liner. After you have cd’d to your project directory, this will automate the task.

% tar -zcf - . | uuencode proj.gz | mail -s proj.gz [email protected]

Q: My power has gone out unexpectedly in recent days. How can I automate my BSD system to auto-sync my disks, say, every few seconds... just in case?

A: In FreeBSD, this is one for /sbin/sysctl, the utility that lets you retrieve or set many things in the BSD kernel. To update the sync time, as root

# sysctl -w kern.update=3

will sync the disks every three seconds. sysctl lets the root user fine-tune the system. sysctl retrieves, and lets root alter, a wide range of settings, such as the number of processes allowed to a given user, the number of open files per user ID, or the number of open files per process.

Typing

# man 8 sysctl

will give you more details.
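
To have such a setting applied again at every boot, recent FreeBSD releases also read /etc/sysctl.conf at startup (a hedged note: this file is a relatively new addition; on releases without it, put the sysctl command in a local startup script instead). A minimal fragment:

```
# /etc/sysctl.conf -- kernel variables set at boot time
kern.update=3
```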

OpenBSD doesn’t have such a knob. The sched_sync (’update’) kernel thread syncs a filesystem once every second, unless you are running with soft updates in which case it syncs continuously. This behaviour is negated by mounting a filesystem with the ’async’ option.

The interesting question is why your reliability experience with Linux/ext2fs has been better than with *BSD/ffs. Perhaps this extract explains it:

BSD-like synchronous updates can be used in Ext2fs. A mount option allows the administrator to request that metadata (inodes, bitmap blocks, indirect blocks and directory blocks) be written synchronously on the disk when they are modified. This can be useful to maintain a strict metadata consistency but this leads to poor performances. Actually, this feature is not normally used, since in addition to the performance loss associated with using synchronous updates of the metadata, it can cause corruption in the user data which will not be flagged by the filesystem checker. (Full paper)

In BSD, the ffs performance loss associated with synchronous mounts (the default) can be reduced by either mounting with the ’async’ option (which can lead to more corruption) or enabling soft updates on your file system. See the tunefs(8) manual page for details on turning it on.

See also http://www.mckusick.com/articles.html

Q: What do the directory abbreviations

/usr/share/doc/smm
/usr/share/doc/psd
/usr/share/doc/usd

stand for?

A:

smm is for the System Manager's Manual
psd is for the Programmer's Supplementary Documents
usd is for the User's Supplementary Documents

In days of yore, physical paper books, collectively called the Manual, accompanied Unix releases. It consisted of the smm, psd, usd and all the manual pages accessible with man(1).

The smm/psd/usd directories are intended to contain the source code to those parts of the Manual. These days, most manual searches are performed online, and the format of the smm/psd/usd documents is not as amenable to online viewing as that of the manual pages. Because of their reduced profile, they are poorly maintained in the *BSD world and so tend to be treated as historical documents rather than working documentation.

Q: How do I enforce automatic logout after specified amount of time?

A: FreeBSD sets this in /etc/login.conf, with the :idletime directive. To set an idle-time maximum of 30 minutes, assuming the users you want to log out after 30 minutes are dialed in, set the

":idletime=30m:" line under the PPP/SLIP connections section.

Of course idletime and sessiontime are different things, so if you want to limit all users to, say 8 hours and 30 minutes, under the same section you would set

":sessiontime=8h30m:"

Do a

% man login.conf

for details. It’s worth noting that /etc/login.conf is one of those almost-endlessly configurable sysadmin features.
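
As a sketch of how these directives sit inside a login class, here is a hypothetical /etc/login.conf entry (the class name `dialup` and the limits are illustrative, not from a stock configuration):

```
# Hypothetical login class for dial-in users
dialup:\
	:idletime=30m:\
	:sessiontime=8h30m:\
	:tc=default:
```

After editing the file, rebuild the capability database with cap_mkdb(1), i.e. run `cap_mkdb /etc/login.conf` as root, so the changes take effect.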

OpenBSD does not have such burdensome rules. Instead, the reader is referred to the /etc/ppp/options or the idled port; /usr/ports/sysutils/idled.

Q: How do I print out man pages so I can go in a corner and read them?

A: First of all, you will need to find the file that contains the manual page ‘source’. You can find it by giving the -w option to man(1). Note that on some systems, manual pages are installed as preformatted ASCII and/or compressed, and these will appear as ‘cat’ pages:

% man -w ls
/usr/share/man/cat1/ls.0.gz

A slight diversion on cat pages is in order:

Cat pages are plain text files with backspace sequences that achieve overstrike (bold) and underline on a dot-matrix or line printer. For example, to print a bold ’B’, the file sends ’B’, a backspace, and then ’B’ again to the printer. The online paging programs, more(1) and less(1), understand the backspace sequences and simulate overstrike and underlining when you view the manual page.
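
You can see the overstrike encoding for yourself, and flatten it with col(1) when your output device can’t overstrike. A small demonstration (the sample text is made up):

```shell
#!/bin/sh
# A cat-page-style line: "B" overstruck with "B" renders bold,
# "_" overstruck with a letter renders underlined.
printf 'B\bBold and _\bu_\bn_\bd_\be_\br\n' > demo.txt

# col -b keeps only the last character written to each column,
# collapsing the backspace sequences to plain text:
col -b < demo.txt
```

which prints the plain text `Bold and under`.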

However, we need the manual page source to the cat pages. For the ls cat page named above, the corresponding ’man’ page should be located at:

/usr/share/man/man1/ls.1

If you don’t have this form of the file, then you will find it hard to print out the documentation on anything except a line printer.

Assuming you found the man page source, you can now convert it into one of many types of outputs. For example, to generate postscript output, use the -T (type) and -mandoc flags to groff in the following incarnation:

% groff -Tps -mandoc /usr/share/man/man1/ls.1 > ls.ps

You can then send this file to your postscript printer with lpr.

Q: How can I set up my console screen to do 80x50? I’m tired of seeing the standard 80x24 lines.

A: In FreeBSD, one way of setting the kind of console display you want is through vidcontrol.

# vidcontrol VGA_80x50

Doing a

% man vidcontrol

will tell you about all the vidcontrol options. Another way is to set all your console screens to the 80x50 mode by putting

allscreens_flags="VGA_80x50"

in /etc/rc.conf to set it for all console screens when you boot. But be aware that the "allscreens_flags" option only works for some vidcontrol options; for instance, you can’t set foreground and background colors this way. On OpenBSD, the proper way is to set the TERM environment variable to ‘pcvt25’ and then use the scon(1) utility:

% scon -s 25

You may also need to turn off the interfering ’full’ VT220 compatibility mode with:

% scon -f off

About the Authors

Gary Kline has been porting code since the late 1970’s when he helped port several V6 utilities to V7 at Cal Berkeley. He is the principal architect for Project Muuz, an open-source mind-zapping app. When he isn’t hacking code, he’s hacking Zen poetry or prose, or listening to jazz radio and drinking espresso.

[home| mail]

David Leonard is a PhD student in the Department of Computer Science and Electrical Engineering at the University of Queensland, Brisbane, Australia.

His area of research is QoS-adaptive component software architectures, and in his spare time he is a developer for the OpenBSD project. That said, David enjoys living the quiet life with his wife, Kylie, and cat, Mu. He especially enjoys frequenting Moreton Bay’s many fabulous places to eat. Mmmmm!

[home| mail]

Author maintains all copyrights on this article. Images and layout Copyright © 1998-2000 Dæmon News. All Rights Reserved.

Source Wars, by Susannah Coleman and Seth Claybrook

For the times, they are a’changing

by Greg Lehey [email protected]

Things have been pretty active lately. The big news of the month (or is that of the year?) is that Berkeley Software Design, Inc. (BSDI) have acquired Walnut Creek CDROM, the main sponsor of the FreeBSD project. We’ll look at that below; first, a couple of other observations.

‘‘Not invented here’’ department

Only the most die-hard of UNIX hackers will have missed the fact that Microsoft has released a new operating environment, called Windows 2000 (and thus ending our joking about whether the next Microsoft system would be called Windows 0). Understandably, Microsoft has put up a lot of publicity on its web site, and recently a number of people stumbled over the description of the Single Instance Store, which is intended to reduce the storage of redundant copies of files. The description reminded us so much of symbolic links that a number of jokes went around about it, along the lines of Henry Spencer’s famous quote, to be found in fortune:

Those who do not understand Unix are condemned to reinvent it, poorly.

In fact, the SIS is not the same thing as a symbolic link. The discussions were obviously significant enough that Microsoft sat up, took notice, and modified the page to describe the differences between SIS and symbolic links. I’ll summarize their version in my own words:

It’s not a real symbolic link, because a modification to one instance doesn’t change the other one. Instead, a ‘‘copy on write’’ function makes a second copy. The web page doesn’t say whether it makes a copy only of a section of the file (which would be quite clever) or the entire file (which could pose a significant performance problem).

There’s a reference count, like for real (‘‘hard’’) links. Microsoft doesn’t mention hard links, and it’s not clear that they know about them.

SIS does things ‘‘automatically’’ by scanning the system for identical files, a job which places a significant load on the system. I once wrote a program to do this, and it’s not something that you would want to run automatically. Microsoft seems to think this is a feature, not a bug.

To quote: ‘‘SIS exposes special backup APIs that allow a backup application to backup SIS files so that only one copy of the data is placed on the backup media’’. Again, this sounds like hard links. UNIX backup programs only back up a single copy of files with multiple links. When they encounter the other links, they simply back up a reference to the first file.

In summary, SIS seems to do the same things that links and symbolic links do, just in a slightly different manner. It also seems to require a lot of resources. It may under certain circumstances have the advantage of automatically creating a copy of files if a modification is made to one of the copies, but it’s unlikely that there would be a significant number of copies of frequently modified files. Microsoft obviously expects massive replication:
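
The whole-system scan Greg mentions having written is easy to sketch, and the sketch also shows why it’s expensive: hashing every file on a large filesystem means reading every byte on it. A minimal version (using md5sum from GNU coreutils; the BSD md5 tool formats its output differently, and filenames containing spaces would need more care):

```shell
#!/bin/sh
# Minimal duplicate-file scan of the kind SIS performs: hash every
# regular file, sort so identical hashes are adjacent, report repeats.
find . -type f -exec md5sum {} + | sort | awk '
    $1 == prev { print $2 " duplicates " prevname }
    { prev = $1; prevname = $2 }'
```

A production version would compare file sizes first and hash only the candidates, but the I/O cost of scanning a whole server remains.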

The result is a feature that frees up as much as 80 to 90 percent of the space on a server, allowing users to store as much as five to 10 times the information as they could before. ‘‘The bottom line is that it saves the administrator time, which is why it?s part of Zero Administration for Windows,’’ Bolosky said. ‘‘It?s designed to ease the lives of the technical support staff.’’

The ? marks in the text above represent non-standard additions to the specified character set in the original quote. They appear to be intended to represent an apostrophe, but since they’re not part of the ISO 8859-1 character set, which this document claims to be written in, they are not recognized here and so appear as ?.

Cultural differences

Lately I’ve had a fair amount of contact with Linux developers. It’s been an interesting experience for a number of reasons, and it’s given me a new perspective on how people ‘‘in the know’’ view the BSD community (or communities, as they tend to see us). Some of this is reflected in a news article about the BSDI merger, which unfortunately is factually so inaccurate that it’s not worth reading. It does make claims that BSD has suffered because of a recently ended lawsuit with AT&T (that was April 1994, for those of you who want to check), and that the BSD communities are continually fighting each other.

The Linux people I have met seem to share some of these views. I can’t say, ‘‘no, we’re all one big happy family,’’ because it’s not true, but I suspect that the incidents that they’re thinking about also lie years back. We have had some spectacular flame wars, and without them there would probably be fewer BSD communities.

But what about Linux, I asked my friends. They all agree that serious disagreements between Linux hackers are very rare. This reminds me somehow of the respective mascots, a daemon for BSD and a particularly placid looking penguin for Linux (never mind that the penguin mascot came into being after Linus was bitten by a penguin, nor that I can’t recall any cases of daemons biting).

I can’t believe that Linux hackers are such fundamentally different people that they are all sweetness and light where we are continually flaming each other. I’m sure there’s some other reason, and I discussed it with my friends. We spoke specifically about kernel modifications, including FreeBSD’s famous ‘‘Danish axes.’’ About the only thing we could come up with is that Linus’ word is final: if there’s a difference of opinion, Linus decides. In the FreeBSD development environment, in which I am located, things aren’t so clear cut: we need a consensus, and it’s not always easy to get one. The ensuing flames^H^H^H^H^H^Hdiscussions can become somewhat heated. I assume that the situation in NetBSD and OpenBSD is similar.

I don’t think any of us particularly value flaming, so it might seem as if the Linux approach is better. If you value peace and quiet above everything else, this is a reasonable viewpoint. But I don’t think it is conducive to the best possible code. Linus is definitely an exceptional hacker, but he’s only one person, and he can’t always be right. The BSD way may be more strenuous, but it has the potential to create a better overall code base. Looking at the issue in social terms, it would seem as if Linux is a monarchy, and BSD is a collection of anarchies. Nowadays the term ‘‘anarchy’’ has a somewhat negative flavour about it, but that’s mainly because in real life an anarchy doesn’t work. In BSD development, it seems that it is working, at least up to a point. I’m thinking about the comparison, and I’d welcome feedback about it.

Finally, the merger

Unless you have spent the last month hiding under a rock without a network connection, you’ll know about the merger between Walnut Creek CDROM and Berkeley Software Design, Inc, usually known as BSDI. At this point, I should present my rant on the name of the latter company: for reasons I don’t understand, a number of people write it BSDi, with a lower case i. I have even heard rumours from people who should know better that the new, merged company will spell its abbreviation BSDi. This is a long-standing tradition, but I can’t see any evidence that there is any truth in the spelling. Check out the BSDI web site for further details.

What’s in it for me?

The news of the merger brought a flood of mail messages with subjects such as What result would *you* like from the merger? and Is FreeBSD dead?. I’ve heard some FUD (Fear, Uncertainty and Doubt) on the FreeBSD lists (‘‘FreeBSD will no longer be free’’) and on the NetBSD lists (‘‘How come we missed out on this?’’).

So what’s really going on? I think the real issue is that some of the details haven’t been finalized, but here’s my take:

BSDI and Walnut Creek will become one company. The new company will be called BSDi.

Walnut Creek CDROM will stop selling Linux products, particularly Slackware. Instead, it will spin off a company to handle these products. Personally, I think this is bad news for Slackware, which for some time has been losing popularity to other, more recent Linux distributions. Without the Walnut Creek name behind it, things could become even more difficult.

BSDi and the FreeBSD Project will not merge. This means that there is no way for the new BSDi to directly influence the FreeBSD project. On the other hand, this doesn’t mean that they won’t be able to influence FreeBSD indirectly. Walnut Creek CDROM has been one of the main sources of funding for the FreeBSD project, and the intention is that BSDi will continue this funding. In theory BSDi could influence FreeBSD by controlling the funding.

BSDi and the FreeBSD Project will cooperate to merge BSDi’s BSD/OS product and FreeBSD into a single product. The various statements about the way this will happen are less than clear, and this has given rise to some FUD. I think it’s easy enough to take a more positive viewpoint: merging two operating systems is a very difficult task, one that to my knowledge has never been done successfully.

A good example of how not to do it is UNIX System V.4, which was supposed to be formed by merging the previous System V code with 4.3BSD. In fact, if you look at the source trees, it seems to be more of an addition than a merge: base System V.4 can do most things either the System V way or the BSD way, which goes quite some way to explain its enormous size. I wouldn’t give BSD/OS and FreeBSD much chance, either, except that the systems really are very similar.

Nevertheless, it’s a daunting task. At a meeting of the BAFUG in Berkeley on the 23rd March, Bob Bruce, the president of Walnut Creek, described some of the difficulties. It now looks as if the merge will take longer than originally hoped. Given the magnitude of the task, that’s not really surprising.

Significant parts of the BSD/OS kernel are better than the corresponding FreeBSD implementation. This applies particularly to SMP (symmetric multiprocessor) support and kernel threads. These two items, in particular, would greatly benefit FreeBSD. On the other hand, merging the code base would also benefit BSD/OS: some (unstated) aspects of FreeBSD are better than the corresponding parts of BSD/OS. Sure, it would have been possible for BSDi to incorporate these improvements into BSD/OS, and indeed some things have been imported, but the main issue is the time that it takes.

Some parts of BSD/OS can’t be merged into FreeBSD: there are a number of drivers developed under non-disclosure agreements (NDA), for which BSDi is not allowed to release the source. BSDi will continue to market these in a value-added distribution, which will not be free.

What does this mean for NetBSD and OpenBSD? They’re not part of the deal. But that doesn’t mean they’re left out in the cold. The press announcements specifically state that BSDi intends to maintain close contacts with NetBSD and OpenBSD. It’s too early to know what form these contacts might take. I’ll discuss this point further in the following section.

OmniBSD

So how do the remaining BSDs relate to each other? It’s clearly a good idea to stress the commonality of the operating systems, as we have been doing for a year and a half now in Daemon News. As my Linux friends observe, there’s very much a feeling that the BSD projects each sit in their own corner and don’t communicate. While announcing the merger, the Wall Street Journal spoke of ‘‘Balkanization’’ of the BSD landscape. I don’t think that’s fair.

Looking at Linux, you’ll see that there are a large number of different Linux distributions. Each of these has its own version of some programs and its own directory layout, but they are all called Linux. By contrast, the BSD systems don’t (yet) call themselves BSD. This nurtures the general perception that the BSDs are very dissimilar operating systems.

There’s more to this issue than the name. Each BSD has a different kernel, and the interface isn’t completely uniform. Where it’s possible at all, you may need an emulation module to run an xBSD program on yBSD. Device names are different, especially since FreeBSD changed the names for disks and tapes.

The differences aren’t big; I’m currently writing a book on Systems Administration which I hope will be able to cover all three BSDs. But they’re big enough to confuse users who aren’t interested in the nitty-gritty and who only want to get their work done. In a word, we need (dare I say it?) standards.

In some circles, the term ‘‘standards’’ is treated as a dirty word. They hinder progress and bind people to the lowest common denominator. On the other hand, too much innovation can be confusing. Where do we draw the line? We already try to adhere to many standards, such as POSIX.1 and POSIX.2. It would make a lot of sense to have a single standard for other things, such as the system call interface.

Probably the biggest challenge for BSD in the next 12 months will be to address this kind of issue and show the world that the individual projects are not groups of enemy gangs spending all their efforts fighting each other. The BSDI/WC merger is a good step in this direction. Let’s hope that they’re successful.

Author maintains all copyrights on this article. Images and layout Copyright © 1998-2000 Dæmon News. All Rights Reserved.