Three Platforms, One Server Part 2: Windows and Quotas

The Problem
So we've hit a (hopefully) minor snag in our migration to a single authentication server for all platforms. It's, of course, a problem with Windows and its roaming profiles system.

Roaming profiles for Windows are like networked home accounts for Mac. The idea is simple: Your home account is stored on a networked file server of some sort, and when you log on to a workstation this home account is mounted and used for storing files, application preferences and other user-specific settings like bookmarks, home pages, and Desktop backgrounds. It's nice because it allows you to log in to any system on the network, retrieve your documents, and have consistent settings that follow you from computer to computer. On the Mac, the home account is simply a network mount that acts very similarly to a local hard drive or partition. That is to say, your settings are read directly from the server mount as though they were local to the system. This is how Linux works as well. Windows, however, behaves quite infuriatingly differently.

What Windows does, instead of reading the roaming profile documents and settings directly from the server, is to copy everything in the roaming profile folder on the server to the local machine at login. Then, at logout, the changed files are copied back up to the server. This is a horrific nightmare for any user with more than 100 MB or so of data in his roaming profile, because as the data grows, logins and logouts take increasingly long while the workstation copies the account data to and from the server. Our user quotas are up to 6.5 GB. So you can see where we get stuck. Because of this copying behavior, Windows roaming profiles just can't handle the amount of data we allow in home accounts for Mac and Linux users. It would choke and kill our network anytime a user logged in. And that would be bad.

This problem has been handled in the past by our separate and discrete Windows Server, which allows the assignment of roaming profile quotas. But now we want this to be handled by the Mac Server, which also handles Mac and Linux accounts. The Mac Server, though, doesn't really allow handling Windows accounts much differently than Mac accounts. I can't tell the Mac Server, "Give the systemsboy user a 100 MB quota for Windows, but a 6.5 GB quota for all other types of accounts." It just doesn't work that way. The only difference I can specify is where the Windows roaming profile is stored versus the Mac/Linux profile, and this is going to be key to the solution to this problem.

The Solution
So, on the Mac Server, the trick will be to specify a separate volume for Windows profiles, and then apply a quota for each user on that volume. Specifying the volume is easy. I've created a partition called "Windows." And I have set the Windows roaming profile to use this volume for all users.
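For what it's worth, assuming the "Windows" partition is shared out over SMB under the same name, the profile path setting in Workgroup Manager ends up looking something like this (ServerName and username being placeholders, of course, not our actual values):

\\ServerName\Windows\username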

Quotas, on the other hand, are a bit trickier, and I've had to do some research here, but I think I have it. The Mac implements quotas using the FreeBSD model, and quotas on Mac Server work just as they do on FreeBSD. Setting quotas is both volume-specific and user-specific. That is, you set quotas per-user, per-volume. And it's not the most user-friendly process.

The first thing to do is to set a volume up with quotas. A volume that has quotas enabled will have these four files at its root:

.quota.user (data file containing user quotas)
.quota.group (data file containing group quotas)
.quota.ops.user (mount option file used to enable user quotas)
.quota.ops.group (mount option file used to enable group quotas)

To create these files you need to run a few commands. The first one runs the quotacheck command under ktrace, which traces the quota file paths quotacheck looks for and records them to the ktrace.out file:

sudo ktrace quotacheck -a

We can check our ktrace output, to make sure we have some useful data, with the following command:

sudo kdump -f ktrace.out | fgrep .quota

Which should produce output akin to:

7870 quotacheck NAMI  "//.quota.ops.group"
7870 quotacheck NAMI  "//.quota.ops.user"
7870 quotacheck NAMI  "/Volumes/Windows/.quota.ops.group"
7870 quotacheck NAMI  "/Volumes/Windows/.quota.ops.user"

The next command takes the output of the ktrace file and uses it, through some shell voodoo, to create our needed mount option (.quota.ops) files on all mounted volumes:

sudo kdump -f ktrace.out | fgrep quota.ops | perl -nle 'print /"([^"]+)"/' | xargs sudo touch

Lastly, we need to generate the binary data files (.quota.user, .quota.group) on all our volumes, thusly:

sudo quotacheck -a

Or we can be more selective about which volumes to enable quotas on by specifying:

sudo quotacheck /Volumes/Windows

(NOTE: Be sure to leave the trailing slash OFF the end of the file path in this command, lest you get an error message.)

If we do an ls -a on our Windows partition, we should now see the above-mentioned files listed. Any volume that lacks these files will not have quotas enforced on it. Any volume with these files will have quotas enforced once the quota system has been activated.
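To verify:

ls -a /Volumes/Windows

The listing should include all four files: .quota.user, .quota.group, .quota.ops.user and .quota.ops.group.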

So the second step in this process is to simply turn the quota system on -- to make the OS aware that it should be monitoring and enforcing quotas. To do this we use a command called quotaon. (Conversely, to turn off quotas, we use the command quotaoff.) This command:

sudo quotaon -a

will explicitly tell the system to begin monitoring quotas for all quota-enabled volumes. Again, to be more specific about which volumes should have quotas turned on, use:

sudo quotaon /Volumes/Windows

(Again, mind the trailing slash.)

This setting should be retained after reboot.

Now that we have a volume set up with quotas, we need to set the limits for each and, in our case, every user. Before we can set a user quota, however, the user in question must have a presence on the volume in question -- that is, must have a file or folder of some sort on the quota-enabled volume. So let's create a folder for systemsboy and set the permissions:

sudo mkdir /Volumes/Windows/systemsboy; sudo chown systemsboy:systemsboy /Volumes/Windows/systemsboy

Next we must set systemsboy's user quotas. This requires editing the .quota.user and .quota.group files. Since the files are binary data files, and not flat text files, editing them requires the use of the edquota command. The edquota command will format and open the files in the default shell editor, which is vi. If you're not too familiar with vi, you may want to specify a different editor using the env command. To edit the user systemsboy's quota in pico, for instance, use this command:

sudo env EDITOR=pico edquota systemsboy

You should see something like this in your editor:

Quotas for user systemsboy:
/: 1K blocks in use: 0, limits (soft = 0, hard = 0)
        inodes in use: 0, limits (soft = 0, hard = 0)

The first line in the file specifies the user whose quotas are being set. The second line is where you specify the disk space allotted to the user. The hard quota is the actual quota -- the maximum amount of disk space the user may consume. The soft quota sets a threshold above which users receive warnings that they are nearing their limit. This is how it would work in an all-UNIX world. Since our server is providing quotas for Windows, this warning system is effectively useless, so we don't really need to worry much about the soft quota, but for consistency's sake, we'll put it in. The third line specifies the maximum number of files the user can own on the volume. We're going to set all our users to have a quota of 100 MB, with a soft quota of 75 MB. Since the values are given in 1K blocks, that works out to a hard limit of 100000 and a soft limit of 75000. We don't really need a limit on the number of files the user can have, so we'll leave those numbers alone. Leaving any value at 0 means that no quota is enforced. So here's our modified quota file for systemsboy:

Quotas for user systemsboy:
/: 1K blocks in use: 0, limits (soft = 75000, hard = 100000)
        inodes in use: 0, limits (soft = 0, hard = 0)
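Once you've saved your changes, one way to double-check that the limits took is the quota command, like so:

sudo quota -v systemsboy

This should report systemsboy's current usage against the soft and hard limits we just set.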

There's one last step we need to worry about, and that's how to set these values for every user. Obviously, with a user base of 200+, it would be prohibitively time-consuming to set quotas for each and every user with the above method. Fortunately, edquota provides a method for propagating the quotas of one user to other users. The syntax looks something like this:

sudo edquota -p systemsboy regina

where systemsboy is our "prototypical" user from which we are propagating quotas to the user regina. To modify all our Mac Server users, we'll need a list of all users in a text file, and of course, a directory for each user on our Windows volume. We can generate the user list by querying the LDAP database on our Mac Server, and outputting the response to a text file, like so:

dscl localhost -list /LDAPv3/127.0.0.1/Users > ~/Desktop/AllUsers.txt

A simple for loop that reads this file can be used to create the users' directories and propagate quotas to all our users en masse. This should do the trick:

for user in `cat ~/Desktop/AllUsers.txt`; do sudo mkdir /Volumes/Windows/$user; sudo chown $user:$user /Volumes/Windows/$user; sudo edquota -p systemsboy $user; done

Or we can combine these two steps and forego the text file altogether:

for user in `dscl localhost -list /LDAPv3/127.0.0.1/Users`; do sudo mkdir /Volumes/Windows/$user; sudo chown $user:$user /Volumes/Windows/$user; sudo edquota -p systemsboy $user; done

Done at Last
I warned you it wouldn't be pretty. But that's it. You should now have a volume (called "Windows" in this case) with quotas enabled. Every user in the LDAP database will have a 100 MB quota when accessing the Windows volume.
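If you want to eyeball the results across the whole user base, the repquota command will summarize usage and limits for everyone on a quota-enabled volume:

sudo repquota /Volumes/Windows

Each user should show the 75000/100000 block limits we propagated.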

To back out of this process at any time, simply run the command:

sudo quotaoff -a

This will turn off the quota system for all volumes. It will also reset any user quota values, so if you want to turn it back on, you'll have to recreate quotas for a user and propagate those quotas to other users. If you're sure you want to get rid of all this quota junk once and for all, you can run the quotaoff command and then also remove all the volume-level quota files:

sudo quotaoff -a
sudo rm -rf /.quota*
sudo rm -rf /Volumes/RepeatForAnyOtherVolumes/.quota*

Doing this takes you back to a stock, quota-free setup, and will require you to start from scratch if you want to reimplement quotas.

Final Notes
There are a few things I'd like to note regarding the Windows client interaction with this quota system. For one, the Windows client is not really aware of the quotas on the Mac Server. And since Windows copies everything to the local machine at login, Windows does not become aware of any quota violations until logout, when it tries to copy all the data back to the server and hits the user's limit. At this point, Windows will provide an error message stating that it cannot copy the files back to the server, and that settings will not be saved. In the event that this happens, the user's settings and documents will remain available on the particular workstation in question, and all he should have to do to rectify the problem is log back in to that machine, remove enough files to stay under his quota on the server, and then log back out. Still, I can already see this being a problem for less network-savvy users, so it's something to be mindful of, and perhaps solve in the future.

A couple other, more general things I'd like to note: There's a nice free utility called webmin which can be used to set quotas for volumes and users via a simple web interface. I tried using webmin, and for setting the volume quotas it was pretty good, but I could not for the life of me figure out how to get it to propagate quotas to multiple users. If you feel a bit put off by some of the above commands, you might try fiddling with webmin, though by the time you've installed webmin, you could've just as easily done things via the command-line, so I don't really recommend it for this purpose. It's a little more user-friendly, but in the end you'll definitely have to get your hands dirty in the shell.

Also, as one of the purposes of unifying our authentication servers is to ultimately simplify user creation, all this is leading to one big-ass user creation script. I have a Mac-only version of this from the old days that should be easy to repurpose for our new Windows user additions. The hardest part of modifying this script will be setting user quotas (though with the edquota -p option, it shouldn't be too bad). Having an understanding of the UNIX side of quota creation, though, will be essential in creating this script, and I for one am glad I took the time to learn it, rather than using the webmin GUI. I really like webmin, and would recommend it for some stuff, but I really like scripting too, and usually prefer a well written script to a web interface. Sure it might be "harder" to write a script. But the tradeoff is speed and the ability to affect a number of things -- be they users, files or folders -- en masse. With webmin, I could create new users, but I could never use it to, for instance, propagate Mac and Windows skel accounts. That would be a separate step. With the proper script, I should be able to do complete Mac/Linux/Windows user creation from one Terminal window, any time, any place. So, again, that's the direction I'm headed in and recommend.

Finally, most of the information regarding quotas on Mac OSX was gleaned from this article:
http://sial.org/howto/osx/quota/

Big thanks to Jeremy Mate, whoever you may be, for the generally clear explanation of setting up quotas on the Mac. I'd recommend folks stop by his site for other Mac command-line and admin hints and documentation as well. There looks to be a lot of good and relatively obscure information there:
http://sial.org/

Three Platforms, One Server Part 1: Planning, Testing, Building

Background
As I've said probably a thousand-billion times, I work in a very multi-platform environment. Our lab consists of Mac, Windows and Linux workstations, all of which rely, in some form or another, on a centralized home account server. Currently, the way this works is problematic in several ways: Each platform -- Mac, Windows, Linux -- authenticates to a different server, and then mounts the home account server via one of two methods, either NFS or SMB. The main problems arise when we want to create new users, or when a user wants to change his or her password. When creating a user, we must do so on each platform's respective server -- i.e. on the Mac server, the Windows server and the NIS server which hosts Linux accounts. The user's short name, UID, GID and password must match across all three servers. Then, if a user later wants to change his password, he must do so on each platform separately or the passwords will not match. There is also an inconsistency in the way the actual home account location appears and is accessed on the various platforms. On Mac and Linux, the home account directories are mounted at /home, but on Windows, the home account is mounted at a location that is separate from the users' roaming profile, adding another layer of confusion for any user who might use both Windows and any other platform.
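(Incidentally, a quick way to spot-check that the numbers line up for a given user on the *NIX side is the id command. The output here is hypothetical:

id systemsboy
uid=1001(systemsboy) gid=1001(systemsboy) groups=1001(systemsboy)

Run on a Mac and on a Linux box, the uid and gid values for the same user should be identical.)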

These problems make user creation painful and error-prone, and discourage regular password changes for the community. Also, educating the community about this system of servers is next to impossible. It's a bad, old system that's long overdue for an overhaul. And that's just what I intend to do.

The Plan
The plan for this overhaul is to migrate all user accounts to one server which will store all user information -- UID, GID, password, name, everything -- and which will also share out the home account server in as consistent a manner as possible. I've chosen Mac OSX Server for this job because its Open Directory system provides both LDAP for Mac and Linux and NT-style domain (PDC) services for Windows, and because it's capable of re-sharing an NFS mount out to each platform in a relatively consistent manner appropriate to said platform. There are three major hurdles to overcome in doing this.

The first problem to overcome is building and setting up the new server to host Windows accounts in an acceptable manner. Windows is the real wildcard here. The problem child. Setting up Linux clients is as simple as flipping a switch. Set the Linux client to get authentication info from the Mac Server and you're done. Linux sees the LDAP data just like Mac does and can mount our NFS server directly in /home. No problem. Standard *NIX stuff. Same as it is on Mac. Windows, on the other hand, understands neither LDAP nor NFS natively. So we need to make special arrangements on our Mac Server to accommodate it.
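(To give you an idea of how simple the Linux side is, the switch-flipping typically boils down to a couple of config file entries. The host and base values below are placeholders for your own server, and your distribution may handle this through a GUI tool instead:

# /etc/ldap.conf
host 192.168.1.2
base dc=example,dc=edu

# /etc/nsswitch.conf
passwd: files ldap
group: files ldap
shadow: files ldap

That's really all "flipping the switch" means here.)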

The second hurdle is getting users migrated from all three platforms over to the Mac Server. Here, two complications arise. The first is that my existing Mac Server has never hosted Windows accounts, and therefore, is not capable of doing so using the existing database. So for all intents and purposes, the user database needs to be rebuilt from scratch, which brings us to the second complication: We need to reset everyone's password. What do we reset the passwords to?

The final major hurdle we face is that of redundancy. Since we're now moving all users to a single database, every system is now dependent upon a single server. A single point of failure. We suddenly find all our eggs in one basket, and this is potentially dangerous. If that server goes down, no user can log in and work, and we're dead in the water. So we need to figure out a method for keeping a live backup of our server that can take over if it goes down.

These are the major hurdles, and there are some minor ones along the way, which we'll talk about as we get to them. So let's start building that server. Our authentication server. The mother of all servers.

Building the Server
I started fresh with a Tiger server. Nice clean install. I set it up to host Mac accounts, as usual, using the standard LDAP stuff that's built-in, as I always do. Master server, and all of that. But this time I've also set the server up as a Primary Domain Controller (PDC) so that it can also host Windows accounts. Getting the server to host Windows was not particularly difficult, but getting it to host them on a re-shared NFS mount, and having the Windows roaming profiles live on that NFS share, was a bit more tricky. To do this, I first mounted our NFS export on the Mac Server at /home. I then set /home/username to be the location for our Windows roaming profiles. This should theoretically work. But the Windows client wants to copy everything in the roaming profile directory to the local computer at login, and then copy it all back at logout. This is a problem for any user who might have more than a few megabytes of data in his home account, as the time it takes to do the copy is painfully long. Also, Windows ran into permissions problems when it hit the Spotlight directories, and errored out.

So I've created a special directory in the home accounts, called Windows, that only holds the roaming profile data, and set the server to have Windows clients look here for said data. This initially failed to work until I realized that roaming profile data must actually exist for Windows clients to understand what's going on. Once I copied some basic roaming profile data (skel data, essentially) to the user's Windows folder, the Windows clients were able to log in using this folder for the roaming profile. I'm almost surprised that this worked, as we're essentially re-sharing an NFS mount out as SMB from a Mac Server to Windows clients. But work it did, and so far, at least, I've encountered no problems here. Windows users will now use the same home account location as Mac and Linux users. This is extremely cool.

So, on the Mac server we've mounted our home account RAID at /home, and in the Windows section of the user account, we have a roaming profile path that looks something like this:
\\ServerName\home\UserName\Windows

And in that Windows directory we have some basic Windows skel data. And that's pretty much all we need to host our Windows accounts.

Migrating Users
Next we need a plan. How will users access this new system? We'll have to reset their passwords. To what? And how do we inform them of the change? And how do we actually migrate the users from the old system to the new?

To answer the last question first, I decided that I wanted to start fresh. No actual user migration would occur. I want to build the new user database from scratch, since we'll be needing to reset all passwords anyway, and just in case there are any inconsistencies with our old server's database. Which brings us to the how of this, and that will answer the question of password resets as well. What I decided to do was to create a record descriptor -- a text file that can be imported into the Open Directory database on the Mac Server -- using easily obtained and formatted data. All I really need to do this is a list of users, their long names, their short names, their UIDs and GIDs, and their passwords. Since we're resetting their passwords, we can actually use whatever we want for this value. But it makes the most sense to use, for the reset passwords, something that is already in the possession of the users. This way, disseminating passwords is far less of a chore. Each user is given an ID card by the school when they start. This ID card has a number on it which we can use for the new passwords. Building this record descriptor is a fairly simple matter of obtaining the aforementioned data and tweaking it a bit. This can be done using a combination of a basic text editor, like TextEdit, and something like Excel for properly formatting the data. I will leave the details of this process for another exercise. Suffice it to say, we have a clean slate upon which to build our user database, and a good method for reassigning passwords. So this step is effectively done.
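For the curious, a record descriptor is just a delimited text file whose first line describes the delimiters, the record type, and the attributes that follow. A rough sketch -- the attribute selection and values here are purely illustrative, not our actual import file -- might look something like this:

0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Users 5 dsAttrTypeStandard:RecordName dsAttrTypeStandard:RealName dsAttrTypeStandard:UniqueID dsAttrTypeStandard:PrimaryGroupID dsAttrTypeStandard:Password
systemsboy:Systems Boy:1001:1025:idcardnumber

The header line says: records end with a newline (0x0A), backslash (0x5C) is the escape character, colons (0x3A) separate fields, commas (0x2C) separate multiple values, and each Users record carries the five attributes in the order listed.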

When it comes time to implement the new system, users will be informed of the password change via email, and signs posted throughout the facility. And if they want to change their passwords after that, it will be as simple as logging onto a Mac and changing the password in the Accounts System Preference pane.

Redundancy
Finally, we have the problem of redundancy, and this is fairly crucial. Having all user data on a single machine is far more convenient, but also has certain risks associated with it. First and foremost among these is the idea of a single point of failure. That is, if our user account server -- our Mac Server, which in our model is soon to be responsible for user authentication for every workstation in the lab -- fails, no user can log in, and therefore, no user can work. Classes cannot happen. We're screwed. Hard. Ideally, we'd have a second server -- a replica, if you will -- of our primary Mac Server. If our primary went down, the replica would automatically step in, and take over the task of authenticating users until the primary is running again. Well, such a thing is indeed quite possible, and is also built in to Mac OSX Server. It will be crucial for us to set this up and make sure it works before we implement our plan.

What's Next
So, this is where we are at this point. I've built a model Mac Server. I've begun compiling the user data into a record descriptor (actually, all students have been added, but I need faculty and staff data to complete the job). I've tested the behavior on test Linux and Windows machines. The next steps will be: more thorough testing of the above, setting up and testing the server replica behavior, and, finally, alerting users to and implementing the new system. These will be covered in upcoming articles. If you're interested in such things, do check back. I hope to have all this completed by the end of the Christmas break. But then, you never know what sort of problems will crop up in a move like this, so we'll see.

Either way, I'll be documenting my experiences here. More soon to come.

Automounting NFS Home Accounts

In the lab where I work, we have networked home accounts for all our Mac users. These accounts live on an NFS RAID on another, non-Apple machine. This, as they say, "took some doin'," but we've had it working very reliably for some time now. It's a neat process, and one I'm rather proud of figuring out. So I thought I'd write a quick (yeah, right!) explanation of what we do.*

General Overview
Generally speaking, in our setup, three things need to happen:
1. The client must be set up to bind to the MacServer with the Directory Access application.
2. The client must automount the NFS RAID at startup so that home accounts are available for the user.
3. The MacServer must authenticate the user and specify where her home account is mounted.

On The NFS Server
I do not administer our NFS RAID. I am not an NFS expert, but I can tell you what I do know:
1. For our purposes, the entire directory containing the user accounts must be exported.
2. Root, I believe, should be mapped to root. It is crucial that the client system have root access to the NFS export.
3. Most typical NFS setups should work without a great deal of tweaking, but, if I remember correctly (it's been a while since we set this up), that last root thing is a deal breaker. An example export line is sketched just below.
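By way of illustration only -- our RAID isn't mine to document -- an export line on a Linux-based NFS server that satisfies these requirements might look something like this, with the path and network as placeholders:

/Users/Home 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)

The no_root_squash option is the part that maps root to root.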

On The Client
The client needs a couple things done to it:
1. The client must be bound to a properly configured MacServer (see below), using the Directory Access application. Most folks who've set up networked home accounts know how to do this. If you don't, read the manual. It's not hard.
2. The client needs to mount the NFS share, preferably at each startup. And this is where the fun begins. Our goal here will be to create a custom StartupItem that automounts our NFS export at each boot.**

For purposes of this example, we'll call our local mount point /home, and our NFS export we'll say lives at the IP address 192.168.1.100 in the folder /Users/Home. (If you're following along at home, feel free to substitute your own values for anything provided in these examples.)

To mount our NFS server, we use a command called automount. automount is sweet, and you can do a lot with it, which we'll get to in a minute. For now, a command you may want to use to test your NFS setup before adding startup scripts and whatnot is the mount_nfs command, and it looks something like this:

sudo mount_nfs IPaddress_of_NFSShare:/path/to/share /local_mount_point

Don't forget to create that local mount point directory first:

sudo mkdir /home

So, for this example:

sudo mount_nfs 192.168.1.100:/Users/Home /home

This is a good command to use for temporary mounts of the NFS export. Anything mounted this way will unmount after reboot. Or you can simply use:

sudo umount /home

to unmount the NFS share.

Now let's get into automount. One of the cool things about automount is that it uses maps to call NFS and other shared disks. Once you've established a startup procedure, you can use maps to add, remove, or change your automount setup. This is handy if you're using scripts (which we are) because it means we really shouldn't have to ever change our scripts. Any changes can happen to the maps and are easy to do. The automount map file looks like this:

home rw,net,tcp 192.168.1.100:/Users/Home

The first field specifies the local mount point, the second field specifies NFS options (these work best for us, your mileage may vary, but the rw option is necessary and the net option is recommended), and the third field is the NFS export. Place these values in a space-delimited, plain text file, and call it something you'll remember. For our example we'll call it MyMounts. (Do not use a .txt file suffix on this file. Doing so will break all the examples to come.)

automount syntax is fairly simple, if confusing at times. It looks something like this:

automount -m /mount /path/to/mymounts

where /mount is where the NFS mount will be mounted. The -m flag tells automount to use a map file, the path to which is specified in the second argument to the command. automount then reads the map file, and grafts the home mount point to a symlink inside the directory /mount.*** So, with your NFS server properly configured, and your MyMounts file on your Desktop, if you do this:

sudo automount -m /mount ~/Desktop/MyMounts

you should see your home mount appear in the directory /mount. In Tiger, however, this initial mount point does not show up in the Finder. To see it, you must type "command-shift-g" and enter /mount in the text field. Or you can look in the Terminal with:

ls /mount

You should also see the mount point listed when using the df command in Terminal. If you don't see the /mount directory with home inside, something is wrong, and you need to troubleshoot your NFS setup. If the share is there, move on to the next step.

Once you've got automount properly mounting the NFS export, it's time to create a very simple StartupItem to handle all this at each boot automatically. This is about the simplest StartupItem imaginable. You need three files:
1. A simple shell script
2. Your MyMounts file
3. A StartupParameters.plist file

Put these in a folder, which we'll call MountNFS. If you're following along, you have the MyMounts file already, so that's done. Put it in the folder. Next, let's make the StartupParameters.plist file. This file just specifies a thing or two about how the startup item should run, and what messages it will generate. Copy and paste the following text into a plain text file, call it StartupParameters.plist, and save it to your MountNFS folder:

<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE plist SYSTEM "file://localhost/System/Library/DTDs/PropertyList.dtd"><plist version="0.9"><dict> <key>Description</key> <string>Automount NFS</string> <key>Messages</key> <dict
>      <key>start</key>      <string>Mounting NFS</string>      <key>stop</key>      <string>Mounting NFS</string> </dict> <key>OrderPreference</key> <string>Late</string> <key>Provides</key> <array>      <string>AutomountNFS</string> </array> <key>Requires</key> <array>      <string>NFS</string> </array></dict></plist>

Finally, we need the shell script, which simply looks like this****:

#!/bin/sh

##
# Automount NFS Export
##

. /etc/rc.common

ConsoleMessage "Automounting NFS"

rm -rf /home
automount -m /mount /Library/StartupItems/MountNFS/MyMounts
ln -s /mount/home /home

Copy this text into a new plain text file, and save the file as MountNFS (it must be the same name as the folder, and it must not end with a .txt or any other file suffix). Make sure it is executable:
chmod 755 ~/Desktop/MountNFS/MountNFS

So, this folder, MountNFS, becomes your actual StartupItem. At this point, you probably want to have this thing run at startup, and the way to do that is to place the MountNFS folder in /Library/StartupItems. (If the StartupItems folder doesn't exist, create it.) You should also set permissions on the MountNFS folder and its contents, since Tiger will complain (and then kindly fix things) if there are any errors. The permissions should be set so that the owner is root, the group is wheel, and (I think) permissions on all files can be 755, so:
sudo chown -R root:wheel /Library/StartupItems/MountNFS
sudo chmod -R 755 /Library/StartupItems/MountNFS

Once NFS and automount are working properly together, and this StartupItem is in place, all you need to do is reboot your client Mac. You should see your NFS share mounted at /home. (If Tiger complains that the permissions are wrong after the first reboot, tell it to "Fix" the problem and reboot again. It's just making sure the permissions are secure, and if all's good, your StartupItem should work ever after.)
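A quick sanity check after that reboot, from the Terminal:

df
ls /home

df should list the NFS export among the mounted filesystems, and ls should show the user directories coming from the server.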

Congratulations! That was the hard part.

On The MacServer
As I said, home accounts for our Mac users live on a RAID which is shared via NFS. Authentication is handled (at present) by a MacServer. Briefly, this is what happens when a user logs in to one of our Macs:
1. When the user types in her username and password, the information is sent to the MacServer, which authenticates the user.
2. The MacServer also specifies where the home account of the user is located on the client machine, in our case, the NFS mount point /home.
3. The client allows the user access to the workstation, and places them in the home directory specified by the MacServer, which, again, is our NFS mount point /home.

This involves a little voodoo on the MacServer. Our MacServer users have their home accounts set in a way slightly different than what is generally done on OSX Server. Usually the home accounts are set to AFP or NFS shares that reside on the MacServer and that get automounted by the client. In this scenario, three fields are populated in the Workgroup Manager's home account settings for any given user. Go to the Home tab for any user, and click the edit button (the one that looks like a pencil) to examine these fields. The first field specifies where on the server the home account lives. The second field specifies the name of the folder for the home account (usually just the user's name). The third field specifies where on the client the home account will mount.

In our setup, there is no AFP or NFS share on the MacServer itself, so the first two fields are irrelevant. The only field we need to concern ourselves with is the third field -- the one that tells the client machine where to find the user's home account. And all we need to put here is the absolute path to the mount point of our NFS share, which, by our example, would be /home/username. (Subsequent users can have their home directories indicated by simply selecting the new home location that gets created after setting this up. The "username" is assumed by Workgroup Manager, and does not need to be added for each user.)
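If you ever want to confirm what the server is actually handing out for a given user, you can read the record straight from the directory with dscl. Assuming our example user systemsboy:

dscl localhost -read /LDAPv3/127.0.0.1/Users/systemsboy NFSHomeDirectory

This should return the /home/systemsboy path we set in that third field.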

That's it. Done.

If you've got all this set up properly, you should be able to reboot your client and log in to your Mac as a networked user whose home account is actually located on an NFS share on another computer. It's what we do, and it works great. And it allows us to centralize our Mac and Linux home account locations. Windows is another story. But we're working on it.

* NOTE: These instructions are for Tiger client authenticating to Panther Server. If details change when we get Tiger Server, I'll post them here.

** There is a simpler, though less elegant way to do all this if you don't feel like creating your own StartupItem. You can edit the existing /System/Library/StartupItems/NFS/NFS script. To do this, add the line:
automount -m /mount_point /path/to/mount_map
at the end of the "Start the automounter" section. This may, however, cause problems in Tiger client as the mount may not show in the Finder. Symlinks can be created here, as they are in our script, to alleviate this problem. The other problem with this is that system updates may overwrite this edit, causing you to redo everything. So I strongly recommend the custom StartupItem method outlined above.

*** Clever readers may notice that this method precludes mounting an export in a top-level directory in /. Unfortunately, using automount, the only way I've gotten it to work is by mounting the share inside the directory specified in the command, so if you want your share at the top level of the file system -- i.e. in / -- you'll have to symlink it. This is what we do. It works fine in Tiger (in fact, in Tiger the initial mount point -- in this case /mount -- doesn't appear in the Finder), but we had problems with this in Panther. In Panther we just used the nested mount point and lived with it.

**** This is the script we use for our Tiger clients. Tiger will not reveal the original mount point specified in the script in the Finder, so we use a symlink to the mount point for our actual home location. This is why you see symlink creation in the script. The first line destroys the symlink before recreating it at boot. If this doesn't happen, a broken link could interfere with the script. And, BTW, the symlink method was unreliable in Panther.

Are We Beta Testers?

Apple's Mac OS X v.10.4 -- or Tiger, as we affectionately call it -- has been out for a few months now. I've been testing the Hell out of it, and I, like so many others, have found a plethora of bugs and minor issues with the new OS. My attempts to solve these problems often land me on a number of The Usual Websites, not the least of which is Apple's own discussion forums. One of the most frequent complaints I hear on these sites is that Apple's initial release of any given OS is a beta version, and that we in the Mac community are all just beta testers for Apple. So, is this true?

I'd say the short answer is yes. I don't think this is entirely a bad thing, however, but I do have some problems with it.

When the latest, greatest version of just about anything -- a cell phone, a computer, a hard drive, what have you -- comes out, there are bound to be some little problems here and there. Operating systems are certainly no exception. In fact, because new operating systems are expected to do so many things and support such a wide variety of activities, as well as be innovative and better under-the-hood, as it were, they are perhaps the products most prone to these initial glitches. I would argue that the only way to properly test such a product is, ultimately, in the field, i.e. release the software to the general public for final bug testing, compile a list of complaints, and then fix those issues, in order of priority, with point releases. Indeed, this is what Apple does. It's also what Microsoft, Adobe, Macromedia, and everyone else does. There's simply no way for Apple to test the immense array of possible problems that even the most basic user is apt to encounter. The range of hardware and software combinations is simply too great. Sure, these bugs make me cranky from time to time. But I'd say, for the most part, Apple's quality control is pretty decent when it comes to OS releases, and their update system is pretty efficient as well. Overall, I'm fairly happy with the way Apple handles their OS releases.

There is one area, however, in which I feel Apple drops the ball. This is in the realm of the server OS. Since OS X arrived, Apple has been capable of producing simply amazing server operating systems that perform astounding feats and that work beautifully with their client counterparts. Mac OS X Server has made a quantum leap from the AppleShare Server days. Its Open Source UNIX underpinnings have everything to do with this leap. It has been truly amazing to be a part of. Unfortunately, Apple all too often wants to treat the server community the same way they treat the desktop user -- as beta tester. So when they release a new desktop operating system, the server version is released very shortly thereafter, and suffers many of the same bugs and glitches -- or at least the same number of them. This is so not cool.

A server admin should not be, nor have to be, a beta tester, ever, and for a number of reasons.

First of all, server software is inherently production software. It has to work. If your server breaks, you're dead in the water. Typically, the first three point-versions of Mac OS X Server have problems so big that I can't use them in production. And building a server is a much bigger, much more delicate task than building a desktop machine, so if it's screwy, I'm out a lot more time and effort building a product that's unusable to me. Because one little problem on a client is no big deal, but one little problem on a server can be a deal breaker. Because of these issues, Apple should test the Hell out of their server software. Permissions problems on a client can be worked around. Permissions problems on a server (documented in Apple's Knowledge Base) are simply unacceptable.

Second of all, quality control testing of server configurations is much easier than testing client configurations: Server software runs on far fewer hardware configurations, and typically has much less software installed. There are only a handful of computers that any reasonable admin would run OS X Server on, and usually no productivity software gets installed on those systems at all. On my server, there are almost no third-party apps installed. So what's so hard about this? Install your own software on your own hardware and test it against your own clients. Done.

Lastly, server software is almost four times more expensive than the client version. Making someone pay $499 for software that just doesn't work and does not include support is unacceptable and I truly resent it. I think it shows a huge level of contempt for the admin/server market. $129? Fine. $499? Fuck that shit.

The problems inherent in desktop OS quality control are almost completely absent from server OS quality control. And servers are difficult to build and maintain, and are mission critical. Yet, from what I can tell, Apple treats them the same: same bugs, same number of problems, same point-version release schedule. Same beta bullshit. Personally, I wouldn't mind waiting a bit longer for server software that worked out of the box. I mean, I always end up having to wait for a useable version anyway. And on the desktop, that's understandable and fine. But on the server level, it's just not.

I really like the Red Hat Linux model for OS distribution. Red Hat releases both free and paid software. Red Hat Linux is the paid version, and it comes in numerous configurations, each available for a different price. Red Hat is always a version or two behind the current build of Linux, though. Know why? Because the previous builds are the stable ones. That's right, Red Hat's paid product is the older, but stable OS. If you want the latest, greatest build, you can get it under the Fedora moniker. It's not guaranteed to be stable, and comes with no support, but it's 100% free. And, I believe, all the userland beta testing that's done in Fedora makes it into subsequent Red Hat releases.

This is a great model. Unfortunately, Apple being a corporation with trade secrets and a bottom line, it's one they can't really use. But I think they could borrow one element of the Red Hat scheme: When Apple releases a new server OS, they should make sure it's stable, even if it means making the customer wait a bit longer, or providing a build that's slightly older.

'Cause frankly, I want to build servers, not beta test them.