Mac OS X Server 10.4.8 Breaks Windows Quotas

It's great to finally have something systems-related to post about amidst the endless bureaucracy that fills my days lately. Of course that means that — yup, you guessed it — something broke. But hey, that's what it's all about. Well, that and the fixing of said brokenness, of course.

So we recently discovered that our Windows clients were suddenly, and without explanation, able to greatly exceed their roaming profile quotas. In fact, looking at the roaming profile drive showed users with upwards of 25 GBs in their roaming profiles, which have quota limits of 50 MB. Not only that, but further testing revealed that Windows client machines wouldn't even complain if they went over quota. Any SMB connection to the roaming profile drive could exceed the quota limit without so much as a complaint from server or client. AFP worked. UNIX worked. But quotas were ignored over SMB. What the fuck?

For three days I've been trying to track this problem down, testing all sorts of quota scenarios and SMB configurations in between meetings and meetings and more meetings. Eventually, when I can't make headway on a problem, I start thinking it might just be a bug. So I started poking around in the Apple Discussions, and I found one and only one complaint of a similar nature: 10.4.8 Server with broken quotas on Windows. Had I recently done a system update that perhaps broke quotas?

So I started thinking about what in a system update could break such a thing. How do quotas work? There is no daemon. A colleague suggested that they were part of the kernel. Had I done anything that would have replaced the kernel in the last month or two?

The answer was yes. Over the winter break I had decided to update the server to version 10.4.8. Upon realizing this I began to strongly suspect that Mac OS X Server 10.4.8 contained a bug that broke quotas over SMB. Fortunately, as is often my practice, I'd made a clone of my 10.4.7 server to a portable firewire drive before upgrading. Testing my theory would be a simple matter of booting off the clone.

Sure enough, after booting from the clone, quotas began behaving properly on Windows clients again. Because I had the clone, reverting the 10.4.8 server back to 10.4.7 was a simple matter of cloning the contents of the firewire to the server's internal drive and rebooting. Voilà! Problem solved!

From now on I think I'll hold off on server updates unless I really, really need them. When it comes to servers, I think the old adage is best: If it ain't broke, don't fix it.

Backing Up with RsyncX

In an earlier post I talked generally about my backup procedure for large amounts of data. In the post I discussed using RsyncX to back up staff Work drives over a network, as well as my own personal Work drive data, to a spare hard drive. Today I'd like to get a bit more specific.

Installing RsyncX

I do not use, nor do I recommend, the version of rsync that ships with Mac OS X 10.4. I've found it, in my own personal tests, to be extremely unreliable, and unreliability is the last thing you want in a backup program. Instead I use — and have been using without issue for years now — RsyncX. RsyncX is a GUI wrapper for a custom-built version of the rsync command that's made to properly deal with HFS+ resource forks. So the first thing you need to do is get RsyncX, which you can do here. To install RsyncX, simply run the installer. This will place the resource-fork-aware version of rsync in /usr/local/bin/. If all you want to do is run rsync from the RsyncX GUI, then you're done, but if you want to run it non-interactively from the command line — which ultimately we do — you should put the newly installed rsync command in the standard location, which is /usr/bin/.¹ Before you do this, it's always a good idea to make a backup of the OS X version. So:

sudo cp /usr/bin/rsync /usr/bin/rsync-ORIG

sudo cp /usr/local/bin/rsync /usr/bin/rsync

Ah! Much better! Okay. We're ready to roll with local backups.²

Local Backups

Creating local backups with rsync is pretty straightforward. The RsyncX version of the command acts almost exactly like the standard *NIX version, except that it has an option to preserve HFS+ resource forks. This option must be provided if you're interested in preserving said resource forks. Let's take a look at a simple rsync command:

/usr/bin/rsync -a -vv /Volumes/Work/ /Volumes/Backup --eahfs

This command will back up the contents of the Work volume to another volume called Backup. The -a flag stands for "archive" and will simply back up everything that's changed while leaving files that may have been deleted from the source. It's usually what you want. The -vv flag specifies "verbosity" and will print what rsync is doing to standard output. The level of verbosity is variable, so "-v" will give you only basic information, while "-vvvv" will give you everything it can. I like "-vv." That's just the right amount of info for me. The next two entries are the source and target directories, Work and Backup. The --eahfs flag is used to tell rsync that you want to preserve resource forks. It only exists in the RsyncX version. Finally, pay close attention to the trailing slash in your source and target paths. The source path contains a trailing slash — meaning we want the command to act on the drive's contents, not the drive itself — whereas the target path contains no trailing slash. Without the trailing slash on the source, a folder called "Work" will be created inside the Backup drive. This trailing slash behavior is standard in *NIX, but it's important to be aware of when writing rsync commands.
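
To make the trailing-slash distinction concrete, compare these two forms (same volumes as above). The first syncs the contents of Work directly into Backup; the second creates a folder called "Work" inside Backup and syncs into that:

/usr/bin/rsync -a -vv /Volumes/Work/ /Volumes/Backup --eahfs

/usr/bin/rsync -a -vv /Volumes/Work /Volumes/Backup --eahfs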

That's pretty much it for simple local backups. There are numerous other options to choose from, and you can find out about them by reading the rsync man page.

Network Backups

One of the great things about rsync is its ability to perform operations over a network. This is a big reason I use it at work to back up staff machines. The rsync command can perform network backups over a variety of protocols, most notably SSH. It also can reduce the network traffic these backups require by only copying the changes to files, rather than whole changed files, as well as using compression for network data transfers.

The version of rsync used by the host machine and the client machine must match exactly. So before we proceed, copy rsync to its default location on your client machine. You may want to back up the Mac OS X version on your client as well. If you have root on both machines you can do this remotely on the command line:

ssh -t root@mac01.systemsboy.com 'cp /usr/bin/rsync /usr/bin/rsync-ORIG'

scp /usr/bin/rsync root@mac01.systemsboy.com:/usr/bin/
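
If you want to double-check that the two machines really do agree before trusting a backup to them, rsync will happily report its own version (the hostname here is just my example client, as before):

rsync --version

ssh root@mac01.systemsboy.com 'rsync --version'

The first line of output from each should match.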

Backing up over the network isn't much different from, or much harder than, backing up locally. There are just a few more flags you need to supply. But the basic idea is the same. Here's an example:

/usr/bin/rsync -az -vv -e ssh mac01.systemsboy.com:/Volumes/Work/ /Volumes/Backups/mac01 --eahfs

This is pretty similar to our local command. The -a flag is still there, and we've added the -z flag as well, which tells rsync to compress the data in transit (to ease network traffic). We now also have an -e flag, which tells rsync which remote shell to use for the network connection, in this case ssh. Next we have the source, as usual, but this time our source is a computer on our network, which we specify just like we would with any SSH connection — hostname:/Path/To/Volume. Finally, we have the --eahfs flag for preserving resource forks. The easiest thing to do here is to run this as root (either directly or with sudo), which will allow you to sync data owned by users other than yourself.

Unattended Network Backups

Running backups over the network can also be completely automated and can run transparently in the background, even on systems where no user is logged in to the Mac OS X GUI. Doing this over SSH, of course, requires an SSH connection that does not interactively prompt for a password. This can be accomplished by establishing authorized key pairs between host and client. The best resource I've found for learning how to do this is Mike Bombich's page on the subject. He does a better job explaining it than I ever could, so I'll just direct you there for setting up SSH authentication keys. Incidentally, that article is written with rsync in mind, so there are lots of good rsync resources there as well. Go read it now, if you haven't already. Then come back here and I'll tell you what I do.
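
For reference, the broad strokes of the key setup, as I understand them, look something like this (run as root on the server; the hostname is just my example client, and when ssh-keygen asks for a passphrase you leave it empty so the connection can happen unattended — Bombich's article remains the authoritative reference):

ssh-keygen -t dsa

cat ~/.ssh/id_dsa.pub | ssh root@mac01.systemsboy.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'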

I'd like to note, at this point, that enabling SSH authentication keys, root accounts and unattended SSH access is a minor security risk. Bombich discusses this on his page to some extent, and I want to reiterate it here. Suffice to say, I would only use this procedure on a trusted, firewalled (or at least NATed) network. Please bear this in mind if you proceed with the following steps. If you're uncomfortable with any of this, or don't fully understand the implications, skip it and stick with local backups, or just run rsync over the network by hand and provide passwords as needed. But this is what I do on our network. It works, and it's not terribly insecure.

Okay, once you have authentication keys set up, you should be able to log into your client machine from your server, as root, without being prompted for a password. If you can't, reread the Bombich article and try again until you get it working. Otherwise, unattended backups will fail. Got it? Great!
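
Incidentally, the quickest test is just to run a harmless remote command and see whether you get a password prompt (hostname is, as always, just my example client):

ssh root@mac01.systemsboy.com 'echo it works'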

I enable the root account on both the host and client systems, which can be done with the NetInfo Manager application in /Applications/Utilities/. I do this because I'm backing up data that is not owned by my admin account, and using root gives me the unfettered access I need. Depending on your situation, this may or may not be necessary. For the following steps, though, it will simplify things immensely if you are root:

su - root

Now, as root, we can run our rsync command, minus the verbosity, since we'll be doing this unattended, and if the keys are set up properly, we should never be prompted for a password:

/usr/bin/rsync -az -e ssh mac01.systemsboy.com:/Volumes/Work/ /Volumes/Backups/mac01 --eahfs

This command can be run either directly from cron on a periodic basis, or it can be placed in a cron-run script. For instance, I have a script that pipes verbose output to a log of all rsync activity for each staff machine I back up. This is handy to check for errors and whatnot, every so often, or if there's ever a problem. Also, my rsync commands are getting a bit unwieldy (as they tend to do) for direct inclusion in a crontab, so having the scripts keeps my crontab clean and readable. Here's a variant, for instance, that directs the output of rsync to a text file, and that uses an exclude flag to prevent certain folders from being backed up:

/usr/bin/rsync -az -vv -e ssh --exclude "Archive" mac01.systemsboy.com:/Volumes/Work/ /Volumes/Backups/mac01 --eahfs > ~/Log/mac01-backup-log.txt

This exclusion flag will prevent backup of anything called "Archive" on the top level of mac01's Work drive. Exclusion in rsync is relative to the source directory being synced. For instance, if I wanted to exclude a folder called "Do Not Backup" inside the "Archive" folder on mac01's Work drive, my rsync command would look like this:

/usr/bin/rsync -az -vv -e ssh --exclude "Archive/Do Not Backup" mac01.systemsboy.com:/Volumes/Work/ /Volumes/Backups/mac01 --eahfs > ~/Log/mac01-backup-log.txt

Mirroring

The above uses of rsync, as I mentioned before, will not delete files from the target that have been deleted from the source. They will only propagate changes that have occurred to the existing files, but will leave deleted files alone. They are semi-non-destructive in this way, and this is often useful and desirable. Eventually, though, rsync backups will begin to consume a great deal of space, and after a while you may begin to run out. My solution to this is to periodically mirror my sources and targets, which can be easily accomplished with the --delete option. This option will delete any file from the target not found on the source. It does this after all other syncing is complete, so it's fairly safe to use, but it will require enough drive space to do a full sync before it does its thing. Here's our network command from above, only this time using the --delete flag:

/usr/bin/rsync -az -vv -e ssh --exclude "Archive/Do Not Backup" mac01.systemsboy.com:/Volumes/Work/ /Volumes/Backups/mac01 --delete --eahfs > ~/Log/mac01-backup-log.txt

Typically, I run the straight rsync command every other day or so (though I could probably get away with running it daily). I create the mirror at the end of each month to clear space. I back up about a half dozen machines this way, all from two simple shell scripts (one for the routine syncs, one for the monthly mirroring) called by cron.
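
Just to give a flavor of the scheduling, a couple of hypothetical crontab entries might look like this (the script names and times are made up for illustration, not my actual setup; the scripts themselves just contain rsync commands like the ones shown above, one per machine):

# Routine sync every other day at 3:30 AM
30 3 */2 * * /usr/local/bin/rsync-backups.sh
# Mirror (with --delete) on the first of each month at 4:00 AM
0 4 1 * * /usr/local/bin/rsync-mirror.sh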

Conclusion

I realize that this is not a perfect backup solution. But it's pretty good for our needs, given what we can afford. And it hasn't failed me yet in four years. That's not a bad track record. Ideally, we'd have more drives and we'd stagger backups in such a way that we always had at least a few days of backups available for retrieval. We'd also probably have some sort of backup to a more archival medium, like tape, for more permanent or semi-permanent backups. We'd also probably keep a copy of all this in some offsite, fireproof lock box. I know, I know. But we don't. And we won't. And thank god, 'cause what a pain in the ass that must be. It'd be a full-time job all its own, and not a very fun one. What this solution does offer is a cheap, decent, short-term backup procedure for emergency recovery of catastrophic data loss. Hard drive fails? No trouble. We've got you covered.

Hopefully, though, this all becomes a thing of the past when Leopard's Time Machine debuts. Won't that be the shit?

1. According to the RsyncX documentation, you should not need to do this, because the RsyncX installer changes the command path to its custom location. But if you'll be running the command over the network or as root, you'll either have to change that command path for the root account and on every client, or network backups will fail. It's much easier to simply put the modified version in the default location on each machine.

2. Updates to Mac OS X will almost always overwrite this custom version of rsync. So it's important to remember to replace it whenever you update the system software.

Software Licensing and Registration Hell

Is it just me, or has software licensing and registration in some quarters become a total nightmare? Case in point: Today I'm trying to install Autodesk's Combustion 4, a fine compositing and effects program, somewhat akin to Adobe's AfterEffects. From a SysAdmin's standpoint, however, the two are night and day. To activate AfterEffects, one simply need install it and enter the serial number provided with the software bundle. That's it. It's done. Ready to use. Moreover, this one serial number is valid for the number of machines specified by our license agreement with the company. Adobe trusts me to install the software only on the number of machines agreed upon by our license contract, or to otherwise monitor licenses on our network. The software does not perform network license checks to see if we've exceeded our licenses. Adobe leaves that to me. That's my job, after all, and I get in big trouble if I fail to do that job. In this scenario, the onus of license enforcement falls to me. Adobe trusts me to do my job, and in return, installing software is fairly straightforward. It's a good deal.


Autodesk Combustion Splash Screen

Other software manufacturers, however, are not so trusting and approach software license management via far more convoluted and Byzantine methods. Autodesk is among these companies. To install Combustion, not only do I have to provide a serial number, I also need to provide an authorization number. This authorization number can be obtained by registering each and every copy of Combustion I intend to use. The process of installing just one copy of Combustion goes something like this:

1. Install the software from disk.
2. Launch the application and type in the serial number for this copy.
3. Activate the "Licensing Wizard."


Autodesk Registration Splash Screen: Here We Go...

4. Upon magical transportation to the Autodesk registration page, register the software by entering every personal or professional detail they can think to ask you about, including the serial number of said copy of said software.
5. Check your email, where you should shortly receive the authorization number for your copy of Combustion.
6. Enter this Authorization Code into the special box.
7. You should finally be able to use Combustion.
8. Lather, rinse, repeat for every copy you bought.


Autodesk Registration Panel: Are We Having Fun Yet?

Whew!

Okay, there are a couple of big problems with this scenario. First off, if there's any problem along this insane route, you can get quite stuck. In my case, the registration site did not work. The error page directed me to file a Customer Service Request, but the link to said request was also broken. This means that I will have to register my software either by fax, mail or possibly telephone. It also means I will have to do this myself for each and every copy of Combustion I've purchased. I have five copies. Thank god I don't have more, because it's going to take me a while to install this software. And this is the other problem with this sort of arcane licensing scheme: It doesn't scale. Imagine if I had a hundred machines I wanted to install Combustion on. It's practically infeasible at worst, unbelievably annoying at best. And I have to say, all it makes me want to do is buy any product other than Combustion, just to avoid the install hassles.


Autodesk Online Registration: Broken!

There are a number of software companies besides Autodesk that use these sorts of tactics: Cycling '74 and Digidesign spring immediately to mind as some of the most heinous offenders, but there are others. They seem to think that the best way to protect their intellectual property is to make their products difficult to install and maintain. The companies that do it right are ones like Apple, Adobe and, shockingly, Microsoft. These companies have volume license schemes and educational versions of their products that are relatively easy for institutions to install and maintain. They recognize the value of getting their software in the hands and minds of young users, and they make it as easy to do so as possible. And I think they recognize the value of happy SysAdmins, too. They know we're the ones who make the recommendations for future purchases, and that maybe — just maybe — it might be a good idea to not make our lives a total living hell.

So, to Autodesk and others like you, here's a little secret: Systems Administrators hate you. We hate your fucking guts. Because, for us, installing your software is ten kinds of torture. There's no reason for you to do this except for a clear contempt on your part towards the SysAdmin community, and most likely your users in general. Or just general stupidity. You're not stopping anyone from stealing your software, and frankly you're keeping those of us who plan to use it legitimately from even wanting to do so. In fact, I'd argue that stealing your software is far easier than installing it legitimately. So all you're really doing is punishing your legitimate users. It's so stupid, and I'm so sick of it, and I'm sure I'm not alone. But more than that, it's just bad business. The next time someone asks me to recommend a piece of compositing software, I'll think of all the hassle I had to go through to install Combustion.

And then I'll recommend AfterEffects.

UPDATE:
In the comments some astute readers have provided a link to a whole site that deals with application installers and requirements that present problems from the standpoint of educational lab administration. The site is more concerned with apps that require user-level authorization files or user-level access to parts of the filesystem that should be protected, while my article deals more with plain-old annoying installers and licensing schemes. Still, it's a great site, and a good complement to this post, and I wanted to link to it directly:
Poorly-Made Applications

Amazingly, some of these practices are holdovers from the OS 9 days, when security and multiple users weren't really issues to Apple or developers (or Lab Admins). It's shocking to me that after all this time a lot of software developers still can't figure out how to properly, securely, or even conveniently install apps in Mac OS X.

Thanks to those who sent in the link.

Networked Home Accounts and The New RAID

We recently installed in our machine room a brand-spankin' new RAID for hosting network home accounts. We bought this RAID as a replacement for our aging and horrendously unreliable Panasas RAID. The Panasas was a disaster for almost the entire three-year span of its lease. It used a proprietary operating system based on some flavor of *NIX (which I can't recall right at this moment), but with all sorts of variations from a typical *NIX install that made using it as a home account server far more difficult than it ever should have been. To be fair, it was never really intended for such a use, but was rather created as a file server cluster for Linux workstations that can be easily managed directly from a web browser, as opposed to the command line. It was built for speed, not stability, and it was really completely the wrong product for us. (And for the record, I had nothing to do with its purchase, in case you're wondering.)

What the Panasas was, however, was instructive. For three years we lived under the shadow of its constant crashing, the near-weekly tcp dumps and help requests to the company, and angry users fed up with a system that occasionally caused them to lose data, and frequently caused their machines to lock up for the duration of a Panasas reboot, which could be up to twenty minutes. It was not fun, but I learned a lot from it, and it enabled me to make some very serious decisions.

My recent promotion to Senior Systems Administrator came just prior to the end of our Panasas lease term. This put me in the position of both purchasing a new home account server, and of deciding the fate of networked home accounts in the lab.

If I'd learned anything from the experience with the Panasas it was this: A home account server must be, above all else, stable. Every computer that relies on centralized storage for home account serving is completely and utterly dependent on that server. If that server goes down, your lab, in essence, goes down. When this starts happening a lot, people begin to lose faith in a lot of things. First and foremost, they lose faith in the server and stop using it, making your big, expensive network RAID a big, expensive waste of money. Secondly, they lose faith in the system you've set up, which makes sense because it doesn't work reliably, and they stop using it, favoring instead whatever contingency plan you've set up for the times when the server goes down. In our case, we set up a local user account for people to log into when the home account server was down. Things got so bad for a while that people began to log in using this local account more than they would their home accounts, thus negating all our efforts at centralizing home account data storage. Lastly, people begin to lose faith in your abilities as a systems administrator and lab manager. Your reputation suffers, and that makes it harder to get things done — even improvements. So, stability. Centralization of a key resource is risky, in that if that resource fails, everything else fails with it. Stability of crucial, centralized storage was key if any kind of network home account scenario was going to work.

The other thing I began to assess was the whole idea of networked home accounts themselves. I don't know how many labs use networked home accounts. I suspect there are quite a few, but there are also probably a lot of labs that don't. I know I've read about a lot of places that prefer local accounts that are not customized and that revert to some default state at every login/logout. Though I personally really like the convenience of customized network home accounts that follow you from computer to computer throughout a facility, it certainly comes with a fair amount of hassle and risk. When it works it's great, but when it doesn't work, it's really bad. So I really began to question the whole idea. Is this something we really needed or wanted to continue to provide?

My ultimate decision was intimately linked to the stability of the home account server. From everything I've seen, networked home accounts can and do work extremely well when the centralized storage on which they reside is stable and reliable. And there is value to this. I talked to people in the lab. By and large, from what I could glean from my very rudimentary and unscientific conversations with users, people really like having network home accounts when they work properly. When given the choice between a generic local account or their personalized network account, even after all the headaches, they still ultimately prefer the networked account. So it behooves us to really try to make it work and work well. And, again, everything I saw told me that what this really required, more than anything else, was a good, solid, robust and reliable home account server.

So, that's what we tried our best to get. The new unit is built and configured by a company called Western Scientific, which was recommended to me by a friend. It's called the Fusion SA. It's a 24-bay storage server running Fedora Core 5 Linux. We've populated 16 of the bays with 500GB drives and configured them at RAID level 5, giving us, when all is said and done, about 7TB of networked storage with room to grow in the additional bays should we ever want to do so. The unit also features a quad-port GigE PCI-X card which we can trunk for speedy network access. It's big and it's fast. But what's most important is its stability.
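
(If the math on that seems fuzzy: RAID 5 gives one drive's worth of capacity over to parity, so 16 drives yields 15 x 500 GB, or about 7.5 TB raw, and after formatting overhead and the usual gigabyte-counting games that lands right around the 7 TB mark.)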

Our new RAID came a little later than we'd hoped, so we weren't able to test it before going live with it. Ideally, we would have gotten the unit mid-summer and tested it in the lab while maintaining our previous system as a fall-back. What happened instead was that we got the unit in about the second week of the semester, and outside circumstances eventually necessitated switching to the new RAID sans testing. It was a little scary. Here we were in the third week of school switching over to a brand new but largely untested home account server. It was at this point in time that I decided, if this thing didn't work — if it wasn't stable and reliable — networked home accounts would become a thing of the past.

So with a little bit of fancy footwork we made the ol' switcheroo, and it went so smoothly our users barely noticed anything had happened. Installing the unit was really a simple matter of getting it in the rack, and then configuring the network settings and the RAID. This was exceptionally quick and easy, thanks in large measure to the fact that Western Scientific configured the OS for us at the factory, and also to the fact that they tested the unit for defects prior to shipping it to us. In fact, our unit was late because they had discovered a flaw in the original unit they had planned to ship. Perfect! If that's the case, I'm glad it was late. This is exactly what we want from a company that provides us with our crucial home account storage. If the server itself was as reliable as the company was diligent, we most likely had a winner on our hands. So, how has it been?

It's been several weeks now, and the new home account server has been up, without fail or issue, the entire time. So far it's been extremely stable (so much so that I almost forget about it, until, of course, I walk past our server room and stop to dreamily look upon its bright blue drive activity lights dutifully flickering away without pause). And if it stays that way, user confidence should return to the lab and to the whole idea of networked home accounts in fairly short order. In fact, it seems like it already has to a great extent. I couldn't be happier. And the users?... Well, they don't even notice the difference. That's the cruel irony of this business: When things break, you never hear the end of it, but when things work properly, you don't hear a peep. You can almost gauge the success or failure of a system by how much you hear about it from users. It's the ultimate in "no news is good news." The quieter the better.

And 'round these parts of late, it's been pin-drop quiet.

Publishing iCal Calendars via Mac OS X Server

So a lot of people are familiar with my articles on publishing iCal calendars to the 'net with box.net. But it turns out that I also have to provide iCal publishing for staff on our internal network, and I do this using a Mac OS X server. Recently, after rebuilding my server, I had some problems with it and had to set it all up again after not having done it in quite some time. It's pretty easy, but there's one snag I got hung up on. We'll get to that in a minute. But first let's run through the steps to set this up.

First and foremost, setting up iCal publishing on Mac OS X Server requires the web server to be running. This can easily be done with the flick of a switch in the Server Admin application. But before we start the service, let's make all our settings. The first thing we need to do is set the root folder for our site. Now in my situation I'm not actually doing any web serving. All I'm doing is calendar serving, and only on our internal network, and that's all I'll describe here. The standard site root folder in Mac OS X Server is /Library/WebServer/Documents. To make things a bit cleaner and easier I'll put all my calendars in a subfolder of this called "calendar," and since that's all I'm serving I'll make that my site root: /Library/WebServer/Documents/calendar. I've given my site a name — systemsboy.com — and manually set the IP address. Otherwise I've left everything alone.

Server Admin: General Web Settings

Next up we want to set some options. WebDAV is, of course, key to all this. Without it the calendars can't be served in a form we like. So turn on WebDAV. I've also left Performance Caching on and turned everything else off. Again, this is just for serving calendars.

Server Admin: Web Options

Finally, we need to set up our "realm," which is a place where WebDAV/calendar users can share files. To do this, first add a realm to the first box on the left there by clicking the big plus sign beneath it. Give the realm a name, and give it the path to the calendar folder we set as our site root. I am just using "Basic" authentication, as this is only going on our internal network and security isn't a big concern in this instance. Once your realm is set up, save the changes and then add some users or a group to the boxes to the right of the realm box. In my setup I added a group called "icalusers," which contains all the users who need to and are permitted to share calendars. I've set the group to be allowed to browse and author. This is necessary for users to read from and publish to the server, respectively. You can do the same with individual users in the upper box. Once you've got that set up, save your changes and start the web service.

Server Admin: Realms

That's pretty much it, except for one crucial thing: permissions. I always seem to forget this, but permissions on the calendar folder must be properly set. Since WebDAV permissions are handled by the web server, the proper way to set this up is to give the user that runs the web server ownership and read/write access to the calendar folder. In most cases that user is called www. It's probably a good idea to give group ownership over to the www group as well. So before this will work you need to run:

sudo chown www:www /Library/WebServer/Documents/calendar

To set the ownership to www, and:

sudo chmod 770 /Library/WebServer/Documents/calendar

To give said user full access to the folder. [Updated to reflect user comments. See note at end of article for details. -systemsboy]
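
Incidentally, if you're already working in the terminal for the permissions step, I believe the web service can also be started from the command line with the serveradmin tool rather than the GUI:

sudo serveradmin start web

The Server Admin button described next does exactly the same thing, so use whichever you prefer.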

Once that's done, just start the web service in the Server Admin application by selecting the "Web" service in the far left-hand column and pressing the big green "Start Service" button in the app's toolbar. You should now be able to publish and subscribe to calendars on your Mac OS X Server from iCal. The publishing URL for my example would look something like this: http://systemsboy.com

And subscribing to the calendar, where "Birthdays" is the calendar name, would look like: http://systemsboy.com/Birthdays.ics
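
If you want to sanity-check things from the command line, a quick curl request for a published calendar should spit back raw .ics data (substitute your own username and an actual published calendar name; you'll be prompted for the password, since we're using Basic authentication):

curl -u someuser http://systemsboy.com/Birthdays.ics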

Simple, right? Yeah, I thought so too. Just watch those permissions! Bites me every time.

NOTE: I had originally reported that permissions for the calendar folder should be set to 777. A couple readers pointed out in the comments section that this is not the case. I have edited this article to reflect their suggestions which are a much better solution than my original one.

Thanks, guys, for pointing that out! Really good to know!