Mac OS X Server 10.4.8 Breaks Windows Quotas

It's great to finally have something systems-related to post about amidst the endless bureaucracy that fills my days lately. Of course that means that — yup, you guessed it — something broke. But hey, that's what it's all about. Well, that and the fixing of said brokenness, of course.

So we recently discovered that our Windows clients were suddenly, and without explanation, able to greatly exceed their roaming profile quotas. In fact, looking at the roaming profile drive showed users with upwards of 25 GB in their roaming profiles, which have quota limits of 50 MB. Not only that, but further testing revealed that Windows client machines wouldn't even complain if they went over quota. Any SMB connection to the roaming profile drive could exceed the quota limit without so much as a complaint from server or client. AFP worked. UNIX worked. But quotas were ignored over SMB. What the fuck?
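
A quick aside for anyone who wants to check quota usage on their own server: the standard BSD quota tools that ship with Mac OS X are handy for this sort of testing. A rough sketch — see the man pages for the full story on your system:

sudo repquota -a

summarizes usage and limits for every quota-enabled volume, and

quota -u spaz

shows a single user's usage against his limits (the username, of course, is just an example).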

For three days I've been trying to track this problem down, testing all sorts of quota scenarios and SMB configurations in between meetings and meetings and more meetings. Eventually, when I can't make headway on a problem, I start thinking it might just be a bug. So I started poking around in the Apple Discussions, and I found one and only one complaint of a similar nature: 10.4.8 Server with broken quotas on Windows. Had I recently done a system update that perhaps broke quotas?

So I started thinking about what in a system update could break such a thing. How do quotas work? There is no daemon. A colleague suggested that they were part of the kernel. Had I done anything that would have replaced the kernel in the last month or two?

The answer was yes. Over the winter break I had decided to update the server to version 10.4.8. Upon realizing this I began to strongly suspect that Mac OS X Server 10.4.8 contained a bug that broke quotas over SMB. Fortunately, as is often my practice, I'd made a clone of my 10.4.7 server to a portable FireWire drive before upgrading. Testing my theory would be a simple matter of booting off the clone.

Sure enough, after booting from the clone, quotas began behaving properly on Windows clients again. Because I had the clone, reverting the 10.4.8 server back to 10.4.7 was a simple matter of cloning the contents of the FireWire drive back onto the server's internal drive and rebooting. Voilà! Problem solved!
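
For those who've never done this sort of restore, Apple's asr tool is one way to handle the cloning step. Something along these lines, though asr's flags have shifted between OS releases, so check man asr and substitute your own volume names — and triple-check which volume is which, because the target gets wiped:

sudo asr -source /Volumes/ServerClone -target /Volumes/ServerHD -erase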

From now on I think I'll hold off on server updates unless I really, really need them. When it comes to servers, I think the old adage is best: If it ain't broke, don't fix it.

Directory Access Via the Command Line

I finally had occasion to learn some incredibly handy command-line tricks I've been wanting to figure out for some time: namely, controlling Directory Access parameters. I'd long hoped for and wondered about a way to do this, and some of my more ingenious readers finally confirmed that there was one, in the comments to a recent article. And now, with initiative and time, I've figured it all out and want to post it here for both your benefit and mine, and for the ages (or at least until Apple decides to change it).

The occasion for learning all this was a wee little problem I had with my Mac OS X clients. For some reason, which I've yet to determine, a batch of them became hopelessly unbound from the Open Directory master on our network.

Weird Client Problem: "Some" Accounts Available? Huh?

The solution for this was to trash their DirectoryService preferences folder, and then to rebind them to the server. This was always something I'd done exclusively from the GUI, so doing it on numerous clients has always been a pain: log into the client machine, trash the prefs, navigate to and open the Directory Access application, authenticate to the DA app, enter the OD server name, authenticate for directory binding, and finally log back out. Lather, rinse, repeat per client. Blech! The command-line approach offers numerous advantages, the most obvious being that this can all be scripted and sent to multiple machines via Apple Remote Desktop. No login required, no GUI needed, and you can do every machine at once.
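
For the record, the prefs-trashing step is trivial from the command line. A sketch, assuming the stock location of the DirectoryService preferences (more on that location at the end of this article):

sudo rm -rf /Library/Preferences/DirectoryService
sudo killall DirectoryService

The rm blows away the preferences folder, and killing DirectoryService forces the daemon to relaunch with a clean slate. The rebinding part is where the real command-line tools come in.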

The command-line tools for doing all this are not exactly the most straightforward set of commands I've ever seen. But they exist, and they work, and they're quite flexible once you parse them out. The first basic thing you need to understand is that there are two tools for accomplishing the above: dscl and dsconfigldap. The dsconfigldap command is used to add an LDAP server configuration to Directory Access. The dscl command adds that server to the Authentication and Contacts lists in Directory Access, and is used to configure the options for service access.

So typically, your first step in binding a client to an OD master in Directory Access is to add it to the list of LDAPv3 servers. This can be done via the command-line with dsconfigldap, like so:

sudo dsconfigldap -s -a systemsboy.com -n "systemsboy"

We like to use directory binding in our configuration, and this can be accomplished too:

sudo dsconfigldap -u diradmin -i -s -f -a systemsboy.com -c systemsboy -n "systemsboy"

The above command requires a directory administrator username and interactively requests a password for said user. But if you want to use ARD for all of this, you'll need to supply the password in the command itself:

sudo dsconfigldap -u diradmin -p 'DirectoryAdmin_Password' -s -f -a systemsboy.com -c systemsboy -n "systemsboy"

Directory Access: Adding an OD Server Configuration

So, there you have it. You've now added your OD master to your list of LDAPv3 servers. You can see this reflected immediately in the Directory Access application. But, unlike in DA, the command does not automatically populate the Authentication and Contacts fields. Your client will not authenticate to the OD master until you have added the OD server as an authentication source. To do this you use dscl. You'll need a custom Search Path for this to work. You may already have one, but if you don't you can add one first:

sudo dscl -q localhost -create /Search SearchPolicy dsAttrTypeStandard:CSPSearchPath

And now add the OD master to the Authentication search path you just created:

sudo dscl -q localhost -merge /Search CSPSearchPath /LDAPv3/systemsboy.com

Directory Access: Adding an OD Authentication Source

If you want your OD server as a Contacts source as well, run:

sudo dscl -q localhost -merge /Contact CSPSearchPath /LDAPv3/systemsboy.com

Again, this change will be reflected immediately in the DA application. You may now want to restart Directory Services to make sure the changes get picked up, like so:

sudo killall DirectoryService

And that's really all there is to it. You should now be able to log on as a network user. To test, simply id a known network-only user:

id spaz

If you get this error:

id: spaz: no such user

Something's wrong. Try again.

If all is well, though, you'll get the user information for that user:

uid=503(spaz) gid=503(spaz) groups=503(spaz)

You should be good to go.
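
And since the whole point of this exercise is scriptability, here's how the pieces might hang together for a push via ARD. Consider this a rough sketch using the example names from this article — swap in your own server, computer name, and directory admin credentials — and note that it assumes ARD runs it as root, hence no sudo:

#!/bin/sh
# Rebind sketch, using this article's example names throughout.
rm -rf /Library/Preferences/DirectoryService
killall DirectoryService   # relaunch with a clean slate
sleep 5
dsconfigldap -u diradmin -p 'DirectoryAdmin_Password' -s -f -a systemsboy.com -c systemsboy -n "systemsboy"
dscl -q localhost -create /Search SearchPolicy dsAttrTypeStandard:CSPSearchPath
dscl -q localhost -merge /Search CSPSearchPath /LDAPv3/systemsboy.com
dscl -q localhost -merge /Contact CSPSearchPath /LDAPv3/systemsboy.com
killall DirectoryService   # pick up the new search paths
sleep 5
id spaz   # should print user info for a known network-only user

If that last id comes back with user info rather than "no such user," the binding took.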

And, if you want to view all this via the command-line as well, here are some commands to get you started.

To list the servers in the configuration:

dscl localhost -list /LDAPv3

To list Authentication sources:

dscl -q localhost -read /Search

To list Contacts sources:

dscl -q localhost -read /Contact

A few things before I wind up. First, some notes on the syntax of these commands. For a full list of options, you should most definitely turn to the man pages for any of these commands. But I wanted to briefly talk about the basic syntax, because to my eye it's a bit confusing. Let's pick apart this command, which adds the OD master to the configuration with directory binding and a supplied directory admin username and password:

sudo dsconfigldap -u diradmin -p 'DirectoryAdmin_Password' -s -f -a systemsboy.com -c systemsboy -n "systemsboy"

The command is being run as root (sudo) and is called dsconfigldap. The -u option tells the command that we'll be supplying the name of the directory admin to be used for binding to the OD master (required for such binding). Next we supply that name, in this case diradmin. The -p option allows you to specify the password for that user, which you do next in single quotes. The -s option will set up secure authentication between server and client, which is the default in DA. The -f option turns on ("forces") directory binding. The -a option specifies that you are adding the server (as opposed to removing it). The next entry is the name of the OD server (you can use the Fully Qualified Domain Name or the IP address here, but I prefer FQDN). The -c option specifies the computer ID or name to be used for directory binding to the server, and this will add the computer to the server's Computers list. And finally, the -n option allows you to specify the configuration name in the list of servers in DA.

Now let's look at this particular use of dscl:

sudo dscl -q localhost -merge /Search CSPSearchPath /LDAPv3/systemsboy.com

Again, dscl is the command and it's being run as root. The -q option runs the command in quiet mode, with no interactive prompt. (The dscl command can also be run interactively.) The localhost field specifies the client machine to run the command on, in this case, the machine I'm on right now. The -merge flag tells dscl that we want to add this data without affecting any of the other entries in the path. The /Search string specifies the path to the Directory Service datasource to operate on, in this case the "Search" path, and the CSPSearchPath is our custom search path key to which we want to add our OD server, which is named in the last string in the command.

Whew! It's a lot, I know. But the beauty is that dscl and dsconfigldap are extremely flexible and powerful tools that allow you to manipulate every parameter in the Directory Access application. Wonderful!

Next, to be thorough, I thought I'd provide the commands to reverse all this — to remove the OD master from DA entirely. So, working backwards, to remove the server from the list of Authentication sources, run:

sudo dscl -q localhost -delete /Search CSPSearchPath /LDAPv3/systemsboy.com

To remove it from the Contacts source list:

sudo dscl -q localhost -delete /Contact CSPSearchPath /LDAPv3/systemsboy.com

And to remove a directory-bound configuration non-interactively (i.e. supplying the directory admin name and password):

sudo dsconfigldap -u diradmin -p 'DirectoryAdmin_Password' -s -f -r systemsboy.com -c systemsboy -n "systemsboy"

If that's your only server, you should be back to spec. Just to be safe, restart DirectoryService again:

sudo killall DirectoryService

If you have a bunch of servers in your Directory Access list, you could script a method for removing them all with the above commands, but it's probably easier to just trash the DirectoryService prefs (in /Library/Preferences) and restart DirectoryService.
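
If you did want to script it, though, something like this loop is the rough idea — a sketch that just walks the configured LDAPv3 servers and pulls each one out of both search paths:

for server in $(dscl localhost -list /LDAPv3); do
  sudo dscl -q localhost -delete /Search CSPSearchPath /LDAPv3/$server
  sudo dscl -q localhost -delete /Contact CSPSearchPath /LDAPv3/$server
done

Note that this only clears the search paths; removing the server configurations themselves would still take a dsconfigldap -r per server.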

Lastly, I'd like to end this article with thanks. Learning all this was kind of tricky for me, and I had a lot of help from a few sources. Faithful readers MatX and Nigel (of mind the explanatory gap fame) both pointed out the availability of all this command-line goodness, and Nigel got me started down the road to understanding it all. Most of the information in this article was also directly gleaned from another site hosted in my home state of Ohio, on a page written by one Jeff McCune. With the exception of a minor tweak here and there (particularly when adding Contacts sources), Jeff's instructions were my key to truly understanding all this, and I must thank him profusely. He made the learning curve on all this tolerable.

So thanks guys! It's help like this that makes having this site so damn useful sometimes, and it's much appreciated.

And now I'm off to go bind some clients command-line style!

UPDATE:

Got to test all this out real-world style today. Our server got hung up again, and we had the same problem I described at the head of this article. No one could log in. So I started trying to use the command-line to reset the machines. I had one major snag that caused it all to fail until I figured out what was going on. Seems I could not bind my machines to the server using the -s flag (secure binding). I had thought that this was the default, and that I was using it before, but now I'm not so sure. In any case, if you're having trouble binding or unbinding clients to a server, try the dsconfigldap command without the -s flag if you can, like so:

sudo dsconfigldap -u diradmin -p 'DirectoryAdmin_Password' -f -a systemsboy.com -c systemsboy -n "systemsboy"

That's what worked for me. I'm a little concerned that this is indicative of a problem on my server, but now's not really the time to go screwing with stuff, so I'll leave it alone for the time being.

This update brought to you by the little letter -s.

Networked Home Accounts and The New RAID

We recently installed in our machine room a brand-spankin' new RAID for hosting network home accounts. We bought this RAID as a replacement for our aging and horrendously unreliable Panasas RAID. The Panasas was a disaster for almost the entire three-year span of its lease. It used a proprietary operating system based on some flavor of *NIX (which I can't recall right at this moment), but with all sorts of variations from a typical *NIX install that made using it as a home account server far more difficult than it ever should have been. To be fair, it was never really intended for such a use, but was rather created as a file server cluster for Linux workstations that can be easily managed directly from a web browser, as opposed to the command-line. It was built for speed, not stability, and it was completely the wrong product for us. (And for the record, I had nothing to do with its purchase, in case you're wondering.)

What the Panasas was, however, was instructive. For three years we lived under the shadow of its constant crashing, the near-weekly TCP dumps and help requests to the company, and angry users fed up with a system that occasionally caused them to lose data, and frequently caused their machines to lock up for the duration of a Panasas reboot, which could be up to twenty minutes. It was not fun, but I learned a lot from it, and it enabled me to make some very serious decisions.

My recent promotion to Senior Systems Administrator came just prior to the end of our Panasas lease term. This put me in the position of both purchasing a new home account server, and of deciding the fate of networked home accounts in the lab.

If I'd learned anything from the experience with the Panasas it was this: A home account server must be, above all else, stable. Every computer that relies on centralized storage for home account serving is completely and utterly dependent on that server. If that server goes down, your lab, in essence, goes down. When this starts happening a lot, people begin to lose faith in a lot of things. First and foremost, they lose faith in the server and stop using it, making your big, expensive network RAID a big, expensive waste of money. Secondly, they lose faith in the system you've set up, which makes sense because it doesn't work reliably, and they stop using it, favoring instead whatever contingency plan you've set up for the times when the server goes down. In our case, we set up a local user account for people to log into when the home account server was down. Things got so bad for a while that people began to log in using this local account more than they would their home accounts, thus negating all our efforts at centralizing home account data storage. Lastly, people begin to lose faith in your abilities as a systems administrator and lab manager. Your reputation suffers, and that makes it harder to get things done — even improvements. So, stability. Centralization of a key resource is risky, in that if that resource fails, everything else fails with it. Stability of crucial, centralized storage was key if any kind of network home account scenario was going to work.

The other thing I began to assess was the whole idea of networked home accounts themselves. I don't know how many labs use networked home accounts. I suspect there are quite a few, but there are also probably a lot of labs that don't. I know I've read about a lot of places that prefer local accounts that are not customized and that revert to some default state at every login and logout. Though I personally really like the convenience of customized network home accounts that follow you from computer to computer throughout a facility, they certainly come with a fair amount of hassle and risk. When they work it's great, but when they don't, it's really bad. So I really began to question the whole idea. Is this something we really needed or wanted to continue to provide?

My ultimate decision was intimately linked to the stability of the home account server. From everything I've seen, networked home accounts can and do work extremely well when the centralized storage on which they reside is stable and reliable. And there is value to this. I talked to people in the lab. By and large, from what I could glean from my very rudimentary and unscientific conversations with users, people really like having network home accounts when they work properly. When given the choice between a generic local account or their personalized network account, even after all the headaches, they still ultimately prefer the networked account. So it behooves us to really try to make it work and work well. And, again, everything I saw told me that what this really required, more than anything else, was a good, solid, robust and reliable home account server.

So, that's what we tried our best to get. The new unit is built and configured by a company called Western Scientific, which was recommended to me by a friend. It's called the Fusion SA. It's a 24-bay storage server running Fedora Core 5 Linux. We've populated 16 of the bays with 500 GB drives and configured them at RAID level 5, giving us, when all is said and done, about 7 TB of networked storage with room to grow in the additional bays should we ever want to do so. The unit also features a quad-port GigE PCI-X card which we can trunk for speedy network access. It's big and it's fast. But what's most important is its stability.
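
(If you're wondering how sixteen 500 GB drives nets only about 7 TB, remember that RAID 5 spends one drive's worth of space on parity: 15 x 500 GB leaves 7.5 TB raw, and formatting overhead whittles that down to right around the 7 TB mark.)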

Our new RAID came a little later than we'd hoped, so we weren't able to test it before going live with it. Ideally, we would have gotten the unit mid-summer and tested it in the lab while maintaining our previous system as a fall-back. What happened instead was that we got the unit in about the second week of the semester, and outside circumstances eventually necessitated switching to the new RAID sans testing. It was a little scary. Here we were in the third week of school switching over to a brand new but largely untested home account server. It was at this point in time that I decided, if this thing didn't work — if it wasn't stable and reliable — networked home accounts would become a thing of the past.

So with a little bit of fancy footwork we made the ol' switcheroo, and it went so smoothly our users barely noticed anything had happened. Installing the unit was really a simple matter of getting it in the rack, and then configuring the network settings and the RAID. This was exceptionally quick and easy, thanks in large measure to the fact that Western Scientific configured the OS for us at the factory, and also to the fact that they tested the unit for defects prior to shipping it to us. In fact, our unit was late because they had discovered a flaw in the original unit they had planned to ship. Perfect! If that's the case, I'm glad it was late. This is exactly what we want from a company that provides us with our crucial home account storage. If the server itself was as reliable as the company was diligent, we most likely had a winner on our hands. So, how has it been?

It's been several weeks now, and the new home account server has been up, without fail or issue, the entire time. So far it's been extremely stable (so much so that I almost forget about it, until, of course, I walk past our server room and stop to dreamily look upon its bright blue drive activity lights dutifully flickering away without pause). And if it stays that way, user confidence should return to the lab and to the whole idea of networked home accounts in fairly short order. In fact, it seems like it already has to a great extent. I couldn't be happier. And the users?... Well, they don't even notice the difference. That's the cruel irony of this business: When things break, you never hear the end of it, but when things work properly, you don't hear a peep. You can almost gauge the success or failure of a system by how much you hear about it from users. It's the ultimate in "no news is good news." The quieter the better.

And 'round these parts of late, it's been pin-drop quiet.

Publishing iCal Calendars via Mac OS X Server

So a lot of people are familiar with my articles on publishing iCal calendars to the 'net with box.net. But it turns out that I also have to provide iCal publishing for staff on our internal network, and I do this using a Mac OS X server. Recently, after rebuilding my server, I had some problems with it and had to set it all up again after not having done it in quite some time. It's pretty easy, but there's one snag I got hung up on. We'll get to that in a minute. But first let's run through the steps to set this up.

First and foremost, setting up iCal publishing on Mac OS X Server requires the web server to be running. This can easily be done with the flick of a switch in the Server Admin application. But before we start the service, let's make all our settings.

The first thing we need to do is set the root folder for our site. Now in my situation I'm not actually doing any web serving. All I'm doing is calendar serving, and only on our internal network, and that's all I'll describe here. The standard site root folder in Mac OS X Server is /Library/WebServer/Documents. To make things a bit cleaner and easier I'll put all my calendars in a subfolder of this called "calendar," and since that's all I'm serving I'll make that my site root: /Library/WebServer/Documents/calendar. I've given my site a name — systemsboy.com — and manually set the IP address. Otherwise I've left everything alone.
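
By the way, if the calendar subfolder doesn't exist yet, it needs to be created before you can point the site root at it. Assuming the example path above, that's just:

sudo mkdir -p /Library/WebServer/Documents/calendar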

Server Admin: General Web Settings

Next up we want to set some options. WebDAV is, of course, key to all this. Without it the calendars can't be served in a form we like. So turn on WebDAV. I've also left Performance Caching on and turned everything else off. Again, this is just for serving calendars.

Server Admin: Web Options

Finally, we need to set up our "realm" which is a place where WebDAV/calendar users can share files. To do this, first add a realm to the first box on the left there by clicking the big plus sign beneath it. Give the realm a name, and give it the path to the calendar folder we set as our site root. I am just using "Basic" authentication as this is only going on our internal network and security isn't a big concern in this instance. Once your realm is set up, save the changes and then add some users or a group to the boxes to the right of the realm box. In my setup I added a group called "icalusers" which contains all the users who need to and are permitted to share calendars. I've set the group to be allowed to browse and author. This is necessary for users to read from and publish to the server respectively. You can do the same with individual users in the upper box. Once you've got that set up, save your changes and start the web service.

Server Admin: Realms

That's pretty much it, except for one crucial thing: permissions. I always seem to forget this, but permissions on the calendar folder must be properly set. Since WebDAV permissions are handled by the web server, the proper way to set this up is to give the user that runs the web server ownership and read/write access to the calendar folder. In most cases that user is called www. It's probably a good idea to give group ownership over to the www group as well. So before this will work you need to run:

sudo chown www:www /Library/WebServer/Documents/calendar

To set the ownership to www, and:

sudo chmod 770 /Library/WebServer/Documents/calendar

To give said user full access to the folder. [Updated to reflect user comments. See note at end of article for details. -systemsboy]
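
A quick way to double-check all this is to just list the folder:

ls -ld /Library/WebServer/Documents/calendar

If the permissions took, you should see www as both owner and group, with read, write and execute access for each.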

Once that's done, just start the web service in the Server Admin application by selecting the "Web" service in the far left-hand column and pressing the big green "Start Service" button in the app's toolbar. You should now be able to publish and subscribe to calendars on your Mac OS X Server from iCal. The publishing URL for my example would look something like this: http://systemsboy.com

And subscribing to the calendar, where "Birthdays" is the calendar name, would look like: http://systemsboy.com/Birthdays.ics

Simple, right? Yeah, I thought so too. Just watch those permissions! Bites me every time.

NOTE: I had originally reported that permissions for the calendar folder should be set to 777. A couple readers pointed out in the comments section that this is not the case. I have edited this article to reflect their suggestions which are a much better solution than my original one.

Thanks, guys, for pointing that out! Really good to know!

External Network Unification Part 4: The CMS Goes Live

NOTE: This is the latest article in the External Network Unification project series. It was actually penned, and was meant to be posted, several weeks ago, but somehow got lost in the shuffle. In any case, it's still relevant, and rather than rewrite it to account for the time lapse, I present it here in its original form, with a follow-up at the end.
-systemsboy

Last Thursday, August 10th, 2006 marked a milestone in the External Network Unification project: We've migrated our CMS to Joomla and are using external authentication for the site. Though it was accomplished somewhat differently than I had anticipated, accomplished it was, nonetheless, and boy, are we happy. Here's the scoop.

Last time I mentioned I'd built a test site — a copy of our CMS on a different machine — and had some success, and that the next step was to build a test site on the web server itself and test the LDAP Hack on the live server authenticating to a real live, non-Mac OS X LDAP server. Which is what I did.

Building the Joomla port on the web server was about as easy as it was on the test server. I just followed the same set of steps and was done in no time. Easy. And this time I didn't have to worry about recreating any of the MySQL databases since, on the web server, they were already in place as we want them and were working perfectly. So the live Joomla port was exceedingly simple.

LDAP, on the other hand, is not. I've been spoiled by Mac OS X's presentation of LDAP in its server software. Apple has done a fantastic job of simplifying what, I recently discovered, is a very complicated, and at times almost primitive, database system. Red Hat has also made ambitious forays into the LDAP server arena, and I look forward to trying out their offerings. This time out my LDAP server was built by another staff systems admin. He did a great job in a short space of time on what I can only imagine was, at times, a trying chore. The LDAP server he built, though, worked and was, by all standards, quite secure. Maybe too secure.

When trying to authenticate our Joomla CMS port with the LDAP Hack, nothing I did worked. And I tried everything. Our LDAP server does everything over TLS for security, and requires all transactions to be encrypted, and I'm guessing that the LDAP Hack we were using for the CMS just couldn't handle that. In some configurations login information was actually printed directly to the browser window. Not cool!
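
Incidentally, if you ever need to sanity-check this sort of thing independently of the CMS, a raw query with OpenLDAP's client tools is a decent first test. A sketch — the hostname and search base here are placeholders, and the -ZZ flag tells ldapsearch to insist on a successful TLS negotiation:

ldapsearch -x -ZZ -H ldap://ldap.example.com -b "dc=example,dc=com" uid=spaz

If that fails, the problem lives on the server or in the TLS setup; if it succeeds, the problem lives in whatever's sitting between the server and your application.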

Near the point of giving up, I thought I'd just try some other stuff while I had this port on hand. The LDAP Hack can authenticate via two other sources, actually: IMAP and POP. Got a mail server? The LDAP Hack can authenticate to it just like your mail client does. I figured it was worth a shot, so I tried it. And it worked! Perfectly! And this gave me ideas.

The more I thought about it, the more I realized that our LDAP solution is nowhere near ready for prime-time. I still believe LDAP will ultimately be the way to go for our user databases. But for now what we want to do with it is just too complicated. The mere act of user creation on the LDAP server, as it's built now anyway, will require some kind of scripting solution. I also now realize that we will most likely need a custom schema for the LDAP server, as it will be hosting authentication and user info for a variety of other servers. For instance, we have a Quicktime Streaming Server, and home accounts reside in a specific directory on that machine. But on our mail server, the home account location is different. This, if I am thinking about it correctly, will need to be handled by some sort of custom LDAP schema that can supply variable data with regard to home account locations based on the machine that is connecting to it. There are other problems too. Ones that are so abstract to me right now I can't even begin to think about writing about them. Suffice it to say, with about two-and-a-half solid weeks before school starts, and a whole list of other projects that must get done in that time frame, I just know we won't have time to build and test specialized LDAP schemas. To do this right, we need more time.

By the same token, I'm still stuck — fixated, even — on the idea of reducing as many of the authentication servers and databases, and thus a good deal of the confusion, as I possibly can. Authenticating to our mail server may just be the ticket, if only temporarily.

The mail server, it turns out, already hosts authentication for a couple other servers. And it can — and is now — hosting authentication for our CMS. That leaves only two other systems independently hosting user data on the external network: the reservations system (running on its own MySQL user database) and the Quicktime Streaming server, which hosts local NetInfo accounts. Reservations is a foregone conclusion for now. It's a custom system, and we won't have time to change it before the semester starts. (Though it occurs to me that it might be possible for Reservations to piggyback on the CMS and use the CMS's MySQL database for authentication — which of course now uses the mail server to build itself — rather than the separate MySQL database it currently uses. But this will take some effort.) But if I can get the Quicktime Streaming Server to authenticate to the mail server — and I'm pretty hopeful here — I can reduce the number of authentication systems by one more. This would effectively reduce by more than half the total number of authentication systems (both internal ones — which are now all hosted by a Mac OS X server — and external ones) currently in use.

Right now — as of Thursday, August 10th, 2006 — we've gone live with the new CMS, and that brings us from eight authentication systems down to four. That's half what we had. That's awesome. If I can get it down to three, I'll be pleased as punch. If I can get it down to two, I'll feel like a superhero. So in the next couple weeks I'll be looking at authenticating our Quicktime server via NIS. I've never done it, but I think it's possible, either through the NIS plugin in Directory Access, or by using a cron-activated shell script. But if not, we're still in better shape than we were.
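
For the curious, the cron-activated approach would amount to something like a nightly job that pulls the current user list from the central source and recreates the local NetInfo accounts from it. Purely hypothetical at this point — the script name below is a placeholder for export/import logic I haven't written yet:

# crontab entry: run the (hypothetical) account sync nightly at 3 a.m.
0 3 * * * /usr/local/sbin/sync_accounts.sh

But that's an experiment for the coming weeks.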

Presenting the new system to the users this year should be far simpler than it's ever been, and new user creation should be a comparative cakewalk to years past. And hopefully by next year we can make it even simpler.

FOLLOW-UP:
It's been several weeks since I wrote this article, and I'm happy to report that all is well with our Joomla port and the hack that allows us to use our mail server for authentication. It's been running fine, and has given us no problems whatsoever. With the start of the semester slamming us like a sumo wrestler on crack, I have not had a chance to test any other servers against alternative authentication methods. There's been way too much going on, from heat waves to air conditioning and power failures. It's been a madhouse around here, I tell ya. A madhouse! So for now, this project is on hold until we can get some free time. Hopefully we can pick up with it again when things settle, but that may not be until next summer. In any case, the system we have now is worlds better than what we had just a few short months ago. And presenting it to the users was clearer than it's ever been. I have to say, I'm pretty pleased with how it's turning out.