Three Platforms, One Server Part 7: Testing...

So, our primary authentication server, which will be used to enable network home accounts for our entire internal network — Mac, Windows and Linux — is up and running. I've installed it in our server room, put it on the KVM, and switched over all the workstations. So far, so good. (My fingers are so crossed they hurt.)

For the Windows quota requirements we went with the first solution mentioned in this article, and outlined in detail here: Windows roaming profiles are on a separate, quota-enabled volume on the authentication server. This — and the Windows implementation in general — is my biggest concern, and will require a great deal of testing. Summer Session has begun, and this will give us the opportunity to test on live subjects as a limited pool of students begins using the systems for work again.

There are a couple of immediate benefits of this new system. For one, users can now change their password lab-wide from any Mac in the lab. Users don't tend to do this much, but they can if they want to, and it's easy as pie. But as an admin, the most exciting benefit of all this is the ease with which new users can now be created. In the past, creating a new user was a multi-step, multi-computer process that involved coordination between multiple sysadmins: Admin 1 gets the user info, creates the Linux and Windows accounts and hands it off to Admin 2, who then creates the Mac account and uploads the Mac skel. This process was lengthy and extremely error-prone. Under the new system, user creation is a breeze. One single, easy-to-use script on the authentication server pretty much does it all. Any sysadmin — or even a trained assistant — can do it. Just log in to the authentication server, run the script, and you're basically done. It's fast, easy and accurate, which is really the whole reason we're doing all this. I'm pretty happy about it.
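For the curious, here's a rough sketch of what a single-script user creation might look like on a Tiger server. This is not our actual script; the server names, UID, paths and skel locations are all invented for illustration. It just shows the flavor of the thing, using Tiger's dscl tool against the shared LDAP domain, and it defaults to a dry run that prints each command rather than executing it.

```shell
#!/bin/sh
# Hypothetical sketch only: a one-stop user creation script for the
# Tiger authentication server. Server names, paths, and skel locations
# are invented. DRY_RUN=1 (the default) prints each command instead
# of running it.

DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"          # dry run: show the command
    else
        "$@"               # real run: execute it
    fi
}

create_user() {
    username="$1"; fullname="$2"; uid="$3"

    # Create the account record in the shared LDAP domain
    run dscl /LDAPv3/127.0.0.1 -create "/Users/$username"
    run dscl /LDAPv3/127.0.0.1 -create "/Users/$username" RealName "$fullname"
    run dscl /LDAPv3/127.0.0.1 -create "/Users/$username" UniqueID "$uid"
    run dscl /LDAPv3/127.0.0.1 -create "/Users/$username" NFSHomeDirectory \
        "/Network/Servers/home.example.edu/Users/$username"

    # Populate the home account and the Windows roaming profile area
    run cp -R /Library/Skel/mac "/Volumes/Homes/$username"
    run cp -R /Library/Skel/windows "/Volumes/Profiles/$username"
}

create_user jdoe "Jane Doe" 1501
```

A real script would also set a password, group membership and the Windows profile path, but the point stands: one login, one command, and the account exists for all three platforms.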

There are a few minor details I need to work out. The main one is file sharing. In addition to being an authentication server, our original Mac server is also a file server. On it there are numerous shares for various purposes — one for staff, one for students, etc. At this point I'm thinking of keeping the file server and the authentication server separate. This will reduce the load on both, and obviate the need to migrate any data. The catch is that user authentication data on the file server will need to match that of the new authentication server. This should be a simple matter of changing the old server's role to "Connected to a Directory System" and pointing it at the new authentication server. I've never done this, though, and it's complicated by the fact that the old server is running Panther whereas the new one is running Tiger. Probably the best thing to do will be to wipe the old server (yes, I have a backup) and put Tiger on it, then redo the shares. But I'll probably test with Panther first, just to see what happens. I'll let you know.
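For what it's worth, on Tiger the binding can even be done from the command line with dsconfigldap (Panther predates that tool, which is one more argument for the wipe-and-upgrade route). A hedged sketch, with an invented hostname, defaulting to a dry run:

```shell
#!/bin/sh
# Hypothetical sketch: bind a Tiger file server to the new directory
# server. The hostname is invented. dsconfigldap ships with 10.4; on
# Panther you'd use the Directory Access application instead.
# DRY_RUN=1 (the default) prints commands instead of running them.

AUTH_SERVER="auth.example.edu"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# -v = verbose, -a = add this LDAP server to the search policy
run dsconfigldap -v -a "$AUTH_SERVER"

# Sanity check: can Directory Services now see a known network user?
run dscl "/LDAPv3/$AUTH_SERVER" -read /Users/jdoe RealName
```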

In any case, this is, again, a minor problem that shouldn't affect the overall plan much and should be relatively easy to figure out and implement. It's just one of those little things you forget about when you're drawing up your master plan for world conquest. "Oh yeah... What about the file server?..."

Otherwise, the conversion is on and seems to be going smoothly so far. Once we get into serious testing I'll post the results. Hopefully this will all be done in the next few weeks and the conversion will finally be complete.

Three Platforms, One Server Part 6: More Windows Quota Problems

Of course I knew it was too good to be true. I've found the first fatal flaw in my plan to unify authentication on the internal network. It goes back to the Windows quotas problem I studied some time ago, and to which I'd thought I'd found a solution.

I won't go into great detail about the problems and various solutions to the Windows roaming profile issue. I've already written plenty on it, and the previous posts outline it all fairly well. I will say that the intended solution was to provide Windows roaming profile quotas by setting them locally on each workstation. But last week, as we moved forward with the plan, one of my fellow sysadmins, who is far more capable with Windows than I happen to be, pointed out that certain applications (e.g., Photoshop, Maya) need a certain amount of disk space for temp files and what-not in order to operate. Setting small local quotas effectively keeps these applications from running properly.

We are currently testing a few other scenarios in which quotas for Windows roaming profiles can be implemented to our satisfaction:

  1. The Authentication Server (Mac OS X Server 10.4.6) has a separate volume for Windows roaming profile storage, and that volume has quotas enabled (this was the original plan). One drawback is that home account data ends up in different places for different platforms: Mac and Linux keep it on the home account server, while Windows would store it on the reserved volume. The other drawback is that the client machine is not aware of the quotas until the user logs out and the Windows client attempts to upload the new data, at which point Windows issues an error if the user has exceeded his quota. Because the user isn't warned about quota violations until logout, this could cause some minor problems.
  2. The Windows Server continues to host roaming profiles and determine quotas for Windows users, but gets user authentication information through a connection to our Mac Authentication Server. The drawback here is that we have to continue using the Windows Server, which we don't really want to do, though it does seem to give us a slightly higher level of control than does the Mac Server. This method is also complicated by the fact that we are currently running Windows Server 2000, which does not include native authentication to LDAP, so third-party solutions would be required. This method could also complicate user creation.
  3. Through some combination of Windows and Mac Servers, we convince the Windows roaming profiles to be situated on the Temp volume of the local workstations, rather than the default location on the "C" drive in the "Documents and Settings" folder, and then set quotas for the Temp volume. I'm skeptical that this is even possible. Windows seems to be hard-coded in ways that make specifying the location of roaming profiles anywhere other than "C:\Documents and Settings" impossible. So this last option seems the least likely to succeed, though if it did it would match the way Mac and Linux behave much more closely. And if we could figure out how to do this without the Windows server, it would be almost ideal.

(Actually, I just thought of a problem with this last method: The Windows Temp drives get erased every Friday. If users happen to be working during the deletion period, what happens to their roaming profiles? The same thing happens on the Mac, but the deletion script on Mac does not delete work owned by the currently logged in user. Can such a scenario be implemented on Windows?)
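For reference, enabling quotas on a dedicated profile volume (the first option) uses the standard BSD quota machinery under Mac OS X. Here's a sketch with invented volume and user names, defaulting to a dry run; in practice quotas can also be managed from Workgroup Manager:

```shell
#!/bin/sh
# Hypothetical sketch of option 1: BSD-style user quotas on a dedicated
# Windows-profile volume on the Mac server. Volume and usernames are
# invented. DRY_RUN=1 (the default) prints the commands instead of
# running them.

VOLUME="/Volumes/Profiles"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# The presence of this file at the volume root tells the system to
# track per-user quotas on that volume
run touch "$VOLUME/.quota.ops.user"

# Build the quota database and switch quotas on
run quotacheck "$VOLUME"
run quotaon "$VOLUME"

# Copy a prototype user's limits to a new user (edquota -p avoids
# the interactive editor, so it scripts cleanly)
run edquota -p protouser jdoe

# Report current usage against the limits
run repquota "$VOLUME"
```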

Most likely we will go with the first solution. We already know that it works. It's a little extra effort when creating new users, but that's totally scriptable. We plan to do user creation from a single script from now on anyway, so these extra few steps, once incorporated into our user creation script, won't really be a big deal at all. The only other problem with this is that our replica will now need to sync the Windows roaming profile volume as well if it's to work as a proper fallback system. This, too, should not be terribly difficult to accomplish. Overall, this solution is less elegant than the original one, but it should be workable. Hopefully, Windows Vista will mitigate a lot of these problems. (Yes, that was totally a joke. Please chuckle softly to yourself and move on.)
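The replica sync itself might be as simple as an rsync job, along these lines (hostnames and paths invented, dry run by default):

```shell
#!/bin/sh
# Hypothetical sketch of keeping the replica's Windows-profile volume
# in sync with the master. Hostnames and paths are invented. Tiger's
# bundled rsync also supports -E for HFS+ resource forks, though
# Windows profile data shouldn't need it. DRY_RUN=1 prints only.

MASTER_VOL="/Volumes/Profiles/"
REPLICA="replica.example.edu:/Volumes/Profiles/"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# -a preserves permissions and times, --delete mirrors removals,
# -z compresses over the wire
run rsync -az --delete "$MASTER_VOL" "$REPLICA"
```

Cron a job like that on the master and the fallback stays reasonably current.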

I guess what amazes me still is how contrary to other operating systems the Windows OS is. Everything we're doing here can be done the same way on Mac as it can on Linux, or virtually any other *NIX system: home accounts can be read directly from a network disk, their locations can be specified, and therefore all this quota nonsense is unnecessary. On Windows, roaming profiles apparently must be downloaded to the client machine (an unbelievably stupid requirement), and the location of said profiles apparently must always be on the root drive in the "Documents and Settings" folder. I guess these are the ways in which Windows continues to force people to use Microsoft products. (I can almost hear Bill Gates whispering in my ear, "Wouldn't it all be easier if you just used Windows Server?") But for software that's become the dominant standard in both the business and personal markets, Windows sure seems non-standard in baffling and infuriating ways. Though this may be how Microsoft has managed to stay on top all these years, I still, perhaps naively, believe that some day, if they don't change this strategy, it will hurt them. Frankly, though I'm sure they're out there, I don't know a single sysadmin who likes Windows. Can you blame them?

Three Platforms, One Server Part 5: Away We Go!

So it's the last week during which students have access to the lab, and that means I can finally implement my plan to unify internal network user authentication. Finally! I'm so jazzed. I've been waiting for months (well, years, really) for the chance to do this, and it's here at last.

The general outline of what I'll be doing over the next few days goes something like this:

  • Backup my current Mac server (for safety)
  • Build my master authentication server
  • Backup a clone of the clean server install
  • Configure the new server with:
    • Users and Groups
    • Home account automounting
    • Home account sharing to SMB (for Windows Roaming Profiles)
    • A skel account for Windows users (to live on the home account server)
    • Other share points
  • Create a replica of the new master server
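The "backup a clone" steps, for what it's worth, can be done with Apple Software Restore. A sketch with invented volume names (note that -erase wipes the target, so triple-check before running anything like this for real):

```shell
#!/bin/sh
# Hypothetical sketch of cloning the clean server install with asr,
# which ships with OS X. Volume names are invented, and -erase wipes
# the target volume. DRY_RUN=1 (the default) prints the command
# instead of running it.

DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Block-copy the freshly installed server volume to a FireWire drive
run asr -source /Volumes/Server -target /Volumes/CleanClone -erase -noprompt
```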

What I did today:
Well, it's amazing how long a base install of Tiger Server can take. I've pretty much been doing that all day. Not that I'm so incompetent that I can't install the software in seconds flat, but software updates take forever and a day. Planning and getting drives to do all this on was a bit of an effort too. Plus I just wanted to make sure I did it right the first time, so I went slow, gave myself the day. I'm also making clones of everything along the way, for building my replica, and in case I goof and need to start over. So that takes a while. I guess I'm just saying that I'm taking my time with this, 'cause I want it to be as perfect as possible from the get-go.

By Monday we should have:

  • A base install of Tiger Server 10.4.6 with requisite Software Updates on a firewire drive
  • A backup of our old Mac Server
  • A new Tiger 10.4.6 authentication server that's configured to host Mac, Windows and Linux users
  • A replica of same

We will spend part of next week pointing all our workstations at the new server. The Windows machines will be the biggest pain as 1) they are running Windows, and 2) they need local quotas set (which could really be just a subset of point 1, but whatever). The reason for all this quota nonsense, you ask? Well, for the answer, you'll just have to read the previous posts on the matter. Suffice to say, I'm hoping the quota setting will be the worst part of this job, which it should be if all goes according to plan, which, I'm sure you're aware, it rarely does.

Finally, I wanted to mention this quote that I read on Daring Fireball, by someone called John Gall, author of Systemantics, as it really jibes with a lot of the stuff I've been thinking about with regard to the lab:

“A complex system that works is invariably found to have evolved from a simple system that worked. … A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.”
— John Gall

Next week should be interesting. I'll keep you posted.

External Network Unification Part 2: CMS LDAP Connections

So I've been examining what we have, thinking about what we want, and thinking about how to get there with regard to external network unification.

Here's what we have:

  • A mail server running FreeBSD and getting user info from its own local DB
  • A web and FTP server running same, getting user info (I believe) from the mail server
  • A community site running the Mambo CMS, running on the same BSD machine as the web/FTP server, getting its user data from MySQL
  • A custom-built online computer reservations system, also running on the web/FTP server, getting its user data from a second MySQL database
  • A QuickTime Streaming Server running Mac OS X Server, getting user info from the local NetInfo database

Here's what we want:

  • An LDAP server with all user information
  • A mail server running FreeBSD, getting user info from LDAP
  • A web and FTP server running same, getting user info from LDAP
  • A community site running Mambo (or similar) getting user info from LDAP
  • A custom-built (or prefab, if available) online computer reservations system getting user info from LDAP
  • A QuickTime Streaming Server running Mac OS X Server, getting user info from LDAP

Are you sensing a pattern? Did you notice how much easier the second list is to read and understand? Boy I sure did. Extrapolate.

So, porting some of these systems — particularly the BSD machines that rely on local databases of users — shouldn't be too bad: build the LDAP server, point the BSD boxes at it, and, bam! we're done. I'm almost not worried about those. They're standard *NIX boxes, and LDAP support is built in and fairly easy to set up, at least in terms of getting user data. Same with the Quicktime Server: Mac OS X has stupid-simple support for authenticating to LDAP, and there's tons of good documentation on the subject. So I've been concentrating on our web apps, which promise to be much tougher, and recently I had what I think will turn out to be a real breakthrough.
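Before wiring nss_ldap and pam_ldap into those BSD boxes, a quick smoke test with ldapsearch is worth doing: if an anonymous query can pull a user's posixAccount attributes, the NSS/PAM side has something to chew on. The hostname and base DN below are invented, and the sketch defaults to printing the command rather than running it:

```shell
#!/bin/sh
# Hypothetical smoke test from one of the FreeBSD boxes: can we see
# user records on the new directory server at all? Hostname and base
# DN are invented. DRY_RUN=1 (the default) prints the command.

LDAP_HOST="auth.example.edu"
BASE_DN="dc=example,dc=edu"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# -x = simple (non-SASL) bind; an anonymous read of one user's record.
# If this returns uid, uidNumber, homeDirectory, etc., nss_ldap has
# what it needs to resolve accounts.
run ldapsearch -x -h "$LDAP_HOST" -b "ou=users,$BASE_DN" "uid=jdoe" \
    uid uidNumber gidNumber homeDirectory loginShell
```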

When last I wrote about this topic I was experimenting with setting up my first FreeBSD server, and also with some simple PHP/MySQL-driven web apps that purported to authenticate against LDAP. The one I finally got to work was a freebie called MRBS. MRBS is great. We may even modify it and use it for certain staff-centered scheduling tasks. It's great that it works with LDAP, and it's pretty easy to get set up, after some trial and error and help from AFP548. It's given me a way to go with certain other proposed web-apps in the future. And, most importantly, it's allowed me to demonstrate proof-of-concept. But if MRBS is our future, what about our present?

We have a whole lot of time and effort invested in our current Mambo site. Not so much that it would kill us to move to a new system, but enough so that moving would be painful, and we'd better have a plan and damn good reasons to do so before making the attempt. So for the past however many weeks now, I've been building and testing a multitude of CMSes. In doing so, I've been primarily concerned with two things: 1) Will this system authenticate to LDAP? 2) Does it have all the functionality (or more) that we currently enjoy on our Mambo site?

I figured the easiest thing to do — and a good place to start — would be to get our current Mambo site to work with LDAP. This would save us the trouble of setting up and learning a whole new system and porting over all our content — again, not the end of the world, but not exactly desirable either. Turns out there is an LDAP hack available for Mambo, but the hack is only supported under older versions of Mambo. I tried installing every version of Mambo I could, and every version of the hack, and every combination of these, as well as hacks to the hack I'd found in forums. No luck. I simply could not get the Mambo LDAP hack to work.

It was at this point I began to turn my attention to other CMSes that might support LDAP. After hunting around I stumbled upon Plone, which looked like a worthy contender, and which supposedly supported LDAP authentication. The thing I liked about Plone from the get-go was that it is ported to Mac OS X, which is what I'm testing all this on, so installation was a breeze. Plone even installs in its own folder in /Applications, and it's here that, somehow, the Plone site root lives. The system itself is very nicely structured as well. The interface is clean and easy to understand, and even fairly easy to modify in minor cosmetic ways. But getting Plone to authenticate to LDAP turned out to be a little too scary and labor-intensive for my tastes. Plone runs on Python and the Zope application server (as opposed to Mambo's PHP/MySQL engine), so Python is responsible for making calls to LDAP. According to the LDAP module's README, LDAP authentication in Plone requires the python-ldap module to be installed. Installing this looked to be a pain, and no one in my organization (myself included) knows the first thing about Python, so at this point I bailed and started thinking about another approach. So much for Plone.

The next system I tried was Drupal. Drupal was also supposed to have LDAP support, though I never got around to really looking into it. I really liked Drupal: It's fast and simple, and the interface is sharp and clean. And Drupal has great user management, with support for custom roles and permissions. But Drupal doesn't come with much out of the box, and I never figured out how to install additional components. In fact, though I guess you could install one, Drupal does not come with a WYSIWYG HTML editor, which is one of the main reasons we're using a CMS in the first place. So I moved on, despite some of the really nice things I saw in Drupal.

Some time later I was talking to a fellow sysadmin about all this, and he said, "What about Joomla?" and I said, "I thought that cost money." and he said, "No, it's the new Mambo." And I thought, "Hmmm... The new Mambo, eh?..."

Needless to say, the next day I'd done my first Joomla install. I liked what I saw. It has a very similar look and feel to Mambo, particularly on the back-end. In fact, it's almost exactly the same, because Joomla is developed by the former developers of Mambo. I'm not (yet) sure what went down, but apparently the bulk of the Mambo team jumped ship and began their own, separate CMS project. So Joomla really is the new Mambo.

Joomla also claimed to support LDAP, and according to their documentation, LDAP would be built into the next release. This is apparently true, as Joomla 1.1 Alpha includes a built-in LDAP plugin. I installed the alpha and gave it a whirl, but no joy. I couldn't get it to authenticate to LDAP. Reading around some more led me to a new variant of the Mambo LDAP hack that's made to work with the latest stable version of Joomla, version 1.0.8. I also read what I believe was a comment in the Joomla forums from the originator of that hack, who swears allegiance to the Joomla team, which probably explains why there are no new versions for Mambo.

Last week I installed Joomla 1.0.8 and the ported LDAP hack for Joomla 1.0.8 and guess what? After weeks of scrounging and searching and hoping and praying and cursing and installing CMS after CMS, it worked!

It fucking god damn worked.

This is great. Not only was installing the hack easy as pie, but setting up the LDAP authentication — for the first time since I dug in on this — was a breeze and worked completely as I'd hoped and expected it to. Not only that, but migrating our Mambo site to Joomla should be a fairly easy task since Joomla is built on the Mambo core. The Joomla site even provides instructions on how to do this, and they don't sound terribly difficult at all. The bonus is that the built-in Joomla LDAP authentication looks promising and, down the line, will hopefully eliminate the need for a "hacked" solution. But until then, the hack works great for our purposes.

This is a huge milestone for the External Network Unification project. Getting our CMS — really, our most complex web application — to work with LDAP was one of my biggest concerns. Going with Joomla gives us the LDAP stuff we need, maintains consistent usability on both the front- and back-ends, makes migrating a whole mess easier, and provides good scalability in terms of development and support for the future. Joomla's developer team appears to be solid, the third-party developer community seems very active, and the LDAP support looks to be headed in the right direction and available in the near term. While it's by no means a done deal, this looks very promising.

Next on the list:

  • Getting our custom computer reservations system to work with LDAP (or finding/building a replacement)
  • Learning and building an LDAP server on FreeBSD (not Mac OS X)
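For that FreeBSD LDAP server, the starting point will presumably be a slapd.conf along these lines. This is only a skeleton with an invented suffix and paths; a real config needs ACLs, TLS, and a hashed rootpw (via slappasswd) before it goes anywhere near production:

```shell
#!/bin/sh
# Hypothetical skeleton of an OpenLDAP slapd.conf for the FreeBSD
# directory server, written out for illustration. Suffix, rootdn, and
# paths are invented. The nis.schema include is what provides the
# posixAccount attributes that *NIX clients need.

cat > ./slapd.conf.example <<'EOF'
include         /usr/local/etc/openldap/schema/core.schema
include         /usr/local/etc/openldap/schema/cosine.schema
include         /usr/local/etc/openldap/schema/nis.schema

database        bdb
suffix          "dc=example,dc=edu"
rootdn          "cn=admin,dc=example,dc=edu"
# rootpw        {SSHA}paste-slappasswd-output-here
directory       /var/db/openldap-data

# Index the attributes that login lookups hit hardest
index           uid,uidNumber,gidNumber eq
EOF
```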

I'm going in. Wish me luck.

"Go Away..." Redux: Simplifying the Complex

Well, it's official. The "Go Away or I Shall Replace You with a Very Small Shell Script" post is the runaway hit of the season. It's the most popular — or at least the most commented on — post on this site since "Getting Back to (Search) Basics," which is odd considering how different the two articles are: One is a simple, utilitarian shell script; one is an opinionated rant. Strange what strikes a nerve. But that's not really what this post is about.

In the "Go Away..." post I postulated, in essence, that perhaps this latest generation of users seems to be less tech-savvy than mine because they've had technology handed to them on a silver platter. I was pissed off and in my annual funk, or bitch-mode, or whatever the Hell it is that makes me so irritable 'round this time of year, and I needed to vent, and my logic skills were not at their finest. My brilliant theory may have been somewhat half-baked (although I still think it's kind of interesting). In the comments, people either agreed with me or didn't, and if you disagreed with me and I flamed you in any way, I want you to know that, though I'm not exactly apologizing for it, your comments are truly appreciated. I try to respond to every comment on the site, because responding forces me to rethink and restate my position on things, and that tends to get my brain a-workin', and that's always good (even when it keeps me writing 'til 4 AM). So on thinking about my theory in "Go Away..." and the comments therein, some things occurred to me that I'd not considered in the original rant, and these led me down some interesting systems-philosophical roads.

The basic idea that I want to talk about here is simplicity. I realized that as time has gone on, maybe users are less tech-savvy not because they're jaded and spoiled — no, maybe they haven't changed at all. Maybe it's because technology has grown so much more complex. I know that in our lab this is the case. When I first started this job, nearly six years ago, we were using Mac OS 9. Overall, as you probably know, OS 9 was a far less capable OS in terms of networking, multi-user support and reliability. But, man, was it easy to administer! Mac OS X is vastly more useful — has vastly more capabilities — but it's also infinitely more complex, and therefore more difficult to set up and maintain. Apple has done a great job of making OS X almost as easy to use as OS 9 from a user standpoint, but the simple addition of a multi-user environment, for instance (not to mention the UNIX layer, networked home accounts, permissions and the like), has made it more difficult to use than previous Mac operating systems. And this additional complexity has spread throughout the entire lab, thus making the lab itself more difficult to understand and use. I once had a girlfriend who preferred McDonald's to Wendy's because the Wendy's menu offered too many options. In a complex world there is a certain appeal to simplicity.

Apple has done an absolutely amazing job in the realm of simplification. Mac OS X, as incredibly complex as it is, is still the easiest OS in the world to use. Mac hardware — even the pro hardware — is unbelievably easy to set up. And the unmitigated success of the iPod over literally all its competition — even cheaper, more feature-laden players — is proof positive that simple sells, and that Apple is the master of this philosophy. I first understood this when Safari was introduced a few short years ago. Here was a browser from which they'd stripped out numerous features, but which (after the release of the tab-enabled version) I found myself using as my primary browser. Though I did recently switch, for certain unavoidable professional reasons, I still prefer the Safari experience to that of any other browser. And part of the reason is because of its simplicity. Safari is a joy, in large part because it's simple and easy as Hell to use. You find this approach in every Apple product. It's a hallmark of the brand, and it's the reason Mac users are so loyal.

When I started this job I was a Macintosh SysAdmin. Mac only, in a mid-sized, very cross-platform networked lab with a separate SysAdmin for each platform. Over time we've restructured the staff, and in recent months responsibility for the whole lab has suddenly come under my purview. I now spend a lot of time thinking about how the lab is currently constructed and how I want it to be in the future. Right now, things work pretty well overall, and our network is quite powerful. There's a lot a user can do to leverage the power of this network if they have some understanding of how it works. The problem is, there's no way they can possibly understand how it works with their limited time and access. In "Go Away..." I said that, to a certain extent, people need to understand the tools of their trade, and I stand by this. But how well do they need to understand? How deep should their knowledge run? Should the car mechanic have a complete and thorough understanding of the way each and every individual car part is machined or manufactured? Should the car owner? Probably not. And I would argue that users of our lab should not need to have much understanding of the nuts and bolts of its construction in order to make use of the majority of its power. The functionality of our lab, as with an Apple product, should be almost self-explanatory. This is what I think about when I think about how I want our lab to be: How do I make the complex simple?

I look to Apple for ideas along these lines. How does Apple streamline and simplify the use of a product or a piece of software? There are a few ways. Firstly, they remove the unnecessary. If it's something people aren't using, they strip it out or they hide it. Apple is really good at deciding what the most important features of a product are and stripping out — or at least hiding to a certain degree — everything else. Everything you need to know is right there in front and working as 99% of people would expect, and more advanced functionality is hidden away in places where more advanced users know to look. So that's step one: remove the cruft. Take out everything we don't need and present a limited set of the most essential options.

Step two is what I guess I'd call sensibility: presenting things in a logical and intuitive manner. Once we've boiled functions down to the essentials, how do we present them to the user in a straightforward and easily understandable way? This step is hard because it requires a good understanding of users in general, and of our particular user base specifically. How do our users tend to use the computers in our lab? A great way to find this out is to ask them. I try, when I'm implementing a new feature — or updating an existing one — to ask a sample of users how they'd use the feature, or how they'd expect it to function. I also look to Apple again for inspiration. How does this feature (or a similar one) work in a Mac product? Why did they choose this method? What about it makes sense? If you can get a sense of how users are using your lab and the computers therein, you can then implement new features in a way that's easy for them to learn and remember.

Step three is consistency. Final Cut Pro is a great example of Apple's consistency. I've been using FCP since version 1. We're on version 5 now, and yet I've almost never had to relearn how to do anything in the program. With all the myriad new functions added in the last seven or eight years, almost every command functions exactly the same way it did in version 1. The interface layout is almost the same in every way. Apple may add features, but they're insanely consistent when it comes to maintaining usability from version to version of FCP. It's this kind of consistency that makes continued use of Apple products a very organic and easy affair. Similarly, in the lab, once we implement a new feature in a sensible way, maintaining the way that feature works from semester to semester should be as consistent as possible. This is not always easy, and sometimes even Apple themselves are to blame for broken consistency. God knows, differences even between Panther and Tiger have complicated the consistency problem for me and my users, and there's not a lot I can do about this sort of thing. But as long as general consistency is considered and maintained whenever possible, and combined with sensible implementations, it goes a long way to simplifying use and creating an environment that "Just Works."

An example of where sensibility and consistency have broken down in our lab is the numerous password servers we employ. We have seven or eight (I can't even remember anymore) different password servers for the various platforms and web applications on our intranet. This means that passwords for Macs are stored on a separate server from those for Windows or Linux or any of the other authenticated systems in our lab. This is neither sensible nor consistent. Users' passwords match at the outset of their enrollment here, but a change on any one platform renders the set of passwords inconsistent between platforms, and this is extremely confusing to users.
I've spent untold hours working on (and writing about) this problem and plan to implement at least part of my solution this summer. Suffice to say, inconsistency in user data and workflow does little to simplify lab operations, and does a great deal to complicate them. And this makes the lab much harder to use than it should be.

The final step in simplifying the lab is education. Once intuitive, sensible, consistent features are implemented, we need to tell people how they work. Fortunately, if your implementations are truly intuitive, sensible and consistent, this shouldn't be that difficult, and should be something you don't have to do over and over again. Instructions on how to use the lab should be simple, clear and to the point. They should be illustrated whenever possible. And they should be documented online somewhere that is intuitively, sensibly and consistently accessible. Relearning the network should be as easy as opening and skimming a web page.

What we're really talking about here is the user experience. I think, personally, my job as a SysAdmin and Lab Administrator, at its heart, is really all about creating a user experience. Right now, in many ways, the user experience in this lab is more akin to Windows than it is to Mac OS X. There are too many steps to do simple tasks, things don't work as you'd expect, and there's a lot of confusion about how things work. Hopefully, it's getting better. Unfortunately, there is probably no way for me to make every change I want to make in one fell swoop. Changes to our existing infrastructure need to be made, to some extent, with legacy users in mind. Plus there just isn't time to plan and implement everything at once, nor would it necessarily be wise to do so. So we move in baby steps. Occasionally leaps and bounds. But always piecemeal. In the end we hope to have a vastly simplified and, as such, vastly improved user experience that allows our users to focus less on how our lab is set up behind the scenes and more on how they can use it to do their work. That is what I see as "my job."

Thanks again to readers and commenters on this site. You've given me a lot to think about and I appreciate it.

UPDATE:
Another blogger, software developer Daniel Jalkut, posts some similar (and similarly long-winded) thoughts on simplification from a software development perspective on his Red Sweater Blog.

UPDATE 2:
Yet more. Seems to be a hot topic these days. (Via Daring Fireball.)