Final Cut Pro and Gigantic Frames

Recently, one of our producers needed to cut a 20-plus-minute planetarium show down to a 3-minute trailer, which meant he needed a way to edit an image sequence composed of extremely large, non-standard frames. We decided to use Final Cut Pro as the editing software, but Final Cut is decidedly not built for working far beyond the confines of its presets. Nevertheless, it's a flexible enough tool to get the job done, with a little effort and a fast computer. Here's a blow-by-blow of what we did.

  1. Make Reference Movies of Frames The first step in the process was to make reference movies of the nearly 30,000 frames. Nothing about this entire process was straightforward, and this first step was certainly no exception. Attempting to import the entire range of frames invariably crashed Quicktime, but we were able to work with a subset of frames, so we ended up breaking the sequence into folders of 5,000 frames each. Of course, trying to move that many large files in the Finder would overwhelm the GUI, causing it to hang for vast stretches of time, so we moved the files via the command-line (see the sketch after this list). This went exceedingly quickly. The Finder, apparently, still really needs some help handling large groups of files. But in the end, thanks to Mac OS X's UNIX core, we had six folders full of images, and each folder was opened as an image sequence in Quicktime, then saved as a reference movie.

    Quicktime: Create a Reference Movie

  2. Make DV-NTSC Movies from Reference Movies Since we were going to be editing this material, we wanted to get it into a good, edit-friendly CODEC. We chose DV-NTSC since it looks pretty good and is really easy to work with in Final Cut. Making our DV-NTSC movies actually went pretty well. Quicktime has done better at keeping current with the latest hardware capabilities than has the Finder. And, once we'd broken our image sequence down a bit, Quicktime made relatively short work of exporting the reference movies to DV-NTSC. And, blessedly, Quicktime was able to load and export multiple movies at once and take full advantage of our eight cores to process them.

    Quicktime: DV Movie Export

  3. Import and Edit at DV-NTSC Quality Well, okay... This was the easy part... In fact, making this easy was the whole point of all those other steps.
  4. Create Custom Sequence Settings DV-NTSC was nowhere near the resolution we needed for our final product. We just bumped down to it in order to make editing more feasible. Eventually we wanted to output this puppy back out to gigantic frames once the editing was done, so we needed a custom sequence preset that matched the properties of said frames. Later we'd use this preset to create our final, high-res output.

    Final Cut Pro: Custom Sequence

  5. Create Offline Project with Custom Settings Final Cut has a nifty feature called Media Manager, which allows you to consolidate your media and get rid of unused clips in order to reclaim disk space. It can also be used to conform from an offline (low-quality) version of your project to an online (high-quality) one. Which is what we did. In Final Cut, we went to the Media Manager (under the "File" menu) and chose "Create offline" in the pulldown in the Media section, where it says "media referenced by duplicated items."

    Final Cut Pro: Creating an Offline Project

  6. Reconnect Media to High-Res Source Once we had our offline project, we needed to re-associate our DV-NTSC clips with the original, full-quality media, i.e. our reference movies. This was fairly easy to do: simply right-click the media and choose "Make Offline..." from the menu. Then right-click the movies again and choose "Reconnect Media..."

    Final Cut Pro: Reconnect Media

    When prompted for the clips, we chose "Locate," and then selected our high-quality material. Be sure to uncheck "Matched Name and Reel Only" when selecting the high-res clip.

  7. Export Back Out to Frames with Compressor Once the movies were reconnected to the original source material, we needed to output back out to the original source type, which in our case was a TIFF sequence. There are a few ways to do this, but we had the best luck (i.e., the fewest crashes and slowdowns) sending the job right from Final Cut to — believe it or not — Compressor. This is certainly the first time in my storied history that I can say Compressor was the right tool for the job, but indeed it was. It knocked it out of the park. We, of course, had to set up a preset in Compressor that exported TIFF frames and apply that to our job. But once that was done, Compressor did its thing and did it well.
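
About that command-line file shuffling back in step 1: it amounted to something along these lines. This is a minimal example rather than our exact commands; the paths and frame names below are placeholders, and it assumes the frames sort into play order by filename.

# Split one giant TIFF sequence into folders of 5,000 frames each.
# Paths and names here are placeholders.
cd /Volumes/RAID/show
i=0; dir=1
mkdir -p part_$dir
for f in frames/*.tif; do
    if [ $i -eq 5000 ]; then
        i=0
        dir=$((dir + 1))
        mkdir -p part_$dir
    fi
    mv "$f" part_$dir/
    i=$((i + 1))
done

Something along these lines gave us our six folders without ever touching the Finder.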

In the end, what we got was a virtually lossless edit of our show. Because we used the original TIFF frames to render out our final trailer cut, and because Final Cut and Compressor shouldn't need to recompress those frames unless they've been altered somehow (which they were only in spots where effects or transitions occurred), our final output looks just as good as the original, even though we edited in the edit-friendly DV-NTSC CODEC. Which is exactly what we were going for.

There were a lot of hiccups along the way. One thing we noticed is that Final Cut does not take advantage of multiple processors, though, blessedly, Quicktime and Compressor now both do. Also, the Finder is still pretty damned abominable when it comes to dealing with a very large number of large files, which is a real shame since its main job is file management. Quicktime, too, had some performance issues with the large number of files. These things certainly slowed us down a bit. But, with some ingenuity and tenacity, we were able to accomplish a pretty difficult task in a relatively short time. It was pretty cool, and now, if something like this ever comes up again, I'll have a process for it.

I want to say, too, that this is just the sort of challenge SysAdmins look for. Or at least SysAdmins like me. It's projects like this that make me happy I am where I am in my job and my career.

Capacity, Not Memory

When talking about cell phones and MP3 players with flash-based internal storage it's become commonplace to refer to the device's capacity as "memory." Even Walt Mossberg and other respected tech writers — the very folks who are supposed to make technology easier to understand — are guilty of this practice:

"The G1 also has much less memory than the iPhone."

This is technically acceptable, I suppose, as the flash mechanism used for data storage inside these devices is more similar to memory (i.e. RAM) than it is to a hard drive, but it's a pretty confusing use of language.

When talking about computers, the term "memory" refers to RAM, the temporary working space, not directly accessible to the user, that applications use while they run. The term "disk space" is typically used to describe the amount of data storage available on the computer. Referring to the amount of data storage on a cell phone as "memory" is just plain confusing. But calling it "disk space" would be equally confounding.

The proper way to refer to the amount of data storage is "capacity." This term is device- and mechanism-agnostic — i.e. it means the same thing no matter what storage medium or device you're talking about. And it's completely accurate and specific — no one will ever wonder what you mean when you say "capacity;" it can only mean one thing.

So folks, please, stop calling it "memory." It's capacity. Period.

Geez!

Command-Tilde in Photoshop

When Adobe released the Photoshop CS3 Beta, I was pretty pleased with it. But I did have one big beef:

"My one big beef is that there still seems to be no key command to switch between open documents. Almost every other application on the Mac nowadays uses “command-`” to switch between open docs. Yet Photoshop CS3 still not only fails to adhere to this standard, but apparently lacks the ability to switch between open docs with the keyboard at all. This seems like a strange oversight for such a significant interface overhaul. I also wish Adobe would use standard Apple key-commands for things like hiding the app (”command-h” on the Mac, generally) but at least the ability exists to do this from the keyboard, and it’s configurable."

While I did later discover that Photoshop did in fact have a key command for document switching, I still found it bothersome that it varied from the system default, all in the name of backwards compatibility. A Photoshop developer chimed in with the following:

"Photoshop since the first OS X version has supported using Ctrl-Tab to switch amongst open documents. This is because Cmd-~ has since Photoshop 1.0 meant “show all channels of the current layer” a command ingrained in a great many fingers of a great many pro Photoshop users..."

To which I responded:

"...I do think there’s a certain logic to changing PS key commands to match the ones in the OS though. It seems to me that by choosing to stick with your original key commands to help legacy users, you’re actually doing a disservice to everyone: legacy users, instead of having to relearn one or two new key commands now have to use a different one in PS than they do in virtually every other application on their computer. Frankly, they’ve already relearned Command-H (and others) by using it in all the other apps, and in the OS. Including these key command in PS won’t significantly impair legacy users, and it will greatly speed the adoption of PS by new users who are already familiar with the Mac OS and its attendant apps. Giving us the ability to customize these key commands helps a lot, but I would still argue that the defaults should match those of the OS. Let legacy users customize things how they want. OR, have a key-command preset for legacy users that uses old PS key-commands, and a standard one that uses those of the OS for users who prefer that. That would be the best of both worlds!..."

Well, I never thought I'd see the day, but it's finally happened. Command-tilde is now the default method for switching between open documents in Photoshop CS4. And John Nack is even happy about it (via Daring Fireball).

It's a brave new world.

Infrastructure

There are a bunch of legacy issues at my new job. Many of them, I believe (I'm not completely sure, I'm still pretty new after all), stem from the once heavy use of IRIX and its peculiarities. We are only just reaching a point at which we can do away, once and for all, with that platform. It's a very complex ecosystem we're dealing with here. Complex, and in some spots delicate. But what surprises me more than a little is that, despite the fact that my new job is at one of the most respected institutions in the country — and one of the more advanced computer labs as well — I find myself facing many of the same issues I tackled in my last position. Those challenges have a great deal to do with creating a simpler, more elegant, more efficient computing environment. And, though the user base has changed dramatically — we're dealing with a much more technically sophisticated group of professionals here, not students — and the technological and financial resources are much more vast, the basic goals remain the same, as do many of the steps to accomplishing those goals. And the one thing those steps all have in common, at least at this stage of the game, is infrastructure.

What makes infrastructure so crucial? And why is it so often overlooked?

Infrastructure is key to creating efficient, and therefore more productive, work environments, whether you're in art, banking, science, you name it. If your tools are easy and efficient to use you can work faster and make fewer mistakes. This allows you to accomplish more in a shorter period of time. Productivity goes up. Simple. Infrastructure is a lot like a kitchen. If it's laid out intelligently and intuitively you can cook marvels; if it's not you get burned.

Infrastructure, for our purposes, is the back-end of the computing environment. Everything you do between computers — that is, every interaction that takes place between your computer and another computer, be it a file server, authentication server, web server, what have you — relies on some sort of infrastructure. I'm referring to network infrastructure here, to be sure, but also to the processes for adding and accessing computer resources in any given facility. How, for instance, do we roll out new workstations? Or update to the latest operating system?

Typically, it is Systems Administrators — the very people who know (or should know) the importance of infrastructure — who tend to work between computers the most. We of all people should know how important a solid infrastructure is for even the simple act of basic troubleshooting: if your infrastructure is solid and predictable, the number of paths to troubleshoot is greatly reduced, making your job easier and making you better at it at the same time. Yet infrastructure, time and again, is left to stagnate for a variety of reasons.

I'd like to enumerate a few of those reasons, at least the ones I suspect factor most strongly:

  1. Infrastructure is difficult Infrastructure planning, like, say, interface design, is complicated and often requires numerous iterations, coordinated effort, and a willingness to change in order to implement successfully.
  2. Infrastructure requires coordination Infrastructure changes often require re-educating the user base and creating a collective understanding as well as clear policies on how things are supposed to work.
  3. Infrastructure is not sexy (to most people) The benefits of infrastructure reorganization are often not immediately apparent, or even immediately realized, for that matter. You might not see the benefits until long after a reorganization.
  4. Infrastructure can be expensive If an infrastructure requires a major overhaul, the cost can be high. Couple that with less-than-immediate benefits and you often meet with a great deal of resistance from the money people, who feel they'd be better served buying new workstations than a faster switch.
  5. Change is scary You know it is.

I've been extraordinarily lucky in that I've been able to learn about infrastructure in a sheltered environment — that of the Education sector — that allowed me copious downtime (unheard of elsewhere) and a forgiving user base. (Students! Pfft!) I'm still pretty lucky in that A) I'm working somewhere where people, for the most part, get it; and B) I have some time, i.e. I'm not just being brought in as a consultant. This last bit is really fortunate because it affords me both ample opportunity to gain an understanding of the environment I'm trying to change as well as the time in which to change it. This is not to say that this sort of thing can't be done in consulting. But it's certainly a much harder sell, and one I'm glad I don't really have to make to such a degree.

Still, with all that, I've got my work cut out for me.

When I first arrived on the scene, The New Lab was using (actually, still is to a large extent) NIS for user authentication. Now this is something I know a bit about, and I can tell you (if you even remember what NIS is anymore) NIS is very, very passé. And for good reason. NIS is like the Web 1.0 of user authentication: it uses flat files rather than databases and is extremely cumbersome and inflexible. Moreover, it is not well-suited to cross-platform operation. It is completely end-of-life and obsolete. To continue to invest in NIS is silly. So one of my first duties was to build an Open Directory server, which relies on numerous databases, each suited to authentication for a given platform. The OD server will be both easier to use (creating users is a breeze) and more capable than any NIS server could ever hope to be (by allowing cross-platform integration down the line, if desired). But until now, for some reason, no one had done this. Partly, maybe, it's just inertia: NIS works fine enough, it's not that big a problem. And maybe it's partly happening now because this is something I just happen to know a lot about and can make happen quickly and effectively. Because of my background, I also see it as a huge problem: by slowing down the user creation process, you're hindering productivity. And not just physical productivity, but mental productivity. If I have to spend twenty minutes creating a user, not only have I wasted that time on something trivial, but I've expended far too much mental energy for a task that should be simple. And this makes it more difficult to get back to Work That Matters. Again, the beauty of being on staff is that I have time to introduce this gradually. To gradually switch machines over to the new server. To gradually get the user base used to the slightly new way of doing things before we move on to the next item up for bid.
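
To make the "creating users is a breeze" bit a little more concrete: on an OD master, account creation can even be scripted from the command line with dscl (and Workgroup Manager gives you the same thing in a GUI). The following is just a rough sketch, not necessarily how we actually set things up; the directory admin name, user name, IDs, and home path are all made up.

# Create a user record in the OD master's local LDAP node (all values hypothetical).
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create /Users/jdoe
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create /Users/jdoe RealName "Jane Doe"
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create /Users/jdoe UniqueID 1050
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create /Users/jdoe PrimaryGroupID 20
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create /Users/jdoe UserShell /bin/bash
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create /Users/jdoe NFSHomeDirectory /Network/Servers/server.example.com/Users/jdoe
dscl -u diradmin -p /LDAPv3/127.0.0.1 -passwd /Users/jdoe 'changeme'

Compare that to hand-editing flat files and rebuilding NIS maps, and the productivity argument pretty much makes itself.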

So far, so good.

I've talked to my co-workers as well, and they're all primed to make changes happen. That's really good. We're talking about all kinds of things: re-mapping the network, asset management with DHCP, redoing the scheduling system, and others I can't even think of right now. User authentication took years at my old job. It was, in many ways, a much more complex network than this new one (we don't manage an external network, thank God). But this place has its own set of complexities and challenges, and though the authentication server is basically done, there's a whole host of things I could see happening in the realm of infrastructure. And they're all right there... See them? Just over the horizon.

Should be fun.

There are a few basic things I like to keep in mind when preparing for and making major infrastructure changes. These are the types of problems I look for that give me purpose:

  1. Repeat offenders What problems crop up again and again on the user side? What questions get asked over and over? These are indicators that something is functioning sub-optimally, or that a process could be more intuitive.
  2. Personal frustration What parts of my job are frustrating or infuriating to me? These are usually indicative of a problem, as I tend to get frustrated with things that don't work well. Either that or I need more coffee.
  3. Recurring errors Is there a process that tends to yield mistakes on a regular basis? If so, it could probably use some automation or clarification at some point. Sometimes all you need is a clear policy or workflow.
  4. Long-term problems Is there something that everyone complains about, but that just "never gets fixed?" Betcha ten bucks it's an infrastructure problem.
  5. The workflow How do people in the facility currently work? What's the pipeline for getting things done? Are they spending their time working on tech issues when they should be working on production? How could this be easier?

There are probably more, but these are the general things I'm thinking about when considering infrastructure changes. And the better I can understand the people and the technology in a facility the more informed my decisions can be with regards to those changes.

Finally, there are some basic notions I keep in mind when proceeding with infrastructure changes:

  1. Simplify The simpler solution is almost always best, both for the admin and the user. Building a simple solution to a problem is often exceedingly difficult and, I might point out, not necessarily simple on the back-end. But a simple workflow is an efficient one, and simplicity is usually my prime directive.
  2. Centralize It's important to know when to centralize. Not everything benefits from centralization, obviously. If it did, we'd all be using terminals. Or web apps. For everything. But properly centralizing the right resources can have a dramatic effect on the productivity of a facility.
  3. Distribute Some resources should be distributed rather than (or in addition to being) centralized. Some things will need redundancy and failover, particularly resources that are crucial to the operation of the facility.
  4. Educate Change doesn't work if no one knows about it. It's important to explain to users what's changing and also why. Though I've been met with resistance to changes that would actually make a user's job easier (this is typical), making users aware of what's changing and why it's changing is the first step in getting them to see the light.

It's true that infrastructure changes can be a bit of a drag. They are difficult. They're hard to justify. They piss people off. But in the end they make everything work better. And as SysAdmins — who are probably more intimate with a facility's resources than anyone — we stand to gain as much as (if not more than!) our users. And they stand to gain quite a bit. It's totally win-win.

Note To Self: Restart autofs

I just looked all over Hell's half acre for this (okay, I performed a perfunctory Google search) and I couldn't find a definitive answer. Now I know, and I just wanted to make a quick note of it for posterity. In the olden days (i.e., a few months ago), in order to get any mounted shares to re-mount, we would restart automount thusly:

sudo killall -HUP automount

This no longer works. Now we must restart autofs. To restart autofs on Mac, do this:

sudo killall -HUP autofsd

To be additionally thorough, though this should not be necessary, you could also restart automount, which now looks slightly different (note the "d", which is new):

sudo killall -HUP automountd

None of this is surprising, but then again, if you're not sure you're doing it right (like when you run the command, nothing appears to happen, and you want to be sure you did the right thing), it helps to have it written down somewhere.
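
And if you want a quick sanity check that the automounter really is back and listening (not strictly necessary, but reassuring when the killall appears to do nothing), something along these lines should help:

# list the autofs mounts the system currently knows about
mount | grep autofs

# flush the automounter's cache and have it re-read its maps, verbosely
sudo automount -vc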

Enjoy!