Thick imaging, thin imaging, and no imaging macOS

Last year, TechRepublic published a quick rundown of three approaches to Mac deployment.

Thought I’d do my quick take on it, based on my experiences.

Thick Imaging

Among the leading Mac admins out there (the ones giving workshops at conferences, sitting on the tech panels, and serving as primary contributors to the widely used GitHub projects that facilitate Mac admin’ing), there seems to be something approaching a consensus that admins should be moving away from the “golden master image” approach.

The idea of the “golden master” is that you configure one Mac exactly the way you want it and then image it over to other machines, so they’re completely identical.

In terms of the details of the imaging process, I have a tutorial here: Cloning an image using Thunderbolt and Disk Utility (post–El Capitan).
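If you prefer the command line to Disk Utility, asr (Apple Software Restore) does the same block copy. Here’s a minimal sketch in Python; the volume names are hypothetical, and you’d run this as root with the source Mac attached in Target Disk Mode:

    #!/usr/bin/env python
    """Minimal sketch: block-copy a "golden master" volume with asr.
    Volume names below are hypothetical -- adjust for your own setup."""
    import subprocess

    SOURCE = "/Volumes/GoldenMaster"   # golden master Mac in Target Disk Mode
    TARGET = "/Volumes/Macintosh HD"   # internal drive of the Mac being imaged

    # asr erases the target and block-copies the source onto it.
    # --noprompt skips the interactive confirmation.
    subprocess.check_call([
        "/usr/sbin/asr", "restore",
        "--source", SOURCE,
        "--target", TARGET,
        "--erase", "--noprompt",
    ])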

Pros

  • The imaging process itself is very quick per machine. We can ASR one of our fully configured faculty laptops over Thunderbolt in 3–5 minutes.
  • Takes up less bandwidth. We’re actually blessed with some hefty bandwidth here, but your organization may not be, and imaging over Thunderbolt or even USB 3.0 is a great way to keep the imaging process from taking forever or stealing bandwidth away from your users.

Cons

  • If you build your “golden master” on an older Mac model and then try to image that over to a newer Mac model, you may get a do-not-enter sign when you boot up the newly imaged machine. So you’ll always want to create the “golden master” on the newest Mac that you have.
  • You’ll have to constantly update the “golden master” so that it doesn’t quickly become a “silver master” or a “bronze master.” At a certain point, if the source image is behind enough in updates, you’ll be pulling so many updates post-image that you’re not gaining any of the bandwidth reduction or speed-of-deployment benefits that you should get with this method.
  • If you have several different configurations, you have to create and maintain all of those different “golden master” images. So if you have a multimedia lab image and a faculty laptop image and a staff laptop image and a faculty desktop image and a staff desktop image and a library desktop image… that’s a lot of separate images to create and maintain.

Thin Imaging

Historically, Mac admins have tended to favor DeployStudio for thin imaging over a network, but many Mac admins are eschewing Mac servers for Linux ones, so there’s been increasing adoption of Imagr (which can be run on a Mac but also on Linux) instead.

If you want to set Imagr up on Linux, Getting started with BSDPy on Docker is a good place to start.
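As a rough illustration of what that setup looks like, here’s a sketch of launching the BSDPy container; the image name, ports, and environment variables here are from memory of the project’s README and should be treated as assumptions, so check the current docs before using:

    #!/usr/bin/env python
    """Rough sketch: start a BSDPy NetBoot server in Docker. The image
    name, ports, and BSDPY_* variables are assumptions -- verify them
    against the BSDPy README before using."""
    import subprocess

    subprocess.check_call([
        "docker", "run", "-d",
        "-p", "67:67/udp",              # BSDP/DHCP traffic
        "-p", "69:69/udp",              # TFTP for the booter
        "-p", "80:80",                  # HTTP for serving the NBI
        "-v", "/srv/nbi:/nbi",          # directory holding your .nbi bundles
        "-e", "BSDPY_IP=192.168.1.10",  # IP address the server advertises
        "-e", "BSDPY_IFACE=eth0",
        "bruienne/bsdpy",
    ])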

If you want to set up Imagr using OS X Server on a Mac, Amsys has a great step-by-step tutorial on how to do so: Part 1, Part 2, Part 3, and Part 4.

Whether you decide to go with DeployStudio, Imagr, or even a local Thunderbolt clone (“bronze master”), you’ll probably want to look into using AutoDMG to create that thin, never-booted Mac image. Here’s an example workflow using AutoDMG and Munki: AutoDMG / Outset / Munki bootstrap workflow.
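AutoDMG is a GUI app, but it can also be driven from the command line. Here’s a rough sketch; the installer path and output name are hypothetical, and the options may vary by version, so check AutoDMG’s own help output:

    #!/usr/bin/env python
    """Rough sketch: build a never-booted base image with AutoDMG's
    command-line mode. Paths are hypothetical examples."""
    import subprocess

    AUTODMG = "/Applications/AutoDMG.app/Contents/MacOS/AutoDMG"
    INSTALLER = "/Applications/Install OS X El Capitan.app"  # hypothetical
    OUTPUT = "/Users/Shared/elcapitan_base.dmg"

    # 'build' takes the OS installer app and writes out a never-booted
    # .dmg suitable for DeployStudio, Imagr, or a Thunderbolt restore.
    subprocess.check_call([AUTODMG, "build", INSTALLER, "-o", OUTPUT])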

Pros

  • Allows for flexibility in creating various workflows.
  • Since the thin image is never-booted, it will work with more hardware models (anything that supports the operating system version).

Cons

  • Requires a lot of infrastructure setup, particularly if you’re using DeployStudio or Imagr.
  • Requires a lot of bandwidth (may be a non-issue at your organization).
  • May require netboot troubleshooting for particular laptop models or certain cables/adapters.
  • Netboot itself could take a while. And even if you’re using AutoDMG over Thunderbolt, all the bootstrapped updates will pull over the network, so if you want to immediately deploy the machine, your user may end up waiting for a while for it to be fully usable.

No Imaging

“BUILDING” 2015 MACS describes a cool process of installing .pkg files onto a never-booted, non-imaged Mac over Thunderbolt and Target Disk Mode. Unfortunately, that appears to result in a slow-booting or refusing-to-boot machine. The workaround at the time from Greg Neagle (author of the aforementioned blog post and primary developer of Munki) was to boot into recovery mode and use Terminal to install the .pkg files. I believe he eventually went on to use Imagr instead, but the no-imaging concept is still a good one to consider.
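The core of that recovery-mode workaround is just installer(8) pointed at the internal volume instead of the boot volume. In recovery itself you’d type the underlying installer commands directly in Terminal; here’s the idea as a Python sketch, with hypothetical paths:

    #!/usr/bin/env python
    """Sketch of the idea behind the workaround: run installer(8)
    against the internal (target) volume rather than the boot volume.
    Paths are hypothetical; in recovery mode you'd type the underlying
    installer commands directly in Terminal."""
    import glob
    import subprocess

    TARGET = "/Volumes/Macintosh HD"          # the never-booted internal drive
    PKG_DIR = "/Volumes/AdminDrive/packages"  # external drive holding the .pkgs

    # Install each package onto the target volume, in sorted order.
    for pkg in sorted(glob.glob(PKG_DIR + "/*.pkg")):
        subprocess.check_call(
            ["/usr/sbin/installer", "-pkg", pkg, "-target", TARGET])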

One way you can do it without recovery mode is to boot from an external drive (Thunderbolt or USB 3.0) that has macOS installed with an auto-login account, and then install the .pkg files you need to get the internal drive up and running (touch /var/db/.AppleSetupDone, install Munki, enable the Munki bootstrap, etc.). It’ll be quicker to boot than recovery mode.
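Here’s a sketch of those finishing touches, run as root while booted from the external drive; the volume name and package path are hypothetical:

    #!/usr/bin/env python
    """Sketch: prep a never-booted internal volume while booted from an
    external admin drive. Volume name and package path are hypothetical;
    run as root."""
    import os
    import subprocess

    TARGET = "/Volumes/Macintosh HD"  # the never-booted internal volume

    # Skip Apple's Setup Assistant on the machine's first boot.
    open(os.path.join(TARGET, "var/db/.AppleSetupDone"), "a").close()

    # Install the Munki tools onto the target volume.
    subprocess.check_call([
        "/usr/sbin/installer",
        "-pkg", "/Volumes/AdminDrive/munkitools.pkg",  # hypothetical path
        "-target", TARGET,
    ])

    # Munki's bootstrap flag: on first boot, managedsoftwareupdate keeps
    # checking and installing at startup until everything is in place.
    bootstrap = os.path.join(
        TARGET, "Users/Shared/.com.googlecode.munki.checkandinstallatstartup")
    open(bootstrap, "a").close()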

Pros

  • You don’t have to create an image, even a thin one (and with AutoDMG, the machine building the image has to be running the exact same version of macOS as the image being built). You just need the packages you want to install.
  • Booting from the no-image external drive is a lot faster than a netboot, and (updates aside) the time to finish a machine and move on to the next one is comparable to restoring a “golden master” image.

Cons

  • Still consumes a bunch of bandwidth to pull all updates.
  • Requires either a lot of booting into recovery mode (which takes a long time) or keeping a bunch of external drives to boot from (installing the packages over Thunderbolt and Target Disk Mode doesn’t always work well).
