Category Archives: Mac

Multiple GPUs on unsupported Mac Pro

The first two generations of Apple’s Mac Pro, the MacPro1,1 and MacPro2,1, do not officially run OS X later than 10.7.5. However, there is a modified EFI bootloader available which emulates the EFI64 interface on EFI32 machines. The original version available at Google Code supports OS X 10.9, and there’s a newer one available at Github which also does OS X 10.10. You simply drop in a new boot.efi in two places and add your board ID to the list of supported systems.
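For reference, a minimal sketch of the manual steps, assuming the target volume is mounted at /Volumes/Target and the modified boot.efi sits in the current directory (the two locations and the PlatformSupport.plist key are the standard ones; your board ID may differ):

# Replace both copies of the boot loader:
sudo cp boot.efi /Volumes/Target/System/Library/CoreServices/boot.efi
sudo cp boot.efi /Volumes/Target/usr/standalone/i386/boot.efi
# Add your board ID (Mac-F4208DA9 = MacPro1,1, Mac-F4208DC8 = MacPro2,1)
# to the list of supported systems:
sudo /usr/libexec/PlistBuddy -c "Add :SupportedBoardIds:0 string Mac-F4208DA9" \
    /Volumes/Target/System/Library/CoreServices/PlatformSupport.plist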

The resulting system works perfectly fine, which makes me wonder why Apple didn’t come up with a solution like this themselves. In any case, it extends the life of 2006 and 2007 Mac Pros beyond last year’s end-of-support for OS X Lion.

Since OS X 10.7 and higher ship with graphics drivers that support not only the official Apple-supplied GPUs with EFI-compatible firmware but pretty much any off-the-shelf Nvidia or AMD GPU, GPUs have become quite easy to upgrade in Mac Pros (the classic cheese-grater tower Mac Pro, not the new black trash-can Mac Pro). The only thing you lose is the boot screen, so you should keep the original GPU around to debug the machine if it doesn’t boot.

These upgraded GPUs work fine with the modified bootloader as well. However, if you try to install multiple GPUs (e.g. to drive more than two displays, or to develop CUDA code and run it in the debugger), only one of them will actually output video.

The solution to make multiple GPUs work in Macs running OS X versions they don’t officially support is surprisingly simple:

sudo /usr/libexec/PlistBuddy -c "Add :IOKitPersonalities:AppleGraphicsDevicePolicy:ConfigMap:Mac-F4208DA9 string none" /System/Library/Extensions/AppleGraphicsControl.kext/Contents/PlugIns/AppleGraphicsDevicePolicy.kext/Contents/Info.plist
sudo /usr/libexec/PlistBuddy -c "Add :IOKitPersonalities:AppleGraphicsDevicePolicy:ConfigMap:Mac-F4208DC8 string none" /System/Library/Extensions/AppleGraphicsControl.kext/Contents/PlugIns/AppleGraphicsDevicePolicy.kext/Contents/Info.plist
sudo touch /System/Library/Extensions
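(The touch on /System/Library/Extensions makes OS X rebuild its kernel extension caches on the next boot.) To verify the entries took, you can print the ConfigMap back out; both board IDs should now map to none:

/usr/libexec/PlistBuddy -c "Print :IOKitPersonalities:AppleGraphicsDevicePolicy:ConfigMap" /System/Library/Extensions/AppleGraphicsControl.kext/Contents/PlugIns/AppleGraphicsDevicePolicy.kext/Contents/Info.plist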

So here we are, running a 2006 MacPro1,1 with two EVGA Nvidia GT610 1GB cards, driving three Apple Cinema Displays.

Fixing OS X Server Push Mail

OS X Server 10.7 and later support push mail for iOS devices. This mechanism is neither based on IMAP IDLE (which iOS doesn’t support) nor Exchange ActiveSync (EAS), but on Apple’s Push Notification Service (APNS) infrastructure.

After setting up Mail using the GUI in OS X Server 10.10 Yosemite, I wondered why push didn’t work. From my understanding, it should happen automatically. The only indications something was wrong were the following lines in /Library/Logs/Mail/push_notify.log:

Feb 21 20:13:27 server.example.com push_notify[22848]: ApplePushServiceProvider: Warning: no device map found for 3F2504E0-4F89-41D3-9A0C-0305E82C3301

as well as XAPPLEPUSHSERVICE missing from the IMAP capabilities list:

$ openssl s_client -quiet -connect localhost:993
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.

This is often the point where you have to break out the disassembler to find out what is wrong. Luckily, however, Dovecot is open source, including the modifications Apple made to support APNS. Tracing through the code shows that the message above is logged if /Library/Server/Mail/Data/mta/guid_device_maps.plist does not contain a section for the user to which the incoming email is addressed. That section is written when Dovecot receives an XAPPLEPUSHSERVICE command, and a client presumably only sends that command when the server reports the XAPPLEPUSHSERVICE capability. The reason the server didn’t report the capability was a simple incorrect (default) setting, easily fixed using

sudo serveradmin settings mail:imap:aps_topic_enabled = yes
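To double-check that the setting stuck, you can read it back with the same tool:

sudo serveradmin settings mail:imap:aps_topic_enabled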

Push mail immediately started working for me after changing the setting, and the capability is now correctly reported:

$ openssl s_client -quiet -connect localhost:993
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE XAPPLEPUSHSERVICE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.

PHP 5: ldap_search never returns when searching Active Directory

I recently moved a PHP web application from a server running PHP 5.3 on Mac OS X 10.6 to a newer one with PHP 5.4 on Mac OS X 10.9. This caused the following code sample, run against an Active Directory server, to hang at the ldap_search() call:

$conn = ldap_connect('ldaps://' . $LDAPSERVER);
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
$bind = @ldap_bind($conn, $LDAPUSER, $LDAPPW);
$result = ldap_search($conn, $LDAPSEARCHBASE, '(&(samaccountname=' . $searchuser . '))');
$info = ldap_get_entries($conn, $result);
ldap_close($conn);

Wiresharking the connection between web server and LDAP server (after replacing ldaps:// with ldap://) showed:

bindRequest(1) "$LDAPUSER" simplebindResponse(1) success searchRequest82) "$LDAPSEARCHBASE" wholeSubtree
searchResEntry(2) "CN=$searchuser,...,$LDAPSEARCHBASE" | searchResRef(2) | searchResDone(2) success [1 result]
bindRequest(4) "" simple
bindResponse(4) success
searchRequest(3) "DC=DomainDnsZones,$LDAPSEARCHBASE" wholeSubtree
searchResDone(3) operationsError (000004DC: LdapErr: DSID-0C0906E8, comment: In order to perform this operation a successful bind must be complete on the connection., data 0,

So the client binds, receives a success response, searches, and receives a result along with a referral to DC=DomainDnsZones,$LDAPSEARCHBASE. Next, it opens a new TCP connection and follows the referral, but performs only an anonymous bind there, so the referred search fails.
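If you’d rather not fire up the Wireshark GUI, tshark shows the same exchange from the command line (a sketch; the interface name is an assumption, and port 389 is the plain-LDAP port after the ldaps:// to ldap:// swap):

# The -Y display filter needs tshark 1.10+; older versions use -R instead.
tshark -i en0 -f "tcp port 389" -Y ldap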

The solution is simple: just add

ldap_set_option($conn, LDAP_OPT_REFERRALS, FALSE);

after the second line of the sample above (the LDAP_OPT_PROTOCOL_VERSION call). If for some reason you actually need to follow the referral, have a look at ldap_set_rebind_proc, which lets you specify a callback that performs the authentication upon rebind.

Update August 2015: Same goes when using Net_LDAP3, which is used e.g. by Roundcube’s LDAP integration. Here you need to add the following:

$config['ldap_public']['public'] = array(
[...]
 'referrals' => false,
);

CUPS-to-CUPS printing with server-side processing and page_log

Printer sharing on Windows is easy: the client receives the driver from the server, presents the driver GUI and passes on an intermediate format along with the options selected in the driver to the server, which then renders the print job for the printer (usually into PostScript).

In the Unix (Mac OS X in my case, but Linux would be the same) world, CUPS is commonly used for printing. It’s very powerful, but I find the documentation severely lacking in details about how exactly things are implemented. Luckily, the code is open source, and Michael Sweet, the developer of CUPS (who now works at Apple and still maintains it), managed to create a very structured piece of software with code that’s reasonably easy to understand.

If you just add a CUPS server’s print queue as a new printer on a CUPS client, it will work fine, but you might run into some inconveniences:

Problems

  1. The job might get run through a vendor-supplied filter twice, once on the server and once on the client. This usually works fine, but the print job might significantly increase in size (observed on an HP LaserJet).

  2. The page_log on the server might not contain the number of pages and copies a job consisted of and list 1 for both instead.

  3. The page_log on the server might not contain things like page format, duplex status or attributes you manually added to PageLogFormat.

Reasons

  1. This happens if the PPD both on the client and on the server contains a line starting with *cupsFilter, which links to a vendor-supplied filter. Such a filter usually produces a MIME type of application/postscript.

  2. This happens if the job does not get run through the pstops filter by CUPS. CUPS bypasses that filter if the client submits the job with a MIME type of application/vnd.cups-postscript, i.e. it was already run through pstops on the client.

  3. This is either caused by the same things as (1) or (2), but I’m not sure which one.

Solution

Simply add the following lines to the PPD on the client. That way, the client passes the job straight through to the server for server-side processing.

*cupsFilter: "application/pdf 0 -"
*cupsFilter: "image/* 0 -"
*cupsFilter: "application/postscript 0 -"
*cupsFilter: "application/vnd.cups-postscript 0 -"
*cupsFilter: "application/vnd.cups-command 0 -"

By the way, if you use Mac OS X and let the “Add Printer” wizard automatically add a print queue from a remote CUPS server discovered via Bonjour, this is exactly what it does.

Notes

If you append something like %{SelectColor} to your PageLogFormat because that’s the attribute your printer uses to determine whether it should print in color or grayscale, please note that the default value will never be written to the page_log; only deviations from the default are logged. The default may come from the PPD, or be set by you via lpadmin -p printername -o SelectColor=Grayscale or via the CUPS web interface’s “Set Printer Defaults”. The defaults set on the server-side CUPS do not matter here; this is determined by the client-side CUPS.
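For illustration, such a PageLogFormat line in the server’s cupsd.conf might look like this (the token list is my example, not the CUPS default; %{SelectColor} only exists on printers whose PPD defines that option):

PageLogFormat %p %u %j %T %P %C %{media} %{sides} %{SelectColor}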

Per the filter(7) documentation (the comments in brackets were added by me):

Options passed on the command-line typically do not include the default choices in the printer’s PPD file. […] use the ppdMarkDefaults [which sets all options to the defaults specified inside the PPD] and cupsMarkOptions [which sets the options to the values specified in the driver GUI] functions in the CUPS library to use the correct mapping, and ppdFindMarkedChoice [which reads from the options array composed from the defaults and the selected options] to get the user-selected choice.

CUPS on OS X hangs after a few days, reports “Internal Server Error”

If you set up CUPS on an OS X Server (version 10.8.5 in my case, but anything from 10.7, where CUPS introduced sandboxing, through 10.9, the current version, should exhibit this behavior), i.e. you enable Printer Sharing in System Preferences and run sudo cupsctl WebInterface=yes, and then leave the system running for a few days, you’ll eventually run into a situation where http://localhost:631/printers reports “Internal Server Error” and clients can no longer print to the server.

Digging around CUPS’ debug log, you’ll see something like:
D [27/Oct/2013:13:33:52 +0100] [CGI] sandbox_init failed: /private/tmp/05d735269fa67: No such file or directory (No such file or directory)
D [27/Oct/2013:13:33:52 +0100] PID 78980 (/usr/libexec/cups/cgi-bin/printers.cgi) stopped with status 1.

That missing file (it gets a different 13-digit hexadecimal name upon each restart) is the CUPS daemon’s sandbox profile.

Digging around further reveals that /var/log/daily.out contains exactly this file name:
Sun Oct 27 03:15:01 CET 2013
Removing old temporary files:
/tmp/05d735269fa67
[...]

All we need to do to prevent this from happening in the future is to open /etc/periodic/daily/110.clean-tmps in your favorite text editor and add the fourth line below (the one excluding files owned by the _lp user):
set -f noglob
args="-atime +$daily_clean_tmps_days -mtime +$daily_clean_tmps_days"
args="${args} -ctime +$daily_clean_tmps_days"
args="${args} ! -group _lp ! -user _lp"
dargs="-empty -mtime +$daily_clean_tmps_days"
dargs="${dargs} ! -name .vfs_rsrc_streams_*"

Update February 2014: CUPS 1.7.1 is supposed to fix this issue; the release notes mention my reported bug. Now let’s see how long it takes until Apple ships the updated CUPS with an OS X update.

Update March 2014: I just upgraded our server to OS X 10.9.2 and got CUPS 1.7.1 with it. Hooray, less than three months from bug report to deployed fix. The sandbox profile now gets written to /var/spool/cups/tmp. In fact, that’s exactly what was changed in scheduler/conf.c in the CUPS source code: they added setenv("TMPDIR", TempDir, 1);

SSDs with TCG Opal or IEEE-1667 support

Recently, a few SSD models have been introduced that support Full-Disk Encryption per the TCG Opal standard. Many older SSDs already support AES encryption and use the ATA password for this, which is settable in the BIOS. The advantage of Opal is that it divides the drive into a small read-only segment (technically not a partition) with a special boot loader (which prompts you for the encryption password and passes it to the drive) and the encrypted segment which contains your traditional OS and data partitions. These special boot loaders can do much more than a BIOS: for example, they can provide means for key reset and they can talk to a server on the network. They can also have multiple passwords for multiple users and they can be configured entirely from within the OS, which also allows for central management in enterprise environments.

The downside of course is that you need a piece of software to use Opal. Options include WinMagic SecureDoc (for Windows and Mac), Wave Systems Embassy Security Center (Windows only) and several others, as well as BitLocker/eDrive in Windows 8 (which, however, requires IEEE-1667 support as well). The flip side is that no special hardware or OS support is required, so even Macs could use these drives:

WinMagic SecureDoc supported Macs until October 2013, but a version for OS X 10.9 was never released. Secude has announced FinallySecure Enterprise Full Disk Encryption with support for OS X and Opal; it hasn’t been released yet, and the product was recently sold to a company named EgoSecure.

Probably the first drive to support Opal was the Seagate Momentus FDE, which was a spinning disk. Toshiba, Hitachi and a few others also made HDDs with Opal support.

Later came the Samsung PM830 (but not the Samsung SSD 830) and the Micron C400 SED (but not the Micron C400 or the Crucial m4), which were only available to OEMs.

The first Opal-compliant mass-market SSD was the Crucial M500 (also OEM’d as the Micron M500), which is IEEE-1667 compliant as well. As the M500 currently offers the best GB/$ ratio of all SSDs on the market, it has been selling superbly in the five months it’s been available, and I hope this drives more software companies to support Opal.

The just-announced Intel SSD Pro 1500 will also support Opal, but apparently not IEEE-1667.

As far as I know, these really are all the TCG Opal drives that are or have been on the market. I expect more to come, but I am kind of surprised that it took this long.

If you know of any others, let me know in the comments.

Update Dec 2013: The Samsung 840 EVO also does Opal.

Update Jan 2014: Wave Systems has a list of Opal drives that work with their software. It lists some Adata XPG SX900 models, the Kingston KC300 (only certain part numbers) and some LiteOn models.

Update Mar 2014: The just-announced Crucial M550, which is very similar to the popular M500, still supports Opal 2.0 and IEEE-1667, and is explicitly advertised as Microsoft eDrive compatible. Same goes for the almost identical ADATA SP920.

Update May 2014: The SanDisk X300s also has both and includes a license for Wave Embassy in case your computer does not support eDrive. Glad to see that Opal and IEEE-1667 are finally making it into a significant proportion of new midrange mass-market SSD models.

Update June 2014: The Crucial MX100 is similar to the M550 with cheaper NAND and supports the same encryption standards. The ADATA Premier SP610 is supposed to get Opal 2.0 through a firmware update later this year, but not IEEE-1667.

Update July 2014: The Samsung SSD 850 Pro has TCG Opal and IEEE-1667. The Intel SSD Pro 2500 has TCG Opal 2.0 and IEEE-1667.

Update September 2014: The Crucial M600 has Opal 2.0 and IEEE-1667, just like its predecessors M500, M510, MX100, M550.

Update October 2014: The Adata SR1010 has Opal 2.0 and IEEE-1667.

Update December 2014: Samsung SSD 850 EVO has Opal 2.0 and IEEE-1667.

Update January 2015: The Crucial MX200, which is quite similar to the MX100, has Opal 2.0 and IEEE-1667. The BX100 does NOT have encryption and is based on a different controller.

Update October 2015: The Samsung SSD 950 Pro is supposed to get Opal and IEEE-1667 with a firmware update at some point.

Update January 2016: The SanDisk X400 is supposed to get a firmware update for Opal in April.

Update February 2016: The Samsung SSD 750 EVO, apparently intended to replace the 850 EVO, has Opal and IEEE-1667.

Update April 2016: The Crucial MX300 does TCG Opal 2.0, IEEE-1667 and thus also Microsoft eDrive.

Update June 2016: The Micron SSD 1100 was announced with TCG Opal 2.0 and eDrive support.

HP StorageWorks P2000 G3

Hardware

To replace a 2006 Xserve and a 7TB Xserve RAID at the university, we recently got a Mac mini server, an ATTO ThunderLink FC 1082 Thunderbolt to 8Gbit Fibre Channel adapter, and an HP StorageWorks P2000 G3 MSA FC Dual Controller LFF (specifically, model number AP845B).

The P2000 is not explicitly on ATTO’s compatibility matrix, but when I asked their tech support about it, they said it was compatible and provided me with a pre-release version of their Multi Path Director driver for the ThunderLink, which is officially compatible.

Evidently, the P2000 G3 is an OEM’d version of the Dot Hill AssuredSAN 3000 Series (specifically, the 3730), which is on ATTO’s compatibility list, so I assume the standard driver would work just as well. Update 2018: Since Dot Hill has in the meantime been sold, their support page has moved to Seagate.

We chose the ThunderLink/P2000 combo over a Promise solution because it was cheaper, fully 8Gbit-capable and had four host ports. Also, I know that HP’s tech support is good and that they’ll have spare parts around for many years. Plus, the P2000 is VMware ESXi certified.

The obvious downside to the P2000 is that the disk bays do not have standard SAS connectors but require an interposer board that converts to a SCA-2/SCA-40 connector. The included slot blinds really are just blinds and cannot be used to mount an actual drive. You can get empty caddies/trays for the P2000 on eBay or from a used-SAN-equipment dealer for around 100 euros, or buy your hard drives from HP at a premium of around 100-150 euros over the plain drives. (The interposer board itself appears to be sold under the model numbers 371595-001 or 60-272-02 on eBay, but I haven’t found a model number for the caddy frame yet.) If you’re buying plain drives, you can check HP’s hard drive model matrix to see which drive model an HP part number corresponds to. For example, the 3TB SAS drive QK703A is a Seagate Constellation ES.2 ST33000650SS, and the 2TB SAS drive AW555A, which we ordered, is a Seagate Constellation ES ST2000NM0001.

Firmware

I have verified that the firmwares are interchangeable between the AssuredSAN 3000 and the P2000 G3: I downloaded and extracted the TS250R023 from both Dot Hill and HP and both contain a file named TS250R023.bin with an MD5 sum of 7b267cc4178aef53f7d3487e356f8435. I assume that’s the file that can be uploaded through the web interface.
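To compare the two downloads yourself:

md5 TS250R023.bin    # OS X; use md5sum on Linux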

To extract the HP firmware, download the Linux updater (e.g. CP020030.scexe) and use a hex editor to find the offset of the line break after the end of the shell script at the beginning, then use dd to skip the plain text: dd if=CP020030.scexe bs=1 skip=8602 of=scexe_tmp24664.tar.gz. Now you can tar zxf scexe_tmp24664.tar.gz and pull out the TS250R023.bin.
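Instead of eyeballing the offset in a hex editor, you can also locate the embedded gzip stream by its magic bytes. A sketch (file names from my copy; offsets will vary between updaters):

OFFSET=$(grep -abo $'\x1f\x8b\x08' CP020030.scexe | head -n1 | cut -d: -f1)
dd if=CP020030.scexe bs=1 skip=$OFFSET of=scexe_tmp.tar.gz
tar zxf scexe_tmp.tar.gz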

To extract the TS250R023.bin, simply tar xf TS250R023.bin. If you want to poke around the root filesystem of the Management Controller, unsquashfs mc/components/app.squashfs. You may need to compile squashfs-tools yourself to get LZMA support (edit squashfs-tools/Makefile, set LZMA_SUPPORT=1 or LZMA_XZ_SUPPORT=1 and apt-get install liblzma-dev zlib1g-dev liblz-dev).
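The squashfs-tools build for LZMA support boils down to something like this (package names as above; which Makefile knob you need depends on the squashfs-tools version):

sudo apt-get install liblzma-dev zlib1g-dev liblz-dev
# in the squashfs-tools source directory, after setting LZMA_SUPPORT=1
# or LZMA_XZ_SUPPORT=1 in the Makefile:
make && sudo make install
unsquashfs mc/components/app.squashfs    # Management Controller root filesystem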

Setup and configuration

After unpacking the device, I first updated the firmware to the most recent version available from HP. Before you do that (I used the Windows utility), make sure to set static IP addresses or DHCP static mappings (otherwise the update might fail due to changing addresses). After you set the password for the manage user, you’ll need to SSH into the device to change the password on a hidden admin account about which HP issued a security advisory back in December 2010 (but still hasn’t fixed it in the firmware).

I created a RAID5 out of 4x 2TB drives and dedicated a fifth one as a global spare. In the global disk settings, I enabled spindown so the spare would not be running unnecessarily. The RAID initialization took close to two days, but as that runs in the background, you can already start using it.

Then I created a couple of volumes (setting the default mapping to not mapped) and mapped two of them to our Mac mini server (on the ThunderLink) and a third to our two VMware ESXi servers (on Qlogic QLE2460 HBAs). This was much easier than on our old Xserve RAID, and I love that I can start out with smaller volumes (sized so they’ll last for the next year) and expand them later on. The P2000 does not do thin provisioning, but you can’t really expect that at this price point.

 

Xserve RAID and ATTO ThunderLink FC 1082 are incompatible if used without an FC switch

We’re running a 2006 Xserve RAID at the university. Our old server was a 2006 Xserve with an Apple 2 Gbit Fibre Channel card. When we recently got a new Mac mini server to replace it, we ordered an ATTO ThunderLink FC 1082 to interface with the RAID. The Promise SANLink would have been a possible alternative, but the ThunderLink is capable of 8 Gbit/s, thus future-proofing our investment.

Unfortunately, when I hooked up the ThunderLink straight to the Xserve RAID using an Apple Fibre Channel Copper Cable, neither the Xserve RAID Admin utility nor the Mac mini showed a connection. After some googling, it appears that the Xserve RAID cannot negotiate a link with HBAs capable of more than 2 Gbit/s. It turns out Apple also says you shouldn’t use their own 4 Gbit card with the Xserve RAID: HT1769.

Since the RAID has been working fine for quite a while with two HP servers running VMware ESXi with Qlogic QLE2460 controllers connected through a Qlogic SANbox 5200 2 Gbit FC switch, and I knew the ThunderLink worked with that switch, I simply used an FC copper cable between the ThunderLink and the switch and another between the switch and the RAID, configured the zoning, et voilà, the array mounted on the Mac mini.

Using C++11 on Mac OS X 10.8

Recent Xcode versions for Mac OS X 10.7 and 10.8 ship with Clang, a modern compiler for C/C++/ObjC based on LLVM. It fully supports C++11: simply add -std=c++0x or -std=c++11 to your CXXFLAGS. This already gives you all the new language features such as the auto keyword.

However, when you get more in-depth with C++, you’ll also want to use the new features of the standard library, such as <array> or <random>. This results in strange error messages:

gamelogic/Board.cpp:11:10: fatal error: 'random' file not found
#include <random>
         ^

As it turns out, by default you compile and link against the system’s libstdc++ (/usr/lib/libstdc++.6.dylib), which is too old to support C++11. However, Mac OS X also includes libc++ (/usr/lib/libc++.1.dylib), a complete reimplementation of the standard library by the LLVM team that is fully C++11 compatible. Simply tell the compiler to use it using -stdlib=libc++ and tell the linker to link against it using -lc++.
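For a plain command-line build, both happen in one clang++ invocation, since -stdlib=libc++ also makes the driver link the right library (file names are mine); only a separate link step, as in the qmake setup below, needs the explicit -lc++:

clang++ -std=c++11 -stdlib=libc++ main.cpp -o main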

So for a qmake .pro project file, all this might look as follows. The conditional makes it compatible with other compilers such as g++ on Linux that already ship with a C++11-compatible standard library.

QMAKE_CXXFLAGS += -std=c++0x
macx {
 contains(QMAKE_CXX, /usr/bin/clang++) {
  message(Using LLVM libc++)
  QMAKE_CXXFLAGS += -stdlib=libc++
  QMAKE_LFLAGS += -lc++
 }
}

UPDATE 2016: Mac OS X 10.9 and higher default to libc++ and don’t require the extra compiler flag. Since Mac OS X 10.8 is out of support anyway, there is no reason to use the flag anymore.

Fixing Microsoft Office 2011 SP2 Volume licensing

UPDATE 2012-11-15: The 14.2.5 installer no longer exhibits this weird behavior (it does not include removables.txt files at all; however, the postinstall script would still process them if they were there). Since it requires 14.2.3 as a prerequisite, you’ll still need to apply the fix mentioned below to 14.2.3 when chaining updates.

UPDATE 2012-11-30: I just obtained a copy of the 14.2.3 installer ISO from Microsoft VLSC. Copies of Office installed from it (or probably any 14.2.0+ installer ISO) do not exhibit the behavior explained here. The newer installer ships with flat-file Main.nib files that do not get removed by the removables.txt script.

UPDATE 2013-03-13: The 14.3.2 updater again contains a removables.txt which breaks Microsoft Office Setup Assistant.app. If you didn’t replace your installer ISO with a newer version, you will again need to apply the fix mentioned below when installing this update.

When you run Word, Excel, PowerPoint or Outlook 2011, it checks /Library/Preferences/com.microsoft.office.licensing.plist . If that file is not valid (such as after doing a fresh install of Microsoft Office 2011), it launches /Applications/Microsoft Office 2011/Office/Microsoft Office Setup Assistant.app. Microsoft Office Setup Assistant checks whether the DVD from which you installed is a volume licensed copy; if it is, it silently populates that plist and quits (allowing the app you initially started to start up); if it is not, it prompts you for a product key and activation.

If you install from the DVD, launch one of the Office apps to activate the license, quit it and then install all the available updates from Microsoft, everything is fine.

If you update to version 14.2.0, 14.2.1, 14.2.2, 14.2.3 or 14.2.4 (or possibly future versions) right after installing from the DVD, however, Microsoft Office Setup Assistant.app gets corrupted. This is due to ./Office 2011 14.2.X Update.mpkg/Contents/Packages/Office2011_all_core_14.2.X.combo.pkg/Contents/Resources/removables.txt, which gets processed by ./Office 2011 14.2.X Update.mpkg/Contents/Packages/Office2011_all_core_14.2.X.combo.pkg/Contents/Resources/postflight. It deletes the contents of /Applications/Microsoft Office 2011/Office/Microsoft Office Setup Assistant.app/Contents/Resources/XX.lproj/Main.nib (which is a bundle-style NIB); however, unlike probably everything else listed in removables.txt, the update does not contain updated versions of these files.
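You can check an updater for the offending entries straight from the shell before installing it (a sketch; the volume name is an assumption, adjust the path to wherever the update package sits and which version you have):

grep -i "setup assistant" "/Volumes/Office 2011 14.2.3 Update/Office 2011 14.2.3 Update.mpkg/Contents/Packages/Office2011_all_core_14.2.3.combo.pkg/Contents/Resources/removables.txt"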

If you’re running an individually-licensed copy of Office 2011, that is no big deal: the Office apps themselves are able to prompt for a license key and activation.

If you’re running a volume licensed copy of Office 2011, you’re in trouble: You now get prompted for a product key by every Office app, which you obviously don’t have.

To fix this situation, you have two options:

1. Copy /Library/Preferences/com.microsoft.office.licensing.plist from a working install. You can do this using your favorite software deployment tool, such as Munki (see the sketch after this list). Please note that importing it as a Managed Preference (MCX) into Workgroup Manager (and probably Profile Manager) does not help; the file needs to be physically present on the client machine.

2. Move Microsoft Office Setup Assistant.app out of the way before updating. You can do this if your software deployment tool supports adding custom pre- and post-install scripts (Munki allows you to do that).
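For option 1, the deployment can be as simple as copying the file and fixing ownership and permissions (a sketch; the source path is whatever your deployment tool stages):

#!/bin/bash
cp /path/to/com.microsoft.office.licensing.plist /Library/Preferences/
chown root:wheel /Library/Preferences/com.microsoft.office.licensing.plist
chmod 644 /Library/Preferences/com.microsoft.office.licensing.plist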

For option 2, here’s my pre-install script:

#!/bin/bash
cd "/Applications/Microsoft Office 2011/Office"
mv "Microsoft Office Setup Assistant.app" "SetupAssistantBackup.app"
exit 0

And my post-install script:

#!/bin/bash
cd "/Applications/Microsoft Office 2011/Office"
mv "SetupAssistantBackup.app" "Microsoft Office Setup Assistant.app"
exit 0

To find out whether you still need to do this on future updates (such as 14.2.5), open the installer package in a tool like Pacifist and check the following: a) Did they remove the Microsoft Office Setup Assistant.app lines from removables.txt (go to the Resources tab and enter removables.txt into the search box to locate the file)? b) Does the update contain a new version of Microsoft Office Setup Assistant.app (go to the Package Contents tab and enter setup assistant into the search box to check for its existence)? If either is true, Microsoft decided to fix the problem and you no longer need to use my pre-/post-install scripts.