
Wiring Fibre Channel for Arbitrated Loop

We have a small Fibre Channel SAN with three servers, a switch and a dual-controller RAID enclosure. With only a single switch, we obviously couldn’t connect all servers redundantly to the RAID system. That meant, for example, that firmware updates to the enclosure could only be applied after shutting down the servers. Buying a second switch was hard to justify for this simple setup, so we decided to hook the switch up to the first RAID controller and wire a loop off the second RAID controller. Each server would have one port connected to the switch and another one to the loop.

Back in the old days, Fibre Channel hubs existed for exactly this purpose, but nowadays you simply can’t get them anymore. However, in a redundant setup, you don’t need a hub, you can simply run cables appropriately, i.e. in a daisy-chain fashion. The only downside of not having a hub, the loop going down if one server goes down, is irrelevant here because you have a second path via the switch. For technical details on the loop topology, you can have a look at documentation available from EMC.

To wire your servers and RAID controllers in a fashion resembling a hub (only without the automatic bypass if a server goes down), you need simplex patch cables. These consist of a single fibre (instead of two, as you are used to) and have a connector that looks like half of a regular LC connector. You can get them in the same quality grades as regular (duplex) patch cables, and in single- or multi-mode as needed. They look like this:

These cables are somewhat exotic, so your usual cable dealer might not stock them, but they exist and are available from specialized fibre cable dealers. As you are wiring in a daisy-chain fashion, you need one simplex cable per node you want to connect.

Once you have the necessary cables, you wire everything in a daisy chain: run a cable from the output side of the FC port on the first device to the input side of the FC port on the second device, connect the second device’s output side to the third device’s input side, and so on, until you close the loop by connecting the last device’s output side to the first device’s input side, like this:
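
Server 1 TX --> RX Server 2
Server 2 TX --> RX Server 3
Server 3 TX --> RX RAID controller B
RAID controller B TX --> RX Server 1

(A sketch of the loop for the setup described above, with the second RAID controller as the fourth node.)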

How can you tell the output side from the input side? Some transceivers have little arrows or are labelled “TX” (output) and “RX” (input). If yours don’t, you can recognize the output side by the red laser light coming from it. To protect your eyes, never look directly into the laser light. Never connect two output sides together, otherwise you may damage the laser diodes in both of them. Therefore, be careful and always double-check.

That’s it, you’re done. All servers on the loop should immediately see the LUNs exported to them by the RAID controller.

In my setup, however, further troubleshooting was required: the loop simply did not come up. As it turned out, someone had set the ports on one FC HBA to point-to-point-only and 2 Gbit/s mode. After switching both settings back to their default automatic mode, the loop came up and data started flowing. My HBAs are QLogic QLE2462 (4 Gbit/s generation) and QLE2562 (8 Gbit/s generation), and they automatically negotiated the fastest common speed they could handle, which is 4 Gbit/s. Configuring these parameters on the HBA usually requires hitting a key at the HBA’s pre-boot screen to enter its configuration menu, or using vendor-specific software. I didn’t have access to the pre-boot configuration menu and was running VMware ESXi 6.0 on the servers. Luckily, the QConvergeConsole for my QLogic adapters is also available for VMware, although it is not as easy to install as on Windows or Linux: you first install a CIM provider via the command line on the ESXi host, reboot the host and then install a plugin into VMware vCenter Server.
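
For the host-side part, the CIM provider ships as an offline bundle that esxcli can install. A rough sketch, with the bundle filename and host name as placeholders (the actual bundle name comes from QLogic’s download):

scp qlc-cim-bundle.zip root@esxi-host:/tmp/
ssh root@esxi-host esxcli software vib install -d /tmp/qlc-cim-bundle.zip
ssh root@esxi-host reboot

The vCenter plugin part is then a regular installer run on the vCenter Server machine.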

HP StorageWorks P2000 G3

Hardware

To replace a 2006 Xserve and a 7 TB Xserve RAID at the university, we recently got a Mac mini server, an ATTO ThunderLink FC 1082 Thunderbolt to 8 Gbit Fibre Channel adapter, and an HP StorageWorks P2000 G3 MSA FC Dual Controller LFF (specifically, model number AP845B).

The P2000 is not explicitly on ATTO’s compatibility matrix, but when I asked their tech support about it, they said it was compatible and provided me with a pre-release version of their Multi Path Director driver for the ThunderLink which is officially compatible.

Evidently, the P2000 G3 is an OEM’d version of the Dot Hill AssuredSAN 3000 Series (specifically, the 3730), which is on ATTO’s compatibility list, so I assume the standard driver would work just as well. Update 2018: Dot Hill has since been sold, and their support page has moved to Seagate.

We chose the ThunderLink/P2000 combo over a Promise solution because it was cheaper, fully 8 Gbit capable and had four host ports. Also, I know that HP’s tech support is good and that they’ll have spare parts around for many years. Plus, the P2000 is VMware ESXi certified.

The obvious downside to the P2000 is that the disk bays do not have standard SAS connectors but require an interposer board to convert to a SCA-2/SCA-40 connector. The included slot blinds are in fact just blinds and cannot be used to mount an actual drive. You can get empty caddies/trays for the P2000 on eBay or from a used SAN equipment dealer for around 100 euros, or buy your hard drives from HP at a premium of around 100-150 euros over the plain drives. (The interposer board itself appears to be sold under the model numbers 371595-001 or 60-272-02 on eBay, but I haven’t found a model number for the caddy frame yet.) If you’re buying plain drives, you can check HP’s hard drive model matrix to see which drive model an HP part number corresponds to. For example, the 3 TB SAS drive QK703A is a Seagate Constellation ES.2 ST33000650SS, and the 2 TB SAS drive AW555A, which we ordered, is a Seagate Constellation ES ST2000NM0001.

Firmware

I have verified that the firmwares are interchangeable between the AssuredSAN 3000 and the P2000 G3: I downloaded and extracted the TS250R023 release from both Dot Hill and HP, and both contain a file named TS250R023.bin with an MD5 sum of 7b267cc4178aef53f7d3487e356f8435. I assume that’s the file that can be uploaded through the web interface.
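
To check this yourself (the directory names are placeholders for wherever you extracted the two downloads):

md5sum dothill/TS250R023.bin hp/TS250R023.bin
# both lines should show 7b267cc4178aef53f7d3487e356f8435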

To extract the HP firmware, download the Linux updater (e.g. CP020030.scexe) and use a hex editor to find the offset of the line break after the end of the shell script at the beginning, then use dd to skip the plain text: dd if=CP020030.scexe bs=1 skip=8602 of=scexe_tmp24664.tar.gz. Now you can tar zxf scexe_tmp24664.tar.gz and pull out the TS250R023.bin.
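
If you’d rather not count bytes in a hex editor, you can also locate the embedded archive by searching for the gzip magic bytes. A sketch, assuming GNU grep (for the -b and -P options) and the filenames from above:

offset=$(grep -abo -P '\x1f\x8b\x08' CP020030.scexe | head -n1 | cut -d: -f1)
dd if=CP020030.scexe bs=1 skip="$offset" of=scexe_tmp24664.tar.gz
tar tzf scexe_tmp24664.tar.gz   # should list TS250R023.bin

If the first match is not the payload (the magic bytes could in theory also appear inside the script part), fall back to the hex editor.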

To extract the TS250R023.bin, simply tar xf TS250R023.bin. If you want to poke around the root filesystem of the Management Controller, unsquashfs mc/components/app.squashfs. You may need to compile squashfs-tools yourself to get LZMA support (edit squashfs-tools/Makefile, set LZMA_SUPPORT=1 or LZMA_XZ_SUPPORT=1 and apt-get install liblzma-dev zlib1g-dev liblz-dev).
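
Put together (the path to the squashfs image is from this firmware release and may differ in others):

tar xf TS250R023.bin                    # unpack the firmware container
unsquashfs mc/components/app.squashfs   # extracts to squashfs-root/
ls squashfs-root                        # the Management Controller's root filesystem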

Setup and configuration

After unpacking the device, I first updated the firmware to the most recent version available from HP. Before you do that (I used the Windows utility), make sure to set static IP addresses or static DHCP mappings, otherwise the update might fail due to changing addresses. After you set the password for the manage user, you’ll need to SSH into the device to change the password on a hidden admin account about which HP issued a security advisory back in December 2010 (but which it still hasn’t fixed in the firmware).
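
A sketch of that password change, assuming admin is the account named in the advisory (192.168.0.1 stands in for the controller’s management address; check the CLI reference of your firmware version for the exact set password syntax):

ssh manage@192.168.0.1
# at the P2000 CLI prompt:
set password admin    # prompts for a new password for the hidden account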

I created a RAID5 out of 4x 2TB drives and dedicated a fifth one as a global spare. In the global disk settings, I enabled spindown so the spare would not be running unnecessarily. The RAID initialization took close to two days, but since it runs in the background, you can start using the array right away.

Then I created a couple of volumes (setting the default mapping to not mapped) and mapped two of them to our Mac mini server (on the ThunderLink) and a third to our two VMware ESXi servers (on QLogic QLE2460 HBAs). This was much easier than on our old Xserve RAID, and I love that I can start out with smaller volumes (sized so that they’ll last for the next year) and expand them later on. The P2000 does not do thin provisioning, but you can’t really expect that at this price point.
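
After changing mappings, the ESXi hosts need a storage rescan to see the new LUN. On ESXi 5.x this can be done from the shell (the host name is a placeholder):

ssh root@esxi-host esxcli storage core adapter rescan --all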


VMware ESXi 5.1.0 breaks PCI Passthrough (Update: fixed in ESXi510-201212001)

After I upgraded to VMware ESXi 5.1.0, my server crashed with a purple screen of death as soon as I fired up a VM that was using a passed-through PCI device (1244:0e00, an AVM GmbH Fritz!Card PCI v2.0 ISDN (rev 01)). I had been running the original version of ESXi 5.0.0 for a year and everything worked fine; in fact, I had never seen such a purple screen of death before.

VMware ESXi 5.1.0 [Releasebuild-799733 x86_64]
#PF Exception 14 in world 4077:vmx IP 0x418039cf095c addr 0x14
cr0=0x80010031 cr2=0x14 cr3=0x15c0d6000 cr4=0x42768
Frame=0x41221fb5bc00 ip=0x418039cf095c err=0 rflags=0x10202
rax=0x0 rbx=0x10 rcx=0x417ff9f084d0
rdx=0x41000168e5b0 rbp=0x41221fb5bcd8 rsi=0x41000168ee90
rdi=0x417ff9f084d0 r8=0x0 r9=0x1
r10=0x3ffd81972a9 r11=0x0 r12=0x41221fb5bd58
r13=0x41000168e350 r14=0xb r15=0x0
*PCPU3:4077/vmx
PCPU 0: UUVU
Code start: 0x418039a00000 VMK uptime: 0:00:06:21.499
0x41221fb5bcd8:[0x418039cf095c]PCI_GetExtCapIdx@vmkernel#nover+0x2b stack: 0x41221fb5bd38
0x41221fb5bd48:[0x418039abadd2]VMKPCIPassthru_GetPCIInfo@vmkernel#nover+0x335 stack: 0x29000030e001
0x41221fb5beb8:[0x418039ea2c51]UW64VMKSyscallUnpackPCIPassthruGetPCIInfo@<None>#<None>+0x28 stack:
0x41221fb5bef8:[0x418039e79791]User_LinuxSyscallHandler@<None>#<None>+0x17c stack: 0x418039a4cc70
0x41221fb5bf18:[0x418039aa82be]User_LinuxSyscallHandler@vmkernel#nover+0x19 stack: 0x3ffd8197490
0x41221fb5bf28:[0x418039b10064]gate_entry@vmkernel#nover+0x63 stack: 0x10b
base fs=0x0 gs=0x418040c00000 Kgs=0x0
Coredump to disk. Slot 1 of 1.
Finalized dump header (9/9) DiskDump: Successful.
Debugger waiting(world 4077) -- no port for remote debugger. "Escape" for local debugger.

Turns out this is a bug in ESXi. Luckily, downgrading ESXi is simple enough: just hit Shift-R at the boot prompt and tell it to revert to the previous version.

Update: Patch ESXi510-201212401-BG in version ESXi510-201212001 (build 914609), released on December 20th, fixes the PCI passthrough issue (PR924167) according to KB2039030.
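
If you prefer applying the patch from the command line rather than through Update Manager, a sketch along these lines should work on ESXi 5.1 (the datastore path is a placeholder for wherever you uploaded the patch bundle):

esxcli system maintenanceMode set --enable true
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi510-201212001.zip
reboot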

Converting Xen Linux VMs to VMware

A year ago I wrote about how to convert from Xen to VMware (a process similar to a Xen virtual-to-physical, or V2P, conversion). Now I have found a much simpler solution, thanks to http://www.zomo.co.uk/2012/04/moving-disks-from-xen-to-kvm/ .

In this example, I’m using LVM disks, but the process is no different with Xen disk images. (A scripted sketch of the disk-cloning steps follows the list.)

  1. Install Debian Wheezy into a VMware virtual machine. Attach a secondary virtual disk (it will be called /dev/sdc from now on) that’s sized about 500 MB larger than your Xen DomU (just to be safe). Fire up the VM. All subsequent commands are run from inside that VM.
  2. Check whether your DomU disk has a partition table: ssh root@xen fdisk -l /dev/xenvg/4f89402b-8587-4139-8447-1da6d0571733.disk0. If it does, proceed to step 3. If it does not, proceed to step 4.
  3. Clone the Xen DomU onto the secondary virtual disk via SSH: ssh root@xen dd bs=1048576 if=/dev/xenvg/4f89402b-8587-4139-8447-1da6d0571733.disk0 | dd bs=1048576 of=/dev/sdc. Proceed to step 7.
  4. Zero out the beginning of the target disk: dd if=/dev/zero of=/dev/sdc bs=1048576 count=16
  5. Partition it and add a primary partition starting 8 MB into the disk: fdisk /dev/sdc, then o Enter, w Enter; run fdisk /dev/sdc again, then n Enter, p Enter, 1 Enter, 16384 Enter, Enter, w Enter (see the scripted sketch after this list).
  6. Clone the Xen DomU onto the secondary virtual disk’s first partition via SSH: ssh root@xen dd bs=1048576 if=/dev/xenvg/4f89402b-8587-4139-8447-1da6d0571733.disk0 | dd bs=1048576 of=/dev/sdc1
  7. reboot
  8. Mount the disk: mount -t ext3 /dev/sdc1 /mnt; cd /mnt
  9. Fix fstab: nano etc/fstab: change the root filesystem’s device to /dev/sda1
  10. Fix the virtual console: nano etc/inittab: replace hvc0 with tty1
  11. Chroot into the disk: mount -t proc none /mnt/proc; mount -t sysfs none /mnt/sys; mount -o bind /dev /mnt/dev; chroot /mnt /bin/bash
  12. Fix mtab so the Grub installer works: grep -v rootfs /proc/mounts > /etc/mtab
  13. Install Grub: apt-get install grub2. When the installer asks to which disks to install, deselect all disks.
  14. Install Grub to the MBR: grub-install --force /dev/sdc
  15. Update Grub configuration: update-grub
  16. Leave the chroot: exit; umount /mnt/* /mnt
  17. shutdown
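
For reference, steps 4 through 6 can be scripted. This is a minimal sketch assuming the placeholders used above (root@xen, the xenvg volume path, /dev/sdc) and 512-byte sectors; piping keystrokes into fdisk is fragile across versions, so double-check the result with fdisk -l before cloning:

# step 4: zero out the beginning of the target disk
dd if=/dev/zero of=/dev/sdc bs=1048576 count=16
# step 5: new DOS partition table, then one primary partition starting
# at sector 16384 (= 8 MB at 512 bytes/sector)
printf 'o\nw\n' | fdisk /dev/sdc
printf 'n\np\n1\n16384\n\nw\n' | fdisk /dev/sdc
fdisk -l /dev/sdc
# step 6: clone the DomU over SSH into the new partition
ssh root@xen dd bs=1048576 if=/dev/xenvg/4f89402b-8587-4139-8447-1da6d0571733.disk0 \
  | dd bs=1048576 of=/dev/sdc1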

Now you can detach the secondary virtual disk and create a new VM with it. If everything worked correctly, it will boot up.