Category Archives: Windows Server

Using a BIND DNS server in an Active Directory Environment

Years ago, I posted a script that allowed ISC DHCPd to update a Microsoft DNS server with dynamic records for DHCP clients. I haven’t used that method in a long time because there is a much simpler approach: use ISC DHCPd together with the BIND DNS server like everybody else does, and only delegate the _msdcs and _sites zones from the BIND server to the Microsoft DNS servers:

_msdcs.example.com. 86400 IN NS dc01.example.com.
_msdcs.example.com. 86400 IN NS dc02.example.com.
_sites.example.com. 86400 IN NS dc01.example.com.
_sites.example.com. 86400 IN NS dc02.example.com.

Then on all your machines, use the BIND server as DNS server (typically handed out via DHCP: option 6 for DHCPv4, option 23 for DHCPv6). For Windows domain matters, only records below _msdcs and _sites are ever looked up.
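
On the ISC DHCPd side, handing out the BIND server is a single option; a minimal sketch of the relevant dhcpd.conf lines, with 10.0.0.53 standing in for your BIND server’s address, might look like this:

option domain-name "example.com";
option domain-name-servers 10.0.0.53;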

You should even be able to point your domain controllers themselves at the BIND DNS server — in theory they could follow the NS records so that whenever they try to update their own records, they do so on the Microsoft DNS server. As it turns out, domain controllers register their own records via the RFC 2136 DNS UPDATE method, and those updates fail against BIND, so you’ll see error messages in your logs if you point your domain controllers to the BIND DNS server (on a Microsoft DC, these refer to NETLOGON and dynamic DNS registrations; on a Samba DC, to samba_dnsupdate). If you are running Samba 4.5 or higher, make sure samba_dnsupdate is called with the --use-samba-tool flag, which can be done by setting the option below in your /etc/samba/smb.conf. If you are running an older Samba version or any Windows Server version, you need to resort to using your domain controllers’ IP addresses as DNS servers on all domain controllers (Samba: put them into /etc/resolv.conf; Windows: set them in the network interface properties).

dns update command = /usr/sbin/samba_dnsupdate --use-samba-tool

For compatibility with Unix clients (including Mac OS X), you’ll want to add a couple of CNAME records for the SRV records:

_ldap._tcp.example.com. 86400 IN CNAME _ldap._tcp.dc._msdcs.example.com.
_gc._tcp.example.com. 86400 IN CNAME _ldap._tcp.gc._msdcs.example.com.
_kerberos._tcp.example.com. 86400 IN CNAME _kerberos._tcp.dc._msdcs.example.com.
_kerberos._udp.example.com. 86400 IN CNAME _kerberos._tcp.dc._msdcs.example.com.
_kpasswd._tcp.example.com. 3600 IN SRV 0 100 464 dc01.example.com.
_kpasswd._tcp.example.com. 3600 IN SRV 0 100 464 dc02.example.com.
_kpasswd._udp.example.com. 3600 IN SRV 0 100 464 dc01.example.com.
_kpasswd._udp.example.com. 3600 IN SRV 0 100 464 dc02.example.com.

The _kpasswd records unfortunately can’t be CNAMEs because they don’t exist in the _msdcs branch, so you need to keep them up to date manually when you add or remove domain controllers.
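
To check that the delegation and the compatibility records actually resolve through BIND, a few quick dig queries against the BIND server (10.0.0.53 is again a stand-in for its address) should return SRV records pointing at your domain controllers:

dig +short SRV _ldap._tcp.example.com @10.0.0.53
dig +short SRV _kerberos._tcp.example.com @10.0.0.53
dig +short SRV _ldap._tcp.dc._msdcs.example.com @10.0.0.53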

Wiring Fibre Channel for Arbitrated Loop

We have a small Fibre Channel SAN with three servers, a switch and a dual-controller RAID enclosure. With only a single switch, we obviously couldn’t connect all servers redundantly to the RAID system. That meant, for example, that firmware updates to it could only be applied after shutting down the servers. Buying a second switch was hard to justify for this simple setup, so we decided to hook the switch up to the first RAID controller and wire a loop off the second RAID controller. Each server would have one port connected to the switch and another one to the loop.

Back in the old days, Fibre Channel hubs existed for exactly this purpose, but nowadays you simply can’t get them anymore. However, in a redundant setup you don’t need a hub; you can simply run the cables appropriately, i.e. in a daisy-chain fashion. The only downside of not having a hub (the loop going down if one server goes down) is irrelevant here, because you still have a second path via the switch. For technical details on the loop topology, you can have a look at documentation available from EMC.

To wire your servers and RAID controllers in a fashion resembling a hub (only without the automatic bypass if a server goes down), you need simplex patch cables. These consist of a single fibre (instead of two, like you are used to) and have a connector that looks like half of a regular LC connector. You can get them in the same qualities as regular (duplex) patch cables, in single-mode or multi-mode as needed.

These cables are somewhat exotic, so your usual cable dealer might not have them, but they exist and are available from specialized fiber cable dealers. As you are wiring in a daisy-chain fashion, you need one simplex cable per node you want to connect.

Once you have the necessary cables, you wire everything up by connecting a cable from the output side of the FC port on the first device to the input side of the FC port on the second device. The second device’s output side connects to the third device’s input side, and so on, until you have closed the loop by connecting the last device’s output side back to the first device’s input side.

How can you tell the output side from the input side? Some transceivers have little arrows or have labels “TX” (output) and “RX” (input). If yours don’t, you can recognize the output side by the red laser light coming from it. To protect your eyes, never look directly into the laser light. Never connect two output sides together, otherwise you may damage the laser diodes in both of them. Therefore, be careful and always double-check.

That’s it, you’re done. All servers on the loop should immediately see the LUNs exported to them by the RAID controller.

In my setup, however, further troubleshooting was required: the loop simply did not come up. As it turned out, someone had set the ports on one FC HBA to point-to-point-only and 2 Gbit/s mode. After switching both settings back to their default automatic mode, the loop came up and data started flowing. My HBAs are QLogic QLE2462 (4 Gbit/s generation) and QLE2562 (8 Gbit/s generation), and they automatically negotiated the fastest common speed they could handle, which is 4 Gbit/s. Configuring these parameters usually requires either hitting a key at the HBA’s pre-boot screen to enter its configuration menu, or using vendor-specific software. I didn’t have access to the pre-boot configuration menu and was running VMware ESXi 6.0 on the servers. Luckily, QLogic’s QConvergeConsole is also available for VMware, though it is not as easy to install as on Windows or Linux: you first install a CIM provider via the command line on the ESXi host, reboot the host and then install a plugin into VMware vCenter Server.
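
For reference, installing the CIM provider on the ESXi host boils down to copying the offline bundle over and installing it with esxcli; the file and datastore names below are placeholders for whatever QLogic ships:

# from your workstation: copy the offline bundle onto a datastore of the host
scp qlogic-adapter-cim-provider.zip root@esxi-host:/vmfs/volumes/datastore1/
# then on the ESXi host (with SSH enabled): install the bundle and reboot
esxcli software vib install -d /vmfs/volumes/datastore1/qlogic-adapter-cim-provider.zip
reboot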

PHP 5: ldap_search never returns when searching Active Directory

I recently moved a PHP web application from a server running PHP 5.3 on Mac OS X 10.6 to a newer one with PHP 5.4 on Mac OS X 10.9. This caused the following code sample, run against an Active Directory server, to hang at the ldap_search() call:

$conn = ldap_connect('ldaps://' . $LDAPSERVER);
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
$bind = @ldap_bind($conn, $LDAPUSER, $LDAPPW);
$result = ldap_search($conn, $LDAPSEARCHBASE, '(&(samaccountname=' . $searchuser . '))');
$info = ldap_get_entries($conn, $result);
ldap_close($conn);

Wiresharking the connection between web server and LDAP server (after replacing ldaps:// with ldap://) showed:

bindRequest(1) "$LDAPUSER" simple
bindResponse(1) success
searchRequest(2) "$LDAPSEARCHBASE" wholeSubtree
searchResEntry(2) "CN=$searchuser,...,$LDAPSEARCHBASE" | searchResRef(2) | searchResDone(2) success [1 result]
bindRequest(4) "" simple
bindResponse(4) success
searchRequest(3) "DC=DomainDnsZones,$LDAPSEARCHBASE" wholeSubtree
searchResDone(3) operationsError (000004DC: LdapErr: DSID-0C0906E8, comment: In order to perform this operation a successful bind must be complete on the connection., data 0,

So it’s binding, receiving a success response, searching, and then receiving a result and a referral to DC=DomainDnsZones,$LDAPSEARCHBASE. Next, it opens a new TCP connection and follows the referral, but does an anonymous bind.

The solution is simple: just add

ldap_set_option($conn, LDAP_OPT_REFERRALS, FALSE);

after line 2. If for some reason you actually need to follow the referral, have a look at ldap_set_rebind_proc, which lets you specify a callback that performs the authentication upon rebind.
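
For completeness, here is the snippet from above with the fix applied (same placeholder variables as before):

$conn = ldap_connect('ldaps://' . $LDAPSERVER);
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($conn, LDAP_OPT_REFERRALS, FALSE); // don't chase the DomainDnsZones referral
$bind = @ldap_bind($conn, $LDAPUSER, $LDAPPW);
$result = ldap_search($conn, $LDAPSEARCHBASE, '(&(samaccountname=' . $searchuser . '))');
$info = ldap_get_entries($conn, $result);
ldap_close($conn);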

Update August 2015: Same goes when using Net_LDAP3, which is used e.g. by Roundcube’s LDAP integration. Here you need to add the following:

$config['ldap_public']['public'] = array(
[...]
 'referrals' => false,
);

Integrating BIND with AD-integrated Microsoft DNS

I recently set up BIND9 to run secondary zones for an Active Directory-integrated DNS server (the reason being that I hated effectively losing internet access whenever I rebooted my W2k8R2 server). While that was really easy (add the Linux server to the nameservers tab in DNS Admin, allow zone transfers and notifications, add slave zones to named.conf), I thought that it shouldn’t be too difficult to also automatically replicate AD-integrated Conditional Forwarders.

While they are easily found in the DC=DomainDnsZones and DC=ForestDnsZones branches inside the AD, it turns out that the server information is stored in dnsProperty attributes containing binary data. However, Microsoft actually provides a specification for their DNS data structures, which is certainly very commendable. But as it turns out, it appears to have been written by someone who had no clue about endianness or how many bits are in a byte (*).
The essence is: everything is big endian, except for IP addresses (the spec claims they are in network byte order, but in reality they are little endian), and every occurrence of “1 byte” in section 2.3.1.1 dnsProperty should be replaced with “4 bytes”.
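
If you want to poke at the raw data yourself, the zone objects and their dnsProperty blobs can be dumped with ldapsearch. A sketch, assuming Kerberos/GSSAPI authentication and an example.com forest (adjust the server and base DNs to your environment):

# conditional forwarders and zones replicated to all DNS servers in the domain
ldapsearch -Y GSSAPI -H ldap://dc01.example.com \
  -b "CN=MicrosoftDNS,DC=DomainDnsZones,DC=example,DC=com" \
  "(objectClass=dnsZone)" dc dnsProperty
# same for zones replicated forest-wide
ldapsearch -Y GSSAPI -H ldap://dc01.example.com \
  -b "CN=MicrosoftDNS,DC=ForestDnsZones,DC=example,DC=com" \
  "(objectClass=dnsZone)" dc dnsProperty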

So after taking about two hours for something that I expected would only take a couple minutes to hack together, I ended up with 400 lines of code that generate a file you can include in your named.conf that will look something like this:
zone "google.com" {
type forward;
forward first;
forwarders { 74.82.42.42; 2001:470:20:0:0:0:0:2; };
};

zone "youtube.com" {
type forward;
forward first;
forwarders { 74.82.42.42; 2001:470:20:0:0:0:0:2; };
};
(For those curious, this sample configuration would point google.com and youtube.com at Hurricane Electric’s DNS server so that you get AAAA records, a.k.a. Google over IPv6)
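
The generated file then only needs to be pulled into the main configuration; assuming the script writes it to /etc/bind/ad-forwarders.conf (pick whatever path suits you), that is a single line in named.conf:

include "/etc/bind/ad-forwarders.conf";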

After this worked, I decided to also pull my slave zone definitions through the same mechanism. It only took me a minute to do that.
zone "example.com" {
type slave;
file "slave_example.com";
masters { 10.0.0.1; };
allow-notify { 10.0.0.1; };
};

So here we are: BIND9 as a full-blown sync partner for AD-integrated DNS zones. To add a zone or conditional forwarder to BIND, add it to AD, set it to replicate to all DNS/domain controllers in the domain or forest, add the BIND server to the nameservers tab, allow zone transfers and notifications, and wait for the cron job to kick in.
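
The cron job itself can be as simple as the following /etc/cron.d entry; the script path and the 15-minute interval are just placeholders for wherever you put the shell script and how fresh you want the configuration to be:

# /etc/cron.d/ad-dns-sync (hypothetical): regenerate the BIND config from AD every 15 minutes
*/15 * * * * root /usr/local/sbin/ad-dns-sync.sh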

I ended up having to write this script in PHP because Python’s LDAP module appears to have a broken SASL implementation, and you need SASL to use Kerberos for an LDAP connection.

The PHP script takes two parameters (1. the AD server’s address or the AD DNS domain name; 2. the AD base DN) and requires a valid Kerberos ticket.
The shell script (which you will most likely want to run from a cron job, and which shares much of its code with my script from ISC DHCPd: Dynamic DNS updates against secure Microsoft DNS) needs to be configured with your realm, domain, base DN, user name (principal) and the path to a keytab for that user (instructions on how to generate the keytab using ktutil are in the script’s comments).
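
In case you don’t want to fish the keytab instructions out of the script’s comments, creating one with MIT Kerberos’ ktutil looks roughly like this (the principal name, enctype and path are assumptions, adapt them to your environment):

$ ktutil
ktutil:  addent -password -p dns-sync@EXAMPLE.COM -k 1 -e aes256-cts-hmac-sha1-96
ktutil:  wkt /etc/bind/dns-sync.keytab
ktutil:  quit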

(*) After doing all this, I figured that people from projects like Samba that write open source software to re-implement or interface with Microsoft products are doing an absolutely amazing job. They most likely aren’t getting any better specs than the one I found on MS DNS (if they get specs at all), and yet still somehow create almost perfect software that is a lot more complex than the simple stuff I did here.

UPDATE 2011-10-30: Apparently, AD refuses all requests from Linux clients that come in via IPv6. To force IPv4, line 7 of the PHP script needs to be changed to $conn = ldap_connect(gethostbyname($adserver), 389);, which is also fixed in the downloadable script.

Xen 4.0 and Citrix WHQL PV drivers for Windows

Xen 4.0 is supposed to be able to use Citrix’s WHQL certified Windows paravirtualization drivers. Their advantage over the GPLPV drivers is that they are code-signed, meaning they run on 64-bit Windows without disabling some of Windows’ security features.

UPDATE 2011-10-17: Signed GPLPV drivers are now available. I have not yet tested them, but I assume the fix below is no longer necessary.

While the Citrix drivers included in XenServer 5.5 work (after making a single registry tweak), the more recent ones included in e.g. Xen Cloud Platform 1.0 do not work right away:

If you install the XCP drivers, make that registry tweak and reboot the DomU, you’ll notice messages like XENUTIL: WARNING: CloseFrontend: timed out in XenbusWaitForBackendStateChange: /local/domain/0/backend/console/[id]/0 in state INITIALISING; retry. in your /var/log/xen/qemu-dm-*.log, and Windows just gets stuck during boot, spinning forever. To get it back to work, you’ll need to run
xenstore-rm /local/domain/0/backend/console/[id]
xenstore-rm /local/domain/0/backend/vfb/[id]

after starting the VM (thanks to Keith Coleman’s mailing list post!).

To automatically run these commands upon DomU start, create a script named /usr/lib/xen/bin/qemu-dm-citrixpv with the following contents:
#!/bin/sh

# The domain ID is passed to the device model as the second argument ($2).
# Remove any stale console and vfb backend entries for this domain right away ...
xenstore-rm /local/domain/0/backend/console/$2
xenstore-rm /local/domain/0/backend/vfb/$2

# ... and once more after 10 seconds, in case they get re-created while qemu-dm starts up.
sh -c "sleep 10; xenstore-rm /local/domain/0/backend/console/$2; xenstore-rm /local/domain/0/backend/vfb/$2" &

# Hand over to the real device model with the original arguments.
exec /usr/lib/xen/bin/qemu-dm "$@"
and chmod +x it.

Then, edit your DomU config file and point the device_model line at your new script:
device_model = '/usr/lib/xen/bin/qemu-dm-citrixpv'

Now your Windows Server 2008 R2 x64 HVM-DomU is all set!