Debian winbind name resolution giving multiple addresses – how to select one?

Problem :

I’m trying to set up winbind on a Debian installation as a fallback for when our DNS server fails. I have to use winbind (instead of an alternative like mDNS/Avahi) because I need a method that works with our existing server setup.

The Debian installation consists of:

  • Debian squeeze guest on VirtualBox (4.1.18)
  • Windows XP host OS with NAT networking
  • Debian guest IP address
  • Used for Subversion/Bugzilla with LDAP authentication to domain controller

The Windows host machine has IP address

Our Windows domain controller is Server 2011 Essentials and I have no access to fix whatever is up with it, so I can only look for a workaround (using winbind) until it gets sorted. The IP address of this one is

I installed winbind, libnss_winbind and libpam_winbind on the Debian installation. I changed my hosts line in /etc/nsswitch.conf to hosts: files dns wins. If I use nmblookup servername then I get the following output:

querying servername on
servername<00>
servername<00>

It seems that there are two NICs in the server: one has a private address and the other has an address on our internal network (the 192… address). I’ve verified that this is what the output represents by looking up another computer where I can check the addresses of all NICs.

My problem is that if I use something like ping, it uses the first address reported (the private 169… address), which is unreachable. The same applies to any other networking code, such as when Apache performs LDAP authentication for Subversion or Bugzilla.

Is there any way to configure the values that winbind returns, or make it perform a status check to see whether the IP address is reachable before returning it? I haven’t found anything in the winbind documentation or online.

route -n reports the following:

Destination  Gateway  Genmask  Flags  Metric  Ref  Use  Iface
…            …        …        U      0       0    0    eth0
…            …        …        UG     0       0    0    eth0

The content of my /etc/network/interfaces is as follows; as far as I can tell it contains no subnet or routing setup:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp

Here’s the routing table from the host:

Interface List
0x1 ........................... MS TCP Loopback interface
0x2 ...00 13 72 e0 93 4d ...... Broadcom NetXtreme 57xx Gigabit Controller - Packet Scheduler Miniport
0x3 ...08 00 27 00 90 b3 ...... VirtualBox Host-Only Ethernet Adapter - Packet Scheduler Miniport

Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
(the address columns were lost when this post was captured; only the Metric column, values 20 and 1, survived)
Default Gateway:
Persistent Routes:

Here are the results of manually pinging the IP addresses along the network. My traceroute output either shows only the final destination (if I use the -I option) or all asterisks, so the destination should be reachable with the above routing table on the host. I assume the VirtualBox guest address ranges do not show up because they are managed by the VirtualBox application itself and are not exposed to the host. I did work out which of the addresses below is the VirtualBox gateway within the NAT network and which is the host’s.

PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=63 time=0.498 ms
64 bytes from icmp_req=2 ttl=63 time=0.490 ms
64 bytes from icmp_req=3 ttl=63 time=0.516 ms
64 bytes from icmp_req=4 ttl=63 time=0.515 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.490/0.504/0.516/0.029 ms
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=128 time=0.755 ms
64 bytes from icmp_req=2 ttl=128 time=1.04 ms
64 bytes from icmp_req=3 ttl=128 time=0.545 ms
64 bytes from icmp_req=4 ttl=128 time=0.606 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.545/0.738/1.047/0.194 ms
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=128 time=0.610 ms
64 bytes from icmp_req=2 ttl=128 time=0.639 ms
64 bytes from icmp_req=3 ttl=128 time=0.570 ms
64 bytes from icmp_req=4 ttl=128 time=0.659 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 0.570/0.619/0.659/0.041 ms
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=128 time=1.15 ms
64 bytes from icmp_req=2 ttl=128 time=0.934 ms
64 bytes from icmp_req=3 ttl=128 time=0.941 ms
64 bytes from icmp_req=4 ttl=128 time=0.856 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 0.856/0.971/1.154/0.112 ms

And VirtualBox is definitely using NAT (since the NAT gateway is reachable in the above log) and is not using a host-only network. The Host-Only Adapter in my host routing table was a red herring; I have disabled it and can still ping as above. Also see the screengrab below of the VirtualBox guest network settings (unless there is some bug that prevents VirtualBox from applying my configuration correctly). The relevant section of the config:

    <Adapter slot="0" enabled="true" MACAddress="08002780662C" cable="true" speed="0" type="82540EM">
        <DNS pass-domain="true" use-proxy="false" use-host-resolver="false"/>
        <Alias logging="false" proxy-only="false" use-same-ports="false"/>
        <Forwarding name="http" proto="1" hostport="80" guestport="80"/>
        <Forwarding name="https" proto="1" hostport="443" guestport="443"/>

Solution :

The 169… network on Debian is the zero-configuration network. Wikipedia describes it as:

Zero-configuration networking allows novice users to interconnect network-enabled devices without any (zero) configuration; it is used when no DHCP or DNS servers are available on the network.

Zeroconf provides:

Auto assignment of network address (link-local)
Auto resolution of hostnames via multicast DNS
Automatic location of network services (e.g. printing) via DNS-based service discovery.
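As a quick illustration, the link-local range Zeroconf assigns from is 169.254.0.0/16, so recognizing such an address in a script is a one-line pattern match. This is only a sketch; the helper name is invented:

```shell
# Sketch: test whether an IPv4 address is in the 169.254.0.0/16
# link-local (Zeroconf/APIPA) range. The function name is made up.
is_linklocal() {
    case "$1" in
        169.254.*) return 0 ;;   # link-local
        *)         return 1 ;;   # anything else
    esac
}

is_linklocal 169.254.10.5 && echo "link-local"
is_linklocal 192.168.1.10 || echo "routable"
```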

It can safely be disabled on a VM. To do so, modify the file /etc/default/avahi-daemon to contain this line:


When you do this (and restart the avahi-daemon service), the 169… entry will disappear.

EDIT: At any rate, if you want to test connectivity, this small script will do:

ping -c1 TheIpWhoseConnectionYouWantToTest
if [ $? -eq 0 ]; then
    # actions to perform if there is connectivity
fi

If you also wish to determine the gateway automatically, you can do it as follows:

IP=$(route -n | grep UG | awk '{print $2}')
echo $IP

This will automatically print your gateway IP.
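To make the parsing step above concrete, here is the same grep/awk pipeline run against a hypothetical route -n capture (all addresses below are invented for illustration; the UG flag marks the default route, and field 2 is the gateway):

```shell
# Hypothetical `route -n` output; 10.0.2.2 is an invented example gateway.
sample='Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0'

# Lines flagged UG are gateway (default) routes; column 2 holds the gateway IP.
gw=$(printf '%s\n' "$sample" | grep UG | awk '{print $2}')
echo "$gw"    # prints 10.0.2.2
```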

First of all I would like to state that you are searching for the problem in the wrong place.

  • All of the protocols including multicast DNS and winbind should be able to safely return all addresses that are available.
  • Then your application (including ping) should use the operating system’s API for getting the list of address information records.
  • Finally, your application should try the information records one by one until it successfully connects (ping may not be the right test tool here).

The trick is that even though the winbind (or any other plugin) returns a list containing multiple addresses, all of them should be valid addresses you can contact. Otherwise there’s a problem either in the server or in the network configuration. But even then when the application tries to connect, it should be refused with destination host unreachable and should immediately try the next item in the list.
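The try-each-record behaviour described above can be sketched at the shell level: getent walks the NSS hosts chain (files dns wins) and a loop keeps the first address that answers a ping. The function name is invented, and a real application should do this via getaddrinfo()/connect() rather than ping:

```shell
# Sketch: return the first address for a name that responds to ping.
# `getent ahosts` honours the hosts line in /etc/nsswitch.conf, so the
# winbind (wins) results are included. Function name is made up.
first_reachable() {
    for addr in $(getent ahosts "$1" | awk '{print $1}' | sort -u); do
        if ping -c1 -W1 "$addr" >/dev/null 2>&1; then
            echo "$addr"
            return 0
        fi
    done
    return 1
}

first_reachable localhost || echo "no reachable address"
```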

Even if the above doesn’t apply and you cannot fix the server, the network configuration, or the local application, you’re not lost. The operating system’s name resolution API (the getaddrinfo() function) reorders the list according to certain criteria, and you can influence those criteria by editing /etc/gai.conf, which was introduced mostly to configure the balance between IPv4 addresses and the various types of IPv6 addresses. That way, while your winbind nsswitch plugin returns a number of addresses, you have the final word on which of them is preferred.
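A sketch of what such a /etc/gai.conf tweak could look like. Note that in glibc, uncommenting any label line replaces the entire built-in label table, so the defaults must be restated; only the final 169.254 line is the addition here:

```
# /etc/gai.conf — sketch. Restate glibc's default label table first,
# because one uncommented "label" line discards all built-in labels.
label ::1/128        0
label ::/0           1
label 2002::/16      2
label ::/96          3
label ::ffff:0:0/96  4
# Addition: give IPv4 link-local (169.254.0.0/16, as IPv4-mapped IPv6)
# its own label so it never shares a label with a routable IPv4 source
# address and therefore sorts after routable addresses.
label ::ffff:169.254.0.0/112  7
```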

As stated in other answers, the 169.254/16 address space is reserved for IPv4 link-local addresses. Operating systems generally don’t have very good support for IPv4 link-local addresses (as opposed to IPv6 link-local addresses, which have far better support). The usual way is to avoid IPv4 link-local addresses altogether for hosts that have a proper IPv4 address.

If none of the above is possible, it would be a good idea to deprioritize IPv4 addresses by /etc/gai.conf in your installations and probably even by default in Linux distributions.

Also, as your local system probably doesn’t have an address from the 169.254/16 subnet, your resolver library should be able to remove such an address from the result because it is unreachable. It may be worth starting a discussion with distribution maintainers about this.

Another solution is to keep your winbind machines in a single ethernet segment where IPv4 link-local addresses work as expected. You would have to use network bridging instead of NAT for your virtualization.
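Switching the guest from NAT to bridged networking can be done with VBoxManage; the VM name and host adapter name below are placeholders for your own:

```shell
# Switch the VM's first NIC from NAT to bridged mode.
# "Debian" and "eth0" are placeholders for your VM name and host adapter;
# on a Windows host the adapter name will be the full interface description.
VBoxManage modifyvm "Debian" --nic1 bridged --bridgeadapter1 eth0
```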

Out of curiosity, what does your routing table show as the default route? Running route -n will show you the default route on your system.

I’m curious whether, for some weird reason, that subnet is set as the default gateway somewhere. In your network configuration (e.g. /etc/network/interfaces), do you have a subnet, network, and possibly routes set up for your “default” interface?
