Investigating a tricky network problem with Python, Scapy and other network tools

We’ve had a fairly long-term issue at work with connectivity to one of our application servers. Every now and then you can’t log in or connect, and it has seemed fairly random. This finally annoyed me and a customer enough that I had to look into it.

The connection is made to the server on port 1494 – Citrix ICA. Initially we suspected that the .ica file downloaded and opened by the Citrix Receiver software was incorrect or corrupt, but inspection and testing of this showed that it was OK. It really did look like the connection was just being randomly rejected.

It seemed that a single customer and I were having far more frequent issues than other users. Of course, it could just be that my tolerance for whinging is lower than my colleagues’.

Note that nearly all of the below was done on OS X – the syntax of some of these commands differs under Linux and Windows. I have changed the host IP for security reasons.

telnet

Most applications that listen on a TCP port will respond to telnet, even if they don’t send anything back. Telnet is almost raw TCP – it has some control and escape sequences layered on top, but it is very simple at a protocol level.

The ICA service responds when you connect by sending back “ICA” every 5 seconds.
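A successful session looks something like this, with a placeholder address standing in for the real server:

```
$ telnet 192.0.2.10 1494
Trying 192.0.2.10...
Connected to 192.0.2.10.
Escape character is '^]'.
ICA
ICA
ICA
```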

But every now and then I was getting nothing back.
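With the same placeholder address, the attempt just hangs at this point and eventually times out:

```
$ telnet 192.0.2.10 1494
Trying 192.0.2.10...
```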

Oddly, the two didn’t line up exactly: when the Citrix Receiver failed to launch I wasn’t always having problems with telnet, and vice versa. Still, this is good – we’ve replicated the issue with a very simple utility using raw TCP, rather than having to look into the intricate details of Citrix and whatever protocol it uses.

tcpdump

So let’s fire up tcpdump to see what happens when the connection is working. tcpdump is a command line packet analyser. It’s not as versatile or as easy to use as Wireshark, but it is quick and easy. You can use tcpdump to generate a .pcap file which can then be opened in Wireshark at a later date – this is good for when you are working on systems with no UI.

I filtered the tcpdump output to only show traffic where one of the two IPs was the server.
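Something along these lines does the job (placeholder address again; adding -w capture.pcap would save the capture for Wireshark later):

```
$ sudo tcpdump -n host 192.0.2.10
```

A successful attempt looks roughly like this – the client address, ports and sequence numbers here are purely illustrative:

```
IP 10.0.1.5.51645 > 192.0.2.10.1494: Flags [S], seq 1000, win 65535, length 0
IP 192.0.2.10.1494 > 10.0.1.5.51645: Flags [S.], seq 2000, ack 1001, win 8192, length 0
IP 10.0.1.5.51645 > 192.0.2.10.1494: Flags [.], ack 2001, win 65535, length 0
```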

This all looks fairly normal – my laptop sends a SYN to the server, which responds with a SYN-ACK, and I then respond with an ACK. You can see this in the “Flags” part of the capture: S, S., then . (a lone . means ACK in tcpdump). Everything then progresses normally until I close the connection.

However, when the connection fails, things look very different.
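All my side does is retry the same SYN (illustrative values again):

```
IP 10.0.1.5.51646 > 192.0.2.10.1494: Flags [S], seq 3000, win 65535, length 0
IP 10.0.1.5.51646 > 192.0.2.10.1494: Flags [S], seq 3000, win 65535, length 0
IP 10.0.1.5.51646 > 192.0.2.10.1494: Flags [S], seq 3000, win 65535, length 0
```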

I get nothing back at all – it’s just telnet trying the connection again and again by sending SYNs. I was expecting part of the connection to succeed, but this looked like the host just wasn’t responding at all. This might indicate a firewall or network issue rather than a problem with Citrix.

I used Wireshark on the server side to confirm that no traffic was getting through. I could see the successful connections progressing fine, but I could see nothing of the failing connections. I wanted to check both sides because there were a number of potential scenarios where a client could send a SYN and not get a SYN-ACK back:

  1. Client sends SYN, server never sees SYN.
  2. Client sends SYN, server sees SYN, server sends SYN-ACK back which is lost.
  3. Client sends SYN, server sees SYN, server chooses not to respond.

It seemed that scenario 1 was happening here.

So what was causing this? Sometimes it worked, sometimes it didn’t. Did it depend on what time I did it? Was there another variable involved?

mtr

Let’s check for outright packet loss. ping and traceroute are useful for diagnosing packet loss on a link, but it can be hard work figuring out which hop is causing problems. Enter mtr, or “My Traceroute”. This provides a tabular, continuously updating output which combines ping and traceroute with a lot of useful information.
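A basic run against the server looks like this (placeholder address; -n skips DNS lookups so the hops show as IPs):

```
$ sudo mtr -n 192.0.2.10
```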

I let this run for a while and observed virtually no packet loss. It’s important to note that it is using ICMP pings – not TCP as Citrix uses. ICMP messages can be dealt with differently to TCP. mtr does support TCP pings but I can’t get it to work under OS X.

Python and telnetlib

So I wrote a small Python program using the telnetlib module to periodically connect to the port and indicate when there were problems. The output was a simple graphical representation so that I could spot any timing patterns.
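Something along these lines, with a placeholder host, and WAITTIME setting the delay between attempts:

```python
#!/usr/bin/env python
# Repeatedly open a TCP connection to the ICA port, printing . for a
# successful connection and * for a failure, with a failure count after
# every 16 attempts.
import socket
import sys
import telnetlib
import time

HOST = "192.0.2.10"   # placeholder for the real server IP
PORT = 1494
WAITTIME = 1          # seconds between connection attempts

failures = 0
attempts = 0
while True:
    try:
        tn = telnetlib.Telnet(HOST, PORT, timeout=5)
        tn.close()
        sys.stdout.write(".")
    except (socket.timeout, socket.error):
        failures += 1
        sys.stdout.write("*")
    sys.stdout.flush()
    attempts += 1
    if attempts % 16 == 0:
        sys.stdout.write("  %d/16\n" % failures)
        failures = 0
    time.sleep(WAITTIME)
```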

This prints a . for a successful connection and a * for an unsuccessful one. After every 16 connection attempts, the number of failures out of the total is printed.

What can we note?

  • There is some vague pattern there, often repeating every 8 packets.
  • The proportion of failed connections is nearly always 25%.
  • Varying the WAITTIME (the time between connections) had some interesting effects. With short times, the patterns were regular. With longer times they seemed less regular.
  • Using the laptop for other things would disrupt the pattern but packet loss stayed at 25%. Even with very little other traffic the loss was 25%.

What varies over time, following a pattern, but would show behaviour like this?

The source port.

Every TCP connection has not only a destination port but also a source port – typically in the range 1025 to 65535. The source port is incremented for each connection made. So the first time I telnet it might be 43523, the next time 43524, then 43525 and so on. Other applications share the same series of source ports and increment it as they make connections.

When I run the test program with a short wait time, there is very little chance for other applications to increment the source port. When I run it with a longer wait time (30s or so), many other applications will increment the source port, causing the pattern to be inconsistent.

It really looked like certain source ports were failing to get through to the server.

netcat

I had to test this theory. You can’t control the source port with telnet, but you can with the excellent netcat (or nc, as the command is named), using the “-p” option.
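For example, with the placeholder server address again:

```
$ nc -p 1025 192.0.2.10 1494    # responds with "ICA"
$ nc -p 1026 192.0.2.10 1494    # hangs, nothing comes back
$ nc -p 1027 192.0.2.10 1494    # responds with "ICA"
```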

As you can see – connections from 1025 and 1027 always succeed and 1026 always fail. I tested many other ports as well. We have our culprit!

Python and Scapy

Now, can we spot a pattern in the ports that are failing and those that are succeeding? Maybe. I needed something to craft some TCP/IP packets to test this out. I could use netcat and a bash script, but I’ve recently learnt about Scapy, a packet manipulation module for Python. It’s incredibly flexible but also very quick and easy. I learnt about it after reading the book Violent Python, which I would recommend if you want to quickly get using Python for some real-world security testing.

The script needs to connect to the same destination port from a range of source ports and record the results. With Scapy, half an hour later, I had a working script (note, I did have some issues with Scapy and OS X that I will need to go over in another post).
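A minimal sketch of the approach, with a placeholder target address – it sends a bare SYN from each source port in a range and reports whether a SYN-ACK comes back:

```python
#!/usr/bin/env python
# Send a SYN to the ICA port from each source port in a range and report
# whether a SYN-ACK comes back. Needs to run as root to send raw packets.
from scapy.all import IP, TCP, sr1, conf

conf.verb = 0                 # keep Scapy quiet

TARGET = "192.0.2.10"         # placeholder for the real server IP
DPORT = 1494

for sport in range(1025, 1125):
    syn = IP(dst=TARGET) / TCP(sport=sport, dport=DPORT, flags="S")
    reply = sr1(syn, timeout=2)
    if reply is not None and reply.haslayer(TCP) and reply[TCP].flags == 0x12:
        result = "ok"
    else:
        result = "FAILED"
    print("%5d %s %s" % (sport, bin(sport), result))
```

Printing the source port in binary makes it easier to spot the bit patterns described below.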

This produced really helpful output, with the failed source ports standing out clearly against the successful ones.

At one point in the port range it appears that source ports ending in 001 or 110 (in binary) are failing.

Move further down the port range and ports ending in 000 and 111 are failing.

In fact, at any given point it seems that the failing source ports end in either 000/111, 001/110, 010/101 or 011/100 – complements of one another. Higher-order bits seem to determine which pair is going to fail.

Curious!

What makes this even stranger is that changing the destination port (say, from 1494 to 80) gives you a different series of working/non-working source ports. 1025 works for 1494, but not 80. 1026 works for both. 1027 works only for 80.

All of my tests above have been done on my laptop over an internet connection. I wanted to test a local machine as well to narrow down the area the problem could be in – is it the perimeter firewall or the switch the server is connected to?

hping3

The local test machine is a Linux box which is missing Python but has hping3 on it. This is another useful tool that allows packets to be created with a great degree of flexibility. In many respects it feels like a combination of netcat and ping.
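The test I ran was along these lines (placeholder address, and the starting source port is just an example):

```
$ sudo hping3 192.0.2.10 -s 1025 -S -i u100000 -c 20 -p 1494
```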

What does all this mean?

  • First parameter is the IP to connect to.
  • -s is the start of the source port range – hping3 will increment this by 1 each time unless -k is passed
  • -S means set the SYN flag (similar to the Scapy test above)
  • -i u100000 means wait 100000us between each ping
  • -c 20 means send 20 pings
  • -p 1494 is the offending port to connect to

And what do we get back?

The same sort of packet loss we were seeing before. Oddly, the source ports that work differ from this Linux box to my laptop.

Here’s where it gets even stranger. I then decided to try padding the SYN out with some data (which I think is valid for TCP, though I’ve never seen a real app do it and mtr’s man page says it isn’t). You use -d 1024 to append 1024 bytes of data. I first tried 1024 bytes and had 20% packet loss as before. Then I tried 2048 bytes.
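That is, the same command as before with -d 2048 appended:

```
$ sudo hping3 192.0.2.10 -s 1025 -S -i u100000 -c 20 -p 1494 -d 2048
```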

Wait, what? All the packets are now getting through?

Through a process of trial and error I found that anything with more than 1460 bytes of data was getting through fine. 1460 bytes of data + 20 bytes of TCP header + 20 bytes of IP header = 1500 bytes – this is the Ethernet MTU (Maximum Transmission Unit). Anything this size or smaller can be sent in a single Ethernet frame; anything bigger needs to be fragmented across multiple frames (although some Ethernet networks allow jumbo frames which are much bigger – this one doesn’t).

I then ran the hping3 test from my laptop and found that altering the data size had no impact on packet loss. I strongly suspect that this is because a router or firewall along the way is reassembling the fragmented packets to inspect them, and then re-fragmenting them differently before passing them on.

At this point I installed the Broadcom Advanced Control Suite (BACS) on the server to see if I could see any further diagnostics or statistics to help. One thing quickly stood out – a counter labelled “Out of recv. buffer” was counting up almost in proportion to the number of SYN packets getting lost:

[Screenshot: BACS statistics showing the “Out of recv. buffer” counter]

This doesn’t sound like a desirable thing. It turns out the driver is quite out of date – maybe I should have started here!

Conclusion

I’m still not sure what is going on here. The packets being rejected do seem to follow some kind of pattern, but it’s certainly not regular enough to blame it on the intentional behaviour of a firewall.

At this point we are going to try upgrading the drivers for the network card on the server and see if this fixes the issue.

The point of all of the above is to show how quick and easy it can be to use readily available tools to investigate network issues.

5 thoughts on “Investigating a tricky network problem with Python, Scapy and other network tools”


    Kemp

    December 5, 2013 at 10:58am

    Interesting post. One quick question though: with netcat port 1026 always failed, while with your Scapy based script it was 1025 that failed. Is that a clue, anonymisation error, or something else?


      cybergibbons

      December 5, 2013 at 12:03pm

      So netcat sets a number of options, so the header is slightly longer. It really looks like bits in the middle of the TCP header are causing the packet to not get through.


    cybergibbons

    December 5, 2013 at 11:02am

    A very interesting point, and a great reason why writing these things down and getting someone else to look at them is often helpful.

    It sounds like a clue – just tested, and yes, netcat always fails one port lower than the Scapy test. I might have to look in Wireshark and see what is going on.


    Mike Dawson

    December 5, 2013 at 6:19pm

    so try swapping out switches, or changing MTU size, or enabling jumbo frames, or updating drivers (like you said you were going to do)

    Please let us know the result!!


      cybergibbons

      December 5, 2013 at 6:28pm

      Driver change is first step. Needs someone on-site and out-of-hours so going to do it tomorrow.
