Understanding Binary and Data Representation with CyberChef

A significant part of reverse engineering and attacking devices relies on viewing and recognising data in various forms and working out how to decode it.

We typically use Linux tools and scripts to do this, but you can make the first few steps using a really neat online tool called CyberChef.

What is binary?

All data is stored as a series of 1s and 0s. A single 1 or 0 is called a bit. We call this binary because there are two values.

The next largest common unit is a byte, which is 8 bits.

Beyond this, SI prefixes are used. 1 kilobyte (kB) is technically 1000 bytes and 1 kibibyte (KiB) is 1024 bytes. However, kB is frequently used for both 1000 bytes and 1024 bytes, even in technical contexts. During most reverse engineering, kB means 1024 bytes.

  • kB (kilobyte) – 1024 bytes
  • MB (megabyte) – 1048576 bytes, or 1024 kB
  • GB (gigabyte) – 1073741824 bytes, or 1024 MB

To determine how many possible values can be stored in data of a given length, use the following calculation:

Values = 2^bits

^ means “to the power of”

For example, a single byte (8 bits) can store 2^8 or 256 values. 2 bytes (16 bits) can store 2^16 or 65536 values. Increasing the bit length by 1 bit will double the number of possible values.
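This calculation can be checked in a couple of lines of Python:

```python
# Number of distinct values that fit in a given number of bits.
def possible_values(bits: int) -> int:
    return 2 ** bits

print(possible_values(8))   # 256 - one byte
print(possible_values(16))  # 65536 - two bytes
print(possible_values(64))  # 18446744073709551616 - a huge number
```

Note how each extra bit exactly doubles the result.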

You can see that by the time you have reached 64 bits, there are a huge number of possible values.


The number of potential values can be important when calculating the search space for performing brute-force attacks.

Although there are 256 values in a byte, the values normally start at 0. Therefore, the range is 0-255, covering all 256 values.

Binary data can encode information in many different forms. The following are all representations of the same data:

  • 01000001 (binary)
  • 65 (decimal)
  • 41 (hexadecimal)
  • A (ASCII or text)
  • QQ== (base 64, a means of encoding binary as text)

What is hexadecimal? Well, instead of each digit representing 10 values (0-9, as in decimal), each digit represents 16 values (0-15). Clearly we can’t write 10-15 as single digits, so we use the letters A-F for values above 9.


As a single hex digit represents 16 values, this is only 4 bits (2^4 = 16). To represent a byte, we need to use 2 hex digits such as D4 or 8E.

We will frequently use the prefix 0x to represent hex, e.g. 0x41. Context is everything though – never assume how data is encoded! It could be text, part of a floating point number, or code.

You can use the built-in calculator in Windows and OS X to convert between these bases if you switch to programmer view.

Onto CyberChef

A useful tool for many of these data manipulation tasks is CyberChef. This is an online tool that runs entirely in the browser. None of the data entered leaves your machine, and it can be saved and run locally.

Yes, it’s GCHQ. No, they aren’t stealing your secrets. At least not using this tool.

Multiple operations can be chained together to form a pipeline. This includes simple conversions, but also complex things such as encryption and decryption.

Let’s start by putting some text into the “Input” section. This will be copied verbatim to the “Output” section as no “Recipe” has been created.

On the list of “Operations” on the left hand side, drag “To Hex” into the “Recipe”. The output will now show a hexadecimal representation of the text.

You can search the operations using the box on the top left rather than hunt through all the subsections.

The text is encoded using a method called ASCII. Each character is represented by a single byte. In reality, only 7 of the 8 bits in the byte are used, giving 128 (2^7) possible characters.

Converting to and from ASCII is a very common task. Tables showing all the values are available online.

Ranges and values worth becoming familiar with are:

  • 0x20 – space
  • 0x30-0x39 – 0-9
  • 0x41-0x5A – A-Z
  • 0x61-0x7A – a-z

It’s very common for the value of 0x41 – capital A – to be used when performing tests for buffer overflows. You’ll start to recognise long strings of 41414141 when looking at memory!
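You can sanity-check those ranges with Python's ord() function (a throwaway sketch):

```python
# The ASCII ranges listed above, checked with ord().
assert ord(" ") == 0x20
assert (ord("0"), ord("9")) == (0x30, 0x39)
assert (ord("A"), ord("Z")) == (0x41, 0x5A)
assert (ord("a"), ord("z")) == (0x61, 0x7A)

# The classic buffer-overflow filler: capital As become 41414141 in hex.
print(b"AAAA".hex())  # 41414141
```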

ASCII is not the only way of encoding text. Unicode is a common format that allows many more possible characters, but there are dozens of different encodings. You can use the “Text encoding” operation to see these. UTF-16LE encodes each letter as 2 bytes (the 16 means 16 bits). Now every second character looks like a “.”.

If we now add the “To hex” operation, we can see that every second byte is 0x00 – a null. Those “.” just mean “I’m not sure how to display this”.
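A short Python sketch shows the same thing: UTF-16LE pads each ASCII character with a null byte, and the byte order matters.

```python
text = "Hi"

utf16 = text.encode("utf-16-le")
print(utf16.hex())    # 48006900 - every second byte is 0x00

# Decoding with the wrong byte order produces nonsense characters.
wrong = utf16.decode("utf-16-be")
print(wrong == text)  # False
```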

We can flip this round and firstly take hex, convert it to binary using “From hex”, and then decode the text using “Text decode”.


If we use the correct encoding (UTF-16LE), then the text looks as expected. Change it to UTF-16BE, though, and suddenly we have nonsense.

This is a vital point – binary data can be interpreted in many different ways. Our operating systems and programs use file extensions and metadata to determine how to handle the content, but as reverse engineers we often need to guess.

So far we have just looked at text. However, executables on our machines are also just binary data. We can load these into CyberChef and analyse them.

I have chosen to look at write.exe from C:\Windows\system32\ – it’s small enough that CyberChef can handle it but provide some interest.

You’ll immediately see some recognisable strings. Nearly all executables will have these in some form. They can be incredibly useful in reverse engineering software, allowing us to determine function, endpoints, passwords and more.

Another operation, called “Strings”, will extract runs of text longer than a certain length. As you can see, it matches some data that is not text – this is just where binary data happens to decode as ASCII correctly.
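The classic strings approach boils down to "find runs of printable ASCII". A minimal Python equivalent (the CyberChef operation's exact behaviour may differ):

```python
import re

def strings(data: bytes, min_len: int = 4) -> list:
    # Runs of printable ASCII (0x20-0x7e) at least min_len bytes long.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

blob = b"\x00\x7fMZ\x90\x00Hello, world!\xff\x01some/path.dll\x00"
print(strings(blob))  # ['Hello, world!', 'some/path.dll']
```

Note that "MZ" is printable but too short to match – short accidental runs of printable bytes are exactly the false positives mentioned above.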

We know that this is a Windows executable as we just read it from our own system. The .exe on the end is just part of the filename – it would still be the same binary data if it was called cheese.txt.

But frequently, we don’t know what a file is actually meant to be – is it an executable? A zip file? An image?

CyberChef has the operation “Detect File Type”. This fingerprints the file and gives you a best guess as to what it is. It’s not infallible, but it is helpful.

Let’s analyse a slightly longer text file of words.

Add the operation “Entropy”. This looks at the “randomness” of a file. By default, it uses something called Shannon entropy, calculated across the whole file. Very low entropy means highly repetitive data; very high entropy suggests compression or encryption. Data in the middle ground has structure – it’s either text, an executable, or some other form of information.
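Shannon entropy over bytes is a short calculation. Here is a sketch of what the operation computes, assuming the standard per-byte formula:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Bits per byte: 0.0 for constant data, up to 8.0 for uniform random.
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy(b"\x00" * 100))      # zero - completely predictable
print(shannon_entropy(bytes(range(256))))  # 8.0 - every byte value equally likely
```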

Another useful tool to determine file content is “Frequency Distribution”. This counts how often given byte values occur across the file. Frustratingly, the lower axis is in decimal, not hex, but you can see some clear spikes. The very high one – at 32 – is 0x20, or space. The cluster around 107 covers the lowercase characters.
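Counting byte frequencies is equally simple. For plain English text you would expect space (0x20) to dominate, as in the graph:

```python
from collections import Counter

data = b"the quick brown fox jumps over the lazy dog"
freq = Counter(data)

# The single most common byte value and its count.
top_byte, top_count = freq.most_common(1)[0]
print(hex(top_byte), top_count)  # 0x20 8 - space is the most common byte
```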

Now try the same with write.exe and a zip file (>10 kB or so).

You’ll see that write.exe actually has entropy similar to text – despite it not being text. It has structure however and does contain some text. The frequency distribution is quite different though!

I’ve added another operation – “Remove null bytes” – so that the huge number of 0x00 bytes in the file are removed from the graph to make it clearer.

You can see the same “bump” around 107 – this is the text embedded inside the executable. But there is a much wider spread of values than with the text file.

The zip is a different story though. The Shannon entropy is 7.99 out of 8. It’s as random as it could be. This is nearly always a sign of compression or encryption. Nearly all compression works by spotting patterns and condensing them down – hence the structure is removed. Encrypted data should be indistinguishable from random noise.

This is a zip file – so the compression results in high entropy.

When we look at the frequency distribution, we can see a much more uniform distribution across all values. Again, a sign of compression or encryption. These are handy tools to determine what is in a file, especially firmware and the like.

To demonstrate this, you can encrypt a file using AES in Cyberchef and check the entropy and frequency distribution.



I hope this introductory post helped you to understand binary and some of the tools we can use to convert, decode, and understand the purpose of various data.

BC Vault – is their security model better?

Yesterday, in the back and forth about BC Vault, their CTO, Alen Salamun, kept saying their wallet was more secure, based on their product needing 5 items to be breached versus just 1 for other wallets.

To access the funds on BC Vault, you need:

  1. Global password
  2. Global PIN
  3. Wallet password
  4. Wallet PIN
  5. Device or backup file

To access the funds on other wallets, you need:

  1. The BIP39 words

I don’t see how you can possibly claim that BC Vault is more secure based on this comparison. All you can say is that it is different. It certainly is not “simple math”.

My BIP39 words are stored on a piece of paper around 200 miles from here, in a safe. I was told I would only have to enter them should my hardware wallet lose the key material. I do not need access to them, and probably never will. I do not need these words to spend funds. These words have never been entered into a computer.

Each time I want to use a BC Vault, I need to enter the passwords (which are entered into a computer) and a PIN (entered into the device). Entering data into a computer puts you at risk of phishing. Entering the PIN puts you at risk of shoulder surfing, among other attacks. A user will need to keep this information at hand to use the wallet, unlike BIP39 words.

In fact, I didn’t keep the BIP39 words on my Trezor, and hence it is impossible to access the funds without the device. This clearly demonstrates that you do not need the words to use the wallet.

This “simple math” is comparing apples and oranges, and is exactly the same path Bitfi went down. Bitfi claimed that their model of entering everything each time you used it was clearly better than storing keys in a secure box.

All we can say is that these are different security models.

It was inferred that I said this was worse or the same. It’s interesting how many vendors go down this route – when people compare their system to others, they automatically assume you said it was worse.

My issue isn’t that they are different. It’s the claim that it is clearly better. Prove that 5 regularly used items are more secure than 1 infrequently used item.

It’s not as simple as 5 > 1.

BC Vault: Bitfi Mk2?

I wasn’t aware of BC Vault until a few days ago, when their CTO, Alen Salamun, popped up in response to a vulnerability disclosed in another hardware wallet.

What’s that? Another bounty, loaded onto a wallet, and sent out?

Does this sound familiar to anyone?

They even frame this as a “Guaranteed Security”:

All this bounty does is demonstrate that someone cannot recover a key from a given device – the stolen device threat.

It doesn’t provide any assurance around phishing, evil maid attacks, or the usability of the system. The bounty provides no guarantee whatsoever.

Then Dimitry Fedotov, who deals with BC Vault’s business development, laid down the gauntlet:


1 BTC is currently $9,400.

Day rates for hardware testing are $2,000.

That’s less than 5 days pay.

A full security review and penetration test of a hardware wallet would easily run to 25-30 days of work, and cover many more threats than “someone stole my wallet”.

This bounty is just another rigged fairground game.

Bitfi – Some Requests

It’s been almost 11 months since we showed that the Bitfi Knox MD40 wallet was vulnerable to two quite serious attacks:

  • The keys persisted in memory for days, allowing them to be recovered over USB in a few minutes – the cold boot attack.
  • The device could be easily rooted and backdoored over USB in a few minutes, allowing an attacker to receive your keys when you used them – the evil maid attack.

In those 11 months, Bitfi have not informed users of their device that they were vulnerable, or if they continue to be vulnerable:

  • Is the USB bootloader still open on some or all Bitfi devices?
  • Can some or all devices still be rooted trivially?
  • Have reasonable precautions been taken to wipe the RAM?
  • How does a user determine if their device is vulnerable?

Without knowledge of the vulnerabilities on their devices, users cannot take appropriate actions to mitigate risk.

If you take their threat model as truth – that it is safe from nation states – you have no idea if your funds are at risk or not.

This is completely unacceptable, especially when one of their co-founders claims that not informing users of security issues puts lives at risk.

Further to this, Daniel Khesin has stated they believe the attacks take at least 10 minutes and have a 25-30% success rate. In reality, it was 2 minutes, and we didn’t see them fail. This suggests a massive disconnect on Bitfi’s side – they don’t actually understand the issue.

Without acknowledging the ease with which the attacks were carried out, there is no way they can actually fix them properly.

I have some very simple – and reasonable – requests for Bitfi:

  1. Document the attacks clearly and concisely on your own site, bitfi.com, including which versions of hardware and software are still vulnerable.
  2. Inform your customers, by way of both email and the dashboard, of these issues.

A company unwilling to take these actions is, by their own words, putting people at risk.

Without these basic courtesies in place, I’m not even going to entertain looking at the devices at Defcon.

What threat model is Bitfi working under? Not a realistic one.

Daniel Khesin – Bitfi’s not-CEO – has recently started differentiating Bitfi as the only one that can protect against state actors.

This doesn’t hold up to scrutiny.

Daniel keeps on talking about “forensic labs” that can recover keys from Trezor’s STM32 MCU. This is a feasible task.

He also claims a “secure element” does very little to stop extraction. Recovering data from most secure elements is beyond the means of nearly every lab in the world. Those that could carry it out will be charging very large sums of money.

Finally, he claims that anything typed into a computing device can be recovered. As of 2019, there are no labs on this earth that can recover data from the RAM inside a powered-down MCU.

But, let’s assume these labs exist.

There are three serious logical errors here.

Firstly, how does a state actor have such advanced capabilities for recovering data from a secure element, yet remain unable to backdoor a Bitfi?

If a state actor wants to access your funds on Bitfi, all they need to do is backdoor the device. We showed how easy this was last year. It’s a much easier attack than cold boot.

Secondly, if a state actor wants your funds, they are either going to get them or make your life unpleasant. In other words, YOU’RE STILL GONNA GET MOSSAD’ED UPON.

Thirdly, if these labs are capable of recovering anything typed into a computing device, this means that these same labs can recover the seed and phrase from a Bitfi.

Bitfi have created a threat model to which their own device is incredibly vulnerable.

Bitfi Does Store Keys

Well, here we are again.

The topic of Bitfi has reared its ugly head. I’ve written about Bitfi several times before, but they are still banging on about how their device doesn’t “store” your keys. If it doesn’t store your keys, there is nothing to steal.

This is bullshit.

There are two options here:

  • It does not store the keys
  • It does store the keys

Let’s threat model these two.

Situation: Bitfi does not store the keys

Imagine there is a means by which the device, genuinely, does not store keys in any form, for any length of time.

This would stop all attacks that aim to steal the keys, because they do not exist on the device. This would include:

  • Cold boot attacks that recover the key after the device has been used.
  • Evil maid attacks where the firmware is modified to recover the key before it is used.
  • Side-channel attacks where the device leaks information about the key.

As these attacks would be impossible, there would be no need to use mitigations to make them more difficult to carry out.

Situation: Bitfi does store the keys

Now back to reality. Bitfi does store keys for a finite length of time in RAM.

This means that:

  • Cold boot attacks are now possible as the keys did exist in RAM and may remain in some form.
  • Evil maid attacks are now possible, as modified firmware can read the key and send it elsewhere.
  • Side-channel attacks are now possible, as the device has to store the key.

This, in turn, means that mitigations must be put in place to make these attacks harder (but not impossible) to carry out.

The efficacy of these mitigations is therefore key to the security of the device.

Bitfi has some of these mitigations in place. As far as I know they:

  • Attempted to reduce the amount of time the keys exist in memory.
  • Attempted to obfuscate the contents of memory.
  • Have prevented USB data access to the device.


If Bitfi didn’t store keys, there would be no need to mitigate against attacks that steal the keys. They do have these mitigations in place.

It’s dishonest to keep on claiming that it doesn’t store keys when it does.

How effective are these mitigations? Well, we have no idea. I doubt Bitfi do either though.

Eternal Vigilance is the Price of Liberty

Those that know me well will know that I hold the privacy and liberty of the individual among my core principles. I believe we are entitled, as part of our human rights, to go about our lives without intrusion from government, business or individuals.

In the UK, we are subject to pervasive surveillance by CCTV, ANPR, and other monitoring technologies. The government, media and police attempt to use a “nothing to hide, nothing to fear” mantra to convince the public that these technologies are effective and have little downside. Unfortunately, the cost-benefit of many of these systems has not been demonstrated. Even if it were, the costs would be purely tangible ones – install, maintenance and operating costs – ignoring the impact it can have on our personal wellbeing. Over time, we become used to this surveillance and accept it without question.

Recent rises in crime, along with reduced police resources, have triggered community crime-fighting efforts. Neighbourhood Watch and volunteer patrols are often suggested and can have positive effects.

But a recent post on a Facebook group proposed a system that could invade privacy, would not comply with data protection law, could place homeowners’ networks at risk of attack, and has no demonstrated impact on crime levels.

Initially, I was willing to put this down to naiveté and an “anything-is-better-than-nothing” attitude, but it soon became clear that these were not the issue here.

Unfortunately, this is a closed group and I can’t just link to it. I hope that the following excerpts are representative of the whole.

I’ll summarise this, and subsequent posts:

To reduce crime, a network of Raspberry Pi based automatic number plate recognition (ANPR) cameras would be installed. It would cover the area of a small town and would be operated outside of GDPR or any other data protection laws. The cameras would be located at knee-height on the perimeter of private properties, filming public roads.

Residents pay to install and operate the system. Residents can prevent their vehicles from being logged by registering their number plate with the system. This would require sending a V5 document or an image of the car in the driveway.

In the event of a crime, the list of unregistered plates would be used somehow. There is also suggestion that alerts could be raised on “bad” plates.

There is proposed expansion to record WiFi and Bluetooth identifiers alongside number plates.

I have a number of serious concerns around this scheme.


Elliott wishes to operate the scheme outside of current data protection laws.

He has made several incorrect or questionable claims here.

It seems to have been accepted by Elliott that recording images of people would mean the system would fall under GDPR.

There are a number of claims that need examining.

Firstly, it’s highly unlikely the cameras will only gather number plates. Whilst some ANPR cameras have a limited FOV and are virtually useless at capturing images of people, this is not the case here. They are general purpose, wide-angle cameras mounted at knee-height. If you stand 3m away from this camera, you will be captured from head-to-foot. The chance that a significant network of cameras does not capture images of people is vanishingly small.

Secondly, the notion that number plates do not constitute personal information is false. The ICO ruled on this in 2009: vehicle registrations of vehicles owned by individuals are personal information.

Thirdly, the camera network gathers more than just a number plate. There is also the time and location over a network of cameras, providing a route. This makes the information even more likely to lead to an individual being identified.

Fourthly, to register your car on the system, you are required to send your V5 or an image of the car in a driveway. This is certainly personal data.

Elliott was challenged about this. Rather than accept that the system may need to handle the data under GDPR, he doubled down around the V5s and images of cars in a drive.

He now tries to argue that the system isn’t being operated by an entity – it’s just citizens sending data to each other. It seems a very odd argument, given that the operator of the system would be taking payment of £50/year per camera – that sounds a lot like a central entity. More to the point, GDPR doesn’t really care whether it’s a business or an individual; it cares about the data being gathered.

Fundamentally, Elliott is proposing a system that would gather other people’s data and that these people would not have any of their rights under GDPR. They would not be informed, they would not have the right to access, and they would not have the right to erasure.

To make things worse, the scope creeps to include Bluetooth and WiFi data gathering. Now your phone and smart watch will be tracked by the same system.

Without the controls that data protection laws provide, who knows what the data will be used for?

Information Security

The proposed system would place a network of Raspberry Pis on the networks of many homeowners.

I have three concerns here.

Firstly, I would be concerned that attackers without authorisation could take command of one or more of these devices remotely, viewing the cameras, injecting false data, or attacking the homeowner’s networks. IoT security isn’t easy, and I have seen many Raspberry Pi based systems fail badly and fail hard.

Secondly, I would be concerned that someone with authorisation to access the devices could attack the homeowner’s networks. Given that the system is operating outside of data protection laws, and that it isn’t operated by a company or entity, how do you know who has access to the devices? What controls have they put in place to protect you? What comeback would you have?

Thirdly, what would happen if one of the devices was stolen? What access would this permit the attacker? I often see credentials from a single device permit access to many more.


It is stated that the system will avoid capturing anything except images of number plates. As a result, it won’t actually capture images of crimes. It will just know which vehicles had been in the area at a given time.

If a crime occurs, all the system will be able to provide is a list of number plates of vehicles in the area. This list will contain residents who have not registered, visitors, people passing through, vehicles missed due to gaps in camera coverage, and maybe the vehicle the criminal used.

I’m not sure what this list will be used for.

I’m not sure what the police would do with a list like this.

It’s certainly not obvious that it will provide any benefit.

Essentially, anyone with the audacity to enter the Oxted ring-of-steel will become a suspect.

If 100 cameras are installed, then it will cost £5,000 to install, and £5,000/year to operate – £30,000 over 5 years. Is it going to provide value compared to other options?

I’d want something more than an appeal-to-emotion to justify installing such a system.

Ironically, if the system gathered images of people and crimes, it would probably be of more use.


There is the explicit admission that he will try to avoid GDPR.


Data protection law is often maligned. It isn’t the evil beast that many make it out to be.

Entities that comply with data protection law have normally considered what data they gather, and how they will protect it.

Those that don’t comply with data protection law often gather more than they need and don’t adequately protect it, likely because they don’t think any of the penalties can apply to them.

GDPR doesn’t exist to stop people implementing ANPR systems; it exists to allow those surveilled by such a system to know what happens with the data.

It’s often less effort to comply with the law than it is to skirt around it.

Ask yourself why Elliott is trying to escape these responsibilities and what impact it could have on you.

Bitfi against the competition – updated 2019

Bitfi’s core concept is that the wallet itself does not “store” your private keys – it calculates them on the fly.

It is acting as a simple key-derivation function. A salt (8 characters) and a phrase (30 characters or more) are passed through a simple algorithm to generate longer private keys.

The Bitfi takes a short key and “stretches” it into a longer key.

There is no secret sauce in a Bitfi – you don’t need that user’s Bitfi (or any Bitfi, for that matter), the user’s email address, password, access to the Bitfi dashboard or anything else. All you need is the salt and phrase. With those, funds can be accessed.
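Conceptually, the device behaves like this toy sketch. Bitfi's actual derivation algorithm is not reproduced here, so scrypt stands in purely for illustration:

```python
import hashlib

def derive_key(salt: str, phrase: str) -> bytes:
    # Toy stand-in for Bitfi's derivation: a deterministic key = f(salt, phrase).
    # scrypt is an assumption for illustration, NOT Bitfi's real algorithm.
    return hashlib.scrypt(phrase.encode(), salt=salt.encode(),
                          n=2**14, r=8, p=1, dklen=32)

# Anyone, on any device, with the same salt and phrase gets the same key.
k1 = derive_key("salt1234", "thirty-plus character example phrase here")
k2 = derive_key("salt1234", "thirty-plus character example phrase here")
print(k1 == k2)  # True - no device-held secret is involved
```

The point of the sketch is the security property, not the algorithm: the key depends only on what the user types, so there is nothing unique about any particular device.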

The Competition

We need to see how others have implemented hardware wallets. Most of them don’t work like the Bitfi; instead, a typical wallet works as follows:

  • A private key is generated using a random number generator in the wallet.
  • The private key is displayed on the wallet itself. The user writes it down and stores it safely as a backup.
  • This is the only time the key is output from the wallet.
  • From this point onward, the wallet acts as a black box, signing any transactions that are sent to it. There is no need to enter the private key, and the private key is never output from the device.

This is probably the most important security feature of a hardware wallet: the private key is locked away in a black box with a limited attack surface and on a device that has a single use. The private key never needs to exist on your malware-infected, general-purpose laptop, and is therefore much less likely to be compromised.

There are still some obvious security issues that need to be handled.

  1. If the wallet is stolen, it could be used to sign transactions. To prevent this, the wallet uses a PIN to guard access to the black box. Without this PIN, the device will not function. Brute-force protection makes it infeasible to try all combinations, preventing the private key from being accessed.
  2. If the wallet is stolen, the private key could be read out from the memory inside the device. To mitigate this, the keys are stored on internal flash on microcontrollers. There is no easy way to read the keys out without advanced hardware attacks.
  3. The wallet simply signs transactions that are sent to it. A user may be expecting to sign “send 2BTC to Bob”. This could be modified to be “send 200BTC to Eve”. To prevent this, the wallet displays the transaction details and asks the user to confirm using buttons on the wallet.
  4. The wallet will run firmware, which can be updated. A malicious firmware could be loaded, allowing the private key to be read, or hide modified transactions from the user. Signed firmware and secure boot are used to protect against this. Only the vendor can generate firmware that the wallet will accept.

These protections are not perfect; a determined and dedicated attacker can still circumvent them. This is a key point: all we need to do is raise the bar high enough that most attackers don’t succeed.

Here are some example attacks from wallets:


Bitfi is very different to those hardware wallets.

Each and every time a transaction needs signing, the user enters their entire salt and phrase via the touchscreen display.

The wallet then derives the private key. There is no need for the wallet to store the private key long-term.

Bitfi proudly state that this means there is “nothing to hack” – if there are no keys stored on the device, how can a hacker possibly get the keys?

At first glance, this seems sensible. But if you dig a bit deeper, you quickly realise how broken this concept is. I’m going to describe the attacks we have developed against Bitfi, and how they stack up against competitors.

Cold-boot attack


With USB access to a stolen Bitfi, the salt, phrase and keys can be read from the RAM. This is called a “cold boot attack”.


The data persists in RAM for days whilst the device has a battery in it.

This only requires a USB cable and a laptop. The wallet casing does not need to be opened, and no specialist hardware is required. No skill is required as the attack is entirely scripted.

The attack takes less than a couple of minutes, and the device works as normal afterwards. It is feasible for this to be carried out during luggage X-ray at an airport, and returned to the user.

The attack has never bricked the Bitfi and has worked extremely reliably.

There is no requirement to cool the device to perform the attack.

Bitfi recommend the use of “diceware” passwords, which greatly facilitates the recovery of the phrase from memory. The use of a list of dictionary words means there is a lot of redundancy, which in turn means that the memory can degrade significantly and we can still recover the phrase.

Bitfi did not inform their customers of this issue.

Update – Summer 2019 – It has been claimed that the issue has been fixed, but no evidence has been provided, and no independent testing has been carried out.


To protect yourself, the battery must be removed from the wallet after use. To ensure the keys are no longer in memory, the wallet must be left powered down for at least an hour. During this period, you must make sure no one physically accesses the device.


The RAM on most other hardware wallets is protected from access via USB or debug protocols such as JTAG or SWD. We have observed no such protections on the Bitfi.

Even setting this aside, most other wallets take steps to “zeroise” or wipe the memory immediately after it is used. This limits the window over which cold-boot attacks could be carried out. This step is either not performed, or is wholly ineffectual, on the Bitfi.

There are no published cold-boot attacks against other hardware wallets.

Malicious Firmware


With USB access to a Bitfi, the firmware can be modified so that the salt, phrase and keys are sent to an attacker the next time they are entered by the user. This permits “evil maid attacks” as well as supply chain tampering.

This only requires USB access to the device. The wallet casing does not need to be opened, and no specialist hardware is required. The attack could be carried out if the Bitfi is connected to a compromised charger or laptop. Using this vector, the attacker never requires physical access to the device.

The attack takes less than a minute, and there are no mechanisms for the user to detect the modification. It is feasible for the attack to be carried out by anyone with access to the device for a short period, either before (supply chain tampering) or after (evil maid) the Bitfi enters the user’s possession.

This attack could happen before you receive the device, when going through airport security, or when it is left unattended at home.


There are no mechanisms for a user to check if the device has been tampered with at a firmware level. This has been confirmed by Bitfi developers.

A user could assume that the device is trusted as received from Bitfi. As long as the device does not enter the hands of an attacker, and is never connected to an untrusted USB power source, it could be considered secure from this specific attack.

Assuming the device is trusted as received is therefore high-risk.


The use of signed firmware updates and secure boot means that other wallets cannot have their firmware modified in under a minute using just a USB connection.
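As a simplified illustration of the boot-time check this implies: real secure boot verifies an asymmetric signature against a public key held in a hardware root of trust, but the shape of the check is the same. Everything below is hypothetical – a bare hash stands in for the signature:

```python
import hashlib

# Hypothetical value burned into write-once storage at manufacture.
# Real secure boot stores a public key and verifies a signature,
# not a bare digest of one known-good image.
TRUSTED_DIGEST = hashlib.sha256(b"firmware-image-v1").hexdigest()

def boot_allowed(firmware_image: bytes) -> bool:
    """Refuse to boot any image whose digest differs from the trusted one."""
    return hashlib.sha256(firmware_image).hexdigest() == TRUSTED_DIGEST

assert boot_allowed(b"firmware-image-v1")
assert not boot_allowed(b"firmware-image-v1-with-evil-maid-patch")
```

Without a root of trust anchored in hardware, an attacker who can rewrite the firmware can simply rewrite the check as well – which is the situation on the Bitfi.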

There are still other attacks that hardware wallets are vulnerable to.

The wallet can be stolen and replaced in its entirety. The replacement will send the PIN to the attacker, allowing them to unlock the stolen wallet and access funds. The user would detect the issue, as their key would not be on the replacement wallet and they would not be able to access funds.

The wallet can be accessed, modified with a hardware implant, and returned. It could harvest the PIN, modify transactions, or spoof integrity checks. This attack is significantly more challenging than simply plugging the device into USB and, to date, no feasible attack of this kind has been demonstrated against any of the popular wallets.

Shoulder surfing


The entire salt and phrase need to be entered into the Bitfi each time it is used. They are entered using a conventional touchscreen with a QWERTY keyboard, and displayed clearly on the screen.

It is entirely possible to read the salt and phrase from the screen and then use this to steal funds from the user.

Even without direct view of the screen, the finger position when typing allows characters to be inferred. The use of dictionary words means that even if certain characters cannot be determined, they can be inferred from those that can be seen.
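A toy sketch of why dictionary words are so vulnerable to partial observation: seeing even two character positions can eliminate most of a candidate wordlist. The wordlist fragment and observation below are hypothetical:

```python
def matches(word: str, observed: str) -> bool:
    """True if `word` is consistent with a partial observation,
    where '_' marks a character the attacker could not see."""
    return len(word) == len(observed) and all(
        o == "_" or o == w for w, o in zip(word, observed)
    )

# Hypothetical fragment of a Diceware-style wordlist.
wordlist = ["coffee", "copper", "carpet", "cipher", "cither", "market"]

# Attacker inferred only the 1st and 3rd characters from finger position.
candidates = [w for w in wordlist if matches(w, "c_p___")]
print(candidates)  # -> ['copper', 'cipher']
```

Two observed characters have cut six candidates down to two; against a real wordlist, each observed position shrinks the search space in the same way.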

As the salt and phrase contain all the information required to steal funds, a user may be entirely unaware that they have been compromised. The attacker can delay the attack until an opportune moment.


The Bitfi wallet cannot be used safely anywhere someone can observe the device.


Whilst the PIN can be shoulder-surfed on other devices, an attacker still requires access to the wallet itself to obtain the key. This provides a significant additional layer of security.

Some other wallets mitigate the risk of shoulder surfing by randomising PIN layout.

Key entropy


The Bitfi device allows users to choose their own salt and phrase. Multiple studies have shown that users are very bad at choosing and storing passwords, and there is no reason to assume Bitfi users will differ.

It was discovered that one user even used the sample phrase from the Bitfi instructions.


A salt and phrase of good quality must be used.


Competing wallets encourage the use of keys that are randomly generated using a good source of entropy, removing the human aspect.

Something-you-have and something-you-know


Bitfi only requires the salt and phrase, and nothing else. Wallets can be used interchangeably (at least, at a functional level – this is not recommended for security reasons).

If your salt and phrase leak by any means, an attacker has access to your funds, and there is nothing to alert you that this has happened.

This is termed single-factor authentication.


There are no good means to solve this issue.


Other wallets support passphrases as part of the BIP39 specification. To use the wallet, you need both the key stored in the wallet itself, and a passphrase that is stored elsewhere. This is something-you-have (the key on the wallet) and something-you-know (the phrase).

Use of a passphrase with BIP39 significantly elevates the security above that of a Bitfi.
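For reference, BIP39 derives the wallet seed from the mnemonic and optional passphrase using PBKDF2-HMAC-SHA512 with 2048 iterations, a 64-byte output, and a salt of “mnemonic” plus the passphrase. A minimal sketch using only the Python standard library (the NFKD normalisation the spec requires is omitted, and the mnemonic shown is purely illustrative):

```python
import hashlib

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Derive the 64-byte BIP39 seed: PBKDF2-HMAC-SHA512 over the
    mnemonic, salted with "mnemonic" + passphrase, 2048 iterations."""
    return hashlib.pbkdf2_hmac(
        "sha512",
        mnemonic.encode("utf-8"),
        ("mnemonic" + passphrase).encode("utf-8"),
        2048,
        dklen=64,
    )

# Illustrative mnemonic only; a real one comes from the wallet's RNG.
words = "legal winner thank year wave sausage worth useful legal winner thank yellow"

# The same mnemonic with different passphrases yields unrelated seeds,
# so the key stored on the device (something-you-have) is useless
# without the passphrase (something-you-know).
assert bip39_seed(words) != bip39_seed(words, "TREZOR")
```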


The Bitfi wallet is less well protected than competing hardware wallets. If you ever let anyone else have access to it, ever connect it to an untrusted device, or use it in a public place, you are not safe.

Users of Bitfi must take significant and limiting steps to mitigate the risk they are exposed to.

Even ignoring Bitfi’s dishonest behaviour, the product has little to recommend it over any other wallet.

Can these issues be fixed?

We aren’t really sure.

There is no hardware root of trust on the Bitfi. This must be burned into the device before it leaves the vendor’s possession for it to be secure. Without this, secure boot cannot be implemented well.

The use of external RAM on a commodity chipset (without RAM encryption) will always leave the keys exposed, no matter how well you try to wipe them from software.

Android is a poor choice of operating system. It makes wiping memory very challenging – the salt and phrase end up in tens of locations in memory. It also makes limiting the attack surface very, very hard.

The Bitfi hardware wallet isn’t “unhackable”

Earlier this week, cryptocurrency news was full of stories about a new hardware wallet: the Bitfi.

What makes this one any different?

John McAfee claims it is “unhackable”. Not just “harder to hack”, but “unhackable”.

That’s a bold claim. They know it’s a bold claim, so they have set a bounty.

Sounds great, no?


The bounty deliberately includes only one attack: key recovery from a genuine, unaltered device. And the device doesn’t store the key.

The only way to win the bounty is to recover a key from a device which doesn’t store a key.

There are many, many more attacks such a device is vulnerable to. The most obvious one: modifying the device so that it records and sends the key to a malicious third party. But this is excluded from the bounty.

Why is this?

Because the bounty is a sham. While it lies unclaimed, Bitfi can say “our device is unhackable”. What it actually means is “our device is not vulnerable to one specific attack”.

I’m going to put a challenge to them.

If their device is unhackable, they should change the bounty terms:

  • A trusted intermediary is chosen e.g. a lawyer or judge.
  • We provide the trusted intermediary with three Bitfi devices, a laptop computer and a WiFi access point.
  • The trusted intermediary puts $1,000,000 directly onto each Bitfi device, using the laptop and WiFi access point we have provided.
  • They must follow the publicly available documentation, without interference from anyone.

These are much stronger security goals to meet, and much more accurately emulate the real world.

If Bitfi won’t change the terms, it’s clear to me that they don’t stand behind their claims that the device is unhackable.


Win a prize! If you log in using the link in this email!!!!

Email from ParentPay


On 25th August, I received the above email purporting to be from ParentPay. ParentPay is an online payment system designed for use by schools – you can book and pay for school dinners, library fines, school trips etc.

I am a user of the application, but I have only looked casually (and purely observationally) at the security of their main web application. I have no complaints, although the SSL configuration is less than optimal.

This email looks like a textbook phishing email. I had to spend some time confirming it was genuine, and was only really convinced after they tweeted about the same competition.

Why does it look like a phishing email?

  1. The sender’s email address is not on the domain parentpay.com – it is parentpay@emarketing.education.co.uk. This teaches your users to accept that any email containing the word parentpay is genuine.
  2. You are tempting users with vouchers in return for logging in. This is a standard technique used by phishers.
  3. Amazon is not capitalised. Spelling and grammar mistakes are common in phishing emails.
  4. The login link labelled “Login to ParentPay” takes us to the ParentPay login page. In a phishing email, it would take us to a malicious site that may harvest our details or deliver malware. Conditioning users to log in via links sent in email is a bad idea.
  5. The login link directs us to the education.co.uk domain, which then redirects to ParentPay. Teaching users to follow links to third-party sites to log in is a monumentally bad idea – a number of attacks can be carried out this way, including plain phishing pages, tabnabbing, and more.

Please don’t send emails like this – it doesn’t just impact the security of your site. Conditioning users to trust emails like this goes against a lot of user awareness training, regardless of which site they are accessing.