Nurserycam disclosure timeline

The issue described in the previous blog about Nurserycam has been present for a number of years.

This post collates the previous times that it was disclosed to Nurserycam that I know of. If you also reported an issue, please use the contact form on this page, or DM me on Twitter. I will not disclose any information you are not happy disclosing.

My opinion is that the root cause on all of these reports is identical. A direct connection is established to the DVR using admin credentials.

All four parents agree that this is the case.

There is the possibility that Nurserycam did have a system that didn’t rely on this mechanism to connect. This may have existed prior to 2015, or it may have existed on a subset of their customer systems. This misses the point – the weakest link in the chain is the one that matters.

February 2015

A parent informs NurseryCam of almost identical issues: parents connect directly to the DVR using shared admin credentials, with the password documented on the NurseryCam website.

January 2020

A parent reports to their nursery that the connection is made directly to the DVR, and that the username and password are leaked to parents. The password is a derivative of the one found on the NurseryCam website, and is found to be common across multiple nurseries in a chain.

This parent agrees that the issue they reported is the same as the issue in my blog.

October 2020

A parent reports to their nursery that they can see the admin username and password in the browser. NurseryCam take some action to resolve the issue for this particular nursery. As before, the password is as documented on the NurseryCam website.

This parent agrees that the issue they reported is the same as the issue in my blog.

February 2021

Another parent reports security issues via their nursery. Again, this concerns the disclosure of the IP address, username, and password to the parents. The password is the one documented on the NurseryCam website, as in 2015. NurseryCam take some action to resolve the issue for this particular nursery.

This parent agrees that the issue they reported is the same as the issue in my blog.

February 2021

I disclose the same issue in NurseryCam, inferred from reverse engineering of their mobile app. Once a parent confirmed that the issues had been disclosed previously, I immediately disclosed them publicly.


A warning to users of NurseryCam

This blog post is intended for a less technical audience – specifically parents and nurseries using the NurseryCam system.

NurseryCam is a camera system that is installed in nurseries, allowing parents to view their children remotely. There are tens of nurseries stating that they use this system. News articles go back as far as 2004.

Serious security issues have been found in the system. The statements that NurseryCam make about the security of their system do not align with reality.

These issues would allow any parent, past or present, to access the video feeds from the nursery. There is also the chance that anyone on the Internet could have accessed them.

I am a full-time security consultant who specialises in the security of the Internet of Things, including camera systems. The issues with NurseryCam are about as serious as it gets. NurseryCam were informed of these as early as February 2015 – 6 years ago.

The System

A Digital Video Recorder (DVR) is installed in the nursery, connected to cameras. These are like normal CCTV DVRs, used across thousands of businesses and homes in the UK.

The DVR has a web interface that can be viewed in a browser, but it would normally only be possible to view this when you are connected directly to the nursery’s network. This is because the DVR is behind the router’s firewall.

Without port forwarding, the DVR cannot be accessed.

To allow the DVR to be viewed remotely, something called port forwarding is used. This opens a hole in the nursery’s firewall, allowing the DVR to be accessed from the Internet.

Port forwarding allows access to the DVR from the Internet
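The effect of a port-forwarding rule can be sketched in a few lines. This is a minimal model, not real router firmware, and the addresses and ports are made up for illustration:

```python
# Sketch of what a port-forwarding rule does, using made-up addresses.
# The router maps a port on its public IP to a device on the private LAN,
# so a connection from anywhere on the Internet reaches the DVR directly.

# One forwarding rule: external port -> (internal host, internal port)
forwarding_rules = {
    8080: ("192.168.1.50", 80),  # hypothetical DVR on the nursery LAN
}

def route_inbound(external_port):
    """Return the internal destination for a connection, or None if blocked."""
    return forwarding_rules.get(external_port)

# A connection to public-ip:8080 lands on the DVR's web interface...
print(route_inbound(8080))   # ('192.168.1.50', 80)
# ...while other ports stay blocked by the firewall.
print(route_inbound(443))    # None
```

Once such a rule exists, the firewall no longer protects the DVR on that port: anyone who can reach the router's public IP can reach the DVR.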

To log in to the DVR, you need to know the username, password, and IP address.

When a parent wants to view the cameras, they log in to the NurseryCam website or mobile application. In the background, the parent is given the details for the DVR, including the username and password.

The normal log in procedure for a parent

The parent then establishes a direct connection to the DVR, allowing them to view the camera.
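That flow can be sketched as follows. The field names, the example IP address (an RFC 5737 documentation address), and the placeholder password are all assumptions for illustration, not the real NurseryCam API:

```python
# Sketch of the login flow described above, using a mocked server response.
# Field names and values are illustrative assumptions, not the real API.

# What the NurseryCam server is described as returning in the background
# after the parent logs in to the website or app:
mock_server_response = {
    "dvr_ip": "203.0.113.10",   # documentation IP (RFC 5737), not a real nursery
    "dvr_port": 8080,
    "username": "admin",
    "password": "REDACTED888",  # placeholder for the shared admin password
}

def build_dvr_url(details):
    """The app then connects straight to the DVR with those credentials."""
    return "http://{username}:{password}@{dvr_ip}:{dvr_port}/".format(**details)

url = build_dvr_url(mock_server_response)
print(url)
```

The key point is that after this step, the NurseryCam servers are no longer involved: the parent's device talks to the DVR directly.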

The Issues

All parents connecting to a given nursery are given the same username and password for the DVR. In the examples I have been shown, the username is admin and the passwords are obvious words followed by 888.

This means that the parents, past and present, have all been given the administrator password for the DVR.

There are no indications that this password changes over time.

There is no need for the parent to log in to the NurseryCam website to access the DVR.

There is no need to log in to the NurseryCam servers to log in to the DVR

With these details, the parent could connect directly to the DVR at any time of the day, view it for however long they want, and view all of the cameras, including ones you have not given them permission to view.

You can lock or delete the parent account on the NurseryCam website, but the username and password for the DVR will not change.

There is no way to stop the parent from logging into the DVR directly.

Anyone logging into the DVR would be seen as the admin user. It would be incredibly difficult for a nursery to determine if the login was from a genuine parent or someone else.

To make matters worse, the connection to the DVR is using HTTP, not HTTPS. It is unencrypted, allowing someone to eavesdrop on the video feed, username, and password.
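Because plain HTTP travels the network as readable text, pulling the credentials out of captured traffic is trivial. The URL shape below is an illustrative assumption:

```python
# What an on-path observer might see in a single captured HTTP request line.
# The path and parameter names are assumptions for illustration only.
from urllib.parse import urlparse, parse_qs

captured = b"GET /view?user=admin&pwd=example888 HTTP/1.1"

# No decryption needed - the credentials are sitting in the request line.
query = urlparse(captured.decode().split()[1]).query
creds = parse_qs(query)
print(creds["user"][0], creds["pwd"][0])
```

Anyone on the same WiFi network, or anywhere on the path between parent and nursery, can read this.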

The Risks

Any given parent for a given nursery could log in to the DVR and view any and all cameras.

This could include:

  1. A current parent viewing cameras for longer than they are meant to.
  2. A current parent viewing cameras that they are not entitled to, such as rooms their child does not use.
  3. A parent whose child no longer attends the nursery viewing the cameras.
  4. Any parent who has been prevented from accessing the system (e.g. separation, abuse) viewing the cameras.

Worse still, because the password for the DVRs is common across multiple nurseries and openly documented on NurseryCam’s website, there is the potential for anyone on the Internet to access the DVR.

Port forwarding does not restrict connections to be from parents – anyone can access the DVR

The only missing piece of the puzzle is the IP address of the nursery. It would be possible to scan the whole of the UK for DVRs using this username and password in a matter of days.
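The arithmetic behind "a matter of days" is rough, and both figures below are assumptions: UK IPv4 allocations are on the order of tens of millions of addresses, and a single host can comfortably attempt a few hundred logins per second.

```python
# Back-of-envelope estimate of scanning UK address space for DVRs with a
# known default login. Both numbers are assumed orders of magnitude.
uk_ipv4_addresses = 50_000_000   # assumed size of UK IPv4 allocations
attempts_per_second = 300        # assumed rate for one scanning host

days = uk_ipv4_addresses / attempts_per_second / 86_400
print(f"{days:.1f} days")  # under two days from a single machine
```

Tools built for Internet-wide scanning can go far faster than this; the point is that even a naive single-machine scan finishes quickly.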

Staff at NurseryCam would know the password and be able to access the DVRs without restriction.

The Discrepancies

NurseryCam state that their system is “safer than online banking”.

This is certainly not the case with the system seen here.

A common, shared, and openly documented login for the DVRs is passed to each parent.

There is no encryption used. There are no VPNs.

This is analogous to your local bank giving you the keys to their vault and just trusting that you will only take your money.

The same claims are repeated across multiple nursery websites.

The Disclosure

When security researchers find problems like this, we try to report them to the company so that they can be fixed. The aim is to keep users of the system safe. We call this disclosure.

I reported these to NurseryCam on 6th February 2021.

On 12th February 2021, I blogged about these initial concerns and Tweeted them.

Former parents reading my Twitter feed got in touch, with one parent confirming that they had informed NurseryCam of almost identical issues in February 2015 – six years ago.

Even six years ago, the claims made about security did not line up with reality.

They have been aware of serious security issues for 6 years and have not fixed them.

How were these found?

The NurseryCam Android mobile application was downloaded and then examined. By viewing the code, it was possible to see how the system operates.

Several parent users of the system have contacted me. They confirmed that the system operated as I suspected and that the DVR usernames and passwords were the same each time they logged in.

There has been no attempt to hack NurseryCam webservers.

These issues were trivial to uncover, taking no more than 15 minutes.

What should you do?

In my professional opinion, you cannot quickly fix a system that is this badly broken. You also cannot regain the trust that has been lost by selling a product that is described so inaccurately.

If you, as a nursery, operate one of these systems:

  1. Unplug the network connection from the DVR.
  2. Contact NurseryCam and ask that they inform all impacted nurseries immediately.
  3. Ask why the system you have been paying for isn’t the one that is described on the NurseryCam site.

If you are a parent, I would advise contacting your nursery and requesting that they carry out the above steps.

Inadequate Fixes

Changing the username and password for the DVR is not a genuine fix – the username and password are still sent to the parents.

Adding encryption to the connections is not a fix – the username and password are still sent to the parents.

My Opinion

These issues are obvious and fundamental. They should not have existed in the first place.

Without the system being almost completely redesigned, it is hard to see how it can be secured adequately.

I have not tested their website, or looked at any of their other security practices.

Ask yourself if you could ever trust this company again with children’s data.










Serious issues in NurseryCam

I am not disclosing these issues lightly. They impact real people: children and parents.

Due to the severity, I am not following the disclosure policy I would normally follow as part of professional work.

Given how the owner of FootfallCam has behaved, I cannot hold these back. The business managing these systems has not demonstrated that it can handle data this important, or that it can handle this honestly.

When I saw the issues with FootfallCam, I noticed that there was a related product called NurseryCam which allows CCTV monitoring of children in nurseries.

Given the serious issues in FootfallCam, it concerned me that the same company could be handling sensitive data like this.

From the site, there is a link to an Android mobile application:

This application is called com.metatechnology.nurseryapps.

This application has some serious issues. Recent reviews are from 2021, strongly indicating that this is still actively used.

Issue 1:

The application does not use TLS to secure communications between it and API endpoints. There is no reason to not use TLS in 2021.

This has been unacceptable for years.

Issue 2:

The application sends the username and password of the parent in the URL to the endpoint.

The username and password are passed in the URL directly.

Issue 3:

When logging in, the parent is returned a list of “ParentAccessModel” as JSON. This data is passed in plaintext, unencrypted.

ParentAccessModel is a list of nursery IP addresses, ports, usernames, and passwords to connect directly to the DVRs.

These are not per-parent credentials.

The username is admin.

The passwords are obvious words followed by 888.

These are accessible to any parent using the system.

Today, now, and in the past.
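A sketch of the kind of response described in this issue. The field names, example IPs (RFC 5737 documentation addresses), and password are illustrative guesses at the structure, not the actual API:

```python
# Illustrative mock-up of a plaintext DVR credential list, in the style
# of the response described above. All values here are made up.
import json

plaintext_response = json.dumps([
    {"ip": "203.0.113.20", "port": 8080,
     "username": "admin", "password": "example888"},
    {"ip": "203.0.113.21", "port": 8080,
     "username": "admin", "password": "example888"},
])

# Every parent's app receives the whole list and can connect to any entry:
dvrs = json.loads(plaintext_response)
for dvr in dvrs:
    print(dvr["username"], "@", dvr["ip"])
```

Note that nothing in such a response is specific to one parent: the same admin credentials go to everyone.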

Issue 4:

The connection, directly to the nurseries, is then made over HTTP with the username and password of the DVR passed in the URL.

There is no encryption.

Issue 5:

Any access control enforced by their API would not be enforced by the DVRs.

The parent already has the IP address, port, username and password for the DVR.

Any controls around time limits, or locking out of accounts would not work.

The parents have credentials for the DVRs. The access controls are ineffective. The child may even leave the nursery and the parent's account be revoked on the web platform, but they still have access to the DVR.

Issue 7:

There is no means for NurseryCam to remotely audit access to these DVRs. If the usernames and passwords have been used by a malicious party, there are no means of knowing this.

Summary of issues

The system directly exposes the DVRs in nurseries on HTTP. The parents' usernames and passwords are passed without encryption, and the connection details for the DVRs are returned without encryption. Then, without encryption, the parents can log directly into the DVR. There is no means to stop them viewing the DVR whenever they want.

This is a massive difference from the claims on their site:

We are not talking about slight differences. The linked page is so far away from reality it’s unreal.


These are issues of critical severity. The implementation of this system places children at direct risk.

I make the following requests of NurseryCam:

1. Take down the NurseryCam service (mobile application and web application) before Monday 8th February 2021, 0800 GMT.

2. Within the next week (by Saturday 13th February) take action to ensure that the nurseries running DVRs have either changed the DVR passwords to something secure, or stopped them being exposed to the Internet.

3. Within the next 4 weeks (by Saturday 6th March) send a communication to all nurseries and parents (current and former) informing them of these security issues. You should inform these people that you have no reasonable way to determine if anyone has watched their children.

BC Vault – is their security model better?

Yesterday, in the back and forth about BC Vault, their CTO, Alen Salamun, kept saying their wallet was more secure, based on their product needing 5 items to be breached versus just 1 for other wallets.

To access the funds on BC Vault, you need:

  1. Global password
  2. Global PIN
  3. Wallet password
  4. Wallet PIN
  5. Device or backup file

To access the funds on other wallets, you need:

  1. The BIP39 words

I don’t see how you can possibly claim that BC Vault is more secure based on this comparison. All you can say is that it is different. It certainly is not “simple math”.

My BIP39 words are stored on a piece of paper around 200 miles from here, in a safe. I was told I would only have to enter them should my hardware wallet lose the key material. I do not need access to them, and probably never will. I do not need these words to spend funds. These words have never been entered into a computer.

Each time I want to use a BC Vault, I need to enter the passwords (which are entered into a computer) and a PIN (entered into the device). Entering data into a computer puts you at risk of phishing. Entering the PIN puts you at risk of shoulder surfing, among other attacks. A user will need to keep this information at hand to use the wallet, unlike BIP39 words.

In fact, I didn’t keep the BIP39 words on my Trezor, and hence it is impossible to access the funds without the device. This clearly demonstrates that you do not need the words to use the wallet.

This “simple math” is comparing apples and oranges, and is exactly the same path Bitfi went down. Bitfi claimed that their model of entering everything each time you used it was clearly better than storing keys in a secure box.

All we can say is that these are different security models.

It was implied that I said this was worse or the same. It’s interesting how many vendors go down this route – when people compare their system to others, they automatically assume you said it was worse.

My issue isn’t that they are different. It’s the claim that it is clearly better. Prove that 5 regularly used items are more secure than 1 infrequently used item.

It’s not as simple as 5 > 1.

BC Vault: Bitfi Mk2?

I wasn’t aware of BC Vault until a few days ago, when their CTO, Alen Salamun, popped up in response to a vulnerability disclosed in another hardware wallet.

What’s that? Another bounty, loaded onto a wallet, and sent out?

Does this sound familiar to anyone?

They even frame this as a “Guaranteed Security”:

All this bounty does is demonstrate that someone cannot recover a key from a given device – the stolen device threat.

It doesn’t provide any assurance around phishing, evil maid attacks, or the usability of the system. The bounty provides no guarantee whatsoever.

Then Dimitry Fedotov, who deals with BC Vault’s business development, laid down the gauntlet:

1 BTC is currently $9,400.

Day rates for hardware testing are $2,000.

That’s less than 5 days’ pay.

A full security review and penetration test of a hardware wallet would easily run to 25-30 days of work, and cover many more threats than “someone stole my wallet”.

This bounty is just another rigged fairground game.

Bitfi – Some Requests

It’s been almost 11 months since we showed that the Bitfi Knox MD40 wallet was vulnerable to two quite serious attacks:

  • The keys persisted in memory for days, allowing them to be recovered over USB in a few minutes – the cold boot attack.
  • The device could be easily rooted and backdoored over USB in a few minutes, allowing an attacker to receive your keys when you used them – the evil maid attack.

In those 11 months, Bitfi have not informed users of their device that they were vulnerable, or if they continue to be vulnerable:

  • Is the USB bootloader still open on some or all Bitfi devices?
  • Can some or all devices still be rooted trivially?
  • Have reasonable precautions been taken to wipe the RAM?
  • How does a user determine if their device is vulnerable?

Without knowledge of the vulnerabilities on their devices, users cannot take appropriate actions to mitigate risk.

If you take their threat model as truth – that it is safe from nation states – you have no idea if your funds are at risk or not.

This is completely unacceptable, especially when one of their co-founders claims that not informing users of security issues puts lives at risk.

Further to this, Daniel Khesin has stated they believe the attacks take at least 10 minutes and have a 25-30% success rate. In reality, it was 2 minutes, and we didn’t see them fail. This suggests a massive disconnect on Bitfi’s side – they don’t actually understand the issue.

Without acknowledging the ease with which the attacks were carried out, there is no way they can actually fix them properly.

I have some very simple – and reasonable – requests for Bitfi:

  1. Document the attacks clearly and concisely on your own site, including which versions of hardware and software are still vulnerable.
  2. Inform your customers, by way of both email and the dashboard, of these issues.

A company unwilling to take these actions is, by their own words, putting people at risk.

Without these basic courtesies in place, I’m not even going to entertain looking at the devices at Defcon.

What threat model is Bitfi working under? Not a realistic one.

Daniel Khesin – Bitfi’s not-CEO – has recently started differentiating Bitfi as the only one that can protect against state actors.

This doesn’t hold up to scrutiny.

Daniel keeps on talking about “forensic labs” that can recover keys from Trezor’s STM32 MCU. This is a feasible task.

He also claims a “secure element” does very little to stop extraction. Recovering data from most secure elements is beyond the means of nearly every lab in the world. Those that could carry it out will be charging very large sums of money.

Finally, he claims that anything typed into a computing device can be recovered. As of 2019, there are no labs on this earth who can recover data from the RAM inside a powered down MCU.

But, let’s assume these labs exist.

There are three serious logical errors here.

Firstly, how does the state actor have such advanced capabilities in recovering data from a secure element, but is unable to backdoor a Bitfi?

If a state actor wants to access your funds on Bitfi, all they need to do is backdoor the device. We showed how easy this was last year. It’s a much easier attack than cold boot.

Secondly, if a state actor wants your funds, they are either going to get them or make your life unpleasant. In other words, YOU’RE STILL GONNA GET MOSSAD’ED UPON.

Thirdly, if these labs are capable of recovering anything typed into a computing device, this means that these same labs can recover the seed and phrase from a Bitfi.

Bitfi have created a threat model to which their own device is incredibly vulnerable.

Bitfi Does Store Keys

Well, here we are again.

The topic of Bitfi has reared its ugly head. I’ve written about Bitfi several times before, but they are still banging on about how their device doesn’t “store” your keys. If it doesn’t store your keys, there is nothing to steal.

This is bullshit.

There are two options here:

  • It does not store the keys
  • It does store the keys

Let’s threat model these two.

Situation: Bitfi does not store the keys

Imagine there is a means by which the device, genuinely, does not store keys in any form, for any length of time.

This would stop all attacks that aim to steal the keys, because they do not exist on the device. This would include:

  • Cold boot attacks that recover the key after the device has been used.
  • Evil maid attacks where the firmware is modified to recover the key before it is used.
  • Side-channel attacks where the device leaks information about the key.

As these attacks would be impossible, there would be no need to use mitigations to make them more difficult to carry out.

Situation: Bitfi does store the keys

Now back to reality. Bitfi does store keys for a finite length of time in RAM.

This means that:

  • Cold boot attacks are now possible as the keys did exist in RAM and may remain in some form.
  • Evil maid attacks are now possible, as modified firmware can read the key and send it elsewhere.
  • Side-channel attacks are now possible, as the device has to store the key.

This, in turn, means that mitigations must be put in place to make these attacks harder (but not impossible) to carry out.

The efficacy of these mitigations is therefore key to the security of the device.

Bitfi has some of these mitigations in place. As far as I know, they:

  • Attempted to reduce the amount of time the keys exist in memory.
  • Attempted to obfuscate the contents of memory.
  • Have prevented USB data access to the device.


If Bitfi didn’t store keys, there would be no need to mitigate against attacks that steal the keys. They do have these mitigations in place.

It’s dishonest to keep on claiming that it doesn’t store keys when it does.

How effective are these mitigations? Well, we have no idea. I doubt Bitfi do either though.

Eternal Vigilance is the Price of Liberty

Those that know me well will know that I hold the privacy and liberty of the individual among my core principles. I believe we are entitled, as part of our human rights, to go about our lives without intrusion from government, business, or individuals.

In the UK, we are subject to pervasive surveillance by CCTV, ANPR, and other monitoring technologies. The government, media and police attempt to use a “nothing to hide, nothing to fear” mantra to convince the public that these technologies are effective and have little downside. Unfortunately, the cost-benefit of many of these systems has not been demonstrated. Even if it were, the costs would be purely tangible ones – install, maintenance and operating costs – ignoring the impact it can have on our personal wellbeing. Over time, we become used to this surveillance and accept it without question.

Recent rises in crime, along with reduced police resources, have triggered community crime-fighting efforts. Neighbourhood Watch and volunteer patrols are often suggested and can have positive effects.

But a recent post on a Facebook group proposed a system that could invade privacy, would not comply with data protection law, could place homeowners’ networks at risk of attack, and has no demonstrated impact on crime levels.

Initially, I was willing to put this down to naiveté and an “anything-is-better-than-nothing” attitude, but it soon became clear that this was not the issue here.

Unfortunately, this is a closed group and I can’t just link to it. I hope that the following excerpts are representative of the whole.

I’ll summarise this, and subsequent posts:

To reduce crime, a network of Raspberry Pi based automatic number plate recognition (ANPR) cameras would be installed. This would cover the area of a small town and would be operated outside of GDPR or any other data protection laws. The cameras would be located at knee-height on the perimeter of private properties, filming public roads.

Residents pay to install and operate the system. Residents can prevent their vehicles from being logged by registering their number plate with the system. This would require sending a V5 document or an image of the car in the driveway.

In the event of a crime, the list of unregistered plates would be used somehow. There is also a suggestion that alerts could be raised on “bad” plates.

There is a proposed expansion to record WiFi and Bluetooth identifiers alongside number plates.

I have a number of serious concerns around this scheme.


Elliott wishes to operate the scheme outside of current data protection laws.

He has made several incorrect or questionable claims here.

It seems to have been accepted by Elliott that recording images of people would mean the system would fall under GDPR.

There are a number of claims that need examining.

Firstly, it’s highly unlikely the cameras will only gather number plates. Whilst some ANPR cameras have a limited FOV and are virtually useless at capturing images of people, this is not the case here. They are general purpose, wide-angle cameras mounted at knee-height. If you stand 3m away from this camera, you will be captured from head-to-foot. The chance that a significant network of cameras does not capture images of people is vanishingly small.
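The geometry behind "head-to-foot at 3m" is simple to check. The camera height and field of view below are assumptions typical of a knee-height, wide-angle camera:

```python
# Quick geometry check: how high does a knee-height wide-angle camera's
# view extend at 3m? Both input figures are assumptions for illustration.
import math

camera_height = 0.5       # metres - knee height (assumption)
vertical_fov_deg = 60     # typical wide-angle lens (assumption)
distance = 3.0            # metres from the camera

# The view cone spans this far above (and below) the lens axis at 3m:
half_span = distance * math.tan(math.radians(vertical_fov_deg / 2))
top_of_view = camera_height + half_span

print(f"view extends up to {top_of_view:.1f}m at {distance}m")
```

With these assumed figures the view reaches over 2.2m at 3m: taller than almost anyone, so a person standing there is captured in full.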

Secondly, the notion that number plates do not constitute personal information is false. The ICO ruled on this in 2009: vehicle registrations of vehicles owned by individuals are personal information.

Thirdly, the camera network gathers more than just a number plate. There is also the time and location over a network of cameras, providing a route. This makes the information even more likely to lead to an individual being identified.

Fourthly, to register your car on the system, you are required to send your V5 or an image of the car in a driveway. This is certainly personal data.

Elliott was challenged about this. Rather than accept that the system may need to handle the data under GDPR, he doubled down around the V5s and images of cars in a drive.

He now tries to argue that the system isn’t being operated by an entity – it’s just citizens sending data to each other. It seems a very odd argument, given that the operator of the system would be taking payment of £50/year per camera – that sounds a lot like a central entity. More to the point, GDPR doesn’t really care if it’s a business or an individual; it cares about the data being gathered.

Fundamentally, Elliott is proposing a system that would gather other people’s data and that these people would not have any of their rights under GDPR. They would not be informed, they would not have the right to access, and they would not have the right to erasure.

To make things worse, the scope creeps to include Bluetooth and WiFi data gathering. Now your phone and smart watch will be tracked by the same system.

Without the controls that data protection laws provide, who knows what the data will be used for?

Information Security

The proposed system would place a network of Raspberry Pis on the networks of many homeowners.

I have three concerns here.

Firstly, I would be concerned that attackers without authorisation could take command of one or more of these devices remotely, viewing the cameras, injecting false data, or attacking the homeowner’s networks. IoT security isn’t easy, and I have seen many Raspberry Pi based systems fail badly and fail hard.

Secondly, I would be concerned that someone with authorisation to access the devices could attack the homeowner’s networks. Given that the system is operating outside of data protection laws, and that it isn’t operated by a company or entity, how do you know who has access to the devices? What controls have they put in place to protect you? What comeback would you have?

Thirdly, what would happen if one of the devices was stolen? What access would this permit the attacker? I often see credentials from a single device permit access to many more.


It is stated that the system will avoid capturing anything except images of number plates. As a result, it won’t actually capture images of crimes. It will just know which vehicles had been in the area at a given time.

If a crime occurs, all the system will be able to provide is a list of number plates of vehicles in the area. This list will contain residents who have not been registered, visitors, people passing through, vehicles that have not been registered as leaving due to coverage, and maybe the vehicle the criminal used.

I’m not sure what this list will be used for.

I’m not sure what the police would do with a list like this.

It’s certainly not obvious that it will provide any benefit.

Essentially, anyone with the audacity to enter the Oxted ring-of-steel will become a suspect.

If 100 cameras are installed, then it will cost £5,000 to install, and £5,000/year to operate – £30,000 over 5 years. Is it going to provide value compared to other options?

I’d want something more than an appeal-to-emotion to justify installing such a system.

Ironically, if the system gathered images of people and crimes, it would probably be of more use.


There is the explicit admission that he will try to avoid GDPR.


Data protection law is often maligned. It isn’t the evil beast that many make it out to be.

Entities that comply with data protection law have normally considered what data they gather, and how they will protect it.

Those that don’t comply with data protection law often gather more than they need and don’t adequately protect it, likely because they don’t think any of the penalties can apply to them.

GDPR doesn’t exist to stop people implementing ANPR systems; it exists to allow those surveilled by such a system to know what happens with the data.

It’s often less effort to comply with the law than it is to skirt around it.

Ask yourself why Elliott is trying to escape these responsibilities and what impact it could have on you.

Bitfi against the competition – updated 2019

Bitfi’s core concept is that the wallet itself does not “store” your private keys – it calculates them on the fly.

It is acting as a simple key-derivation function. A salt (8 characters) and a phrase (30 characters or more) are passed through a simple algorithm to generate longer private keys.

The Bitfi takes a short key and “stretches” it into a longer key.

There is no secret sauce in a Bitfi – you don’t need that user’s Bitfi (or any Bitfi, for that matter), the user’s email address, password, access to the Bitfi dashboard or anything else. All you need is the salt and phrase. With those, funds can be accessed.
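The key-stretching idea can be sketched generically. This is a standard PBKDF2 sketch in the style described above, not Bitfi's actual algorithm, and the salt and phrase are made up:

```python
# Illustrative key stretching: a short salt and phrase deterministically
# expanded into key material. Generic PBKDF2 - NOT Bitfi's real algorithm.
import hashlib

def derive_key(salt: str, phrase: str) -> bytes:
    # Same salt + phrase always yields the same key material - which is
    # the whole point: nothing else is needed to access the funds.
    return hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt.encode(), 100_000)

key1 = derive_key("salt1234", "thirty character secret phrase")
key2 = derive_key("salt1234", "thirty character secret phrase")
assert key1 == key2  # deterministic: anyone with both inputs has the key
print(key1.hex()[:16])
```

The determinism is the crucial property: the salt and phrase are the key, in a slightly different form.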

The Competition

We need to see how others have implemented hardware wallets. Most of them do not work like the Bitfi; a typical wallet operates as follows:

  • A private key is generated using a random number generator in the wallet.
  • The private key is displayed on the wallet itself. The user writes it down and stores it safely as a backup.
  • This is the only time the key is output from the wallet.
  • From this point onward, the wallet acts as black box, signing any transactions that are sent to it. There is no need to enter the private key, and the private key is never output from the device.

This is probably the most important security feature of a hardware wallet: the private key is locked away in a black box with a limited attack surface and on a device that has a single use. The private key never needs to exist on your malware-infected, general-purpose laptop, and is therefore much less likely to be compromised.
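The black-box model can be sketched like this. HMAC stands in for a real ECDSA signature, and the class is an illustration of the design, not any vendor's implementation:

```python
# Sketch of the "black box" wallet model: the key is generated inside the
# device, shown once for the paper backup, and after that only signatures
# ever leave. HMAC is a stand-in for a real transaction signature.
import hmac, hashlib, secrets

class HardwareWallet:
    def __init__(self):
        self._private_key = secrets.token_bytes(32)  # generated on-device

    def backup_key_once(self):
        """Shown on the wallet's own screen, once, for the paper backup."""
        return self._private_key.hex()

    def sign(self, transaction: bytes) -> bytes:
        # The key is used internally; only the signature is output.
        return hmac.new(self._private_key, transaction, hashlib.sha256).digest()

wallet = HardwareWallet()
sig = wallet.sign(b"send 2 BTC to Bob")
print(len(sig))  # a 32-byte stand-in signature; the key itself never left
```

Nothing the host computer sees, before or after signing, contains the private key.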

There are still some obvious security issues that need to be handled.

  1. If the wallet is stolen, it could be used to sign transactions. To prevent this, the wallet uses a PIN to guard access to the black box. Without this PIN, the device will not function. Brute-force protection makes it infeasible to try all combinations, preventing the private key from being accessed.
  2. If the wallet is stolen, the private key could be read out from the memory inside the device. To mitigate this, the keys are stored on internal flash on microcontrollers. There is no easy way to read the keys out without advanced hardware attacks.
  3. The wallet simply signs transactions that are sent to it. A user may be expecting to sign “send 2BTC to Bob”. This could be modified to be “send 200BTC to Eve”. To prevent this, the wallet displays the transaction details and asks the user to confirm using buttons on the wallet.
  4. The wallet will run firmware, which can be updated. A malicious firmware could be loaded, allowing the private key to be read, or hide modified transactions from the user. Signed firmware and secure boot are used to protect against this. Only the vendor can generate firmware that the wallet will accept.
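Point 1 above, PIN-gated access with brute-force protection, can be sketched as follows. `PinGuard` and the wipe-after-three-failures policy are assumptions for illustration; real wallets enforce this in firmware and secure hardware.

```python
import hmac

class PinGuard:
    """Sketch of point 1: PIN-gated access with brute-force protection.

    The wipe-after-three-failures policy is an assumption for
    illustration; real wallets enforce this in secure hardware.
    """
    MAX_ATTEMPTS = 3

    def __init__(self, pin: str, secret: bytes):
        self._pin = pin
        self._secret = secret
        self._failures = 0

    def unlock(self, attempt: str) -> bytes:
        if self._secret is None:
            raise RuntimeError("device wiped")
        if hmac.compare_digest(attempt, self._pin):  # constant-time compare
            self._failures = 0
            return self._secret
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._secret = None  # zeroise: brute force is now pointless
        raise PermissionError("wrong PIN")
```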

These protections are not perfect; a determined and dedicated attacker can still circumvent them. This is a key point: all we need to do is raise the bar high enough that most attackers don’t succeed.


Bitfi is very different to those hardware wallets.

Each and every time a transaction is signed, the user must enter their entire salt and phrase via the touchscreen display.

The wallet then derives the private key. There is no need for the wallet to store the private key long-term.

Bitfi proudly state that this means there is “nothing to hack” – if there are no keys stored on the device, how can a hacker possibly get the keys?

At first glance, this seems sensible. But if you dig a bit deeper, you quickly realise how broken this concept is. I’m going to describe the attacks we have developed against Bitfi, and how they stack up against competitors.

Cold-boot attack


With USB access to a stolen Bitfi, the salt, phrase and keys can be read from the RAM. This is called a “cold boot attack”.

The data persists in RAM for days whilst the device has a battery in it.

This only requires a USB cable and a laptop. The wallet casing does not need to be opened, and no specialist hardware is required. No skill is required as the attack is entirely scripted.

The attack takes less than a couple of minutes, and the device works as normal afterwards. It is feasible for this to be carried out while luggage passes through X-ray screening at an airport, with the device then returned to the owner.

The attack has never bricked the Bitfi and has worked extremely reliably.

There is no requirement to cool the device to perform the attack.

Bitfi recommend the use of “diceware” passwords, which greatly facilitates the recovery of the phrase from memory. The use of a list of dictionary words means there is a lot of redundancy, which in turn means that the memory can degrade significantly and we can still recover the phrase.
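The redundancy argument can be demonstrated: because the diceware wordlist is public, even badly degraded words can often be matched uniquely. A minimal sketch, using a tiny stand-in wordlist (`candidates` and the word list are illustrative, not the real diceware list):

```python
# Tiny stand-in wordlist; a real diceware list has thousands of words.
WORDLIST = ["apple", "banana", "castle", "diesel", "ember"]

def candidates(degraded, wordlist=WORDLIST):
    """Return every word consistent with the surviving characters.

    `degraded` is a list where decayed characters are None.
    """
    return [
        w for w in wordlist
        if len(w) == len(degraded)
        and all(d is None or d == c for d, c in zip(degraded, w))
    ]

# Four of six characters lost, yet the word is still unambiguous:
assert candidates(["c", None, None, None, "l", None]) == ["castle"]
```

The same matching scales to full phrases: each word only needs enough surviving characters to be unique within the wordlist.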

Bitfi did not inform their customers of this issue.

Update – Summer 2019 – It has been claimed that the issue has been fixed, but no evidence has been provided, and no independent testing has been carried out.


To protect yourself, the battery must be removed from the wallet after use. To ensure the keys are no longer in memory, the wallet must be left powered down for at least an hour. During this period, you must make sure no one physically accesses the device.


The RAM on most other hardware wallets is protected from access via USB or debug protocols such as JTAG or SWD. We have observed no such protections on the Bitfi.

Even with this protection in place, most other wallets take steps to “zeroise” or wipe the memory immediately after it is used. This limits the window over which cold-boot attacks could be carried out. On the Bitfi, this is either not performed or is wholly ineffectual.

There are no published cold-boot attacks against other hardware wallets.

Malicious Firmware


With USB access to a Bitfi, the firmware can be modified so that the salt, phrase and keys are sent to an attacker the next time they are entered by the user. This permits “evil maid attacks” as well as supply chain tampering.

This only requires USB access to the device. The wallet casing does not need to be opened, and no specialist hardware is required. The attack could be carried out if the Bitfi is connected to a compromised charger or laptop. Using this vector, the attacker never requires physical access to the device.

The attack takes less than a minute, and there are no mechanisms for the user to detect the modification. It is feasible for the attack to be carried out by anyone with access to the device for a short period, either before (supply chain tampering) or after (evil maid) the Bitfi enters possession of a user.

This attack could happen before you receive the device, when going through airport security, or when it is left unattended at home.


There are no mechanisms for a user to check if the device has been tampered with at a firmware level. This has been confirmed by Bitfi developers.

A user could assume that the device is trusted as received from Bitfi. As long as the device does not enter the hands of an attacker, and is never connected to an untrusted USB power source, it could be considered secure from this specific attack.

Given how easily the firmware can be modified in transit, assuming the device is trustworthy as received is high-risk.


The use of signed firmware updates and secure boot mean that other wallets cannot have their firmware modified in less than a minute using just a USB connection.

There are still other attacks that hardware wallets are vulnerable to.

The wallet can be stolen and replaced in entirety. The replacement will send the PIN to the attacker, allowing them to unlock the stolen wallet and access funds. The user would detect the issue as their key would not be on the replacement wallet, and they would not be able to access funds.

The wallet can be accessed, modified with a hardware implant, and returned. It could harvest the PIN, modify transactions, or spoof integrity checks. This attack is significantly more challenging than simply plugging the device into USB, and to date, no feasible attack of this kind has been shown against any of the popular wallets.

Shoulder surfing


The entire salt and phrase need to be entered into the Bitfi each time it is used. They are typed on a conventional QWERTY touchscreen keyboard and displayed clearly on the screen.

It is entirely possible to read the salt and phrase from the screen and then use this to steal funds from the user.

Even without direct view of the screen, the finger position when typing allows characters to be inferred. The use of dictionary words means that even if certain characters cannot be determined, they can be inferred from those that can be seen.

As the salt and phrase contain all the information required to steal funds, a user may be entirely unaware that they have been compromised. The attacker can delay the attack until an opportune moment.


The Bitfi wallet cannot be used when someone can observe the device.


Whilst the PIN can be shoulder-surfed on other devices, an attacker still requires access to the wallet itself to obtain the key. This provides a significant additional layer of security.

Some other wallets mitigate the risk of shoulder surfing by randomising PIN layout.

Key entropy


The Bitfi device allows users to choose their own salt and phrase. Multiple studies have shown that users are very bad at choosing and storing passwords, and there is no reason to assume Bitfi users will differ.

It was discovered that one user even used the sample phrase from the Bitfi instructions.


A salt and phrase of good quality must be used.


Competing wallets encourage the use of keys that are randomly generated using a good source of entropy, removing the human aspect.

Something-you-have and something-you-know


Bitfi only requires the salt and phrase, and nothing else. Wallets can be used interchangeably (at least, at a functional level – this is not recommended for security reasons).

If your salt and phrase leak via any means, an attacker has access to your funds. There are no flags to signal this.

This is termed single-factor authentication.


There are no good means to solve this issue.


Other wallets support passphrases as part of the BIP39 specification. To use the wallet, you need both the key stored in the wallet itself, and a passphrase that is stored elsewhere. This is something-you-have (the key on the wallet) and something-you-know (the phrase).

Use of a passphrase with BIP39 significantly elevates the security above that of a Bitfi.
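The BIP39 mechanism is simple enough to sketch directly. The derivation is as the specification defines it: PBKDF2-HMAC-SHA512, 2048 iterations, with the salt "mnemonic" + passphrase. The example mnemonic is a published BIP39 test vector; `bip39_seed` is an illustrative name.

```python
import hashlib, unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Derive the 64-byte BIP39 seed, per the specification:
    PBKDF2-HMAC-SHA512, 2048 iterations, salt "mnemonic" + passphrase.
    """
    norm = lambda s: unicodedata.normalize("NFKD", s).encode()
    return hashlib.pbkdf2_hmac(
        "sha512", norm(mnemonic), norm("mnemonic" + passphrase), 2048
    )

m = "legal winner thank year wave sausage worth useful legal winner thank yellow"
# Stealing the wallet (the mnemonic) is useless without the passphrase:
assert bip39_seed(m, "TREZOR") != bip39_seed(m)
```

A different passphrase yields a completely different seed, which is what makes the passphrase a genuine second factor rather than a cosmetic lock.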


The Bitfi wallet is less well protected than competing hardware wallets. If you ever let anyone else have access to it, ever connect it to an untrusted device, or use it in a public place, you are not safe.

Users of Bitfi must take significant and limiting steps to mitigate the risk they are exposed to.

Even ignoring Bitfi’s dishonest behaviour, the product has little to recommend it over any other wallet.

Can these issues be fixed?

We aren’t really sure.

There is no hardware root of trust on the Bitfi. This must be burned into the device before it leaves the vendor’s possession for it to be secure. Without this, secure boot cannot be implemented well.

The use of external RAM on a commodity chipset (without RAM encryption) will always leave the keys exposed, no matter how well you try to wipe them from software.

Android is a poor choice of operating system. It makes wiping memory very challenging – the salt and phrase are in tens of locations. It also makes limiting the attack surface very, very hard.