Arduino misconceptions 5: you’ll wear out the flash memory

On the ATmega328P and most other Atmel microcontrollers, code is stored and executed in flash memory. Every time you “upload a sketch”, you are communicating with a small piece of code called the bootloader, which then programs the flash with your code.

Flash has a finite number of program/erase cycles – you can only write to it a certain number of times before bits will either be programmed incorrectly or become stuck at 1 or 0. With an ATmega328P, this will render the device unusable unless you invest a lot of time fiddling with the toolchain.

Now and then, someone will ask “Will I wear out the chip?”, or will admonish a newbie for programming the chip so frequently.

The reality is that you are highly unlikely to wear out the flash memory on an Arduino.

Atmel specify an endurance of 10,000 program/erase cycles. I don’t know the statistics behind that figure, but it means they are highly confident that a large proportion of chips will reach this level.

If we put that in real terms: if you are a hugely dedicated hobbyist who spends 2 hours each weekday and 8 hours over the weekend on your Arduino, flashing it once every 5 minutes, you will get almost a year of use before the chip could fail.

For a much more reasonable use case of about 8 hours per week, flashing it every 15 minutes, you get 6 years of use.

For the <£5 that the chip costs, this seems entirely reasonable to me.

Further to this – take into consideration that 10,000 cycles is almost guaranteed; many chips will last far longer. Dangerous Prototypes have a project called the “Flash Destroyer”, whose sole purpose is to perform program/erase cycles on an EEPROM to see how far it will go. A 1,000,000-cycle EEPROM reached 11,500,000 cycles before failure.

So that one year could become 10, and the 6 years become 60.





One thought on “Arduino misconceptions 5: you’ll wear out the flash memory”


    Clive Robinson

    February 21, 2013 at 5:51pm

    The 10,000 cycle count very much represents worst-case conditions.

    The life expectancy of flash data retention is, for instance, highly dependent on temperature; likewise, bit programming is very dependent on the voltage used.

    But consider the problem this way: what is it about the software development method some people are using that requires so much re-programming?

    I’ve developed moderately complex FMCE products in assembler on microcontroller chips at a time when flash cycle ratings were very low and actual testing required soldering the chips onto the target board, so real re-use was down to at best 10 cycles before the chip or PCB gave up. Worse, the chips were leading edge – in effect foundry prototypes – and had lead times measured in months.

    Thus thoughtless debug cycles had very real costs, so through actual thinking and careful design you got the test cycles down to very, very low numbers. One way to do this is by careful management of the code functionality. It was possible in most cases to break out code blocks in such a way that you could put ten or so separate code blocks that needed step-by-step testing onto one chip. You then did the tests on each block and got the required results with just one programming cycle, not ten.

    Most microcontroller tool chains these days have quite reasonable emulators, so testing the bulk of code off-chip is easily possible. Again, with a little thought and experience you can usually design low-level code in a way that leaves only very tiny amounts time- or target-test critical.

    One way to do this is with fast and slow interrupts. You have real-time interrupts for time-sensitive responses; however, the responses can usually be pre-encoded and put in buffers etc. by the slow interrupts.

    A simple example would be debouncing a key press. Each time the pin state changes, an interrupt is generated; this can be used to drive a simple pair of counters in the fast interrupt. The slow interrupt, driven by, say, the system pacemaker clock, would read the counter information and put it through a low-pass filter etc. The result is that the high-frequency noise gets removed and the keypress signal is cleaned up without requiring expensive external components. But also, only the very simple fast-interrupt code needs testing in situ on the target; the inputs to the slow interrupts can easily be simulated in the emulator and tested to levels not actually possible on the target system, thus also improving product reliability whilst saving expensive target test time.

    It’s also important to remember this because although ICEs are nice, they have their failings due to, amongst other things, lead lengths; thus even having the luxury of an ICE may not actually help you. Knowing how to develop a development methodology that minimises target test time and target impact on code is a fairly vital skill to have: it can make you up to 100 times faster in your code delivery time as well as considerably improving code reliability in the final system.

    Developing code for RT systems is an “Engineering task”, which means a proper “Engineering methodology” is needed, because the usual artisanal approach of patterns, rapid development and other big-software-project mythology and code-cutter techniques really, really does not work.
