Thursday, August 31, 2017

Battery backups


Here's a pretty common scenario: you've got a low-powered device with an RTC (real-time clock), be it internal to the processor or an external chip. When your device is powered, the clock runs off the external supply - mains or a larger battery - and when it isn't, off either a small lithium battery (coin cell type) or a supercapacitor.

Typically this is achieved with two separate VCC supply lines: one for external power, used whenever it is available, and one for the battery backup, used when main power is not.

Lithium batteries (CR1232 or something like that) are nice - lots of "slow" capacity in a small package. Enough to run your clock for about two to five years (depending on battery size and the power consumption of your RTC). However, when it runs out, it runs out: everything that was on the chip (and I'm assuming that storage is RAM, as it usually is) is gone for good. If there was critical data in that RAM, it's lost. But then again, if your device has been unpowered for years, that data might not be relevant anymore. Or then again, you might have wanted to keep it anyway.
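As a back-of-the-envelope check - the figures below are my own assumptions, not from any particular datasheet - battery life is simply capacity divided by current draw:

```c
/* Rough coin cell lifetime: capacity / draw. Example figures only. */
#include <stdio.h>

int main(void)
{
    double capacity_mah = 35.0; /* small coin cell, roughly CR1220 class */
    double draw_ua      = 1.0;  /* typical RTC timekeeping current */

    double hours = capacity_mah * 1000.0 / draw_ua;
    printf("~%.1f years\n", hours / (24.0 * 365.0)); /* prints ~4.0 years */
    return 0;
}
```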

Supercaps, on the other hand, have capacity for maybe a week or two at most with common supercap sizes, but at least they can be charged again and again. This is nice if your device is externally powered most of the time (like mains power), but if power does run out, the same limitations as above apply.

But wait, what if you wanted to use a simple, cheap capacitor? Something like a 330 µF electrolytic cap costs next to nothing these days. Well, sure, if your device is almost always externally powered - like, say, an alarm clock. It would keep its charge for maybe a few hours at most, but at least it would tolerate short power outages. Unlike my current one, which isn't even the cheapest one available but still loses all its settings if power goes out for even a few minutes.
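For a rough hold-up estimate you can treat the RTC as a constant current load: t = C·ΔV / I. A quick sanity check with assumed figures (again mine, not measured):

```c
/* Capacitor hold-up time: t = C * dV / I, constant-current approximation. */
#include <stdio.h>

int main(void)
{
    double c_farads = 330e-6; /* 330 uF electrolytic */
    double dv_volts = 1.5;    /* charged voltage down to RTC minimum */
    double i_amps   = 100e-9; /* assumed ~100 nA timekeeping current */

    double seconds = c_farads * dv_volts / i_amps;
    printf("~%.1f hours\n", seconds / 3600.0); /* prints ~1.4 hours */
    return 0;
}
```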

But dissing Sony designs aside, how do you choose your backup solution? That, of course, depends.

How long do you want the data to be retained?
What happens when the power eventually runs out?

There is no single answer, it's something that you'll need to weigh case by case.

Older Nintendo Pokemon games are an example of this. The save game data in the cartridge was stored in low-power RAM, backed up with a large coin cell battery. Those batteries are now dying, and your precious Pokemon collection is about to vanish - unless you replace the battery (not an easy task, as the cartridge must have that backup power supply present during the entire operation!)

Personally I prefer actual nonvolatile storage, like EEPROM (or MRAM/FRAM these days). On paper they have a limited retention life - quoted as 20 years or so - but in practice it is much longer. And they keep their data without being powered at all. The internal clock may stop after its backup power runs out, but it is easy to set again; other data may be worth a lot more. Like those Pokemon.
But apparently a few pennies more for actual nonvolatile memory was too much for them...





Thursday, August 24, 2017

I could have used an SD card?!?


I have stayed away from SD cards (in my projects) because of two assumptions:
1) They require complex and fast SD controller hardware and software, and
2) They're unreliable pieces of s***.
2B) It is possible that I heard at some point that licensing is required and thus (subconsciously) didn't even bother. But I'm not sure, so this doesn't really count.

Only now did I find out, almost by accident, that SD cards have an SPI mode. I love SPI. It's dead easy to implement even in software, allows many devices to share a bus (kinda-sorta) and is very reliable. So if I, all this time, could have just put an SD socket on the board and instantly gained a huge amount of nonvolatile memory... Daaaaamn.
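For the curious, the SPI-mode handshake is short enough to sketch. Take this as an outline of the sequence in the SD spec rather than production code: spi_xfer(), cs_low() and cs_high() are hypothetical board-specific helpers, and timeouts and error handling are omitted for brevity.

```c
/* SD card init over SPI: >=74 clocks, CMD0, CMD8, then ACMD41 until ready. */
#include <stdint.h>

extern uint8_t spi_xfer(uint8_t b);      /* exchange one byte on the bus */
extern void cs_low(void), cs_high(void); /* chip select control */

static uint8_t sd_cmd(uint8_t cmd, uint32_t arg, uint8_t crc)
{
    uint8_t r1;
    spi_xfer(0x40 | cmd);
    spi_xfer(arg >> 24); spi_xfer(arg >> 16);
    spi_xfer(arg >> 8);  spi_xfer(arg);
    spi_xfer(crc);                        /* only CMD0/CMD8 need a real CRC */
    do {                                  /* poll for the R1 response */
        r1 = spi_xfer(0xFF);
    } while (r1 & 0x80);
    return r1;
}

int sd_init(void)
{
    int i;
    cs_high();
    for (i = 0; i < 10; i++) spi_xfer(0xFF); /* >=74 clocks with CS high */
    cs_low();
    if (sd_cmd(0, 0, 0x95) != 0x01) {        /* CMD0: enter idle state */
        cs_high();
        return -1;
    }
    if (sd_cmd(8, 0x1AA, 0x87) == 0x01)      /* CMD8: voltage check (v2+) */
        for (i = 0; i < 4; i++) spi_xfer(0xFF); /* discard R7 trailing bytes */
    do {                                     /* ACMD41: begin initialization */
        sd_cmd(55, 0, 0xFF);                 /* CMD55: next command is ACMD */
    } while (sd_cmd(41, 1UL << 30, 0xFF) != 0x00); /* HCS: SDHC accepted */
    cs_high();
    return 0;
}
```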

But point 1) still kinda-sorta stands. If I needed to implement a FAT filesystem (instead of treating the card as raw bulk storage), that would still take some software. At the moment I'm struggling with the current product line as I'm almost out of program flash, and there are still years of development to do. Even a few kilobytes more of code would make my life very difficult.

But what if I could treat it as raw storage? Instead of a "small" SO8 flash chip (of just 16 or 64 Mbit or so), an SD card could be put there, with more space than I could ever use. For my current purposes even a 2 GB card would be nearly unlimited.
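Raw access really is just block-sized reads and writes. Continuing the hypothetical helpers from the init sketch above, a single-block read might look roughly like this (CMD17, 512-byte blocks, byte addressing as used by standard-capacity cards):

```c
/* Read one 512-byte block. Uses sd_cmd()/spi_xfer() from the sketch above. */
#include <stdint.h>

int sd_read_block(uint32_t block, uint8_t *buf)
{
    int i;
    cs_low();
    if (sd_cmd(17, block * 512UL, 0xFF) != 0x00) { /* CMD17: read block */
        cs_high();
        return -1;
    }
    while (spi_xfer(0xFF) != 0xFE)
        ;                                /* wait for the data start token */
    for (i = 0; i < 512; i++)
        buf[i] = spi_xfer(0xFF);
    spi_xfer(0xFF); spi_xfer(0xFF);      /* clock out and ignore the CRC */
    cs_high();
    return 0;
}
```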

Did I mention I still like to program small? I just checked - the Google home page is over 200 kB, excluding any pictures (like the logo). That's just a bit less than the program area of the current MCU I use, and that is packed full of functionality (the MCU, that is). The data structures I transmit over the web (between devices) are a few hundred bytes each - HTTP headers are often larger than that! So, comparatively, an SD card is essentially unlimited storage.

But then we get back to point 2). The failure mode of flash memories is notoriously bad. At least early SSDs had a habit of suddenly going completely inaccessible. One moment the drive would work fine, no issues; the next - nothing. You couldn't get anything out of it.
SD cards have a similar track record, although since they are commonly removable, the exact failure is harder to track down, and it might often be related to less-than-careful handling of the cards.

I've dealt with those already mentioned SO8 flash chips. They may have "just" 64 Mbit of storage or so, but they typically promise something like 10,000 erase cycles. That number is actually something I could trust - or at least believe.

SD cards, on the other hand, seem to like to hide that number. There is a wear-leveling controller in the card that tries to hide all of that from view, replacing it with meaningless "real world use" figures for marketing purposes.

But still. Take a 2 GB memory card. Write, say, 512 bytes to it every second. Let's assume that this specific card can tolerate only 100 erase cycles. That number, by the way, is highly conservative - even the cheapest of the cheap flash chips these days offer at least 1,000 erase cycles.
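Working that out (my arithmetic, and it assumes the wear leveling spreads writes evenly over the whole card):

```c
/* Endurance estimate: time to fill the card once, times the erase cycles. */
#include <stdio.h>

int main(void)
{
    double card_bytes   = 2e9;   /* ~2 GB card */
    double write_rate   = 512.0; /* bytes written per second */
    double erase_cycles = 100.0; /* deliberately pessimistic */

    double seconds = (card_bytes / write_rate) * erase_cycles;
    printf("~%.1f years\n", seconds / (3600.0 * 24.0 * 365.0)); /* ~12 years */
    return 0;
}
```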

Even then this would mean more than 10 years of usable life for the product. Assuming, of course, that the wear leveling doesn't lose its mind in between and destroy everything.

It's still very, very tempting...



Monday, August 21, 2017

"Are you sure?"


The kid, now 6, loves to play Lego Batman 2 on my laptop. I even got him an Xbox controller to play it with. At first he needed help with some parts (some tricky jumps, mostly), but it seems he has gained some skill, as he managed to finish the story with just a little help here and there.

Then he found out that after finishing the story you can play with any (unlocked) characters you want, anytime. Talk about excitement! I'm guessing the save he was using was nearing 70% completion or so.

Now, I often deal with my clients' problems. Many people seem to learn technology by the method of "press this, then three times this, then this". They do not read what is on the screen; they just press the memorized-by-rote combination. And if anything they didn't expect shows up (like an error message), way too often they don't even bother reading it and instead start wildly pressing anything and everything to make the error go away.

After that they typically call me.

Take a deep breath. Count to... well, three, as it is a phone call and I can't really delay much longer. And calmly explain to them how to salvage whatever's salvageable at that point. Really, that's all I can do. And quite often, that's all that needs to be done. Next time they (hopefully) are wiser.

I'm pretty sure the kid isn't like that. He seems to be - in general - remarkably resourceful when figuring out how to get whatever he wants. At the moment he doesn't read, because he can't - yet, at least. And I'm guessing that's why he chose "new game" and then his (70-ish percent complete) save slot... and then even confirmed the erase of the old save...

...And proceeded to ask why there was the (unskippable) intro movie playing, again.

After I figured out and explained to him what had happened, he took it relatively calmly. But he hasn't asked to play that game again since... Then again, I wouldn't be too happy to start everything over either.

Lesson learned. Maybe. Hopefully.


 


Saturday, August 5, 2017

Transmit errors


When you dig down deep enough, everything is analog. And that means a certain degree of uncertainty.

I've previously mentioned the WiFi module I'm using, and deep down the communication with it - via a serial link - is again nothing more than a stream of bytes. It's up to the application to assign meaning to that stream in software.

And boy, is that fun. The protocol the module uses assumes that the serial link is mostly immune to transmission errors - that is, that absolutely no bytes are corrupted or lost on the way from the main processor to the module or the other way around. If a single byte is lost, synchronization is lost and the received data becomes incomprehensible. And since the frame format has no checksum, a single corrupt byte may render the entire transmission (a single frame, or your gigabyte of data within it) irrelevant, or at least broken until synchronization is repaired.

So while the underlying TCP/IP link (assuming one is open at the moment) may itself be resistant to errors, the data link from the module to the processor might not be (and in my application it definitely isn't, as the module sits on an external board connected with a cable). And this will easily lead to corrupt data being transmitted.

Fortunately I have a personal, deep distrust of the data integrity of any such link - be it a point-to-point one-inch serial link contained within a board, or a wireless link over thousands of kilometres - so I tend to design my applications' data transfer protocols to contain rudimentary integrity checks of their own, in order to avoid just this kind of situation. If data is lost (in either direction), the connection (read: the entire module, if the error is persistent) is reset and the application tries again.
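A minimal sketch of what I mean by rudimentary checks - the frame layout and CRC choice here are illustrative, not my actual protocol:

```c
/* Framed message with an integrity check: sync, length, sequence number,
 * payload, CRC-16-CCITT. A lost or corrupt byte now fails the CRC, and the
 * sync byte gives the receiver a place to hunt for the next frame. */
#include <stdint.h>
#include <stddef.h>

static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)*data++ << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Frame: [0x7E][len][seq][payload, len bytes][crc hi][crc lo] */
size_t frame_build(uint8_t *out, uint8_t seq,
                   const uint8_t *payload, uint8_t len)
{
    out[0] = 0x7E;
    out[1] = len;
    out[2] = seq;
    for (uint8_t i = 0; i < len; i++)
        out[3 + i] = payload[i];
    uint16_t crc = crc16_ccitt(out + 1, (size_t)len + 2); /* len+seq+payload */
    out[3 + len] = (uint8_t)(crc >> 8);
    out[4 + len] = (uint8_t)(crc & 0xFF);
    return (size_t)len + 5;
}
```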

Whether this is efficient depends on your point of view. Is it the fastest way to transmit data? Absolutely not, as the constant data transmissions and their acknowledgements take a lot of time, especially over slower links (protocols designed for fast but high-latency links are nice, but I don't have any at my disposal at the moment). But then again, I'm not transmitting blockbuster movies in 4K resolution here, but infrequent, small(ish) data packets that I want to be processed by the receiver exactly once. It's up to the receiver to make sure a received packet is handled only once.
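The sequence number in the frame sketch above is what makes that possible. On the receiving end, the "exactly once" part can be as simple as this (a sketch; a real protocol might want a wider sequence space or a window):

```c
/* Drop retransmits: a frame with the same sequence number as the last one
 * we handled has already been processed, so only the ack is repeated. */
#include <stdint.h>
#include <stdbool.h>

static uint8_t last_seq = 0xFF;  /* sentinel: nothing handled yet */

bool frame_is_new(uint8_t seq)
{
    if (seq == last_seq)
        return false;            /* duplicate - ack it, but don't reprocess */
    last_seq = seq;
    return true;
}
```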

If you happen to be a modern (web) programmer, working exclusively with high-level languages, this may seem a bit excessive to you. But when you're working at a lower level, with no operating system or high-level language libraries to hide all the nasty details of unreliable links beneath, these things need to be done.

And guess what? I still like it, just the way it is. It might be unreliable, but it's still fun to work with.