Random thoughts about software, hardware and electronics. And other things too...
Sunday, December 30, 2018
Bluetooth printer
Some time ago I got my hands on a couple of Bluetooth-enabled thermal printers. They also had RS-232 input and were relatively cheap, so I was a bit excited; I've been looking for this kind of printer for some time for my purposes.
Very quickly my excitement died, though. Although the printer could be connected to RS-232 and Bluetooth at the same time, the interfaces couldn't be used while both were connected. So effectively, this printer is useless to me.
I contacted the manufacturer for help, and their response was essentially that this is by design and there is nothing they can do: if the RS-232 connector is attached, Bluetooth printing is disabled to prevent a situation where printing is requested from both sources at the same time.
While I understand the reasoning - after all, I wouldn't want data from both sources mixed on the same paper either - I was mostly dismayed because they essentially refused to acknowledge that this is actually a very easily solvable problem.
You only need two print buffers. Data received from RS-232 goes to one, and data from Bluetooth to the other. When the printer is idle, printing from one buffer is committed while the other stays there, gathering data, until the first receipt is done - then the buffers are switched, possibly after a short idle timeout. Simple, effective and exactly what I'd need: these printers, after all, are used only occasionally, printing a single receipt and then sitting idle for a period.
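Just to illustrate, here's a minimal sketch of that idea in C - the buffer size and the receipt_complete()/print_next_chunk() helpers are purely hypothetical hooks, not anything from this printer's actual firmware:

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#define BUF_SIZE 4096

typedef struct {
    uint8_t  data[BUF_SIZE];
    uint32_t len;
} print_buffer_t;

static print_buffer_t buf_rs232, buf_bt;  /* one buffer per interface  */
static print_buffer_t *active = NULL;     /* buffer currently printing */

/* Hypothetical hooks into the rest of the firmware: */
bool receipt_complete(const print_buffer_t *buf); /* end marker or idle timeout seen  */
bool print_next_chunk(print_buffer_t *buf);       /* returns true when buffer drained */

/* Called from the RX interrupt of either interface. */
void on_rx(print_buffer_t *buf, uint8_t byte)
{
    if (buf->len < BUF_SIZE)
        buf->data[buf->len++] = byte;
}

/* Called from the main loop whenever the print head is free. */
void printer_poll(void)
{
    if (active == NULL) {
        /* Commit whichever buffer has a complete receipt waiting;
           the other one keeps gathering data in the background. */
        if (receipt_complete(&buf_rs232))
            active = &buf_rs232;
        else if (receipt_complete(&buf_bt))
            active = &buf_bt;
    } else if (print_next_chunk(active)) {
        active->len = 0;   /* receipt done - release the buffer */
        active = NULL;     /* and allow switching to the other  */
    }
}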
But no, whatever the reason, they just couldn't or wouldn't do it. So I guess I'll need to keep looking for better units.
Wednesday, December 26, 2018
PHEV Outlander: Winter update
Well, I've had the Mitsubishi Outlander PHEV for more than half a year now, and so far I've driven about 16700km with it. Overall fuel usage has been around 5 litres of gasoline per 100km, which is obviously way more than the rated consumption, but still not too bad for a 2-ton 4WD SUV.
The main factors behind this high reading are of course the battery size and the average length of my (daily) trips. For example, we go to our summer cottage (about 100km one way) on average once a month, more often in summer, so it doesn't really help that about 150km of that 200km round trip is driven on gasoline. And then there are the longer journeys...
As long as driving is local and less than 35km per day, it can be done on battery alone. Great!
Well, except now. Winter here is cold - right now it is around -15 degrees C, and I expect it will get much, much colder yet in January (despite climate change). Our Christmas trip out (about 200km one way) took a whopping 8,5l/100km! Not good, not good at all. And even in warmer weather (say, -5 C) the battery is good for just 20km at best, according to the car's computer. Let's just say that I'm not exactly pleased with these figures right now.
Now, after that shocking fuel consumption figure I did some reading, and it turns out that the car's computer has a serious aversion to letting the gasoline engine get cold. So, just to keep the engine warm, it is run much, much more in cold weather than is strictly necessary for driving.
Hmm.
It's common around here to add a "mask" in front of the radiator during winter, even on normal cars (and especially on diesels, as they already run cooler than gasoline engines), just to keep the engine warmer in cold weather. Even with a mask in place my old diesel just barely reached nominal running temperature during winters (in sub -15 C weather), and when stopped (e.g. at lights) the engine immediately started cooling down, even while idling.
So I added a mask to the PHEV too - just pieces of 2mm thick rubber matting I had handy, carved into shape with a knife and attached with a few cable ties - and was pleasantly surprised: our return trip (the exact same 200km) took only 6,5l per 100km! Two litres per 100km saved, or about four litres over the whole trip. The weather was of course a bit warmer now (-15..-21 C going there; -5..-15 C coming back), but nevertheless, the DIY mask absolutely appears to help.
Now, before you block the entire air intake of your car, I should warn you that this has drawbacks too. If the weather gets warm, your car might overheat without sufficient cooling air, so don't overdo it. I left maybe 15-20% of the air intake uncovered, but your proverbial mileage may vary. I'd suggest starting with half and watching how it affects things. If the radiator fan ever starts, you've covered too much already.
This of course doesn't help with that lousy 20km-per-full-battery figure, but I guess that can't be helped. At least now the car doesn't use - waste - as much gasoline for nothing...
Saturday, November 17, 2018
Where are they?
The Fermi Paradox and the Drake Equation are - depending on where you stand - fun or really un-fun thought experiments. The latter tells us that the probability of life is so high that it is almost completely certain that there are other intelligent species out there. But we haven't heard even a hint of anyone. Why is that?
What if almost all civilizations rise to the point where we are now, or higher - and then fall, never to be able to climb back up again? (Well, I guess this would fall into the "transmitting only briefly" category, in a fashion.)
At this point we, humankind, have utilized almost every easily accessible source of raw materials (minerals and oil, primarily) there is, and only by using fairly advanced tech are we able to extract the more inaccessible resources.
We have managed to dodge the proverbial bullet of nuclear holocaust so far (but that story isn't at its end yet), as well as biological pathogens and weapons (ditto there). Climate change is one of the major issues at this point, and I can only guess how bad it could get. What if, say, 100 million people find their current location uninhabitable and choose to move to a better one -- say, North America or Europe? That scenario looks way too plausible for comfort right now, and would quite certainly trigger major wars - plural, all over the world.
This isn't the only scenario. Our globally high level of civilization is built on a very fragile framework of technology where the majority of people are very narrowly specialized, and any major disaster might bring it crashing down. If a large enough number of these people are gone, the remaining population could very easily find their knowledge lacking in every field there is, from basic agriculture to ... well, anything. In just a generation or two there could be huge monuments of high tech left behind that we could no longer use (for lack of knowledge or [electrical or chemical] power), care for (lack of maintenance) or build (technology is always built on just slightly less advanced tech).
Again, with the easily available resources now gone, we would no longer have the tech to get to the hard-to-get ones. Could we ever rise up again as a technological power? I am not certain that we could. Mankind might be doomed to live here, on this Earth, never able to get past orbit or reach out to anyone.
How many civilizations have fallen like this, and will it be our destiny too?
“Ask ten different scientists about the environment, population control, genetics and you'll get ten different answers, but there's one thing every scientist on the planet agrees on. Whether it happens in a hundred years or a thousand years or a million years, eventually our Sun will grow cold and go out. When that happens, it won't just take us. It'll take Marilyn Monroe and Lao-Tzu, Einstein, Morobuto, Buddy Holly, Aristophanes .. and all of this .. all of this was for nothing unless we go to the stars.”
(Babylon 5, written by J. Michael Straczynski)
Sunday, November 4, 2018
CFL lifetime
When the ban on incandescent light bulbs came into effect, a common argument against it was that the lifetime of the CFLs (and later, LEDs) that were to replace them would be as short as the incandescents', or even shorter.
It took a while (a long while), but now I have some anecdotal evidence on that argument.
We've lived in this house for over 10 years now, and we have never changed the outside lights (four of them in total). Until now. One of them had gone bad: an 11-watt CFL. The glass tube seems to have neatly broken at one point and let the magic smoke out, or something to that effect. The other three identical CFLs are still going strong.
So these things have been out there for at least 10 years, in weather varying from -40 degrees C to +30 degrees C. Possibly over 20 years, even. During that time we've had some half a dozen other CFLs in the house that, granted, have been used more, but so far none of them have broken.
Going back even further, I got my first larger apartment around 2003 or so. At that time I bought a light fixture for the kitchen that I still have (some plastic parts have gone bad since, though). For that fixture I bought a CFL bulb that is still in use. So that would be 15 years, and that one has been used a lot during that time, considering it has been in the kitchen and, lately, in my office kitchen/break room where it easily gets 6+ hours of use a day in winter (much less in summer, of course). I'd say that, all in all, I've gotten far more "bang for the buck" out of CFLs than I ever did from incandescents.
Now, that being said, CFLs have other issues, like some brightening very slowly, but lifetime certainly isn't one of their weak points.
Now, LEDs, so far, seem to be falling in the middle. At the moment we have maybe 10 LED bulbs around here, maybe half of them dimmable and the oldest being somewhere between 5 and 8 years old (no firm time anchor for those like for the CFL examples above, you see). So far one has failed (I did open it up and post about it here too, and once again my semi-clever topic names bite me in the back, as I can't find the post right now). Still, we've gotten quite many hours of use out of those, too.
I, for one, won't be missing incandescent lights, except maybe for color rendering purposes in photography. For that I'll keep some as spares. But I wouldn't be surprised if even that were to change in the near future.
Friday, October 26, 2018
Cyberpunk eyes
If cyberpunk has taught us anything, it is that technology will make us better through implants - or augmentations, or whatever term you prefer. There is a cost of course - monetary at least, and very likely social too (see: the Deus Ex games). However..
Our visual cortex and eyes have been honed by millions of years of evolution, and still the result isn't even good, nevermind perfect - I do have to wear glasses, after all. That being said, I don't really expect much more of it; after all, evolution always looks for a better overall result, not for perfection in any single part.
That makes me think.
Let's say we could replace my eyes with a perfect camera (not the option I'd pick first from a list of possible improvements, but for argument's sake) .. would my brain - or specifically, my visual cortex - keep up with it?
I think that over time, our eyes and brain have evolved together to offer just enough data at our point of focus to give sufficient resolution, while also processing the incoming data in the most efficient manner. This is the reason so many visual tricks work. "Focus on the jugglers .. now, did you notice the gorilla walking past?", just to name one example.
Our senses - eyes in this case - feed our brain a massive amount of information, so before anything else happens, this wealth of data must be processed and filtered. Almost from birth our brain can detect movement very efficiently, as well as shapes - human faces specifically seem to get a lot of identification processing - and some other very specific but very important things that helped our survival in the past.
If I were to feed my brain that camera footage instead of what my natural eyes feed it, could my brain make any sense of it? I guess the answer would be yes - but only after a long period of my brain getting used to it. That might take a long time; months, or even years.
But the better question is, would that actually make me perform better? Replacing just the eyes, that is. I have a feeling the answer is no. The same processing would still be applied to the incoming data; I just changed the source, but the processing did not change. For optimal results the brain needs an upgrade there too.
For example, some predatory birds that live in forests have essentially two visual processors. One keeps track of the surroundings - essentially making sure they don't hit trees and other such things. The other tracks their prey as it flees, to keep it from escaping. We don't have that capacity to focus on two very different (visual) things at once, although some people appear to think they do, especially when driving.
Some frogs apparently have processing in their brain that only fires when a small, roundish item (like a bug) is moving in their field of vision (disclaimer: or so I've recently read somewhere). What a perfect way to conserve energy, idling when there's nothing to do... (I think I wrote a post about this "do nothing/sleep until you need to do something" approach in the context of MCUs, but I can't find it right now, so no link, sorry.)
So in the end, without some brain processing upgrades, upgraded eyes might not actually be that great. Granted, you could put processing in the new eyes too, to provide a HUD or something like that to make them more useful, but even then, there is that "focus on one thing only" limitation. Context switching is expensive in processors, and in our brain even more so (say, switching between focus points in vision).
That eye upgrade - or at least versions 1 through 5 of it - might be quite disappointing. But even then, for someone with no (properly) functioning eyes, they still might be almost a miracle and absolutely worth the cost. I'll wait for the upgraded versions of eyes - and other things too - for a bit longer, thank you.
Wednesday, October 17, 2018
Life of an NPC
Lately I've been way too busy to write (or even think up, really) new posts, nevermind ones that would be even close to being on-topic. So unfortunately you may need to just accept wildly off-topic posts like this.
In computer games, NPC stands for "non-player character" - essentially, a character controlled by the computer.
I've been playing WoW (that is, World of Warcraft) on and off for .. well, a long time. After the new expansion came out I picked it up again, for some time at least. And running around Boralus harbor I saw some guards standing there at attention, and suddenly my imagination formed an idea of what's going on "in their minds"...
"..So, my job is to stand here, in attention, for the length of this expansion? ..Well, could be worse, I could have been put to next hamlet over, to be eaten alive by huge monster whenever here comes to town, so once every hour or so I reckon..."
So, at this point you may be asking: I'm too busy to write but still have time for games? Well, like I said before, you need to relax in order to retain your productivity, and I do know that unless I take a proper time out every now and then, my productivity just plummets. So, at the moment, this is one of the ways I relax a bit when off work.
Monday, October 8, 2018
Compiler bugs?
I've been writing code, in one form or another, for some 30 years now. And over all that time, I've found exactly two compiler bugs, and one of those is kind of a grey area anyway.
The first one was a C compiler for 8051-family (C51) processors. The bug showed up when I did something like this:
#include <stdio.h>

int var;
void func(void) { var = var * 2; }  /* this update was silently lost */

int main(void)
{
    var = 3;
    func();
    printf("var=%i\n", var);
    return 0;
}
It would print out 3, not the expected 6. Whether this actually counts as a compiler bug (the variable wasn't volatile, after all) is another issue, but it did take a few moments of head scratching to figure out.
The other one was with Microsoft Visual Studio 6 (or it might have been a few versions newer MSVC) or so. I don't remember the exact details, but somehow, when #including things in a very specific order, the compiler (well, the preprocessor) decided to completely drop one of the included files from compilation, resulting in a very, very strange error complaining about undeclared classes/namespaces/whatever (again, the exact details escape me).
This one again took a few hours to figure out - I had to actually make the compiler print out the preprocessed intermediate output (the /P switch in MSVC; -E with GCC) to see what exactly happened - and just switching the include order of the two files (the one that was dropped and another) fixed the issue.
Unfortunately, this was part of a very complex project, so making a simplified reproduction case to report the issue wasn't possible, and I never submitted a bug report.
What am I trying to say here?
Do you suspect that an error you're seeing is a compiler bug? I am willing to bet that it isn't - it's your bug.
Yet, even then, I am a bit... shall we say, reluctant... to upgrade the compiler I'm using for my ARM builds. Although the possibility of finding an actual compiler bug is nearly zero, the possibility of triggering some obscure bug in some part of (my!) old code because some detail somewhere changed isn't a fun one...
Saturday, August 25, 2018
Take a break. Really.
Here in Europe the "live-to-work" attitude so common in America or (East) Asia isn't really that widespread. For example, in Finland the work week is 40 hours, anything above that is overtime (and there are legal limits to that, too), and working people have up to six weeks of vacation per year, not counting official/bank holidays (like Easter, Christmas, independence day and so on.)
Some people still do work more, though, and right now I have to admit that I haven't had a real vacation this summer either. Granted, since the weather has been very hot (close to 30 degrees C for several weeks straight - essentially unheard of here, where one or two days, maybe one week of consecutive 25+ degree days, is more typical) and air conditioning isn't really widespread, it's nicer to sit in the office with the AC on than at home anyway.
I've been really busy working the final kinks out of our new product. This isn't the first time I've made a new product either, but the amount of work needed to finalize a product still catches me every time. All the small details, previously postponed, suddenly turn very important.
Even during this process I multitask. By necessity of course - there are many very different things to do (including upkeep of older products) - but I don't really mind. It's actually good: it allows me to take a bit of distance from certain things.
For example, I've had a damn annoying random crash issue with one product for some time that I couldn't figure out. It just crashed, seemingly for no reason. It had been there for a long time. Previously I just gave up, deemed it a minor enough issue that it could be postponed, and went with it; the product, after all, recovered gracefully (as designed) and resumed operation after a second or two.
Spoiler alert: It turned out not to be minor at all. Constant issues, especially unexpected, seemingly unconnected ones. Crap.
However, several months of break from that code base allowed a completely new, fresh perspective on the bug hunt. So after only a few days of troubleshooting I had the answer: interrupts within interrupts. They behaved badly in this codebase. They always had, but before, the module did much less, and interrupts managed to interrupt other interrupts much less often, so the issues were much less frequent - almost hidden.
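For illustration only - this is not the actual codebase - the classic shape of that kind of bug looks something like this:

#include <stdint.h>

/* Hypothetical platform hooks; on ARM/CMSIS these would be the
   __disable_irq()/__enable_irq() intrinsics. */
void irq_disable(void);
void irq_enable(void);

volatile uint32_t rx_count;       /* shared between two interrupt levels */

void uart_isr(void)               /* lower-priority interrupt */
{
    rx_count++;                   /* NOT atomic on small MCUs: load,   */
                                  /* add, store - can be interrupted  */
}

void timer_isr(void)              /* higher-priority interrupt         */
{
    uint32_t snapshot = rx_count; /* may see a half-updated, torn value */
    (void)snapshot;               /* ... use snapshot ...               */
}

/* One of those "very small tweaks": make the update atomic by briefly
   masking interrupts around the critical section. */
void uart_isr_fixed(void)
{
    irq_disable();
    rx_count++;
    irq_enable();
}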
A few very small tweaks and the problem appears to be gone. Now there is just the issue of distributing the fix to devices that are not field-upgradeable... D'oh. But that is beside the point I'm trying to make.
And this is something I've said before.
The fix here came the same way as so many times before. And you know it too: all week you've been banging your head against an issue, without a solution, working overtime. On the weekend you go home, get wasted or whatever, return to work on Monday, and the problem is obvious and fixed in no time whatsoever.
Can you see the solution here? It's the time spent actively not working on the issue.
Take that break! Go have a long weekend! Throw the keyboard at the wall and walk away! The problem will be there when you come back after a few days - or weeks - and by then you'll probably already know the solution to it. And all that just by not working on it.
Your brain has this amazing capacity for background processing. Use it wisely, and you will never have to work past that magical 40 hour mark - or maybe even 30 hours - to finish your work.
Saturday, August 18, 2018
Tiny drone
Some time ago I bought, mostly out of curiosity, a tiny drone from verkkokauppa.com, named RedBird Nano Cam. It wasn't too expensive, but then again, it isn't too great at flying either - fairly unstable and hard to steer even in the best of circumstances.
Here's the thing, without propellers (rotors?), with a two-euro coin for size reference. Although the bird on top looks quite blue to me...
It got some abuse too, in the twitchy hands of kids, so it got dropped often. So I guess it was to be expected that it would be a short-lived toy. And eventually it did stop charging. Best guess: a kid left it on, and that killed the battery dead.
So what does a curious engineer do when that happens? Yes, break out the trusty tools to see what's inside...
Top cover off at this point - and I had already done some damage by not noticing the four screws that keep the top and bottom covers together. D'oh. Nevertheless, the green board is the camera module, connected to the main board with only three wires. Interesting.
Camera module off; it wasn't even glued or taped down. The camera module also has a place for a MicroSD card.
On the main board there are three main chips:
Invensense MPU-6050C - a combined gyroscope/accelerometer module, apparently a very widely used one. I've been considering using a similar chip for one application I've been thinking about, but unfortunately I haven't found a suitable gyroscope yet - for that application I'd need around +/- 10000 degrees/sec operating range, but most seem to be in the +/- 2000 degrees/sec range - this one included (see the sketch after this list). Not even close to sufficient.
ST Micro STM32F031K4 processor.
XN297, a 2.4GHz transceiver chip.
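About that range limit: on the MPU-6050 the gyro full scale is selected with the 2-bit FS_SEL field of the GYRO_CONFIG register (0x1B), and the largest selectable value is +/- 2000 degrees/sec - there is simply nothing bigger to pick. A small sketch of what that configuration looks like (the I2C write helper here is a hypothetical HAL function, not from any real library):

#include <stdint.h>

#define MPU6050_ADDR    0x68u  /* 7-bit I2C address with AD0 low */
#define REG_GYRO_CONFIG 0x1Bu

/* FS_SEL values per the register map; 3 is already the maximum. */
enum { FS_250DPS = 0, FS_500DPS = 1, FS_1000DPS = 2, FS_2000DPS = 3 };

void i2c_write_reg(uint8_t addr, uint8_t reg, uint8_t val); /* hypothetical */

void mpu6050_set_gyro_fs(uint8_t fs_sel)
{
    /* FS_SEL lives in bits 4:3 of GYRO_CONFIG */
    i2c_write_reg(MPU6050_ADDR, REG_GYRO_CONFIG, (uint8_t)(fs_sel << 3));
}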
The other side. In the corners there are motor driver transistors, and in the middle a clock crystal and a few programming test points. All in all, quite a simple thing.
I tried measuring the battery, and it read exactly zero. When trying to charge it, the voltage rises to about 0,2v. It's dead, and I don't feel like trying to figure out why, either (earlier speculation aside). Might as well get rid of it - except for that fun-looking camera module ... I can think of a few uses for it immediately.
Friday, July 6, 2018
Electric cars with space
So far all the fully electric cars have had ... well, anemic interior space. With a family and two large dogs, a small hatchback just doesn't cut it.
Now I found out that the Audi e-tron is coming, and judging from the exterior shots, this one might actually work for me - although the promised 400km range, while sufficient, isn't great. I had hoped for 600km of range at this point, since charging stations are unfortunately rare around here at the moment, limiting route selection severely.
Now, will I be reserving one? Well, that's an almost certain "no" at this point.
Like Tesla, this appears to be targeted towards the luxury/high-end market, with gadgets, features and power to match.
This will very likely make it a Very Expensive Car to purchase - they are not telling the price yet, but I will be very surprised if it's below 80k€ around here (granted, Finland has a pretty high tax on cars) for the base model (whatever that means in this context).
That price is simply way too much for me. If it were, say, 50k, I could consider it. No, wait, that's almost certainly wrong - I'd almost surely buy it at that price point! That figure is, by the way, what the hybrid Outlander cost me. But anything above that I just can't stomach. Even when the car I am looking to replace - a gas-guzzling Santa Fe '03 - is starting to show its age, and not in a good way.
So I guess I'm back to waiting for the tech to come down a price group or two...
Friday, June 29, 2018
The final walk.
Again, one post completely free of electronics content. This one is about dogs. You have been warned.
When we were having our (first) dog inoculated for the first time, we sat in the vet's waiting room with all the others waiting to see the vet.
While waiting, I noticed a man, maybe in his 60s or so, come from behind a corner and just slowly walk towards the exit, eyes strictly forward, no expression on his face, holding an empty leash in his left hand.
A few seconds later a woman, also in her 60s or so, walked around the same corner, quite obviously in very serious distress, all but crying, following a few steps behind the man towards the exit.
It took me a few seconds to make the connection, but the dog of this couple must have just had its very last walk on this earth.
The dog we had inoculated that day is still with us, nearing his first full decade of life, but with some ailments that mildly degrade his quality of life. Nothing that ails him seriously, not yet at least. He will still have many good years here, but day by day, the day when I will take him on his final walk is approaching.
And I am quite certain that when that day comes, I won't be able to be as .. unemotional .. as the man I mentioned above.
Friday, June 15, 2018
Bluetooth LE
I've been playing around with a new Bluetooth module kit, and specifically BLE (Bluetooth Low Energy), lately. At first the entire Bluetooth system - nevermind the new LE-specific features - seemed really daunting, but after essentially diving in, it is becoming easier - piece by piece.
All the profiles, advertisements and whatnot seemed overwhelming at first, but when I got to tackling them one by one - essentially by playing around with the developer kit first, then doing some reading, then some coding - things got much easier. Although a "big picture" view when getting started might have helped a lot.
The main point of BLE is right there in the name - low energy. Peripheral devices, whatever they might be, are supposed to do anything and everything to keep their power consumption low, by spending most of their time doing what they do best: absolutely nothing. This is how all electronics with extremely low power usage work - you want to do absolutely nothing as much as you can, only waking up every now and then to do as much work as is needed, as quickly as possible, and then going back to sleep.
A 100mA power consumption in active mode may seem like a lot, but when you only spend, say, 1 millisecond in active mode and the remainder of the second in deep sleep (where current consumption might be just a microamp or so), the overall figure is still pretty good - although still dominated by that short burst of current. Therefore you'll do anything and everything to keep the awake time as short and as infrequent as possible. Can you do energy-intensive actions only every 10 seconds or so, instead of every second?
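A quick back-of-the-envelope check of that duty-cycle math, with the figures taken straight from the paragraph above:

#include <stdio.h>

int main(void)
{
    const double i_active_ma = 100.0;  /* 100mA while awake       */
    const double t_active_ms = 1.0;    /* ...for 1 millisecond    */
    const double i_sleep_ma  = 0.001;  /* ~1uA in deep sleep      */
    const double t_sleep_ms  = 999.0;  /* ...the rest of a second */

    double avg_ma = (i_active_ma * t_active_ms + i_sleep_ma * t_sleep_ms)
                  / (t_active_ms + t_sleep_ms);
    printf("average current: %.1f uA\n", avg_ma * 1000.0); /* ~101.0 uA */
    return 0;
}

About 101uA on average - and roughly 100uA of that comes from the 1ms burst, which is why stretching the same burst to once every 10 seconds drops the average to roughly 11uA.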
Counter-intuitively this also means that your MCU (or BLE module) should run at the highest practical clock rate for the short duration there is work to be done, then go back to sleep.
Now, the MCU or controller or BLE module you're using is not the only part of the circuit. In order to reach that minimal current consumption, you must also design the rest of the circuit so that it, too, takes as little power as possible, especially in deep sleep. After all, it doesn't really help to have your MCU asleep, drawing only 1uA, when the rest of your circuit takes 1mA on top of that!
This part can be very challenging, though. In the case of my idea, the circuit needs to both be always active (so every short event is noted; the MCU can sleep, but the remaining circuit needs to stay active) and highly tolerant of external interference (which in this case requires higher currents in the measurement part). A bit of a conundrum, that. At the moment I don't have a good solution at hand, but I do have some solutions - none great, each with drawbacks - and nevertheless, I can make this work. Not at an optimal level, but still better than nothing.
Saturday, June 9, 2018
Found it!
Almost two years ago, I managed to lose my BT headset module. Now I found it! Apparently it had ended up just a metre or so from where I had searched, near a wall. The spring plants hadn't grown up yet, so it was really standing out there.
After a few moments of thought I tried charging it with a typical USB wall charger. Surprisingly, it lit up, indicating it was charging. Unfortunately it stopped charging very quickly, which I already considered a bad omen. So, next attempt - turn it on.
No luck. It's dead. The next few charging attempts met a similar fate - it tries to charge, then stops very soon. It can't. Not very surprising, really: it was on when I lost it, so it would have drained its battery very quickly, and after that it was left to drain the battery further for almost two years. I'd have been more surprised if it weren't dead.
So, the next thing was to tear it apart to see what's inside. My first guess: one PCB with one to three active chips, and a battery.
I thought that the enclosure might be ultrasonically welded or something like that, but no, it opened way too easily for that - it's just snapped together. Absolutely not even water-resistant, really, but then again, I don't think it even advertises as such, so no problem there.
Snapping it open, the first glance was pretty much what I expected. One PCB, with buttons, LEDs, a few passives and a PCB antenna at the end.
Taking out two tiny screws, we get to see the other side: the battery and the rest of the PCB components. Curiously, the battery could have been larger (I'd really want that!), but I guess they thought 4-ish hours was enough. Hint: it isn't.
On the PCB the main chip appears to be ISSC Technologies' IS1681S. With a brief search I found the IS1690S datasheet, which, unsurprisingly, describes a small Bluetooth 3.0 chip with an 8051 controller built in (is there any place the 80C51 isn't? That chip is amazingly common, and amazingly powerful too, considering its roots in the early 1980s). The other chip seems to be a 24C34 (I don't have my microscope handy at the moment), which would mean an I2C EEPROM for storing pairings and other such data.
And that's about it. Unlike devices of the 80s and 90s, this is a way too common setup these days: a dedicated chip, a few support components, and not much else. Granted, this helps bring the price down, but it's still kind of a letdown...
Thursday, May 31, 2018
Card games
I've never been much into card games (be they real - say, poker - or virtual - say, Hearthstone or Gwent). Any game I've tried, I've lost interest in pretty quickly. I guess they just aren't for me, for whatever reason.
That being said, I have tried Magic: The Gathering too. This was somewhere around 1996-1997, I think, so more than twenty years ago now (...damn, doesn't that make me feel old). Even then I pretty quickly found out that it wasn't for me, so I just packed up and put my cards into storage. There weren't any really great or even good cards there, by that day's standards at least, so I didn't even bother trying to sell them. Now, though, I have to wonder if those cards have gained some value over the years...
The thing is, I've packed a lot of things away, in cardboard boxes, in relatively random order. Without any labels. And there are many, many boxes at this point.
Finding those cards might require some serious digging.
Saturday, May 26, 2018
PHEV (part 2)
Continuing from my experiences in the previous part.
At the moment I've driven some 4100km with said Outlander, and since then the gasoline price here has gone up to 1,60€/l (about 7 USD/gal!), following the rising price of crude oil. Looking at the price of gasoline, it feels great to be able to drive around with almost zero gasoline usage.
For example, this week we've driven maybe around 200km total so far, effectively 100% of it on battery power (daily totals being in the 20-40km range, allowing a full recharge overnight). The average electricity consumption of the car (as reported by the car) is about 21kWh/100km, which means about 2€/100km in energy cost (implying an electricity price of roughly 0,10€/kWh). Not bad, not bad at all, especially compared to what gasoline would cost.
(Quick edit: at this point, total gasoline consumption has been about 4,6 l/100km; not exactly surprising, as there have been two longer trips (1000km+ each) without the ability to charge along the way - mostly due to the lack of a charger network at the moment.)
Yet, at this point, I have to acknowledge that the up-front cost of a hybrid, compared to a gasoline-only car, is much, much higher, and it will take a very, very long time before the lower energy cost offsets it. I was fully aware of this when I made this choice, with the understanding that I need to drop my CO2 emissions from current levels. I need to get around anyway, so I might as well choose to do it with near-zero per-kilometer CO2 emissions, as opposed to what gasoline would produce.
Again, producing a car consumes energy (and produces CO2) too, but those figures aren't publicly available, so any calculation of them is guesswork at best. I would have had to replace the car soon-ish anyway, so that reduces the number to the difference between what could have been and what is. Again, unknown numbers.
Now, to practical issues. It took a while to get used to plugging the car in every night to recharge, but eventually I managed to adjust my mental processes, and now it's habitual - effectively no extra effort anymore. Apparently some people never manage this and give up on EVs very quickly.
The charge port location (right side, at the rear of the car) is a bit troublesome too, as most charging stations seem to assume the charging port is at the front of the car. The cable is thus typically very short - 2 to 4 metres - forcing you to either back up to the station or fiddle around with extension cords. Which, in the case of high-power (4kW+) electronics, is not safe in the long run!
The included charger (for a normal house socket) fortunately has a fairly long cable, but a dedicated, higher-power charging station at home would be better in the long run (especially if I later replace the other gas-guzzler with a fully electric car with a 40kWh+ battery). But with those, typical charger cable lengths are again an issue.
Storage. My previous Skoda Octavia had loads of different storage compartments and spaces around the cabin. The Outlander doesn't, which is a bit of a problem, since I like to put my keys, phone, wallet and everything somewhere when I'm driving. I'll need to turn to aftermarket parts for that, and even then space will be somewhat limited.
Power windows. I have no idea why the driver's "cut off" button also prevents the driver from operating the other three windows, but whatever the reasoning, that is simply brain-dead behavior. I want to be able to prevent the kids from operating their windows themselves, while still being able to operate them myself if needed, without having to touch the lock button.
It also seems that all the fuel consumption gauges have been designed to show MPG - which of course is a completely backwards way of measuring fuel consumption - with the conversion to litres/100km made almost as an afterthought. That is the only way I can explain why the bar in this display starts from 18 and not from zero...
This isn't even the worst display: the history view shows data scaled to a 0 to 60-ish l/100km range. As the effective consumption of the car has so far been less than 10l/100km (and last I checked, just now, it was at zero over the view duration), this scaling makes the entire display completely useless. All the small variations are invisible on this huge, useless scale.
Not that these momentary-usage displays are very useful anyway, as during normal use the engine powers up, charges the battery for some minutes, then powers back down again. During that time the instant consumption display shows something, but it's not a real consumption figure. Because of this behavior, you can't get a meaningful instant consumption reading out of this thing.
All these are mostly minor annoyances. Now, if someone would just make a car like my old Skoda Octavia Combi, with its cargo capacity (those dogs again...), but fully electric with 300km+ range ...
Friday, May 18, 2018
Loving a good spreadsheet
Ever since Microsoft decided to so badly f*** up the usability of the built-in Windows calculator, I've found myself using "Excel" more and more for all kinds of simple calculations. Excel in quotes, because I don't really use Excel but LibreOffice's equivalent, Calc - but "Excel" is essentially synonymous with spreadsheet today.
Need to quickly calculate some VATs? Calc.
Check the current and voltage of a resistor divider? Calc. (There's a small sketch of this after the list.)
Track my daily spending? Calc. (Well, at least that is one thing a spreadsheet is actually intended for.)
Calculate 10+20+30? ... Well, if I am submitting those figures to customers, I might do the math in my head and double-check with Calc. Just in case.
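For the resistor divider case mentioned above, here's what those few cells typically compute - written out in C for clarity, with made-up example values:

#include <stdio.h>

int main(void)
{
    double vin = 12.0;                  /* supply voltage, V       */
    double r1  = 10000.0, r2 = 4700.0;  /* divider resistors, ohms */

    double i    = vin / (r1 + r2);       /* current through the divider */
    double vout = vin * r2 / (r1 + r2);  /* voltage across R2           */

    printf("I = %.3f mA, Vout = %.2f V\n", i * 1000.0, vout);
    return 0;  /* prints: I = 0.816 mA, Vout = 3.84 V */
}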
In the end, I may have a few different sheets open at the end of the day, each often containing just a few cells' worth of numbers and quick calculations. These I often discard when they're not needed any more.
But not always. Sometimes a sheet contains something I may need to look up again - conversion tables (like resistances to voltages to matching temperatures, or relative humidity to absolute, or whatever) - and I save it for next time.
Invariably, when the "next time" comes, I open the sheet hoping to find a beautifully formatted table, ready to calculate all my figures - but it seems it never is. It's a mess of figures and formulae thrown together on a single page, possibly without any notation anywhere (except maybe units somewhere), and it takes almost longer to figure out how the sheet actually works than it would take to write the whole thing again.
Of course, when a sheet gets used often enough, I do at some point remember to add some notes and formatting to it. But it tends to take a few iterations.
Nevertheless, all hail Calc, one of the most useful multi-purpose tools in an engineer's arsenal!
Wednesday, May 9, 2018
PHEV
I previously mentioned considering buying a hybrid car, although the main topic was a bit broader. The main point still stands - namely, the absolute requirement (for all of mankind) to cut carbon emissions. There is just no way around it, no matter the (monetary) cost.
I also mentioned considering getting a PHEV - that is, a Plug-in Hybrid Electric Vehicle - in essence a car with both a fossil fuel engine (usually gasoline) and electric motors along with batteries. At the time my choice was the Kia Optima. Since then I've changed my opinion.
Eventually I went and replaced my Skoda Octavia (1.6 TDI - that is, diesel, with approx 5l/100km - 47-ish mpg-US) with a Mitsubishi Outlander PHEV. The prices of the Outlander and the abovementioned Optima were almost the same - but in the end the mostly-electric drivetrain with four-wheel drive won me over.
Like all hybrids, this is an "automatic", although in this case there actually is no gearbox anywhere. Up to around 120km/h the car is fully electric - the rear and front axles each have separate 60kW electric motors. When going faster than that, the combustion engine is directly coupled to the front wheels. So, no gearbox. A real series hybrid, at least most of the time with my driving.
So far the consumption isn't looking too bad either. I've driven 2100km, with an approximate average consumption of around 5l/100km. While this doesn't sound that great, I have to note that about 75% of this has been highway driving, where the gasoline engine needs to provide almost all the energy to keep the car moving.
With a full battery (capacity about 12kWh) I can get some 30-40 km out of the car, which easily covers most or even all of my daily driving, bringing gasoline consumption near home down to around 1 litre/100km or so - or even a full round zero when cabin heating isn't needed.
This isn't a typical highway/city driving ratio for me, however; it just happened that there's been a lot of travelling lately. Typically the ratio is closer to 50:50, which, with the figures listed above, would drop the average consumption to maybe around the 3,5 l/100km range.
So, if you only ever drive short trips, a PHEV is great. But if all your driving is long road trips, this technology isn't for you - not yet, at least.
Also, it is really cool to drift around majestically in almost complete silence. Loud engines (and/or exhausts) are nothing but childish - I want my car quiet! Although wheel noise is something that really can't be fully removed.
It isn't all wonderful, though. Like I said, a full battery charge can take care of my daily driving, but there's always the next day - meaning that the car needs to be plugged in to recharge almost every single day. And the charging isn't quick either: from a standard 10A 230V mains socket (so about a 2,3kW supply) the full charge takes around five and a half hours (12kWh / 2,3kW ≈ 5,2h). I'm looking into a dedicated charging station that could provide around 10kW, but that will cost some extra to install. At least I've got a home where such an addition - and even the ability to charge the car at all - is possible; I fully realize not everyone has this option available.
I'm already thinking of replacing our other car with a fully electric one, but at the moment there really aren't any suitable models available. The lack of a large enough trunk for all the stuff - and the dogs - makes every single one of the current options useless for me. Too bad; I guess I have to wait a bit longer for that...
Sunday, May 6, 2018
Interrupted
The zone. That is what every programmer is looking for - when you get to work, you drop into the zone and get loads of work done very, very quickly and efficiently.
I'm quite certain that the zone exists in other fields too, although I guess it's mostly within the creative fields - writing, painting, imagining, programming, designing - where it's most prominent. If you haven't encountered it, the best description I have is a very focused state of mind. In the zone you are absolutely focused on the task at hand - be it a new painting, a program feature, a chapter in a book, a fix to old software or whatever - and you have it all - the entire vast thing, whatever it may be - right there, fully realized in your mind, within your grasp, and because of this you can do incredible amounts of work very quickly.
If the zone is interrupted - mostly due to some unwanted external stimulus, like the phone ringing, someone talking to you (or even someone talking to their colleague a few cubicles over) or whatever - it feels bad. And worse, it shatters the zone. Focus is lost, and hours, even days worth of work to be done - gone, just like that.
It sucks. And that feeling when it happens is the worst.
At my old job, I was surprised to hear that some of the younger (career-wise) people were kind of afraid to come talk to me about issues they were having, but, on reflection, I guess it was because they had a tendency to interrupt me when I was in the zone. No creative person responds well to that, and I guess I was letting it show. I never wanted it to be this way, and always wanted - and tried - to help them the best I could. But that moment when the zone shatters - it's nasty.
Even now, when I have realized what is going on, losing the zone causes issues. Someone talking, the doorbell ringing, whatnot - it all interrupts me. But now I just take a few deep breaths to let the worst of the anger out and get on with it.
Curiously enough, though, the phone never had a similar effect on me. I can answer the phone, have a short conversation and still resume from where I left off. This is pure guesswork, but I think it's because it's verbal only - when you don't see the other person, all that subconscious processing of postures, gestures and expressions isn't there to break the concentration.
That is the zone. And if you are someone who has it, or know someone who does, it's good to realize what is going on. In the end, it makes co-operation much easier.
Sunday, April 29, 2018
Suit up!
Before I start: I know that many people will not like what I say here. That's perfectly okay. This is how I think; you're free to have your own thoughts on this issue too.
So, it seems that the vast majority of engineers do not like suits. Many seem to even passionately hate them, in fact, to the degree that "not even owning a suit" seems to be a mark of honor.
I don't wear suits either - not very often. Not even collared shirts and slacks. I prefer T-shirts and jeans most of the time - that is, when I expect to work in the office all day, without any face-to-face customer contact.
But not always. When it is time to dress properly, I prefer to go all-in.
I was once told that at a public event one should always be dressed better (as in, better by one "step") than one's customers. Like I said, this isn't an everyday thing, but it applies at more formal occasions, like trade fairs and so on. And of course, when I go to meet a client specifically, I do dress a bit more formally than usual. What this means, exactly, varies by occasion.
Sometimes this means that I wear a suit. Other times it may mean jeans and a T-shirt. And sometimes it's something in between - like slacks and a collared shirt. But the choice is always made case by case.
Like I already said, many engineers absolutely hate suits. I did too, at one time, but no more. I discovered the joys of tailored (or at least custom-made) suits.
If you just go and buy a suit, without bothering to have it fitted to your body, you will end up with an expensive disappointment. It will look bad (just look at the current clown in the White House...), and worse, it will feel bad (some clowns have no self-respect at all, it seems). Just don't. Because if you do get that off-the-rack suit, you will hate suits forever.
Instead, for your first real suit I suggest that you go see a tailor, or at least a shop that will serve you personally and have the suit you want fitted for you. Yes, it will cost a bit more, but trust me on this - you do want the suit to feel and look good on you - and that is where the fitting comes in. And it won't cost you that much more, either.
I by no means claim to be an expert here, but I dare say that there are three (or four) classes of suits:
1) Professionally tailored.
2) Custom order.
3) Off the shelf.
A fourth class might sit between 2) and 3): off-the-shelf, fitted. This "2.5" is the minimum you should strive for. Pick a suit that is close to your body shape (you will need professional help here, especially if you haven't bought a suit before) and have it fitted to you.
If your body isn't of a common type (that is, no suit in the stores comes even close), you may need to go to class 2 immediately: you send your measurements to a tailor, who will order a suit from the factory and have it fitted for you. A bit more expensive than option 3) or 2.5), but absolutely worth the cost.
Class 1 is the high end. Here, too, there are many options. The most expensive might be to go to, say, Paris or Rome and pick one of the very best names in the business. You will of course get the best, but you will also pay for it. Not really worth it unless you need to mingle with the Very Rich all the time.
If I wanted to get a suit tailored, I would have to go to Helsinki - the nearest tailor to here, 600km away. Not a cheap option, but in dire need (say, when going to close a deal of €50k+ or so), I'd definitely consider it.
The second-best option is custom order. I take my measurements (or have them taken, for example by my wife), send them to a tailor (again, the nearest being in Helsinki, as far as I know) and have them send the suit to me. At this point, I'd rather not do this - if something is measured wrong, it will cost me a lot, since a wrong measurement will look and feel bad. Not good.
Some shops do offer a service where they take your measurements, order a factory-made suit and have it fitted to your measurements too. I'd suggest this if in doubt, but if the nearest such shop is far away, this might be a bit tricky. Unless you're willing to go there for the first measurements, and then order new suits (pants/shirts/jackets/whatever) from them remotely. Not a bad idea, actually.
There is a lower-cost option too (kinda-sorta).
If you happen to be in a suitable place in the South East Asia region - Bangkok, Hanoi or other, well, I might as well say it straight: tourist cities that draw many Western people - you may find a tailor shop on almost every block.
There, a fully tailored suit with a shirt or two can be bought for some €200-€500, depending on location and materials. Mind you, you do need to "shop around" first, preferably by browsing customer opinions on the net. You do want to at least beat store-bought quality, right?
But whichever way you pick, as long as you go for a quality suit that fits you nicely and feels good on you, you'll know the choice was a good one.
Don't be afraid to wear a good suit. Be afraid to wear a bad one.
Friday, April 20, 2018
A lesson on forward design
I've got a product that is now over ten years old (measuring from the start of design), with several years of lifetime left before its production ramps down. And then maybe another five before most of the units are removed from service.
This product is composed of two parts: a main unit, and a relay/aux unit, connected together with a serial link (essentially RS-232 with no flow control). Relay/aux in this context means that it both controls some high(er)-current devices and acts as a communication center for some auxiliary devices (via several other serial links). The plan didn't go above 38400bps anywhere, with the usual speed being 9600bps and a lot of time between communication bursts - so all in all, pretty easy stuff.
So, as it usually starts, for the first iteration of the protocol I chose a relatively simple, ASCII-based command/response protocol with fairly long timeouts. This makes it very easy to test at first, using a simple terminal program and manual commands, making the product release faster.
Unfortunately the limits of that design became apparent very quickly, just after a few (this being a very relative unit) devices were delivered. Long timeouts mean that in case of any error, a long time passes before the system can even start to attempt recovery. And retrying an operation might cause some operations to fire twice before the acknowledgement comes through. When setting the state of a relay this isn't a problem - it's still "on", no matter whether you set it once or a thousand times. But when sending packets of serial data... Yeaaah, not so nice. So manual-friendly commands proved to be less than machine-friendly, and then I started to need the capability to communicate with several devices at the same time (previously the need was strictly sequential: the main unit opens an exclusive link to one connected device, and when done with that for the time being, opens another exclusive link in another direction).
Enter revision two. This was fairly early in the product lifecycle, and it was a "quick fix". The improved protocol allows easier "muxing" of data: data is sent in frames, and frames now have a target (where they should go: the relay control parts, aux devices and so on) and checksums, making the system otherwise much more robust. Of course, the commands are still essentially version 1 commands, just wrapped in frames. This fixed many of the original issues, but not all of them - most importantly not the problem where a command might still fire twice before being acknowledged, as the implementation had no guards for that. Not exactly your typical "two army problem", but close to it.
At the time this wasn't a major issue, so I let it slide.
Fast forward some years to this day. I need to interface a WLAN module that uses 115kbps with flow control and can potentially send large amounts of data in bursts - with a design originally planned to run at 38.4kbps max. Now the "two army problem" is getting to be a serious nuisance, as receiving duplicate data packets is a huge no-no for data integrity. I've grown very fond of data integrity over the years, you see - much more so than when I started this design. This trouble might well have been one reason for it.
Data integrity - defined here as the ability of the "lower level interface" to transmit and receive data with as few errors as possible, and the ability to correct those errors whenever possible - is a fairly complex issue once you get into it. When you are working with the TCP/IP stacks provided by your operating system it is very easy to overlook the complexities at that level, as the stack provides you with huge amounts of integrity already - not necessarily error-correction, but at least your packets will arrive in order and exactly once (to your application, that is).
Now, back to the issue at hand. At this point there is a fairly large number of devices deployed already, with aux modules of old and older revisions installed (which, unfortunately, cannot be field-updated - that requires a hardware programmer). On the plus side, the hardware did not need any changes, so I was able to hack together flow control in software using the signalling lines included in the design. But the lack of software update capability means that I can't fully overhaul the system; it has to keep working with old modules to a certain degree - at least when not using advanced features like the ones needed by that WLAN module.
Eventually I figured out a way to do this, by expanding the revision two frame system with a few commands that carry packet sequence identifiers and the respective acknowledge/timeout mechanisms. And now the system is fairly robust.
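Purely as an illustration (the real frame layout isn't published here), a frame in that spirit might look like the following - v2-style framing plus a sequence number, so the receiver can acknowledge retransmits without acting on them twice:

#include <stdint.h>

typedef struct {
    uint8_t  sof;          /* start-of-frame marker                  */
    uint8_t  target;       /* relay control part, aux device #, ...  */
    uint8_t  seq;          /* increments per new frame; a repeated   */
                           /* seq means "retransmit: ack, but drop"  */
    uint8_t  len;          /* payload length                         */
    uint8_t  payload[255];
    uint16_t crc;          /* checksum over header + payload         */
} frame_v3_t;

/* Receiver side of stop-and-wait: acknowledge everything, but act
   only on new sequence numbers. */
int frame_is_duplicate(uint8_t last_seen_seq, uint8_t seq)
{
    return seq == last_seen_seq;  /* 8 bits is plenty for stop-and-wait */
}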
But it is carrying a lot of compatibility burden. So now I have v3 frames that contain v2 frames that contain v1 commands. With a fallback to plain v2 frames if the aux unit in question doesn't happen to speak v3. Oh joy...
But hey, next time I design a protocol for similar use, I will know better.
(You may want to read that as "I will have an existing and tested protocol (almost) ready to use - minus the bad parts"...)
Saturday, April 14, 2018
Meet your friendly, fallible robotic driver
It seems that the prediction is that self-driving cars are here, and very soon. Quite a few companies - most (in)famously Tesla - have been developing self-driving technology for some time now. Tesla's tech isn't really self-driving, however, despite the name they use ("Autopilot"); it's just a somewhat better version of the lane keeping systems used in higher-end cars today.
I am not against self-driving cars, really, but I will not be an early adopter here. The tech absolutely must mature a lot before I am willing to trust myself to the care of a robot driver.
One figure thrown around lately is 90%. That is, today the most advanced systems are about 90% reliable, and the remaining 10% requires human attention. Tesla's recent accidents are mostly because of that last 10% - the driver didn't catch on that the system was failing, and the end result wasn't pretty.
That 10% is today. But it gets worse. Much, much worse before it gets better.
I was absolutely certain that I had written a related post earlier, about machine translation, but I can't seem to find it right now. Oh well.
The 10% error figure is a bit difficult to pin down, but let's - for the sake of argument - define it to mean that on about 10% of typical trips the system will need a human to take over, and very quickly.
See, with 10%, today, people are already having problems concentrating on the road enough to catch on when things start going wrong. And here we hit the first problem - the systems aren't good enough to notice by themselves that things are going wrong. It's the human driver that must notice this, often very quickly ("what, why is the car heading towards that median---crash").
But that isn't the worst. What if the failure figure is lower; let's say from year from now it's down to 5%; year after that 2%; another year to 1%?
If people are already having serious problems catching on when things go wrong 10% of time, how are - how could they - catch on when errors are even more rare? The short answer of course is that they can't. We're only humans. We will doze off, play with our phones, daydream, whatever, anything and everything except watch the road.
The translation post I though I had written (or have written, but can't find it right now) was tangentially related. Today machine translation is - again - 90% good, so we need to proofread and correct the mistakes. But when it gets to 98-99% range, we don't bother anymore, and there will be embarrassing mistakes in our brochures and technical documents and whatever.
The difference is, of course, that bad translation doesn't (well, generally, there are some exceptions) get people killed. Bad driving most certainly does.
And this is the reason I won't be adopting self-driving tech anytime soon. 1% is no way low enough failure figure here. I'll be waiting for 0,01% figures first. Or, alternatively, slightly worse device that actually can yell out early enough that it can't handle the situation so I can take over. But then again, that is also very difficult to detect - if it were easy, those cars that have crashed would have stopped instead. When things go wrong, they tend to go wrong quickly.
sunnuntai 8. huhtikuuta 2018
Keeping time 4: The hardest part
Everything we've handled so far in this series has been easy compared to the real challenge that remains. Now that we've got those parts ticking come the hard parts: leap years and leap seconds, time zones, daylight saving time and date math.
The last of those means, for example, "how many days/hours/seconds are there between 30.4.1432 [Julian] and 29.3.2018 [Gregorian]"? Ok, I cheated here a bit; this is just about the hardest calculation you can imagine, and it's unlikely your device will have to deal with such math... but again, it's better to at least acknowledge that such issues exist.
Most (well, essentially all) of the RTC chips out there just ignore all of this (with the exception of leap years - I think most handle those properly now) and leave the rest for the application software to handle. You might be tempted to handle it yourself, but there is this wisdom floating around the internet: "Thinking of writing your own calendar code? DON'T!"
So my first suggestion is to simply follow that piece of wisdom. Find a library that does it all for you, and use it within the constraints of the chip you are using - the chips can typically handle normal calendar operations (like moving directly from 28.2. to 1.3.) and leap years (so the month doesn't change on 28.2. but on 29.2. in leap years, as sketched below) but nothing else. Leap seconds are kinda overkill, as your typical crystal won't be accurate enough for them to matter, but nevertheless, it's always good to keep such things in mind - if for nothing else, then in case someone asks. It's always good to be prepared.
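For illustration, the calendar logic the chips do handle fits in a few lines; a minimal sketch of the Gregorian rules:

    #include <stdbool.h>
    #include <stdint.h>

    /* Gregorian leap year: divisible by 4, except centuries, except every 400 years. */
    static bool is_leap_year(uint16_t year)
    {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    static uint8_t days_in_month(uint16_t year, uint8_t month)  /* month: 1..12 */
    {
        static const uint8_t days[12] = { 31,28,31,30,31,30,31,31,30,31,30,31 };
        return (month == 2 && is_leap_year(year)) ? 29 : days[month - 1];
    }

Everything beyond this - time zones, DST, leap seconds - is where the real pain lives, which is why you want that library.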
But if you, despite all this, choose to write your own routines, I have just one universal suggestion: make your device always use UTC, and handle all the translation on the "client" side. You'll still need a library there, but if you happen to be making (for example) an IoT device, you can push the hardest parts to the server, where you do have libraries to do the heavy lifting for you.
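A minimal sketch of that split, using plain C runtime calls here to stand in for the client-side library:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* The device side only ever deals with this: seconds since epoch, in UTC. */
        time_t device_time = time(NULL);

        /* The "client" (server, app, ...) does the messy local time translation. */
        struct tm local;
    #if defined(_WIN32)
        localtime_s(&local, &device_time);
    #else
        localtime_r(&device_time, &local);
    #endif
        printf("%04d-%02d-%02d %02d:%02d\n",
               local.tm_year + 1900, local.tm_mon + 1, local.tm_mday,
               local.tm_hour, local.tm_min);
        return 0;
    }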
Or you could just ignore all of that. Make the device dumb enough not to even know about these things and leave it to the user to handle them properly (manually setting the clock after DST changes and so on.)
Otherwise, good luck on your chosen path. It will be a difficult journey, but it will also teach you many, many things that will help you further along the way.
tiistai 3. huhtikuuta 2018
Privacy by directive: It's coming up
The European GDPR will be in full effect in less than two months now. The last time I wrote about it things were still a bit messy, but since then things have become clearer. To me, at least.
In the meantime, following the discussion on the "other side of the pond" has been quite interesting. A huge majority of the people writing about this on the US side appear to think that it will essentially kill any and every business opportunity in Europe.
At the same time people (I don't claim that they are the same people, but I am sure there is some overlap) complain about the newest privacy issues with Facebook and other companies whose entire business strategy is to grab as much Personally Identifiable Information as possible and to sell it onwards.
Take the infamous "shadow profiles", for example (I won't provide a link; you can search for that yourself if you haven't heard of them already.) Or companies' refusal to remove personal data. This kind of behavior is exactly what GDPR was made to get rid of! GDPR makes the entire practice of collecting this kind of "shadow information" explicitly illegal, although the line is kinda blurry. Knowing that an IP address (or a random cookie id) visited, say, toyshop.com? Might be fine. But the more information you accumulate there, the more clearly it turns into Personally Identifiable Information, until there is no way anyone can deny that's what it is. Thus it is always better not to collect that "anonymous" information at all. The user wins again!
After doing some reading, I found out that there isn't actually that much we have to do to become compliant ourselves. It certainly helps that the parts of our business where this applies are already services where we keep customers' data for them. Meaning that insidious data collection, analysis and resale has never been part of our business plan, so filling the gaps wasn't really that difficult. Not all of our GDPR-related updates are out yet, though, but the hardest parts are already done.
I don't get the people complaining about how GDPR will ruin the internet. To me, it's completely the opposite - we (well, at least we Europeans) are getting control back! But of course, if your business is based on shady practices, I certainly am not surprised if users' access to their own information hurts the operations, and therefore the bottom line.
Meanwhile, we, the good people, are adapting for things to come, with a smile on our faces.
("good" may not be the best word here, but I can't think of a better one right now, so it'll have to do)
sunnuntai 1. huhtikuuta 2018
Too sterile?
When I was young(er), MOD music was pretty hip on the scene. In case you don't know what I am talking about: MOD was a music/sound format originally from Amiga computers, from about the mid-80s or so - it could play four channels of digital samples at independent rates and volumes simultaneously, which was pretty amazing at the time ('the time' being the late 80s/early 90s). The MOD, or 'module', format was essentially a list of samples coupled with instructions on how to play them. Later other formats came along, and PCs also became able to play them - first in software, then in hardware (sound cards).
I, too, wrote a MOD player of my own, for the Sound Blaster (I'm talking about the original 8-bit one here) and the Gravis UltraSound. And obviously it could play S3M format modules too, alongside typical 4-32 channel MODs and maybe a few others.
These days all this sounds very primitive, and why not - it was a different time. Storage space cost a pretty penny, and MP3s came along only much later (along with lower storage costs and increased CPU power - my 486 back then couldn't even decode MP3s in real time!), and today games are wasting (yes, I do mean that) huge amounts of space on uncompressed audio...
But all that is beside the point here.
I once asked a die-hard live music person to listen to a pretty nice mod and tell me what he thought of it. He said that it sounded too perfect. No imperfections at all. In a word, too 'sterile' for his liking.
And that it certainly is. If you ask a MOD player to play a sample at, say, note F5, it will do exactly that. Every single time, at the same exact rate, at the same exact speed, the same exact sample.
So I added a few small subroutines to my player. One added a small random delay to samples - a few tens of milliseconds or so, IIRC. Another varied the playback rate a bit - plus or minus a percent or so (I don't remember the exact figures anymore, but it was fairly subtle). It couldn't do anything to the actual digital samples, though, so those had to stay as they were. This was maybe mid-to-late 90s.
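That code is long gone, so this is just a hypothetical reconstruction of the idea, with illustrative figures:

    #include <stdint.h>
    #include <stdlib.h>

    /* "Humanize" a note trigger: delay it a little and detune it slightly. */
    typedef struct {
        uint32_t delay_samples;   /* postpone the trigger by this many output samples */
        uint32_t rate;            /* slightly detuned playback rate, in Hz */
    } humanized_t;

    static humanized_t humanize(uint32_t rate)
    {
        humanized_t h;
        h.delay_samples = (uint32_t)(rand() % (44100 / 50));                /* 0..20 ms at 44.1 kHz */
        h.rate = rate + rate / 100 - (uint32_t)(rand() % (rate / 50 + 1));  /* within about +-1% */
        return h;
    }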
I tried the player with this 'filter' on and off, and really couldn't tell any difference, at least not easily. I suspect I never played the changed version to the person who originally commented on this, either. And I don't think I spent very long tuning the playback alterations - I just made the changes and moved on to other things.
Whether I was onto something back then, I can't really tell. And I don't really care too much either, aside from a slight academic interest.
When sound cards moved from 8 to 16 bits and sampling at 44.1kHz, I quickly figured out that by interpolating the original 8-bit samples at the new sample rate, the sound could be made much better. And indeed it was. I don't recall the exact interpolation method I used anymore, only that it used some combination of look-up tables and fixed-point math to cost just a few clock cycles more per sample. That was still a big deal back then.
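I don't have that code anymore either, but a minimal fixed-point linear interpolation looks something like this (16.16 fixed-point sample position, 8-bit input scaled to 16-bit output):

    #include <stdint.h>

    /* pos_fp is a 16.16 fixed-point position into the sample data;
       the caller must guarantee that index+1 is still inside the sample. */
    static int16_t sample_lerp(const int8_t *smp, uint32_t pos_fp)
    {
        int32_t i    = (int32_t)(pos_fp >> 16);      /* integer sample index */
        int32_t frac = (int32_t)(pos_fp & 0xFFFF);   /* fractional part, 0..65535 */
        int32_t a    = smp[i];                       /* current sample */
        int32_t b    = smp[i + 1];                   /* next sample */
        /* interpolate in 16.16, then scale the 8-bit result up to 16 bits */
        return (int16_t)((a * 65536 + (b - a) * frac) >> 8);
    }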
Only later, at university, did I find out the name for the issue interpolation was fixing: quantization noise.
This was around the time all MOD players started using interpolation, but there were some people who insisted that interpolation ruins the "intent of the composer", as the music doesn't sound exactly like it originally did. Obviously, I chose to completely ignore this argument, as it was pretty damn obvious that a less noisy signal (and this was very audible indeed!) is always better than the original, very noisy one.
This all looks pretty quaint these days, I'm sure.
keskiviikko 28. maaliskuuta 2018
Keeping time 3: Accuracy
To keep a clock in time, there needs to be a timing source. In embedded systems, the most typical one is a 32768 Hz clock crystal. (Why 32768? For any programmer worth their pay this is obvious: it's a nice power of two - 2^15, to be exact. Divide it by two fifteen times, which takes nothing more than a chain of flip-flops, and you get an exact 1 Hz tick with very little circuitry - and thus very little energy, giving optimal battery life.)
The datasheet of the clock chip you're using will specify the required frequency and the load capacitance needed for the clock to function. If the frequency is 32768 Hz, you're almost there; there are many different types of crystals to choose from. Then you'll only need to pick one with the correct load capacitance and fine-tune the circuit.
Furthermore, the datasheet of the RTC chip will specify a circuit (something like the one below) that should be used, and the load capacitance required for proper function. Datasheets of clock crystals specify their load capacitance. So just pick ones that match, right?
Unfortunately it isn't that easy. In some cases this works directly (typically with chips that are specifically tuned for one type of crystal), sometimes the combination works but is inaccurate, and sometimes the oscillator completely fails to oscillate. And the only way to find out is to test the circuit in practice.
The arrangement below is just about the most complex one you might encounter, as specified by the datasheet of a Microchip PIC24 MCU. The values for the resistors and capacitors can be approximated (or even guessed), but for best results you'll have to test them in practice.
Typically resistor R110 is not needed, and R67 can be a 0 ohm resistor (or a plain short) - if in doubt, you may use 100k as a first guess (try 10k, or 1k, or even a short if the oscillator doesn't start). For the capacitors - C109 and C110 here - a good starting value might be around 10pF for both. These are the most critical passives, as they set the accuracy of your clock (and with some chip/crystal combinations they're not needed at all, as the chip and crystal are already tuned well enough.)
Unfortunately this cannot be tested on a bread- or prototyping board; it has to be the circuit board you are going to use in production. This is because the capacitances involved are so small; a different board layout, with its different stray capacitances, will be enough to throw the testing way off. But worry not; just make room for the components, make an educated initial guess and start testing. (Just be aware that even if you find out that you don't need some of the components - the resistors, most likely - the capacitance of the circuit will change just from removing their pads. Re-test after every change of board layout!)
Set the time on your chip (careful! you'll need a very good reference time for this to work - fortunately the web is full of those these days). Let it run and come back a few days later. Check how badly the clock has drifted, and correct the circuit accordingly. If it runs fast, make the capacitors bigger (10pF -> 12pF or 15pF). If it runs slow, make them smaller (10pF -> 8pF or 6pF). Repeat and adapt until you get close enough.
The problem is that unless you have a very, very good way of setting and reading the time on the chip, this takes time. Typical crystals - and thus your target accuracy - are in the 10 or 20 ppm range; in practice, 20 ppm means about 12 seconds of drift per week, or less than 2 seconds per day. Accurately timing such short periods with manual methods is practically impossible. Thus, when you get close enough, you'll need to let the system run for long periods before checking the results.
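Converting a measured drift into ppm is simple arithmetic, if you want to check your own figures; a quick sketch:

    #include <stdio.h>

    /* ppm = (seconds gained or lost / seconds elapsed) * 1e6 */
    static double drift_ppm(double drift_seconds, double elapsed_seconds)
    {
        return drift_seconds / elapsed_seconds * 1e6;
    }

    int main(void)
    {
        printf("%.1f ppm\n", drift_ppm(12.0, 7 * 24 * 3600.0));  /* ~19.8 ppm */
        return 0;
    }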
This will take some time - possibly many, many weeks of trial and error before you zero in on the correct capacitances. Unfortunately, there really isn't a much better way to do this, especially on your first attempt.
Good hunting.
tiistai 20. maaliskuuta 2018
Keeping time 2: Capacitors?
In the previous post I talked about ways to make your embedded design keep time, even when not powered.
Mostly I spoke of coin cell batteries as the backup source. There are also other options, the most common being capacitors and super- or ultracapacitors, and possibly some others. I'll just ignore the bigger lithium and other large batteries here, since those generally imply that your device is primarily powered from that battery, and thus the same battery will keep the time when the device is "off", or in minimal power drain mode.
There are pros and cons to each type of energy storage. The short list, as I see it:
- Large batteries (Li-ion and others): large, expensive, require charging circuitry. Not great as a backup supply (they tend to self-discharge at a certain rate), but this is the choice when you want your design to operate fully without external power.
- Lithium coin cells and equivalents: relatively small and inexpensive, and they carry enough energy to keep your clock ticking for up to several years without external power. The bad side is that they get expensive if you use them wrong (i.e. waste their power), they must be replaced if you allow them to run empty, and they can't really power your entire design - unless it's very - no, ultra-low power (like TI's MSP430 series).
- Super/ultracapacitors: compact ones (in the millifarad range) can keep your clock running for several days or up to a week or so, and they are essentially infinitely rechargeable. So if you expect your product to occasionally encounter a few days of power outage (but not much longer), these might be enough - see the back-of-envelope sketch after this list.
- Electrolytic capacitors, a few hundred microfarads or so: these can keep your clock running for several hours, so they're useful for common but short breaks - and they are also very cheap.
- MLCCs - that is, multilayer ceramic caps; tens of microfarads: now we are down to tens of minutes, but for common, short breaks they will be enough - and they're very cheap too.
- None at all: if your design is connected, you might be able to get away with the device fetching the current time from the internet or some other such place on startup. This means that it will take a while (connection latency) before your clock is on time again, and it assumes that your device can actually fetch the time immediately when it powers back up.
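The rough runtimes above all come from the basic relation t = C x dV / I, assuming a roughly constant current draw. A back-of-envelope sketch - the figures are illustrative, so check your RTC's actual standby current and usable voltage range:

    #include <stdio.h>

    /* backup time (s) = capacitance (F) * usable voltage drop (V) / load current (A) */
    static double backup_seconds(double farads, double delta_v, double amps)
    {
        return farads * delta_v / amps;
    }

    int main(void)
    {
        /* 100 mF supercap, 1 V of usable headroom, 1 uA of RTC standby draw */
        printf("%.0f hours\n", backup_seconds(0.1, 1.0, 1e-6) / 3600.0);  /* ~28 h */
        return 0;
    }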
perjantai 2. maaliskuuta 2018
Keeping time 1: Keep it running
Having an accurate clock is one of the potentially most annoying problems you can have in embedded electronics. Not least because of the inaccuracy of the typical oscillators you might want to use, but there is also the issue of powering the clock all the time.
I do remember the first time I thought about this. It went something like this, in my head:
"Clock sure would be useful here, but what of power outages... Oh, I know! I'll store the time in the EEPROM and when the power resumes... Oh, wait, damn..."
But back to the present day. Let's start with the oscillator, and with the assumption that you want to build the device on a budget - a fairly minimal BOM cost. Mind you, unless you are expecting production runs in the millions of units, you really should not aim for a minimum-cost BOM. The minimum monetary cost will cost you dearly on other fronts. But that is a completely different topic, so I'll just ignore it here.
I'll skip ahead a bit, and assume that you've worked out that you want a clock, and that it would be really nice if it kept time when the device itself is unpowered (meaning, without mains or whatever your primary power source is), too.
So right, here is, for example, a PIC24 processor; let's say a PIC24FJ64GB106 (because I happened to have a few). A fairly cheap 16-bit MCU with pretty nice peripherals in it. Comes with an RTC (real-time clock) and everything. Except this one isn't really built for this kind of use. The RTC doesn't run without power, and there is no backup battery supply option. Meaning that the MCU must be fully powered from some kind of battery whenever it's not connected to mains. Not exactly great if you want the clock to run from, say, a small coin-cell battery for months and months.
Mind you, there are plenty of power saving options on the MCU, bringing the quiescent current draw to fairly low levels, but from a design point of view this isn't the greatest option, and you'll need to take care of all the other circuitry too when in low-power mode, including a safe and uninterrupted switch between mains and battery power.
So let's go for the next best option. There are external RTC chips available - let's just mention the MCP79400 as an example, because it is from Microchip too and was the first item listed on their RTC page. This makes things somewhat easier. Separate Vsupply and Vbat lines mean that you can just connect a battery to Vbat and essentially forget about it - the chip will take care of those nasty power switching issues when mains goes off and comes back on. You'll only need to add a clock crystal and communicate with the chip via the I2C bus. A simple and easy design, although it'll raise your BOM cost slightly. And these chips are designed to be low-power, meaning that a simple 3V coin-cell battery will keep one running for a year - or ten, depending on the chip you choose. Simply great if your device may be disconnected for extended periods between uses.
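One practical detail: RTC chips like this typically store their time registers as packed BCD - one decimal digit per nibble - so when talking to one over I2C you'll want conversion helpers like these (the storage format is near-universal, but check your chip's datasheet for the actual register map):

    #include <stdint.h>

    /* Convert between binary and the packed BCD format RTC registers use.
       For example, 37 seconds is stored as 0x37. */
    static uint8_t bin_to_bcd(uint8_t bin) { return (uint8_t)(((bin / 10) << 4) | (bin % 10)); }
    static uint8_t bcd_to_bin(uint8_t bcd) { return (uint8_t)((bcd >> 4) * 10 + (bcd & 0x0F)); }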
Or, in case you are already planning on a bit beefier MCU, you could pick for example an STM32F4-series MCU, which has an RTC and an external battery supply pin included. This MCU will consume a bit more power than a dedicated clock chip, but in practice this mostly means that instead of five years of running on a small coin cell battery you get "only" 2-3 years. Whether this is an issue depends on your design, naturally.
There is really no single superior strategy; it's all about balancing your design constraints.
Next time: Capacitors?
lauantai 24. helmikuuta 2018
Loop forever
I ran into an interesting bug just now. I have this entire debug module (as in C source), highly configurable, that can be used with many of my projects. Since I don't really like to dive into JTAG debuggers unless I really have to, I like that debug module a lot. Over time I've built quite a few features into it that help me get things done.
Just now I took a very simple piece of a branch from one project and moved it to another one. A simple copy-paste of already proven code, no problems whatsoever, right?
Not so. When I loaded this program to the MCU, it just hung immediately.
Since the only changes I had made were to this single module, the approximate location of the problem was immediately evident - those new changes. Just about 20 copied lines of code. Should be simple.
Yeah, not so. A cursory review of the changes revealed nothing. Nothing wrong here. So eventually I went the JTAG debugger route - which in this case means OpenOCD and GDB. Both of these are extremely powerful tools, but highly hampered by the lack of a usable (or maybe I should say, easily usable) GUI or IDE for my purposes (have I mentioned that I'm kinda old-fashioned, preferring text editors and makefiles?) Wait, let me rephrase that last part: I haven't managed to find a GUI for them that I'd find usable enough, as there actually might be one I just haven't seen or tried yet.
But anyway, after some debugging, these revealed that the program had ended up in an infinite loop within this new code. I don't have the generated assembly handy here, but it essentially boiled down to:
<compare something with something>
<if not equal, jump to this same instruction again>
...what?
I readily admit I am not familiar enough with ARM assembly to read it fluently. And I couldn't write my way out of a wet paper bag [in assembly] either. No reason to, really, except for these occasional cases where my expectations don't meet reality. And crashes, of course, as having a pointer to the offending instruction makes debugging so much easier.
The problem was that the C code read something like this:
    if (debug_string_array[0])
        printDebug(debug_string_array);
    debug_string_array[0] = 0;
No loops here. Yet the program somehow entered an infinite loop anyway. So what is going on here?
I dug a bit deeper then.
    void printDebug(char *str)
    {
    #ifdef NO_SYNCHRONOUS_DEBUG
        printDebug(str);      /* the bug: calls itself instead of the asynchronous variant */
    #else
        printDebugS(str);
    #endif
    }
Oh, right, there's the problem. A silly copy-paste mistake elsewhere in the code that just happened to get realized now - and the compiler helpfully optimized the recursive call into a plain jump to itself, leaving just the infinite loop.
Although irrelevant here, synchronous debug, in this context, means that the debug output is sent out immediately, before returning from the function, as opposed to "asynchronous", where it's buffered and sent out via the serial transmit interrupt. Both of these have their uses, but I won't go into them in depth right now, as that isn't the topic here.
The code above isn't the actual source, mind you, but a quick, cleaned-up rewrite of it.
perjantai 16. helmikuuta 2018
Renewable fuels
At the moment I'm drooling over getting a Kia Optima PHEV to replace our aging Skoda Octavia TDI. I'm looking at that model because it is just about the only comparable car that has a tow hitch and relatively large cargo space. We've got two large dogs, after all. The car ain't cheap (and our government's lack of any tax breaks worth a damn for hybrids certainly isn't helping), and if I were to do the strict cost/benefit math alone, I'd never, ever choose this car. But it isn't that simple.
But for a moment, let's go back to the current car, a Skoda Octavia TDI 1.6, model year 2011. This is one of the "emissions cheat" vehicles, and since it was pretty much mandatory, I had it emissions-fixed. In this case, this meant a software update, along with the installation of some kind of intake air flow filter that smooths the air flow or something.
The usual expectation was that there would be a major loss of power or efficiency. As far as I can tell, neither happened. On the contrary, I haven't noticed any loss of power, and my recent (work-related) road trip actually gave me my best consumption ever: 3,3l/100km, or about 71-ish US MPG, on the return half of the journey. Not too bad. Then again, whenever it's mostly short trips, like typical city driving, the mileage seems lower than before, hovering around 6l/100km, so that might have been affected.
But back to the actual topic here;
These days there is news about renewable energy just about everywhere. Biofuel was used to fuel a car, or a plane, or whatever. And it always, with absolutely no exceptions, ends up with someone commenting on how much land area was used or how much energy was spent to refine that fuel. Obviously the idea is to compare this against oil-based fuels, where far less energy and/or land area is used.
This comparison is of course completely false. Oil-based fuels have had their energy spent already, millions of years ago. The energy spent back then is not renewable, nor is it re-usable, and burning the result also releases lots of carbon now (over a very short period, as opposed to the millions of years it took to "refine" it), causing serious damage here and now. Yet, somehow, this apparently doesn't count when comparing against biofuels. Curious, that.
Yes, biofuels do (at this moment) use lots of energy and take lots of land to produce. But here is the thing: even now - when renewable energy is just starting to, well, get started - we are producing massive amounts of excess energy with the other renewables, namely solar and wind power. For running the electric grid alone, as it is now, these two types of power are ridiculously bad. They provide the most power when it is needed the least, or at essentially random times, thus needing huge backup supplies - which, at the moment, are mostly coal or LNG, with some hydro thrown in. Aside from hydro (which is pretty much at its maximum already anyway), these will just make our situation worse in the long run. So, not good.
However, this excess power could be used directly for other purposes - like battery storage (which is seriously expensive), pumped hydro (likewise expensive up front and seriously limited by geography), or - and I seriously like this idea - refining biomass into fuels whenever there is excess power.
Yes, it will still use a lot of land, but the thing is, there is lots of land that can't produce food suitable for human consumption, for whatever reason, but can still produce biomass. In the past these areas have been used for cattle or grazing, but now we could have a better use for them: biomass for fuels.
That, in my vision, is the future.
It doesn't mean that we get to keep living like this - wasting energy everywhere like there's no tomorrow - but at least we get to keep on living, with fairly comfortable lives.
Our western way of life uses way too much energy, in many forms. Each and every joule we can shift from harmful fossils to renewables will be a net win in the long run. And that is the reason I want that plug-in hybrid. It'll allow me to cut my fossil fuel usage (for moving around) by half or so, just by using electricity for those short 10km trips. And if I could get a personal biofuel refinery to go along with my array of solar panels - well, even better!
perjantai 9. helmikuuta 2018
sizeof(wchar_t)
Buffer overflows aren't fun. This is why I always (when not using some more advanced interface) use something like this:
    ...
    char buff[32];
    snprintf(buff, sizeof(buff), "a=%d b=%s", a, b);
    ...
Unfortunately this doesn't work too well when you transition to wide chars:
    ...
    wchar_t buff[32];
    _snwprintf(buff, sizeof(buff), ...);
    ...
Note the second parameter. It should be the number of characters, not bytes as returned by sizeof. So that previously good old habit is now causing a potential buffer overflow instead of preventing it...
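The fix is to pass the element count instead - divide by the element size, or use a helper like MSVC's _countof:

    ...
    wchar_t buff[32];
    /* number of characters, not bytes: sizeof(buff) / sizeof(buff[0]) == 32 */
    _snwprintf(buff, sizeof(buff) / sizeof(buff[0]), ...);
    ...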
perjantai 2. helmikuuta 2018
AD converter, try 1
Just recently I ran, for the first time, into a situation where I actually needed a pretty high-precision AD converter for a specific measurement. Previously I had only used 10- or 12-bit converters, and even those were generally situated in less-than-optimal locations on the boards (read: in the middle of digital circuitry), and those designs weren't... great, so to say, so I was kinda nervous about this thing.
But nevertheless, it needed to be done, so I took every single tidbit of information I had read from anywhere and applied it to this one board. Ground planes? Check. Low-noise power supply and reference? Check. Separate power supply for the digital logic? Check. A truckload of ground vias? Check. And so on and so on. In the end I had a 4-layer board with scarcely more on it than the power supplies, the AD converter chip itself, connectors and some passives. I might've gotten away with just two layers, but why push it.
And what did I get out of it?
20 bits. 20 bits of clean, noise-free signal. With the 3,3V reference this means about 3,15 microvolts per bit. Further averaging could bring the noise down even more, but this result was already easily enough for my application - a great success. And the board didn't even need a second design round; it worked perfectly the very first time.
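That per-bit figure is simply the reference voltage divided by the number of codes:

    #include <stdio.h>

    int main(void)
    {
        /* LSB size = Vref / 2^bits = 3.3 V / 2^20 */
        printf("%.2f uV per bit\n", 3.3 / (1 << 20) * 1e6);  /* ~3.15 */
        return 0;
    }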
For a first attempt, I really think this was pretty nice result.
sunnuntai 28. tammikuuta 2018
Everything's fine.
You know, there's something oddly familiar sounding about the Windows Defender notifications...
Oh, wait, I figured it out!
'Nuff said.
keskiviikko 24. tammikuuta 2018
Write it again, Tony.
Has Windows ever crashed on you while working? Oh, don't bother lying, we both know damn well it has. Numerous times. Less lately, but still. There is a reason I mostly kept away from Windows 3 and the 9x series, and only started taking Windows more seriously when Windows 2000 hit the market.
But has it ever crashed on you so badly that it wiped out your project too? Well, Windows 10, with its newest feature update installed, just did that to me.
I was playing around with the new features of C++Builder 10.2, trying to figure out how to write a program that runs on Windows and phones. Well, on Android, at least.
And then suddenly Windows froze completely. This isn't anything new (to me), so after a few seconds of waiting - in case it was nice enough to come around by itself (don't look at me like that, it has happened. Not often, but a few times...) - I just killed the power and started over. (Well, I tried the common three-finger salute first, of course, with no effect.)
Only to find that just about all of the files I had open when Windows froze were... corrupted. Half a dozen project files, gone. Thanks, Windows, great service.
Not a major loss, as this was mostly stuff I was playing with, figuring out how things work together, but nevertheless, writing it all over again would be annoying at best. Granted, this doesn't happen too often these days, but whenever it does, it's still a real PITA.
Oh, and as I found out just a few moments later, Builder keeps backups. So the loss wasn't as bad as I first expected - just the latest few dozen lines or so - but it's still always annoying to re-write anything.
And to be fair, Windows doesn't crash even close to as often as it used to these days.