Random thoughts about software, hardware and electronics. And other things too...
Sunday, April 29, 2018
Suit up!
Before I start: I know that many people will not like what I say here. That's perfectly okay. This is how I think; you're free to have your own thoughts on this issue too.
So, it seems that the vast majority of engineers do not like suits. Many even seem to passionately hate them, to the degree that "not even owning a suit" seems to be a mark of honor.
I don't wear suits either - not very often. Not even collared shirts and slacks. I prefer T-shirts and jeans most of the time - that is, when I expect to work in the office all day, without any face-to-face customer contact.
But not always. When it is time to dress properly, I prefer to go all-in.
I was once told that at a public event one should always be dressed better (as in, better by one "step") than your customers. Like I said, this isn't an everyday thing, but it applies to more formal occasions, like trade fairs and so on. And of course, when I go to meet a client specifically, I do dress a bit more formally than usual. What this means, exactly, varies by occasion.
Sometimes this means that I wear a suit. Other times it may mean jeans and a T-shirt. And sometimes it's something in between - like slacks and a collared shirt. But the choice is always case by case.
Like I already said, many engineers absolutely hate suits. I did too, at one time, but no more. I discovered the joys of tailored (or at least custom-made) suits.
If you just go and buy a suit, without bothering to have it fitted for your body, you will end up with an expensive disappointment. It will look bad (just look at the current clown in the White House...), and worse, it will feel bad (some clowns have no self-respect at all, it seems). Just don't. Because if you do get that off-the-rack suit, you will hate suits forever.
Instead, for your first real suit I suggest that you go see a tailor, or at least a shop that will serve you personally and fit the suit you want to your body. Yes, it will cost a bit more, but trust me on this - you do want the suit to feel and look good on you, and that is where the fitting comes in. And it won't cost you that much more, either.
I by no means claim to be an expert here, but I dare say that there are three (or four) classes of suits:
1) Professionally tailored
2) Custom order
3) Off the shelf
A fourth class might sit between 2) and 3): off-the-shelf, fitted. This "2.5" is the minimum you should strive for. Pick a suit that is close to your body (and you will need professional help here, especially if you haven't bought a suit before) and have it fitted to you.
If your body isn't of a common type (that is, no suit in the stores is even close), you may need to go to class 2 immediately. You send your measurements to a tailor, who will order a suit from the factory and have it fitted for you. A bit more expensive than option 3) or 2.5), but absolutely worth the cost.
Class 1 is the high end. Here, too, there are many options. The most expensive might be to go to, say, Paris or Rome and pick one of the very best names in the business. You will of course get the best, but you will also pay for it. Not really worth it unless you need to mingle with the Very Rich all the time.
If I wanted to get a suit tailored, I would have to go to Helsinki - the nearest tailor is 600 km away. Not a cheap option, but in dire need (say, when going to close a deal of €50k or more), I'd definitely consider it.
The second best option is a custom order. I take my measurements (or have them taken, for example by my wife), send them to a tailor (again, the nearest being in Helsinki, as far as I know) and have them send the suit to me. At this point, I'd rather not do this - if something is measured wrong, it will cost me a lot, since a wrong measurement will look and feel bad. Not good.
Some shops do offer a service where they take your measurements, order a factory-made suit and have it fitted to you as well. I'd suggest this if in doubt, but if the nearest such shop is far away, it might be a bit tricky. Unless you're willing to go there for the first measurements, and then order new suits (pants/shirts/jackets/whatever) from them remotely. Not a bad idea, actually.
There is a lower cost option also (kinda-sorta).
If you happen to be in a suitable place in the South East Asia region, such as Bangkok, Hanoi or other - well, I might as well say it straight: tourist - cities that draw a lot of western people in, you may find a tailor shop on almost every block.
There a fully tailored suit with a shirt or two can be bought for some €200-€500, depending on location and materials. Mind you, you do need to "shop around" first, preferably by browsing customer opinions on the net. You do want to at least beat store-bought quality, right?
But whichever way you pick, as long as you go for a quality suit that fits you nicely and feels good on you, you'll know that the choice was a good one.
Don't be afraid to wear a good suit. Be afraid to wear a bad one.
Friday, April 20, 2018
A lesson on forward design
I've got a product that is now over ten years old (measuring from the start of design), with several years of lifetime left before its production ramps down. And then maybe another five before most of the units are removed from service.
This product consists of two parts: a main unit, and a relay/aux unit, connected together with a serial link (essentially RS-232 with no flow control). "Relay/aux" in this context means that it both controls some high(er) current devices and acts as a communication center for some auxiliary devices (via several other serial links). The plan didn't go above 38400 bps anywhere, with the usual speed being 9600 bps and a lot of idle time between communication bursts, so all in all, pretty easy stuff.
So, as it usually starts, in the first iteration of the protocol I chose a relatively simple, ASCII-based command/response protocol with fairly long timeouts. This makes it very easy to test at first, using a simple terminal program and manual commands, making the product release faster.
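For illustration, the spirit of it was something like this (these are commands I've invented for this post, not the actual command set):

    > RELAY 1 ON
    < OK
    > AUX 2 SEND 48656C6C6F
    < OK

Easy to type and read on a terminal - and, as it turned out, not so easy for a machine to recover from when an "OK" goes missing.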
Unfortunately the limits of that design became apparent very quickly, just after a few (this being a very relative unit) devices were delivered. Long timeouts mean that in case of any error, a long time passes before the system can even start to attempt recovery. Then, retrying an operation might cause some operations to fire twice before the acknowledgement comes through. When setting the state of a relay this isn't a problem - it's still "on", no matter if you set it once or a thousand times. But when sending packets of serial data... yeaaah, not so nice. So the manual-friendly commands proved to be less than machine-friendly. On top of that, I started to need the capability to communicate with several devices at the same time (previously the need was strictly sequential: the main unit opens an exclusive link to one connected device, and when done with it for the time being, opens another exclusive link in another direction).
Enter revision two. This was fairly early in the product lifecycle, and it was a "quick fix". The improved protocol allows easier "muxing" of data: data is sent in frames, and frames now have a target (where they should go: the relay control parts, aux devices and so on) and checksums, making the system otherwise much more robust. Of course, the commands are still essentially version 1 commands, just wrapped in frames. This fixed many of the original issues, but not all of them - most importantly the problem where a command might still fire twice before being acknowledged, as the implementation had no guards against that. Not exactly your typical "two army problem", but close to it.
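To give you an idea, a v2-style frame might look roughly like this (a minimal sketch; the field names and sizes are my illustration here, not the actual wire format):

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative v2-style frame: a v1 ASCII command wrapped with
       addressing and a checksum. Field names and sizes are hypothetical. */
    typedef struct {
        uint8_t start;       /* frame delimiter, e.g. 0x02 (STX) */
        uint8_t target;      /* destination: relay block, aux port 1..n, ... */
        uint8_t length;      /* payload length in bytes */
        uint8_t payload[64]; /* the original v1 ASCII command */
        uint8_t checksum;    /* covers target, length and payload */
    } Frame;

    /* Simple additive checksum; a real design might well use a CRC. */
    uint8_t frame_checksum(const Frame *f)
    {
        uint8_t sum = f->target + f->length;
        for (size_t i = 0; i < f->length; i++)
            sum += f->payload[i];
        return (uint8_t)(~sum + 1); /* so summing everything yields 0 */
    }

The receiver can then drop anything that doesn't sum to zero and ask for a resend, instead of waiting out a timeout on garbage.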
At that time this wasn't a major issue, so I let it slide.
Fast forward some years to this day. I need to interface a WLAN module that uses 115 kbps with flow control and can potentially send large amounts of data in bursts - with a design originally planned to run at 38.4 kbps max. Now the "two army problem" is getting to be a serious nuisance, as receiving duplicate data packets is a huge no-no for data integrity. I've grown very fond of data integrity over the years, you see - much more so than when I started this design. This trouble here might have been one reason for it.
Data integrity - defined here as the ability of the "lower level interface" to transmit and receive data with as few errors as possible, and to correct those errors whenever possible - is a fairly complex issue when you get into it. When you are working with the TCP/IP stack provided by your operating system, it is very easy to overlook the complexities on that level, as the stack provides you with huge amounts of integrity already - not necessarily error correction, but at least your packets will arrive in order and exactly once (to your application, that is).
Now, back to the issue at hand. At this point there is a fairly large number of devices deployed already, with aux modules of old and older revisions installed (which, unfortunately, cannot be field-updated - that requires a hardware programmer). On the plus side, the hardware did not need any changes, so I was able to hack together flow control in software using the signalling lines included in the design. But the lack of a software update capability means that I can't fully overhaul the system; it has to keep working with old modules to a certain degree - at least when not using advanced features like the ones needed by that WLAN module.
Eventually I figured out a way to do this, by extending the revision two frame system with a few commands that carry packet sequence identifiers and the corresponding acknowledge/timeout mechanisms. And now the system is fairly robust.
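The core of the duplicate problem goes away with something like this on the receiving side (a minimal sketch of the idea, with names of my own invention - not the actual implementation):

    #include <stdint.h>
    #include <stdbool.h>

    /* Receiver-side duplicate rejection: each data frame carries a
       sequence number, and the sender retransmits the frame until it
       sees an ACK for that number. */
    typedef struct {
        uint8_t last_seq; /* last sequence number delivered */
        bool    have_seq; /* false until the first frame arrives */
    } RxState;

    /* Returns true if the payload should be delivered to the
       application. The caller ACKs 'seq' in either case, so a lost
       ACK causes only a retransmit, never a duplicate delivery. */
    bool rx_accept(RxState *st, uint8_t seq)
    {
        if (st->have_seq && seq == st->last_seq)
            return false; /* duplicate: re-ACK, but don't deliver */
        st->last_seq = seq;
        st->have_seq = true;
        return true;
    }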
But it is carrying a lot of compatibility burden. So now I have v3 frames that contain v2 frames that contain v1 commands - with a fallback to plain v2 frames if a given aux unit doesn't happen to speak the v3 language. Oh joy...
But hey, next time I design a protocol for similar use, I will know better.
(you may want to read that as "I will have an existing and tested protocol (almost) ready to use - and I'll strip out the bad parts"...)
Saturday, April 14, 2018
Meet your friendly, fallible robotic driver
It seems the prediction is that self-driving cars are coming, and very soon. Quite a few companies - most (in)famously Tesla - have been developing self-driving technology for some time now. Tesla's tech isn't really self-driving, however, despite the name they use ("Autopilot"); it's just a somewhat better version of the lane control systems used in higher-end cars today.
I am not against self-driving cars, really, but I will not be an early adopter here. The tech absolutely must mature a lot before I am willing to trust myself to the care of a robot driver.
One figure thrown around lately is 90%. That is, today the most advanced systems are about 90% reliable, and the remaining 10% requires human attention. Tesla's recent accidents are mostly about that last 10% - the driver didn't catch on that the system was failing, and the end result wasn't pretty.
That 10% is today. But it gets worse. Much, much worse before it gets better.
I was absolutely certain that I had written a related post earlier, about machine translation, but I don't seem to be able to find it right now. Oh well.
The 10% error figure is a bit difficult to pin down, but let's - for the sake of argument here - define it to mean that on about 10% of typical trips the system will need a human to take over, and very quickly.
See, with 10%, today, people already have problems concentrating on the road enough to catch on when things start going wrong. And here is the first problem - the systems aren't good enough to notice by themselves that things are going wrong. It's the human driver who must notice this, often very quickly ("what, why is the car heading towards that media---crash").
But that isn't the worst of it. What if the failure figure goes lower? Let's say a year from now it's down to 5%; a year after that, 2%; another year, 1%.
If people already have serious problems catching on when things go wrong 10% of the time, how are they - how could they - catch on when errors are even rarer? The short answer, of course, is that they can't. We're only human. We will doze off, play with our phones, daydream, whatever - anything and everything except watch the road.
The translation post I thought I had written (or have written, but can't find right now) was tangentially related. Today machine translation is - again - 90% good, so we still proofread and correct the mistakes. But when it gets to the 98-99% range, we won't bother anymore, and there will be embarrassing mistakes in our brochures and technical documents and whatever.
The difference is, of course, that a bad translation doesn't (well, generally; there are some exceptions) get people killed. Bad driving most certainly does.
And this is the reason I won't be adopting self-driving tech anytime soon. 1% is nowhere near a low enough failure figure here. I'll be waiting for 0.01% figures first. Or, alternatively, a slightly worse device that can actually yell out early enough that it can't handle the situation, so I can take over. But then again, that is also very difficult to detect - if it were easy, those cars that have crashed would have stopped instead. When things go wrong, they tend to go wrong quickly.
Sunday, April 8, 2018
Keeping time 4: The hardest part
Everything we've handled so far in this series has been easy compared to the real challenge that remains. Now that we've got those parts ticking come the hard parts: leap years and leap seconds, time zones, daylight saving time and date math.
The last of those means, for example, "how many days/hours/seconds are there between 30.4.1432 [Julian] and 29.3.2018 [Gregorian]?" Okay, I cheated here a bit - this is just about the hardest calculation you can imagine, and it's unlikely your device has to deal with such math... but again, it's better to at least acknowledge that such issues exist.
Most (well, essentially all) of the RTC chips out there just ignore all of these (with the exception of leap years - I think most handle them properly now) and leave the rest up to the application software. You might be tempted to handle them yourself, but there is this wisdom floating around the internet: "Thinking of writing your own calendar code? DON'T!"
So my first suggestion is just to follow that piece of wisdom. Find a library that does all of that for you, and use it within the constraints of the chip you are using - the chips can typically handle normal calendar operations (like moving directly from 28.2. to 1.3.) and leap years (so the month doesn't change on 28.2. but on 29.2. in leap years) but not anything else. Leap seconds are kind of overkill, as your typical crystal won't be accurate enough for them to matter, but nevertheless, it's always good to keep such things in mind - if for nothing else, then in case someone asks. It's always good to be prepared.
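The leap year rule itself is small enough to show here, for reference (this is the standard Gregorian rule, not any particular chip's implementation):

    #include <stdbool.h>

    /* Gregorian leap year rule: every 4th year, except centuries,
       except every 400th year (2000 was a leap year, 1900 was not). */
    bool is_leap_year(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

It's the "except centuries" parts that a chip with a two-digit year counter can't get right in general - which is why many RTCs simply assume the 2000-2099 range, where plain divisibility by 4 happens to work.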
But if you, despite all this, choose to write your own routines, I have just one universal suggestion: make your device always use UTC, and handle all the translation on your "client" side. You'll still need a library, but if you happen to be making (for example) an IoT device, you can push the hardest parts to the server, where you do have libraries to do the heavy lifting for you.
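Roughly like this, as a sketch of the split (assuming a POSIX-ish environment on the client/server side):

    #include <stdio.h>
    #include <time.h>

    /* Device side: keep nothing but a UTC seconds counter. The RTC
       tick handler just increments this - no calendar knowledge,
       no time zones, no DST on the device itself. */
    volatile time_t device_utc_seconds;

    /* Client/server side: translate to local time only at the edge,
       letting the platform's timezone database do the hard parts. */
    void print_local(time_t utc)
    {
        struct tm local;
        char buf[32];

        localtime_r(&utc, &local); /* POSIX; Windows has localtime_s */
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &local);
        printf("%s\n", buf);
    }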
Or you could just ignore all that. Make the device dumb enough not to even know about these things and leave it to the user to handle them properly (manually setting the clock after DST changes and so on).
Otherwise, good luck on your chosen path. It will be a difficult journey, but it will also teach you many, many things that will help you further along the way.
Tuesday, April 3, 2018
Privacy by directive: It's coming up
The European GDPR will be in full effect in less than two months now. Last time I wrote about it, things were still a bit messy, but since then they have gotten clearer. To me, at least.
In the meantime, following the discussion on the "other side of the pond" has been quite interesting. The huge majority of people writing about this on the US side appear to think that it will essentially kill any and every business opportunity in Europe.
At the same time, people (I don't claim that they are the same people, but I am sure there is some overlap) complain about the newest privacy issues with Facebook and other companies whose entire business strategy is to grab as much Personally Identifiable Information as possible and to sell it to third parties.
Take the infamous "shadow profiles", for example (I won't provide a link; you can search for them yourself if you haven't heard of them already). Or companies' refusal to remove personal data. This kind of behavior is exactly what GDPR was made to get rid of! GDPR makes the entire practice of collecting this kind of "shadow information" explicitly illegal, although the line is kind of blurry. Knowing that an IP address (or a random cookie ID) visited, say, toyshop.com? Might be fine. But the more information you accumulate there, the more explicitly illegal it gets, until there is no way to deny it - it's Personally Identifiable Information. Thus it is always better not to collect that "anonymous" information at all. The user wins again!
After doing some reading, I found out that there isn't actually that much we have to do to become compliant ourselves. It certainly helps that the parts of our business where this is applicable are already services where we keep customers' data for them. Meaning that insidious data collection, analysis and sales have never been part of our business plan, so filling the gaps wasn't really that difficult. Not all of our GDPR-related updates are out yet, though, but the hardest parts are already done.
I don't get the people complaining about how GDPR will ruin the internet. To me, it's completely the opposite - we're (well, at least we Europeans) getting control back! But of course, if your business is based on shady practices, I certainly am not surprised if users' access to their own information hurts your operations and therefore your bottom line.
Meanwhile, we, the good people, are adapting to the things to come, with a smile on our faces.
("good" may not be best word here, but I can't think of better one right now, so that has to do)
Sunday, April 1, 2018
Too sterile?
When I was young(er), MOD music was pretty hip on the scene. In case you don't know what I am talking about: MOD was a music/sound format originally from Amiga computers, from about the mid 80s or so - it could play four channels of digital samples at independent rates and volumes at a time ("the time" being the late 80s/early 90s), which was pretty amazing back then. The MOD, or "module", format was essentially a list of samples coupled with instructions on how to play them. Later other formats came along, and PCs also started to be able to play them - first in software, then with hardware (sound cards).
I, too, wrote a MOD player of my own, for the Sound Blaster (I'm talking about the original 8-bit one here) and the Gravis UltraSound. And obviously it could play S3M format modules too, alongside typical 4-32 channel MODs and maybe a few others.
These days all this sounds very primitive, and why not - it was a different time. Storage space cost a pretty penny and MP3s came along only much later (along with lowered storage costs and increased CPU power - my 486 back then couldn't even decode MP3s in real time!), and today games are wasting (yes, I indeed mean that) huge amounts of space on uncompressed audio...
But all that is beside the point here.
I once asked a die-hard live music person to listen to a pretty nice mod and tell me what he thought of it. He said that it sounded too perfect. No imperfections at all. In a word, too 'sterile' for his liking.
And that it certainly is. If you ask a MOD player to play a sample at a note, say F5, it will do just that. Every single time, at the same exact rate, at the same exact speed, the same exact sample.
So I added a few small subroutines to my player. They added a small random delay to samples - a few tens of milliseconds or so, IIRC. They also varied the playback rate a bit - plus or minus a percent or so (I don't remember the exact figures any more, but it was fairly subtle). It couldn't do anything to the actual digital samples, though, so those had to remain. This was maybe mid-to-late 90s.
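It would have been something along these lines (a sketch of the idea in today's terms, not the original code; the ranges are the ones I mentioned above):

    #include <stdint.h>
    #include <stdlib.h>

    /* Random start delay of 0..30 ms, expressed in output samples
       (at 44100 Hz, 30 ms is about 1323 samples). */
    int random_delay_samples(int out_rate_hz)
    {
        return rand() % (out_rate_hz * 30 / 1000 + 1);
    }

    /* Playback rate varied by roughly +/-1%, in 16.16 fixed point
       to stay float-free, as was the custom back then. */
    uint32_t humanize_rate(uint32_t rate_fp16)
    {
        int permille = (rand() % 21) - 10; /* -10..+10 per mille */
        return (uint32_t)((int64_t)rate_fp16
                          + (int64_t)rate_fp16 * permille / 1000);
    }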
I tried the player with this "filter" on and off, and really couldn't tell any difference, at least not easily. I suspect I didn't play the changed version to the person who originally commented on this, either. And I don't think I spent very long tuning the playback alterations either - I just made the changes and went on with other things.
Whether I was onto something back then, I can't really tell. And I don't really care too much either, aside from a slight academic interest.
When sound cards moved from 8 to 16 bits and to sampling at 44.1 kHz, I quickly figured out that by interpolating the original 8-bit samples at the new sample rate, the sound could be made much better. And indeed it was. I don't recall the exact interpolation method I used anymore, only that it used some combination of look-up tables and fixed-point math to make it cost just a few clock cycles more per sample. That was still a big deal back then.
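I no longer know which method it was exactly, but plain linear interpolation in 16.16 fixed point is the classic cheap option - something like this sketch:

    #include <stdint.h>

    /* Linear interpolation between adjacent 8-bit sample points, with
       the play position kept in 16.16 fixed point. A sketch of the
       general technique, not my original routine. */
    int16_t sample_at(const int8_t *smp, uint32_t pos_fp16)
    {
        int32_t i    = (int32_t)(pos_fp16 >> 16);    /* whole-sample index */
        int32_t frac = (int32_t)(pos_fp16 & 0xFFFF); /* fraction, 0..65535 */
        int32_t a = smp[i];     /* current sample point */
        int32_t b = smp[i + 1]; /* next sample point    */

        /* a + (b - a) * frac/65536, scaled from 8-bit to 16-bit range */
        return (int16_t)((a * 65536 + (b - a) * frac) >> 8);
    }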
Only later, at university, did I find out the name for this issue: quantization noise.
This was around the time all MOD players started using interpolation, but there were some people who insisted that interpolation ruins the "intent of the composer", as the music doesn't sound exactly as it originally did. Obviously, I chose to completely ignore this argument, as it was pretty damn obvious that a less noisy signal (and this was very audible indeed!) is always better than the original, very noisy one.
This all looks pretty quaint these days, I'm sure.