Random thoughts about software, hardware and electronics. And other things too...
Monday, February 22, 2016
The old way is new again (in graphics)
It just occurred to me that computer graphics seems to have come full circle (kinda-sorta).
Back in the day there was just direct hardware access (and I mean direct, as in "absolutely nothing between your software and the video card registers.") To get full performance out of the graphics you had to know your hardware (both processor and video card) inside and out - memory mappings, instruction timings and interleaving (Pentium pipelines) and everything. You had to micromanage all the small details to make the hardware fast.
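To make "absolutely nothing between" concrete, here's a minimal sketch of pixel plotting under DOS in VGA mode 13h - assuming a 16-bit real-mode compiler like Borland C (the dos.h and far-pointer bits are compiler-specific):

```c
/* Sketch: plotting a pixel in VGA mode 13h (320x200, 256 colors) under
 * real-mode DOS. Video memory is mapped straight in at segment A000h,
 * so drawing is literally a memory write - no API, no driver.
 * Assumes a 16-bit DOS compiler (e.g. Borland C) for dos.h and far. */
#include <dos.h>

static unsigned char far *vga = (unsigned char far *)0xA0000000L;

void set_mode_13h(void)
{
    union REGS r;
    r.x.ax = 0x0013;        /* BIOS int 10h: set video mode 13h */
    int86(0x10, &r, &r);
}

void put_pixel(int x, int y, unsigned char color)
{
    vga[y * 320 + x] = color;   /* one write, one pixel */
}
```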
This is how (for example) the original Doom and Quake were able to achieve usable frame rates on relatively slow computers. Back then there really wasn't any hardware acceleration available for games (on PCs; other computers had dedicated blitters and whatnot), so if you wanted your program to run fast, you'd better learn it all. I, too, spent considerable time fine-tuning the inner loops of my blit-equivalent function.
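For illustration, here's a sketch of the kind of inner loop that got that hand-tuning treatment (illustrative, not my actual old code) - the classic trick was moving several pixels per instruction and keeping branches out of the loop:

```c
/* Sketch of a hand-tuned blit inner loop: copy a w-by-h block of 8-bit
 * pixels into the frame buffer. The micro-optimization shown is copying
 * 32 bits (4 pixels) at a time instead of one byte per iteration.
 * Assumes w is divisible by 4 and the rows are 4-byte aligned. */
void blit(unsigned char *dst, const unsigned char *src,
          int w, int h, int dst_pitch, int src_pitch)
{
    for (int y = 0; y < h; y++) {
        const unsigned long *s = (const unsigned long *)(src + y * src_pitch);
        unsigned long *d = (unsigned long *)(dst + y * dst_pitch);
        for (int x = 0; x < w / 4; x++)
            d[x] = s[x];        /* move 4 pixels per instruction */
    }
}
```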
Then came the first hardware accelerators and their APIs. Now you "only" needed to link a library and call its functions. Just call a function and a triangle is drawn. So much easier - as long as the user had the same video card. If not, well, too bad.
Then came OpenGL (again, to PCs; it existed earlier elsewhere) and, more importantly, DirectX. Both provided most of the functionality (either directly or through add-on libraries) that you'd need to write fast 3D code. And with that abstraction, things like memory access and timings became mostly irrelevant, or at least the focus shifted to an entirely different level (optimizing batches of triangles instead of single pixels.) Since the APIs handle the hardware abstraction, you can't really even know what is happening inside the libraries, drivers and hardware.
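As a concrete example of that abstraction level, here's the fixed-function era in miniature - a complete legacy OpenGL 1.x program (GLUT used here only to keep it short). Everything below these function calls is the driver's problem:

```c
/* Sketch: a triangle in legacy, fixed-function OpenGL 1.x. The driver
 * owns transforms, rasterization and memory; you just describe vertices. */
#include <GL/glut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);            /* immediate mode: one call per vertex */
    glColor3f(1, 0, 0); glVertex2f(-0.5f, -0.5f);
    glColor3f(0, 1, 0); glVertex2f( 0.5f, -0.5f);
    glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.5f);
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("triangle");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```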
DirectX even had a retained mode where the libraries did just about all the work for you. No need to know, well, almost anything. But where's the fun in that, really?
And then the cycle started to turn back, although along a somewhat different path. With newer APIs (OpenGL 3.0+; I'm not really familiar with today's DirectX, but I'd guess it's on a similar path) more and more work is being pushed back to the user. The old matrix operations were taken out, as was the fixed-function pipeline (it took me a long time to let go of it completely, as it made some things easier), and now you have to do all that math-heavy work yourself (libraries of course exist to do it for you if you want).
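For instance, the projection matrix that the fixed-function pipeline used to set up for you (think gluPerspective) now has to be built by hand and uploaded to a shader. A minimal sketch:

```c
/* Sketch: the kind of matrix work modern GL hands back to you. Builds
 * the same perspective matrix gluPerspective used to set up, ready to
 * upload to a shader uniform (column-major, as GL expects). */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

void perspective(float out[16], float fovy_deg, float aspect,
                 float znear, float zfar)
{
    float f = 1.0f / tanf(fovy_deg * (float)M_PI / 360.0f); /* cot(fovy/2) */
    for (int i = 0; i < 16; i++) out[i] = 0.0f;
    out[0]  = f / aspect;
    out[5]  = f;
    out[10] = (zfar + znear) / (znear - zfar);
    out[11] = -1.0f;
    out[14] = (2.0f * zfar * znear) / (znear - zfar);
}
```

(In practice most people grab a math library like GLM for this, which is exactly the "libraries exist" point above.)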
And with new APIs like Vulkan you have to take over things that the driver has taken care of for ages - like memory (texture and buffer) management (I admit I'm not familiar with all the details; I'm mostly repeating what I've heard). All this in the name of efficiency and speed. You have to micromanage all the small details again, for the same reason as in the old times - speed!
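To give a feel for it, here's a rough sketch of one piece of that management - allocating and binding memory for a buffer the Vulkan way. The flow (query requirements, pick a memory type, allocate, bind) is the actual Vulkan API; error handling is omitted and the device, physical-device and buffer handles are assumed to already exist:

```c
/* Rough sketch of Vulkan-style manual memory management: the driver no
 * longer hides allocation - you query requirements, pick a memory type
 * and bind it yourself. Error handling omitted for brevity. */
#include <vulkan/vulkan.h>

VkDeviceMemory alloc_buffer_memory(VkPhysicalDevice physical,
                                   VkDevice device, VkBuffer buffer)
{
    VkMemoryRequirements req;
    vkGetBufferMemoryRequirements(device, buffer, &req);

    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(physical, &props);

    /* Pick the first memory type the buffer accepts that the CPU can map. */
    uint32_t type = 0;
    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        if ((req.memoryTypeBits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags &
             VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
            type = i;
            break;
        }
    }

    VkMemoryAllocateInfo info = {0};
    info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    info.allocationSize = req.size;
    info.memoryTypeIndex = type;

    VkDeviceMemory mem;
    vkAllocateMemory(device, &info, NULL, &mem);
    vkBindBufferMemory(device, buffer, mem, 0);
    return mem;
}
```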
I don't think we'll ever get back to raw hardware access (again, at least on PCs, with an operating system in the way), so this is very likely as close to raw access as it gets, but who knows.
Of course, most people don't need to delve that deep. Commercial engines like Unreal and Unity will take care of those nasty details, but the point still stands: you can once again get your hands dirty with low(ish)-level access to the hardware - if you want to.
And if you pick some other platform (say, embedded systems), you can still get direct memory access and register tweaking if you want. After all, there's something extremely satisfying about the first pixel appearing on the display after hours and hours (if not days) of work on the registers... It might not be the most productive way to do things, but I'm pretty sure you'll learn more about the hardware from that single pixel than from years of work with high-level APIs!
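Something like this, say - a sketch with a completely made-up register map; on real hardware the addresses and bit layouts come from the part's datasheet:

```c
/* Sketch of the bare-metal version of "first pixel": memory-mapped
 * registers on a HYPOTHETICAL microcontroller LCD controller. The
 * addresses, bits and framebuffer layout below are invented for
 * illustration - a real part's datasheet defines the actual map. */
#include <stdint.h>

#define LCD_CTRL   (*(volatile uint32_t *)0x40020000u) /* hypothetical */
#define LCD_ENABLE 0x1u                                /* hypothetical */
#define FRAMEBUF   ((volatile uint16_t *)0x60000000u)  /* hypothetical */
#define LCD_WIDTH  320

void lcd_first_pixel(void)
{
    LCD_CTRL |= LCD_ENABLE;                 /* power up the controller */
    FRAMEBUF[10 * LCD_WIDTH + 10] = 0xF800; /* one red RGB565 pixel */
}
```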