Neko- wrote: The real funny part is knowing where we are now, with virtualization, cloud, internet, terabytes to exabytes of storage and broadband... running circles around most anything being said

True, but at the same time? Computers today are clocked a thousand times higher, AND can do drastically more per tick, not to mention having multiple cores/processors all over the place. And they have literally millions of times more memory, and faster access to storage.
And yet, how much faster are the programs, actually? Uh, for anything that isn't raw data crunching using specialised software, it's closer to maybe 10 times faster or something.
The Amiga 500 of the '80s, with its 68000 CPU and three custom chips all running at under 8 MHz, with a GUI, which by the way just happens to look like a precursor of Windows or something, amazing coincidence that, running much of the time from a FLOPPY DISK DRIVE, is only a little bit slower than modern Windows running on an SSD. An SSD that can read and write hundreds of times more per second than the 880 kB Amiga Workbench disk can store in total.
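Quick back-of-envelope on that last claim, in Python (the SSD speed here is just an assumed ballpark for an ordinary SATA drive, not a measurement):

# rough sanity check, not a benchmark
ssd_bytes_per_second = 500 * 1000 * 1000       # assume ~500 MB/s sequential reads for a plain SATA SSD
workbench_floppy     = 880 * 1024              # total capacity of an Amiga DD floppy
print(ssd_bytes_per_second / workbench_floppy) # ~555, so "hundreds of times" per second checks out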
And by the way, it is less than 10 years since the last place I knew of was still using an Amiga 500 to do graphical work for TV broadcasting, and there are still some of those old machines doing professional sound editing/mixing. The graphics people have had to replace their Amigas with PCs because of the increased resolutions used, but most I've heard from would have preferred not to.
Because that old stuff still WORKS better. Which of course is one of the reasons you can still buy brand new accelerator boards for the Amiga today, adding SSDs, modern levels of CPU power and a whole host of other stuff. Because again, despite the age, it's more reliable and works better than the "latest'n'greatest".
That is really, really sad and pathetic.
Not that I'm surprised, really. Even if you just look at hardware alone, well, the Pentium 4 was one of the most amazing fuckups ever. And then Intel came up with what became the basis of their current line, Nehalem.
When AMD switched to an integrated memory controller earlier (with the Athlon 64), their CPUs gained around 30% in overall performance.
When Intel did it with Nehalem? Despite switching away from a drastically worse setup than AMD had, they gained less than 10% on average. And the OTHER improvements Nehalem brought with it should have provided, and probably DID provide, at least a 15% improvement in performance on their own.
Meaning that the integrated memory controller was such damned shoddy work that instead of improving performance by 40-50% as it should have, it reduced it. The only other option is that the architecture overall was so utterly terrible that it managed to nullify what should have been a 50-60% performance improvement per tick.
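Just to spell out the arithmetic I'm basing that on (the 30%, 15% and 10% figures are the rough estimates above, nothing official):

# quick sanity check of the claim above, using the ballpark figures from this post
imc_gain_amd = 0.30    # what AMD got out of their integrated memory controller
other_gains  = 0.15    # what Nehalem's other improvements alone should have been worth
expected = (1 + imc_gain_amd) * (1 + other_gains) - 1
print(expected)        # ~0.495, i.e. the 40-50% that SHOULD have shown up
observed = 0.10        # what Nehalem actually averaged, being generous
implied_imc = (1 + observed) / (1 + other_gains) - 1
print(implied_imc)     # ~-0.04, i.e. the memory controller contributing LESS than nothing

Either the controller dragged things down, or the rest of the design ate the gain; the numbers don't leave a third option.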
Of course, the Nehalem was at the time still heralded as a wonderful new innovation by the vast majority.
And now we're closing in on the limits of what node shrinks can be done. That has been shown off VERY clearly by how Intel has failed with their die shrink year after year now, despite having by far the most resources to fix problems with. On top of that, the next increase in wafer size (the move to 450 mm) has essentially been put on indefinite hold because it's just too expensive; the production cost reductions don't make up for the investments needed. So once designers can't just throw more transistors at every problem and hope the next shrink keeps production costs low enough to make it cheap anyway, we're soon going to see either an even sharper slowdown, or outright halt, in hardware performance, or designers are going to have to start doing a drastically better job.
Which is where it could get "funny" from the software side... Because software designers have been relying on hardware performance increasing drastically year after year for decades now. Aaand now, most of the big increases already come from tricks instead of overall improvements, and with fewer node shrinks there's no room for adding more transistors just to keep piling on tricks. So unless software writing suddenly shapes up one helluva lot, we could start seeing software that is markedly SLOWER than the same existing stuff on the above Amiga 500 in less than 10 years or so.
Kinda wish I had kept going with my short little foray into Assembler and machine code, coulda made a fortune today when almost no one seems capable of even considering it.
Even worse: when my friend was learning programming at university, we tried a little competition once. I wrote a program in Amiga Basic, he wrote his in, IIRC, C++. I wrote my program in 1/4 the time, and it ran faster, despite his running on a 500 MHz CPU.
And I wasn't even using any tricks to employ anything beyond the primary CPU of the Amiga. So, <8 MHz beat 500 MHz, took less time to program, was much EASIER to program, and took up a few kB compared to the MB required by the "modern" software.
After that my friend stopped trying to claim modern programming was superior.
