Yesterday I learned that virtual machines go all the way back to the 1960s (in this case, multiple computers emulated on a bigger computer, where a given user gets access to the emulated computer but not the main one; it was even called a virtual machine at the time)
As I say in the blog post, it's important to remember that most of what we have today in computing we also had in the 60s and 70s. It is just all much cheaper and faster now.
@darius ... and in many cases, worse designed.
If you take a hard look at many of the major installations of Facebook, Google, Amazon or Microsoft, they're basically bad reinventions of the mainframe.
@darius Really, we should be asking ourselves: if we started with a clean slate, to design a new mainframe, what would it look like?
I guarantee you, a million network stacks wouldn't be in it.
@clacke yeah, I believe I have seen hypervisor references that predate 1971!
@darius "the cloud" is just a new name for what is basically timeshare systems on mainframes.
It demonstrates how effective "clever" (deceptive?) marketing can be. They get people to think they are on the leading edge by putting everything "in the cloud" when they are actually regressing to a 40- to 50-year-old paradigm.
@msh Not sure I would necessarily call it a regression. I like shared servers, but the context is extremely different from the old days. I prefer something closer to the old university timeshare model, where there was some notion of community, at least in the early years
Come on, let us turn this on its head:
Please list any *new* concepts in computer science introduced between July 2009 and today.
I'm actually curious.
I couldn't think of any myself but then again, I'm not a computer scientist so unless it's a major breakthrough I wouldn't have heard of it.
@darius IBM actually did a lot of amazing research and implementation work in the 1960s, and the System/360 series machines were very impressive. I recall that at my first coding job, half the department was Java coders and the other half were RPG coders for the IBM AS/400. The latter scoffed at Java bytecode as "nothing new", since IBM had used a VM to ensure hardware independence of software for decades.
@roadriverrail Yeah the more I learn about the 360 series the more impressed I am! It also just seems like it was significantly harder to use than the DEC machines.
@darius IBM's biggest wound was its incredible insularity. They were a de facto monopoly and thus wrote their own standards and continued to keep them for far too long. Classic IBM engineers could recite the catalog numbers for parts and coded in languages often not transferable to other systems. It's a lot like MSFT coders from the mid-late 90s. As late as 2001, I was still writing EBCDIC translations for basic strings from an IBM database.
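The EBCDIC translation work described above can be sketched in a few lines of Python, since the standard library ships EBCDIC code pages. Note that cp037 (EBCDIC US/Canada) is an assumption here; the actual code page varies by installation.

```python
# cp037 (EBCDIC US/Canada) is one common AS/400 code page; which page
# a real system uses varies by installation, so cp037 is an assumption.
text = "HELLO"
ebcdic = text.encode("cp037")          # Unicode -> EBCDIC bytes
print(ebcdic.hex())                    # c8c5d3d3d6 (ASCII would be 48454c4c4f)
assert ebcdic.decode("cp037") == text  # EBCDIC -> Unicode round trip
```

The letters land in entirely different byte ranges than ASCII (and not even contiguously: A-I, J-R, and S-Z occupy separate blocks in EBCDIC), which is why naive byte-level string handling breaks across the two worlds.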
@roadriverrail Great context, thank you for this. I was sort of idly wondering how much EBCDIC work is still out there for legacy systems.
@darius I left that world in 2003, but at the time, interfacing the AS400 exposed me to a world without Unicode, without even ASCII, and where database systems were of a pre-relational design. I suspect there's a lot of that still out there, given record-keeping requirements for certain industries.
@roadriverrail Yeah! I've been learning about the invention of the database (full stop) and the invention of the relational database 10-15 years later.
The era I'm working in right now, 1971, is right in the middle of the two-year period where Edgar Codd of IBM was inventing the relational database. (Or "data base" as they called it back then.)
@darius This has to be some fascinating reading and study you're doing. Are you documenting this anywhere? I'd love to follow along. I'm a systems hacker; DBMS implementation is one of my secondary loves.
@roadriverrail It's really just side research that I'm doing in order to understand the status quo and the "feeling in the air" at the time the original RFC documents were being written. I try to get as much context as I can and provide it to my readers, so the DB stuff comes from reading I did as background for this blog post
@roadriverrail Who knows, it may eventually turn into a bigger project, but not until I wrap this RFC work at the end of 2019
@darius I really find understanding the historical context is essential for understanding just about anything in engineering and often for mathematics, too, so ((hat tip)) for taking the side research!
@roadriverrail More likely than a project about databases, I'm probably going to do something about RAND's contributions to computing; I pored over hundreds of pages of internal memos while I was at the Charles Babbage Institute last month.
@roadriverrail @darius Even for contemporary applications. The vast majority of these large timesharing ... err ... cloud-based computing systems need things called "document databases", such as MongoDB and CouchDB. All of them have a navigational/pre-relational design. Key/value stores (like ZooKeeper's hierarchical znode store) aren't anything new either. :)
@vertigo @darius Because I have a lot of respect for the "NoSQL" world, I use the term "pre-relational" to talk about record-oriented databases that had no capacity for relational constraints or queries; indeed, pre-relational database applications are often just code that performs what a query might otherwise do. MongoDB et al were created in response to the limitations of relational databases and are solving for different cases.
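The point above, that pre-relational applications are "just code that performs what a query might otherwise do", can be illustrated with a small sketch. The record layout and names here are purely hypothetical:

```python
# A record-oriented store: flat records with no relational engine.
# In a pre-relational system, the query logic below lives in the
# application, not in a DBMS. All field names are illustrative.
records = [
    {"id": 1, "customer": "ACME",   "balance": 250},
    {"id": 2, "customer": "Globex", "balance": 0},
    {"id": 3, "customer": "ACME",   "balance": 75},
]

# A relational system would express this roughly as:
#   SELECT SUM(balance) FROM accounts WHERE customer = 'ACME';
# The pre-relational application does it by hand with a sequential scan:
total = 0
for rec in records:
    if rec["customer"] == "ACME":
        total += rec["balance"]

print(total)  # 325
```

Constraint enforcement (say, that `id` is unique) would likewise be application code rather than a schema declaration, which is exactly the division of labor the relational model moved into the database itself.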
@roadriverrail @darius My level of respect for NoSQL databases is on par with that for relational systems. As you say, they solve different problems, and navigational databases are just as useful today as they've always been, as long as your problem domain remains the same as it's always been. Don't fix what isn't broken. (cf. most airline reservation systems do NOT rely on relational databases, IIRC.)
@roadriverrail @darius That we're seeing the "pendulum swing back towards the mainframe" (to paraphrase an earlier message) indicates only that the problems we're solving today are the same problems which mainframe-era programmers were facing. Maybe at a larger scale, now that we've finally realized GE's dream of utility-based computing (e.g., AWS), but structurally no different from a System/370 with 4 CPUs and 16 hard drives installed.
@roadriverrail @darius I've also studied the IBM mainframe family, but more from the hardware and disk storage side of things. Mainly out of curiosity, as so many decisions made by IBM influenced contemporary computing on levels most people don't realize or recognize. As someone building his own computer entirely from scratch, I like studying this history, as it can help me make better design decisions, experiment with alternatives, etc.
@vertigo @darius Sure, but so are relational databases. The term I was reaching for was "navigational database", which isn't something I'd encountered, even in my DBMS implementation classes. Overall, the point wasn't even that this is "bad", but that IBM's waning influence is, among other things, attributable to a highly insular technology stack.
@roadriverrail @darius I think "navigational" refers to key/value and object stores (e.g., modern object-oriented databases use a storage format not that different from VSAM, IIRC), and is named for the way the application is responsible for navigating the organization of the data (typically a tree or other directed graph).
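The navigational style described above, where records hold explicit links and the application walks them itself, can be sketched like this. The key scheme and field names are invented for illustration, not taken from any particular DBMS:

```python
# Navigational access: each record stores pointers (here, child keys),
# and the application follows them directly instead of issuing a query.
# Keys and structure are purely illustrative.
db = {
    "dept:eng": {"name": "Engineering", "children": ["emp:1", "emp:2"]},
    "emp:1":    {"name": "Ada",         "children": []},
    "emp:2":    {"name": "Grace",       "children": []},
}

def names_under(key):
    """Depth-first walk following the stored pointers."""
    rec = db[key]
    result = [rec["name"]]
    for child in rec["children"]:
        result.extend(names_under(child))
    return result

print(names_under("dept:eng"))  # ['Engineering', 'Ada', 'Grace']
```

The access path is baked into the stored links; asking a different question (say, "which department is Grace in?") requires either maintaining back-pointers or scanning everything, which is precisely the limitation Codd's relational model was designed to remove.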
@vertigo @darius Yes, what I was describing was a record-oriented data store where all constraint-enforcement and query logic were directed by application software. According to Wikipedia, it's not strictly key-value or object store.