Yesterday I learned that virtual machines go all the way back to the 1960s (in this case, multiple computers emulated on a bigger computer, where a given user gets access to the emulated computer but not the main one; it was even called a "virtual machine" at the time)

As I say in the blog post, it's important to remember that most of what we have today in computing we also had in the 60s and 70s. It is just all much cheaper and faster now.

@darius ... and in many cases, worse designed.

If you take a hard look at many of the major installations of Facebook, Google, Amazon or Microsoft, they're basically bad reinventions of the mainframe.

@darius Really, we should be asking ourselves: if we started with a clean slate to design a new mainframe, what would it look like?

I guarantee you, a million network stacks wouldn't be in it.

@darius There was even hardware support for virtualization, support that x86 would later lack ... and finally regain.…

Even the word "hypervisor" was used as early as in 1971:…
/via Hypervisor interface

@clacke yeah, I believe I have seen hypervisor references that predate '71!

@darius "the cloud" is just a new name for what is basically timeshare systems on mainframes.

It demonstrates how effective "clever" (deceptive?) marketing can be. They get people to think they are being leading edge by putting everything "in the cloud" when they are actually regressing back to a 40 to 50 year old paradigm.

@msh Not sure I would necessarily call it a regression. I like shared servers, but the context is extremely different from the old days. I prefer something closer to the old university timeshare model, where there was some notion of community, at least in the early years

@darius @msh Yeah, it's not a regression so much as it is the swing of a pendulum. Mainframes and PCs were both mistakes in different ways, and now we are making fresh mistakes, but also learning from the ones made before (if maybe not as much as we should).

Come on, let us turn this on its head:

Please list any *new* concepts in computer science introduced between July 2009 and today.

I'm actually curious.

I couldn't think of any myself but then again, I'm not a computer scientist so unless it's a major breakthrough I wouldn't have heard of it.

@61 @darius

- Conflict-free Replicated Data Types
- ... um ...

There's a whole bunch of "actual widespread practical application of", though:
- Merkle Trees
  - As a basis for data distribution
  - As a basis for version control
  - As a basis for defining reproducible systems
- Eventual Consistency
- The distribution of execution and operations over several unaffiliated actors, with low to zero organizational and financial overhead
- Continuous Integration + Delivery
- Ubiquitous pocket computers running generic software applications from a federated distribution network
  - This is what has driven the return to mainframes, and what will hopefully drive the return from them as the devices become more powerful
- A CPAN-like package repository in every mainstream language except C derivatives
- "Collaborate" as a serious, go-to contender to "Build" and "Buy"
- Immutability and immutable data structures as a core concept in relatively widespread programming languages
- Cooperative multitasking ("asynchronous programming") as a syntax element in mainstream programming languages
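The first item above, CRDTs, can be illustrated with a minimal sketch: a grow-only counter (G-Counter), one of the simplest state-based CRDTs. The function names here are my own, not from any particular library:

```python
# State-based G-Counter CRDT: each replica increments only its own slot,
# and merging takes the per-replica maximum. Merge is therefore
# commutative, associative, and idempotent, so replicas converge
# regardless of the order in which they exchange state.
def increment(state, replica_id):
    state = dict(state)  # treat state as immutable; return a new copy
    state[replica_id] = state.get(replica_id, 0) + 1
    return state

def merge(a, b):
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state):
    return sum(state.values())

# Two replicas diverge, then converge no matter which way they merge:
a = increment({}, "A")                       # replica A counts once
b = increment(increment({}, "B"), "B")       # replica B counts twice
assert merge(a, b) == merge(b, a)
assert value(merge(a, b)) == 3
```

The "grow-only" restriction is what makes the merge so simple; richer CRDTs (PN-counters, OR-sets) layer bookkeeping on top of this same max-and-merge idea.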

@clacke @61 oh, for sure there are breakthroughs in application, but to my original point: these things existed in the past but were either standalone toys/demos or things that never gained momentum for one reason or another

@darius @61 We agree! My comment is a counterpoint to the "everything new is old" and "we're not making progress" memes out there, which breed a form of apathy. In contrast, I read your OP as a "look at all the cool stuff people were doing already in the 60s".

You or somebody else commented recently that there is a fortune to be made by just plowing through old ACM papers and figuring out which theories have become applicable in the decades since the paper was published.

@61 @darius @msh
I rant all the time about how we still haven't implemented the things we knew in the 60s and 70s, and how CS is all about learning those things and then getting frustrated as you enter industry.

But the flip side of that is that every time decades-old concepts enter industry, that is progress! What is lacking sometimes is a bit of humility on the part of the people who push for the progress, and acknowledgement of the theoretical foundations and earlier achievements that they're building on, knowingly or unknowingly.

In the most direct sense, science doesn't matter compared to engineering, but then again engineering wouldn't progress much without science.

@clacke @61 @darius @msh

We would do well to establish a clear model of what should be done (and why, and how) and seek a clear-sighted investor to let us stride boldly forward into the future.

Who knows, it might even be cool.

@clacke @61 @darius @msh

I should add: I know how to manage projects, and how to write business plans.

However, my view of the industry is that the percentage of people that actually cares about this stuff is amazingly low.

@jankoekepan @61 @darius @msh I want a T-shirt that says "Turing and Church are my Idea Guys".

I guess on the back of the T-shirt: "(also McCarthy, Hoare, Dijkstra, Knuth ...)"

@darius IBM actually did a lot of amazing research and implementation work in the 1960s, and the System/36 and System/360 series machines were very impressive. I recall that at my first coding job, half the department was Java coders and the other half were RPG coders for the IBM AS/400. The latter scoffed at Java bytecode as "nothing new", since IBM had had a VM ensuring hardware independence of software for decades.

@roadriverrail @darius The half century of binary compatibility across hardware architectures is really something remarkable.

@roadriverrail Yeah, the more I learn about the 360 series, the more impressed I am! It also just seems like it was significantly harder to use than the DEC machines.

@darius IBM's biggest wound was its incredible insularity. They were a de facto monopoly and thus wrote their own standards, and continued to keep them for far too long. Classic IBM engineers can recite the catalog numbers for parts and coded in languages often not transferable to other systems. It's a lot like MSFT coders from the mid-to-late 90s. As late as 2001, I was still writing EBCDIC translations for basic strings from an IBM database.
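The kind of EBCDIC translation mentioned above can be sketched in modern Python using the standard library's cp037 codec (code page 037 is one of several EBCDIC variants; the actual systems in question may have used a different page):

```python
# Round-trip a string between Unicode and EBCDIC code page 037.
text = "HELLO WORLD"
ebcdic_bytes = text.encode("cp037")          # Unicode -> EBCDIC bytes
assert ebcdic_bytes.decode("cp037") == text  # and back again

# EBCDIC shares no layout with ASCII: 'A' is 0xC1, not 0x41, so
# reinterpreting the raw bytes as an ASCII-compatible encoding
# yields garbage -- hence the explicit translation layers.
assert "A".encode("cp037") == b"\xc1"
print(ebcdic_bytes.decode("latin-1"))  # mojibake, not "HELLO WORLD"
```

Today the codec does the whole job in one call; in 2001, against an AS/400 database, that mapping was often hand-rolled per interface.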

@roadriverrail Great context, thank you for this. I was sort of idly wondering how much EBCDIC work is still out there for legacy systems.

@darius I left that world in 2003, but at the time, interfacing the AS400 exposed me to a world without Unicode, without even ASCII, and where database systems were of a pre-relational design. I suspect there's a lot of that still out there, given record-keeping requirements for certain industries.

@roadriverrail Yeah! I've been learning about the invention of the database (full stop) and the invention of the relational database 10-15 years later.

The era I'm working in right now, 1971, is right in the middle of the two-year period when Edgar Codd of IBM was inventing the relational database. (Or "data base", as they called it back then.)

@darius This has to be some fascinating reading and study you're doing. Are you documenting this anywhere? I'd love to follow along. I'm a systems hacker; DBMS implementation is one of my secondary loves.

@roadriverrail It's really just side research that I'm doing in order to understand the status quo and the "feeling in the air" at the time the original RFC documents were being written. I try to get as much context as I can and provide it to my readers, so the DB stuff comes from reading I did as background for this blog post

@roadriverrail Who knows, it may eventually turn into a bigger project, but not until I wrap this RFC work at the end of 2019

@roadriverrail if you're not already subscribed to @365-rfcs, feel free to follow along there, I tend to cover stuff aside from RFCs themselves

@darius I really find understanding the historical context is essential for understanding just about anything in engineering and often for mathematics, too, so ((hat tip)) for taking the side research!

@roadriverrail More likely than a project about databases, I'm probably going to do something about RAND's contributions to computing; I pored over hundreds of pages of internal memos while I was at the Charles Babbage Institute last month.

@roadriverrail @darius Even for contemporary applications. The vast majority of these large timesharing ... err ... cloud-based computing systems need things called "document databases", such as MongoDB, CouchDB, ZooKeeper, and so forth. All of them have a navigational/pre-relational design. Key/value stores aren't anything new either. :)

@vertigo @darius Because I have a lot of respect for the "NoSQL" world, I use the term "pre-relational" to talk about record-oriented databases that had no capacity for relational constraints or queries; indeed, pre-relational database applications are often just code that performs what a query might otherwise do. MongoDB et al were created in response to the limitations of relational databases and are solving for different cases.

@roadriverrail @darius My level of respect for NoSQL databases is on par with that for relational systems. As you say, they solve different problems, and navigational databases are just as useful today as they've always been, as long as your problem domain remains the same as it's always been. Don't fix what isn't broken. (cf. most airline reservation systems do NOT rely on relational databases, IIRC.)

@roadriverrail @darius The fact remains, though, that most NoSQL systems are based on data structures which predate relational databases.

@roadriverrail @darius That we're seeing the "pendulum swing back towards the mainframe" (to paraphrase an earlier message) indicates only that the problems we're solving today are the same problems which mainframe-era programmers were facing. Maybe at a larger scale, now that we've finally realized GE's dream of utility-based computing (e.g., AWS), but structurally no different from a System/370 with 4 CPUs and 16 hard drives installed.

@roadriverrail @darius Speaking of GE and utility-scale computing, I find it mildly amusing that GE and Multics failed at their desires, while those sporting its informally specified progeny (Unics, later Unix) ended up independently realizing the dream.

@vertigo @roadriverrail I imagine Multicians would grumble that the dream would have been realized far faster by them (rightly or not)

@darius @vertigo
((pats gently on copy of "The UNIX Hater's Handbook"))

@darius @roadriverrail Part of me regrets that Unix won, as Multics was definitely a technologically superior architecture, sharing many attributes of the AS/400 and its support for orthogonal persistence, albeit at the process level.

@darius @roadriverrail But, I digress. Back to my regularly scheduled day-gig. ;)

@vertigo @darius Thanks for dropping by the conversation and indirectly showing me a new term!

@roadriverrail @darius I've also studied the IBM mainframe family, but more from the hardware and disk storage side of things. Mainly out of curiosity, as so many decisions made by IBM influenced contemporary computing on levels most people don't realize or recognize. As someone building his own computer entirely from scratch, I like studying this history, as it can help me make better design decisions, experiment with alternatives, etc.

@roadriverrail @darius For me, then, this whole conversation is right up my alley. :D

@vertigo @darius I have the interest, though not the allocatable time, for a full personal implementation of a computer, but the interest alone has been enough to study a lot of the big classic machines. So, I feel you.

@vertigo @darius Sure, but so are relational databases. The term I was reaching for was "navigational database", which isn't something that I've encountered, even in my DBMS implementations classes. Overall, the point wasn't even that this is "bad", but that IBM's waning influence is, among other things, attributable to a highly insular technology stack.

@roadriverrail @darius I think "navigational" refers to key/value and object stores (e.g., modern object-oriented databases use a storage format not that different from VSAM, IIRC), and is named for the way the application is responsible for navigating the organization of the data (typically a tree or other directed graph).

@vertigo @darius Yes, what I was describing was a record-oriented data store where all constraint-enforcement and query logic were directed by application software. According to Wikipedia, it's not strictly key-value or object store.
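A record-oriented store of the kind described above, where query logic lives in the application rather than the database, can be sketched like this (record layout and names are invented for illustration, not taken from any real pre-relational system):

```python
# "Navigational" style: the application holds pointers between records
# and walks them by hand; there is no query engine to do it for us.
records = {
    "DEPT01": {"name": "Accounting", "employees": ["EMP01", "EMP02"]},
    "EMP01": {"name": "Ada", "salary": 9000},
    "EMP02": {"name": "Grace", "salary": 9500},
}

def employees_of(dept_key):
    """What a relational engine would express as a JOIN, done by hand:
    follow the stored child keys from the department record."""
    dept = records[dept_key]
    return [records[k]["name"] for k in dept["employees"]]

def total_payroll(dept_key):
    """Constraint and aggregate logic also lives in application code."""
    return sum(records[k]["salary"] for k in records[dept_key]["employees"])

assert employees_of("DEPT01") == ["Ada", "Grace"]
assert total_payroll("DEPT01") == 18500
```

The trade-off is the one discussed in the thread: fast and simple when access paths match the stored structure, but every new question means new traversal code.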

@darius honestly, there isn't much difference between a VMM/hypervisor and a kernel
Friend Camp