as someone who grew up in that magical and apparently historically aberrant period where mostly standalone, usually non-networked personal computers were the norm, I'm always weirded out by the term "edge computing," e.g. https://os.mbed.com/blog/entry/uTensor-and-Tensor-Flow-Announcement/ — desktop computers are hardly ever mentioned as "edge" devices in this context, but most of them effectively are (lacking the GPUs necessary to train ML models). in this metaphor, using your own device to do "computing" now is literally an "edge" case
(likewise, initiatives like uTensor, TensorFlow Lite, etc. seem to almost always be about easing "deployment" to devices in order to make "inference" faster; they're never about making it faster/easier to train your own models. anecdotally it feels like over the past 3–4 years, people have stopped even pretending that training an ML model can be done by someone without specialized training and specialized hardware: "democratizing" demos/tutorials are all about using pre-trained models now)
(I need to develop this argument more and find actual evidence to back up words like "never" and "all." but I do really feel like this technology in particular is in the process of being "locked up" in very material ways)
@aparrish scary af, honestly.
Found this out last night, that my phone is part of the monolith and my home computer is "edge"
@aparrish this is an entire area i've kind of… avoided thinking about, if i'm honest, which isn't terribly professional of me. but this rings true.
the complexity ratchet that drives a lot of the centralization of services / infrastructure / platforms seems like it's accelerated by giant blobs of ML that're (in practice) the domain of massively resourced teams at megacorps with unfettered access to data and effectively infinite hardware budgets.
@aparrish I wonder about this too but I take some solace in the fact that most ML code I see seems so brittle that it would surprise me if it still runs 6 months later.
@KnowPresent lately I've felt like the intense rate of change in the field is—whether intentional or not—engineered specifically to encourage centralization of resources and technology. ("can't figure out how to install the latest versions? just buy cloud tpu time from us instead!")
@aparrish Yeah I feel like this is another extension of the DevOps/CI/CD/Containerization philosophy which seems to be greatly motivated by a fear of being too close to hardware.
@aparrish hoping it’s ok if i bug you about this exact topic as i’m literally on day one of switching over to ML. (and come from the comparatively egalitarian land of web dev)
Some folks *were* trying to push for thin clients in the period from the late 80s to ~2000 -- notably Sun. I don't think it caught on until relatively high-speed wifi became common. Netbooks & later chromebooks are probably the harbingers of the transition back to thin clients (now over web instead of X forwarding).
I think there's a philosophical reason that folks stuck to it as long as they did, though. Or, a cultural one, at least. Over at PARC there was this association between time sharing & government-funded top-down-organized networks, & PARC did a lot to promote the idea that 'personal computing = personal freedom', which caught on with the computer hobbyist crowd.
Apparently this caused a bunch of friction with PLATO & with the Dartmouth folks -- i.e., time-sharing folks who nevertheless were very interested in computing-for-the-people & accessibility. Centralized control really *was* a problem (& still is to some degree) but it was a long time before standalone machines could get as friendly as PLATO.
@freakazoid @aparrish @enkiv2
That book is one of my sources for this. Highly recommend it. (I recommend A People's History of Computing to a substantially lesser degree, unfortunately: it has a lot of good information that was unfamiliar to me, but desperately needed someone to reorganize the macro-level structure...)
@enkiv2 I feel like the current situation is the worst of all possible worlds—where we're essentially just all carrying around disposable subsidized supercomputers that we don't have control over but that big corporations can use to track us, pester us, and temporarily offload their "compute" onto when it's convenient for them
@aparrish Oh, that's a beautifully concise description of our wretched state of affairs.
I truly think one of the most personally liberating computing technologies is thin-client tech, for our personal systems. Not a month goes by that I don't wish Miracast had persisted & flourished, enabling us to work across personal devices.
& we lack good ways to cohesively, personally run *multiple* devices.
If anything I think the big challenge of AI is wading through the information glut of open amazing frameworks, tools, papers, models, &c offerings out there to know what to use. The gap to adoption is that so few know what to do when so much is dropped in front of them.
@jauntywunderkind420 true, it's just that I wonder whether or not that glut itself is a (perhaps unintentional) tool for enforcing centralization
@aparrish a significant reason not to deploy and run the "ML bits" of an application locally is that, depending on how a trained model is applied, it may be important to keep it behind lock and key (e.g., behind a rate-limited, login-gated API) so that bad actors can't reappropriate it for their own use or probe for weaknesses as easily. Until what we're calling machine learning gets a lot more robust, that's going to continue to be an issue.
@aparrish it's so strange
@aparrish there seems to be room, here, to reclaim some "core" autonomy for individuals, possibly via the "owncloud" concept space? Currently that branding is focused on the "a place to upload your files" part of the cloud experience, but I think there's conceptual room for it to extend to the "cloud is where the heavy compute happens" part as well.
@eqe what we need are neighborhood gpu co-ops
@aparrish I was always a little disappointed that Noisebridge didn't have much interest in collaborative rack deployment
Otoh that would have ended up with a bunch of bitcoin bros mining on the shared hardware
@aparrish I have been thinking about this a lot lately, as most of my ML training assumes a cloud/managed/prefab model with a handful of proof of concept type examples.
I start at my new job this week, and it's at an ISP, so I'm wondering what kinds of physical resources they have that would otherwise be cloud-type.
@aparrish "if it's not machine learning, it's not real computing"
@aparrish Agreed it's sort of a weird term, but I see it as part of a longer cycle where computing power is periodically concentrated, accessed with dumb/thin devices (terminals the first time, web browsers and thin apps the next), then pushed out to "edge nodes" (aka PCs, now mobile devices). We're on at least the second full cycle now, it seems.