Reading about a Google Research project that generates videos of objects bumping into each other. I thought, cool, this is cute for like, making random weird vids!

Then I read the attached section on why they made it in the first place

Folks, they are trying to make up for the lack of ground truth video of physical object interaction by *simulating those interactions and training the AI on the simulation*

@darius well I'm certainly impressed. This is clever but in a bad way 🤦

@darius on the other hand, if they did it right and set up a Laboratory For Throwing Actual Things At Other Things, I would apply for a Things-Thrower job immediately

@ranjit that's the thing. You could literally happily employ so many people but instead you feed fake data to a machine

@darius i wonder if there are already engineers out there trying to make materials that look and feel more like they do in video games (and other uses of physics simulation engines), so that those materials come across as more "natural" to consumers

@aparrish I recall reading or hearing that sound engineers have already been doing something similar for decades: using certain objects to simulate slap sounds, for instance, because it sounds 'more real' than an actual slap (aside from the issues that come with having to actually slap someone).

@darius What if the real mechanical turk was the biased training set we made along the way?

@darius if it works you get a faster simulator, and you can hopefully use it on top of other techniques (differentiable rendering and/or differentiable physics simulation) to get a model able to work with real data. It's not that stupid, but it has to be used carefully

@Kichae @darius while the "techbro problem" can make this into something stupid (or even dangerous), I don't think it makes the proposal in the OP inherently bad. Well designed models trained with real data can still be dangerous in the hands of stupid or evil people

@Temporalin @darius Sure, but at least with real data you can potentially discover in testing how your assumptions are wrong. With simulated data, your assumptions are built in from the get-go.

Shit like self-driving cars is bad enough for being tested in places with fairly monotonous climates, like Arizona. Having them train in an *idealized* Arizona would be more short-sighted than Mr. Magoo.

It's not about making a better tool for people to use. It's about cutting R&D costs so that venture-capitalists can see returns faster, which is exactly how you fuck shit up.

@Kichae @darius It can also cut costs for low-budget departments in university labs, and it can be useful for creating better simulations for games or VR experiences, where you want some kind of realism but it's not necessary to completely match our reality, for example. Critical settings like you say are of course much more dangerous, so again, it depends on where it's used.

I share your concerns about capitalism ruining things, but that applies to all research in every field, not just this particular example. I personally think that if we were to stop this project for that reason, we should first stop projects on facial recognition or text models like GPT (which worry me more and are trained with real data). Objects hitting objects doesn't seem like the most worrying thing right now. I have a group of friends whose whole startup idea is basically this (generating synthetic datasets).

@darius there was a project called deepdrive which trained self-driving AI on GTA. I think they were collaborating with tesla (lol). Horrifying!

Friend Camp
