allison boosted

twitter, my research, light anti-disinformation work 

I finally wrote up something explaining that Twitter has not allowed people to pick usernames during signup for almost 3 years now, and that a "jsmith12345678" username pretty much just means you're dealing with someone who isn't especially tech-savvy or social-media literate.

tinysubversions.com/notes/twit

allison boosted

Tech fascists 

A big mistake in tech coverage over the past decade has been depicting its figures as dorks, bureaucrats, buffoons: everything but dangerous, powerful, and fully enabling fascism if not totally fascist themselves. I get the sense that media like the HBO show Silicon Valley (which already was a mess) is going to look apologetic in retrospect. The media depiction of Zuckerberg has changed, thankfully, but not quickly enough.

Philosophers on GPT-3: dailynous.com/2020/07/30/philo "Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory.... In contrast, understanding is a lifelong social labor. It’s a sustained project that we carry out daily, as we build, repair and strengthen the ever-shifting bonds of sense that anchor us to the others, things, times and places, that constitute a world."

allison boosted

oh my god check this out

this just might be the coolest thing i have seen in my entire life

(in this case, I'm replacing phrases in sentence templates that match the part of speech of the phrase being replaced in the sentence. the effect of the temperature change is a little subtle because the replacement operation only takes place about half the time, but you can see with the low temperature example you get the same high-probability items over and over again and with the high temperature example you get a lot of weird stuff)
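(a very rough sketch of that idea, simplified to single words instead of phrases, and assuming NLTK's off-the-shelf tagger; the function name, sentence, and candidate words are all made up for illustration:)

```python
# toy version of "replace items in a template when the part of speech matches":
# swap a word for a candidate with the same POS tag, only about half the time
import random
import nltk

nltk.download("averaged_perceptron_tagger", quiet=True)  # tagger data, if not already present

def replace_matching_pos(template, candidates, replace_prob=0.5):
    """Replace template words with candidates that share the same POS tag."""
    tagged_template = nltk.pos_tag(template.split())
    by_tag = {}
    for word, tag in nltk.pos_tag(candidates):
        by_tag.setdefault(tag, []).append(word)
    out = []
    for word, tag in tagged_template:
        if tag in by_tag and random.random() < replace_prob:
            out.append(random.choice(by_tag[tag]))
        else:
            out.append(word)
    return " ".join(out)

print(replace_matching_pos("the ship drifted over the quiet sea",
                           ["vessel", "meadow", "wandered", "bright"]))
```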


highly recommended generative technique: whenever you're selecting an item at random where the items are weighted (e.g. by their frequency in a corpus), normalize the weights and sample it with a softmax function, using temperature as a parameter. (at temp=1.0, it's the same as picking by the weights directly; at <1.0 it favors items that are weighted more heavily; as temperature increases >1.0, the sampling approaches a uniform distribution)
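(a minimal sketch of that sampling procedure in python; the function name and example weights are mine. it treats the temperature softmax as operating on log-weights, i.e. p ∝ w^(1/T), which matches the behavior described above:)

```python
import math
import random

def sample_weighted(items, weights, temperature=1.0):
    """Sample one item, reshaping the weight distribution by temperature.

    Softmax over log-weights: p_i is proportional to w_i ** (1 / temperature).
    At temperature=1.0 this is plain weighted sampling; below 1.0 it favors
    heavily weighted items; as temperature rises it approaches uniform."""
    logits = [math.log(w) / temperature for w in weights]
    peak = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(items, weights=probs, k=1)[0]

# e.g. words weighted by (made-up) corpus frequencies
words = ["the", "cat", "sat", "zephyr"]
counts = [5000, 120, 80, 2]
print(sample_weighted(words, counts, temperature=0.5))   # almost always "the"
print(sample_weighted(words, counts, temperature=10.0))  # much more varied
```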

here are examples from the thing I'm working on

this word frequency python package is handy and convenient: github.com/LuminosoInsight/wor. It includes a nice little word tokenizer and supports a bunch of languages.
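(assuming the truncated link above points to the wordfreq package, basic usage looks something like this:)

```python
# pip install wordfreq
from wordfreq import word_frequency, zipf_frequency, tokenize

print(word_frequency("poetry", "en"))   # estimated frequency per word of running text
print(zipf_frequency("poetry", "en"))   # the same estimate on a log scale
print(tokenize("¿Dónde está la biblioteca?", "es"))  # language-aware tokenization
```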

this newfangled constituency parser is significantly more accurate than the one I had been using, meaning there are fewer outputs that call attention to their patchwork anomalous syntax. I'm using my own ML model to add line breaks to the prose, instead of the ad hoc rulesets I had been using in the previous version, which feels more elegant from a code perspective, and I think the results are "better" in that the enjambment draws less attention to itself.


another example of the output (text only):

In this life,
he finds fields to be easterly.

He has been accounted
for in the following way
too ambitious to fall-centred,
but yet too intent to be
sympathetic.

To be forced up
high above the ordinary sea,
and the vessel, by

preparation, was mercifully
delivered upon one
occasion.


gut reno of the ephemerides code yields... pretty much the same kinds of output as the original. oh well.

allison boosted

In today's disturbing news, I discovered that GPT-3 can write AI Weirdness blog posts.

1st paragraph is my prompt. The rest is GPT-3 simulating the fumbling of a much less powerful neural net.
https://aiweirdness.com/post/626712039215202304/ai-ai-weirdness

allison boosted

birdsite, language models 

I posted a thread to twitter a few weeks back with thoughts on poetry, voice and pretrained language models that I never crossposted here. here it is: twitter.com/aparrish/status/12

the only part I'm 100% sure about is the last tweet: "maybe my reticence to make use of these models is just as much a function of how dreary and distressing it seems to co-write poetry with twelve years of chewed up internet, regardless of how powerful the language model might be. shrug, the end"

includes long stretches where I pretend to know anything about classics!


hi fediverse, you could read this essay I wrote in which I try to explain why automatic writing (e.g. Breton, Fox Sisters) and computer-generated writing often resemble one another in form and use, despite the (apparent) differences in how they're authored (with particular reference to Jenna Sutela's work): serpentinegalleries.org/art-an

(The Ephemerides btw is a poem generator I made a while ago that I used to make a twitter/tumblr bot the-ephemerides.tumblr.com/)

(it stopped working a while back because the OPUS API changed and I never bothered to fix my wrapper code. I'm making some zines with it for a show in a few weeks and decided to take this opportunity to revisit some of the inner workings)


(using AllenNLP to do constituency parsing instead of Pattern, and using my stichography model to add line breaks instead of the weird ad-hoc system of rules I had originally programmed for this purpose. right now constituents are just being swapped in at random, as in the original code, but I'm thinking about weighting the choices by semantic similarity to what is being replaced, and/or by phonetic similarity to previous parts of the poem)
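(as a toy sketch of the "swap constituents at random" step, here's the idea using nltk.Tree to read bracketed parses, the sort of tree a constituency parser produces; the sentences and helper name below are made up, and the real code works on the parser's output directly:)

```python
# collect subtrees from a pool of parsed sentences and splice one into a
# template parse wherever the phrase labels match
import random
from nltk import Tree

template = Tree.fromstring(
    "(S (NP (DT the) (NN vessel)) (VP (VBD drifted) (PP (IN over) (NP (DT the) (NN sea)))))")
pool = Tree.fromstring(
    "(S (NP (DT an) (JJ easterly) (NN field)) (VP (VBZ waits)))")

def swap_random_constituent(template_tree, pool_tree, label="NP"):
    """Replace one randomly chosen constituent with a pool constituent of the same label."""
    targets = [pos for pos in template_tree.treepositions()
               if isinstance(template_tree[pos], Tree)
               and template_tree[pos].label() == label]
    donors = [t for t in pool_tree.subtrees() if t.label() == label]
    if targets and donors:
        template_tree[random.choice(targets)] = random.choice(donors).copy(deep=True)
    return template_tree

print(" ".join(swap_random_constituent(template, pool).leaves()))
# e.g. "an easterly field drifted over the sea"
```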


oh wait, apparently with constituency parsing, you just sorta... take your best guess about what the head is. e.g. pattern: github.com/clips/pattern/blob/. Even the latest edition of Jurafsky's _Speech and Language Processing_ just says to use a rule-driven approach found in a paper from 1999: mitpressjournals.org/doi/abs/1
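(for what it's worth, that rule-driven approach amounts to a small table of per-label heuristics; here's a toy illustration, where the rule table is invented for the example and is not the actual table from that paper:)

```python
# toy rule-driven head finding over a constituency parse: each phrase label
# gets a search direction and an ordered list of preferred child labels
HEAD_RULES = {
    "NP": ("right-to-left", ["NN", "NNS", "NNP", "NNPS", "NP"]),
    "VP": ("left-to-right", ["VBD", "VBZ", "VBP", "VB", "VBN", "VBG", "VP"]),
    "PP": ("left-to-right", ["IN", "TO", "PP"]),
}

def find_head(label, child_labels):
    """Return the index of the guessed head child, falling back to an edge child."""
    direction, preferences = HEAD_RULES.get(label, ("left-to-right", []))
    order = list(range(len(child_labels)))
    if direction == "right-to-left":
        order.reverse()
    for preferred in preferences:
        for i in order:
            if child_labels[i] == preferred:
                return i
    return order[0] if order else None  # just take your best guess

print(find_head("NP", ["DT", "JJ", "NN"]))  # -> 2 (the rightmost noun)
print(find_head("PP", ["IN", "NP"]))        # -> 0 (the preposition)
```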
