
ANYWAY the goal of all this was to make poem things with the word image GANs I trained for NaNoGenMo last year—here's a result from the first working implementation! (now the question is: do I go easy on myself and generate a book with one poem per page, or do I let the poems stretch across pages, or do I try to group shorter poems on the same page, etc. etc. etc.)

allison boosted

something that I don't see discussed too much in the push to get off of Electron/web technologies is the impact that has on displaying non-latin text. if you are writing your own text renderer you are almost guaranteed to get it wrong when it comes to languages you are not familiar with. arabic is particularly badly served in this regard and I collect the many fuck-ups on notarabic.com

for all of its faults (and there are many!) the web stack is still one of the best ways to not mess up text...
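
a minimal sketch of the failure mode in Python — the arabic_reshaper and python-bidi packages here are one common fix, my choice rather than anything the post itself recommends:

# a sketch (not from the post): why drawing Arabic codepoints one at a
# time, left to right, goes wrong, and one common fix using the
# third-party arabic_reshaper and python-bidi packages
import arabic_reshaper                   # pip install arabic-reshaper
from bidi.algorithm import get_display   # pip install python-bidi

text = "مرحبا بالعالم"  # "hello, world"

# a naive renderer that draws isolated codepoints left-to-right loses
# both the cursive letter joining and the right-to-left order
naive = " ".join(text)

# proper handling: substitute contextual (joined) glyph forms, then
# reorder the line for display with the Unicode bidi algorithm
shaped = arabic_reshaper.reshape(text)
display = get_display(shaped)

print("naive: ", naive)
print("shaped:", display)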

hey fediverse, one of my (computer-generated) poems is on the cover of BOMB Magazine's Winter 2021 issue! there are also several new poems from the Compasses series inside bombmagazine.org/issues/154/

okay I guess today is a day where I'm just going through a bunch of nlproc papers?? I love this example output from "Controlled Affective Text Generation" (columns from left to right: text completion prompt for the language model; desired topic; desired emotion; "knob" value adjusting emotional intensity; text output) paper link: wordplay-workshop.github.io/mo

"common sense" AI, gender, ethics 

original paper here: arxiv.org/abs/1906.05317 it really sucks that this paper has pretty much no discussion of how the use of a pre-trained language model increases the potential for harm—as just one example, the ConceptNet node for "transgender" is likely curated and more or less informative conceptnet.io/c/en/transgender but putting "transgender" into COMeT... basically spits out a bunch of harmful stereotypes


"common sense" AI, alcohol mention 

it's like the AI says: I'm hot, drunk, cool, dead and cute and I need you to drink beer and study hard


"common sense" AI, alcohol mention 

playing around with COMeT (language model trained on knowledge base tuples from ConceptNet) mosaickg.apps.allenai.org/come

allison boosted

I've written a Q&A about Amazing Quest, my controversial (or maybe just despised?) #C64 BASIC game which was just rated 98th out of 103 in the Interactive Fiction Competition

https://nickm.com/post/2020/12/amazing-quest-qa/

more fun with DistilBERT: iteratively expanding a sentence by inserting a mask token *between* random adjacent tokens and predicting the most likely token for that position with the model
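
roughly what that expansion loop might look like with the Hugging Face fill-mask pipeline — a sketch under assumptions, since the post doesn't include code:

import random
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-uncased")
mask = fill.tokenizer.mask_token  # "[MASK]"

words = "the cat sat on the mat".split()
for _ in range(10):
    # pick a boundary between two adjacent words and insert a mask there
    i = random.randint(1, len(words) - 1)
    masked = words[:i] + [mask] + words[i:]
    # keep the model's single most likely filler for that position
    best = fill(" ".join(masked))[0]["token_str"]
    words = words[:i] + [best] + words[i:]
    print(" ".join(words))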


this is a cool site for finding out what features your variable fonts support wakamaifondue.com/ easier than what I did, which was to dig through the python freetype bindings to find the completely undocumented function that maps to the barely documented "multiple masters" header in freetype or whatever 😎
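
for comparison, a sketch of pulling the same axis information in Python with fontTools rather than the freetype bindings (the filename is hypothetical):

from fontTools.ttLib import TTFont

font = TTFont("MyVariableFont.ttf")  # hypothetical filename
if "fvar" in font:  # the variable-font axis table
    for axis in font["fvar"].axes:
        print(axis.axisTag, axis.minValue, axis.defaultValue, axis.maxValue)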


playing around with perlin noise and variable fonts (choppy framerate because this is just a screencap of a browser window and animating variable font axes is apparently pretty slow? also I need to close some tabs haha)

with this one the model seems to kinda throw up its hands and be like... "language has repeating patterns in it, right? have some repeating patterns"


doing the same thing but instead of starting with words selected at random from a word list, starting with letters selected at random from the alphabet (plus a few spaces thrown in)

pretty weird!


(replacing it with the token whose probability is the highest, if it wasn't clear)


at each iteration, replace the token to which DistilBERT assigns the lowest probability when predicting a masked token in the same position
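
a sketch of one such iteration with a plain transformers masked-LM setup — the model size and scoring details are my guesses, not the post's code:

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

def replace_least_likely(text):
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    worst_pos, worst_prob = None, 1.0
    for pos in range(1, len(ids) - 1):  # skip the [CLS]/[SEP] tokens
        masked = ids.clone()
        masked[pos] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        # probability the model assigns to the token actually there
        prob = logits.softmax(-1)[ids[pos]].item()
        if prob < worst_prob:
            worst_pos, worst_prob = pos, prob
    # re-mask the worst-scoring position and take the top prediction
    ids[worst_pos] = tok.mask_token_id
    with torch.no_grad():
        logits = model(ids.unsqueeze(0)).logits[0, worst_pos]
    ids[worst_pos] = logits.argmax().item()
    return tok.decode(ids[1:-1])

print(replace_least_likely("the weather in london is nice today"))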


okay haha it's much better at this if I include the begin-string/end-string tokens. (first line is randomly selected words from a word list; in each line one word is replaced by DistilBERT's prediction)


same thing but starting with the word "hello" repeated twelve times. (also the procedure won't pick the word originally found at the random index, even if it would otherwise be the most likely token based on the model)
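
the "never pick the original word back" constraint could plausibly be implemented by masking out that token's logit before taking the argmax — a hedged fragment, since the post doesn't show its mechanics:

import torch

def best_replacement(logits, original_id):
    # forbid re-picking the token that was originally at this position
    logits = logits.clone()
    logits[original_id] = float("-inf")
    return logits.argmax().item()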


(1) sample words at random from a word list; (2) replace each word (in random order) with the most likely word for that context using BERT (actually Hugging Face DistilBERT)
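
a minimal sketch of that two-step procedure with the transformers fill-mask pipeline — the word-list path and parameters are hypothetical, and note that the pipeline's tokenizer automatically adds the begin-string/end-string tokens mentioned above:

import random
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-uncased")
mask = fill.tokenizer.mask_token

wordlist = open("words.txt").read().split()  # hypothetical word-list file
words = random.sample(wordlist, 8)           # (1) words at random
print(" ".join(words))

for i in random.sample(range(len(words)), len(words)):  # (2) random order
    masked = words[:i] + [mask] + words[i + 1:]
    words[i] = fill(" ".join(masked))[0]["token_str"]
    print(" ".join(words))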
