ANYWAY the goal of all this was to make poem things with the word image GANs I trained for NaNoGenMo last year—here's a result from the first working implementation! (now the question is: do I go easy on myself and generate a book with one poem per page, or do I let the poems stretch across pages, or do I try to group shorter poems on the same page, etc. etc. etc.)
something that I don't see discussed too much in the push to get off of Electron/web technologies is the impact that has on displaying non-latin text. if you are writing your own text renderer you are almost guaranteed to get it wrong when it comes to languages you are not familiar with. arabic is particularly badly served in this regard and I collect the many fuck-ups on notarabic.com
for all of its faults (and there are many!) the web stack is still one of the best ways to not mess up text...
hey fediverse, one of my (computer-generated) poems is on the cover of BOMB Magazine's Winter 2021 issue! there are also several new poems from the Compasses series inside https://bombmagazine.org/issues/154/
okay I guess today is a day where I'm just going through a bunch of nlproc papers?? I love this example output from "Controlled Affective Text Generation" (columns from left to right: text completion prompt for the language model; desired topic; desired emotion; "knob" value adjusting emotional intensity; text output) paper link: https://wordplay-workshop.github.io/modern/assets/pdfs/8.pdf
"common sense" AI, gender, ethics
original paper here: https://arxiv.org/abs/1906.05317 it really sucks that this paper has pretty much no discussion of how the use of a pre-trained language model increases the potential for harm—as just one example, the ConceptNet node for "transgender" is likely curated and more or less informative http://conceptnet.io/c/en/transgender but putting "transgender" into COMeT... basically spits out a bunch of harmful stereotypes
"common sense" AI, alcohol mention
it's like the AI says: I'm hot, drunk, cool, dead and cute and I need you to drink beer and study hard
"common sense" AI, alcohol mention
playing around with COMeT (language model trained on knowledge base tuples from ConceptNet) https://mosaickg.apps.allenai.org/comet_conceptnet/
it does get stuck with repetitive words a lot, e.g.
more fun with DistilBERT: iteratively expanding a sentence by inserting a mask token *between* random adjacent tokens and predicting the most likely token for that position with the model
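the procedure, stripped of the model call, looks something like this sketch — `masked_fill` is a stand-in for DistilBERT's fill-mask prediction (the real thing would run the masked sequence through something like transformers' `pipeline("fill-mask", model="distilbert-base-uncased")` and take the top-scoring token; the function names here are mine, not anything official):

```python
import random

def masked_fill(tokens, mask_index):
    # Stand-in for the DistilBERT fill-mask call: a real version would
    # mask tokens[mask_index], run the model, and return its most
    # likely token for that slot. Here we just echo the previous token
    # so the sketch runs offline.
    return tokens[mask_index - 1]

def expand(tokens, steps, rng=random):
    # Iteratively grow the sentence: pick a random boundary between two
    # adjacent tokens, insert a [MASK] there, then replace the mask
    # with the model's most likely token for that position.
    tokens = list(tokens)
    for _ in range(steps):
        i = rng.randrange(1, len(tokens))  # boundary between tokens i-1 and i
        tokens.insert(i, "[MASK]")
        tokens[i] = masked_fill(tokens, i)
    return tokens

print(" ".join(expand("the poem begins here".split(), steps=4)))
```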
this is a cool site for finding out what features your variable fonts support https://wakamaifondue.com/ easier than what I did, which is dig through the python freetype bindings to find the completely undocumented function that maps to the barely documented "multiple masters" header in freetype or whatever 😎
font is Soulcraft https://www.behance.net/gallery/72595599/Soulcraft-Typeface which I found in the League of Moveable Type newsletter https://www.theleagueofmoveabletype.com/newsletter
with this one the model seems to kinda throw up its hands and be like... "language has repeating patterns in it, right? have some repeating patterns"
doing the same thing but instead of starting with words selected at random from a word list, starting with letters selected at random from the alphabet (plus a few spaces thrown in)
(replacing it with the token whose probability is the highest, if it wasn't clear)
at each iteration, replace the token whose probability is the lowest compared to DistilBERT's prediction for the masked token in the same position
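one iteration of that loop, as a sketch — `score` and `predict` are stand-ins for the two DistilBERT masked-LM calls (the model's probability for the token currently in a position, and its top fill-mask prediction for that position); the toy versions below are just so the sketch runs without the model:

```python
def lowest_prob_replace(tokens, score, predict):
    # Find the position whose current token the model considers least
    # likely, then swap in the model's top prediction for a mask at
    # that position.
    worst = min(range(len(tokens)), key=lambda i: score(tokens, i))
    out = list(tokens)
    out[worst] = predict(tokens, worst)
    return out

# Toy stand-ins for the DistilBERT calls: score a token by its length,
# always predict "the" for a masked slot.
toy_score = lambda tokens, i: len(tokens[i])
toy_predict = lambda tokens, i: "the"
print(lowest_prob_replace(["a", "quick", "brown", "fox"], toy_score, toy_predict))
```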
okay haha it's much better at this if I include the begin string/end string tokens. (first line is randomly selected words from a word list; in each line one word is replaced by DistilBERT's prediction)
same thing but starting with the word "hello" repeated twelve times. (also the procedure won't pick the word originally found at the random index, even if it would otherwise be the most likely token based on the model)
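that exclusion rule ("won't pick the word originally found at the random index") might look like this — `top_predictions` stands in for DistilBERT's ranked fill-mask candidates for the masked position, and the function name is mine:

```python
def replace_excluding_original(tokens, index, top_predictions):
    # Swap in the model's most likely token for a masked position,
    # skipping the token originally there even if the model ranks it
    # first. top_predictions is assumed to be the model's candidates
    # in descending probability order.
    original = tokens[index]
    choice = next(p for p in top_predictions if p != original)
    out = list(tokens)
    out[index] = choice
    return out

print(replace_excluding_original(["hello", "hello", "hello"], 1, ["hello", "world"]))
```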
Poet, programmer, game designer, computational creativity researcher. Assistant Arts Professor at NYU ITP. she/her