highly recommended generative technique: whenever you're selecting an item at random where the items are weighted (e.g. by their frequency in a corpus), pass the log of the weights through a softmax with temperature as a parameter and sample from the result (at temp=1.0, it's the same as picking by the weights directly; at <1.0 it favors the more heavily weighted items; as temperature increases past 1.0, the sampling approaches a uniform distribution)
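
a minimal sketch of that sampling trick, assuming numpy (the example words and counts are made up):

```python
import numpy as np

def sample_with_temperature(items, weights, temperature=1.0):
    """Pick one item; temperature reshapes the weight distribution."""
    # softmax over log-weights scaled by temperature: at temperature=1.0 the
    # probabilities are just the normalized weights; lower temperatures sharpen
    # the distribution, higher temperatures flatten it toward uniform
    logits = np.log(np.asarray(weights, dtype=float)) / temperature
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return np.random.choice(items, p=probs)

words = ["the", "of", "hyacinth", "mirror"]
counts = [500, 300, 2, 1]
print(sample_with_temperature(words, counts, temperature=0.5))  # favors frequent words
print(sample_with_temperature(words, counts, temperature=2.0))  # closer to uniform
```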

here are examples from the thing I'm working on

gut reno of the ephemerides code yields... pretty much the same kinds of output as the original. oh well.

at times Sara Ahmed's _Queer Phenomenology_ reads like Inform 7 source code

making a markov chain model by hand (this is about... eleven words out of a forty-word poem)
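
a minimal sketch of the same kind of word-level markov chain, built in code rather than by hand (the stand-in text is made up):

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it in the text."""
    chain = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word].append(next_word)
    return chain

def generate(chain, start, length=10):
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(random.choice(followers))
    return " ".join(out)

poem = "the mirror in the hall reflects the hall and the mirror".split()
print(generate(build_chain(poem), "the"))
```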

helpful social media tips I found on everyword's twitter analytics page

beautiful Mirtha Dermisache exhibition catalog available for free download, for the asemic writing fans on here: malba.org.ar/catalogo-mirtha-d ("Descargar PDF" on the right side of the page to download) (most of it is essays and stuff in Spanish, but there's a big chunk of reproductions of Dermisache's work in the middle)

obligatory tag yourself/celestial emporium of benevolent knowledge joke

generating sorta... pseudo-villanelle-like cut-up poems, where the lines are drawn at random from versified wikipedia pages related to "legibility," and instead of finding rhyming lines, I'm matching them up by semantic or phonetic similarity
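
a minimal sketch of the matching step, with a crude bag-of-words cosine standing in for whatever semantic or phonetic similarity measure is actually in use (the corpus lines are made up):

```python
import math
import random
from collections import Counter

def bag_of_words(line):
    return Counter(line.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(target, candidates):
    """Return the candidate line closest to the target under the similarity measure."""
    target_vec = bag_of_words(target)
    return max(candidates, key=lambda line: cosine(target_vec, bag_of_words(line)))

corpus = [
    "legibility is the ease with which a reader",
    "can decode symbols arranged on a page",
    "the reader decodes each symbol in turn",
    "arranged in lines across the printed page",
]
line = random.choice(corpus)
partner = most_similar(line, [c for c in corpus if c != line])
print(line)
print(partner)
```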

another day, another VAE (nonsense words) 

I got very helpful advice today on this, which is that the distribution the VAE learns might not be centered at zero—after averaging together the latent vectors from a few thousand items from the data set and using *that* as the center of the distribution, I get much better results when sampling!
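
a minimal sketch of that centering trick, with random stand-ins for the trained encoder and decoder so it runs on its own:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16

def encode(item_vector):
    # stand-in for the trained VAE encoder: a fixed random projection
    return item_vector @ PROJECTION

def decode(latent_vector):
    # stand-in for the trained VAE decoder
    return latent_vector @ PROJECTION.T

PROJECTION = rng.normal(size=(64, LATENT_DIM))
dataset = rng.normal(loc=2.0, size=(5000, 64))  # made-up "few thousand items"

latents = np.array([encode(item) for item in dataset])
center = latents.mean(axis=0)  # the learned distribution's actual center

# sample around the empirical center instead of around zero
sample = center + rng.normal(scale=latents.std(axis=0))
print(decode(sample).shape)
```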

auto-enjambed lines from wikipedia pages related to mirrors, arranged in chains of semantic similarity (given lines n and n+1 in the corpus, print n and another line similar in meaning to n+1, then repeat with the last line printed)
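
a minimal sketch of that chaining procedure, using difflib's string similarity as a crude stand-in for the semantic measure (the corpus is made up):

```python
import difflib

corpus = [
    "a mirror is an object that reflects an image",
    "light that bounces off a mirror shows an image",
    "the image appears reversed from left to right",
    "objects appear reversed when seen in a mirror",
    "a polished surface can serve as a mirror",
]

def most_similar(target, candidates):
    return max(candidates, key=lambda c: difflib.SequenceMatcher(None, target, c).ratio())

def chain(corpus, start_index=0, steps=4):
    printed = [corpus[start_index]]
    index = start_index
    for _ in range(steps):
        if index + 1 >= len(corpus):
            break  # no following line to match against
        following = corpus[index + 1]
        candidates = [c for c in corpus if c not in printed and c != following]
        if not candidates:
            break
        nearest = most_similar(following, candidates)  # another line similar to line n+1
        printed.append(nearest)
        index = corpus.index(nearest)  # continue the chain from the line just printed
    return printed

print("\n".join(chain(corpus)))
```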

randomly selected items from a corpus of auto-enjambed lines, drawn from the text of wikipedia pages linked from "Mirror"

found haiku in Frankenstein. for a long time I have had a blanket ban on haiku generators in my classes, because what more is there to say about computer-generated haiku that wasn't already said 53 years ago rwet.decontextualize.com/pdfs/ but... I had never actually programmed the "find haiku in an existing text" thing before. I did have fun and learned a bit making it, whoops
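
a minimal sketch of "find haiku in an existing text": scan for consecutive runs of words whose syllable counts fall into 5/7/5. this assumes the pronouncing library for syllable counts (not necessarily the approach used above), and words missing from the CMU dictionary just cause that candidate to be skipped:

```python
import re
import pronouncing

def syllables(word):
    phones = pronouncing.phones_for_word(word.lower())
    return pronouncing.syllable_count(phones[0]) if phones else None

def take_line(words, start, target):
    """Greedily take words until exactly `target` syllables, or give up."""
    count, i, line = 0, start, []
    while i < len(words) and count < target:
        syl = syllables(words[i])
        if syl is None:
            return None, start  # unknown word: give up on this candidate
        count += syl
        line.append(words[i])
        i += 1
    return (" ".join(line), i) if count == target else (None, start)

def find_haiku(text):
    words = re.findall(r"[a-zA-Z']+", text)
    haiku = []
    for start in range(len(words)):
        lines, i = [], start
        for target in (5, 7, 5):
            line, i = take_line(words, i, target)
            if line is None:
                break
            lines.append(line)
        if len(lines) == 3:
            haiku.append("\n".join(lines))
    return haiku

sample = "I beheld the wretch. The miserable monster whom I had created."
for found in find_haiku(sample):
    print(found, end="\n\n")
```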

whoops I left the debug thing in where I printed out any words that weren't in the expected length limits

the way I've been reviewing the output of this model is looking at the probability scores juxtaposed with the words, one by one, and checking for the highest scores (higher score = greater probability that a line break will directly follow this word). anyway, now I'm having a hard time not reading "Stopping By Woods on a Snowy Evening" in the Beastie Boys style, with everyone shouting out the end rhymes

training a quick neural network to predict where to add poetic line breaks in text, based on a large corpus of public domain poetry and taking into account phonetics and semantics. the goal is to be able to enjamb prose passages in a somewhat principled way—after just a handful of epochs, here's what it does to a passage on hyacinths from wikipedia:
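
a minimal sketch of that framing (not the model's output): treat enjambment as a per-word binary decision, "does a line break follow this word?". the architecture here is an assumption, and the phonetic and semantic features are replaced by random placeholder vectors so the sketch runs on its own:

```python
import torch
import torch.nn as nn

FEATURE_DIM = 32   # stand-in for concatenated phonetic + semantic features
HIDDEN_DIM = 64

class LineBreakTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(FEATURE_DIM, HIDDEN_DIM, batch_first=True, bidirectional=True)
        self.out = nn.Linear(HIDDEN_DIM * 2, 1)

    def forward(self, word_features):
        hidden, _ = self.rnn(word_features)
        return torch.sigmoid(self.out(hidden)).squeeze(-1)  # per-word break probability

model = LineBreakTagger()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# made-up batch: 8 "poems" of 20 words each, with gold line-break labels
features = torch.randn(8, 20, FEATURE_DIM)
labels = (torch.rand(8, 20) < 0.2).float()

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# at inference time, break the line after any word whose probability clears a threshold
print((model(features)[0] > 0.5).tolist())
```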
