beautiful Mirtha Dermisache exhibition catalog available for free download, for the asemic writing fans on here: malba.org.ar/catalogo-mirtha-d ("Descargar PDF," i.e. "Download PDF," on the right side of the page) (most of it is essays and stuff in Spanish but there's a big chunk of reproductions of Dermisache's work in the middle)

obligatory tag yourself/celestial emporium of benevolent knowledge joke

generating sorta... pseudo-villanelle-like cut-up poems, where the lines are drawn at random from versified wikipedia pages related to "legibility," and instead of finding rhyming lines, I'm matching them up by semantic or phonetic similarity
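
(roughly what the matching step could look like, as a sketch rather than my actual code: it leans on spaCy's en_core_web_md vectors for the semantic side, skips the phonetic side entirely, and the corpus filename is made up)

```python
# pick a random line from the versified corpus and pair it with the line
# whose meaning is closest to it (the villanelle's "repeated line" slots)
import random
import spacy

nlp = spacy.load("en_core_web_md")
lines = [ln.strip() for ln in open("legibility_lines.txt") if ln.strip()]
docs = [nlp(ln) for ln in lines]

def most_similar(idx):
    # index of the line whose vector is closest to lines[idx], other than itself
    target = docs[idx]
    return max((target.similarity(d), i) for i, d in enumerate(docs) if i != idx)[1]

seed = random.randrange(len(lines))
print(lines[seed])
print(lines[most_similar(seed)])
```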

another day, another VAE (nonsense words) 

I got very helpful advice today on this, which is that the distribution the VAE learns might not be centered at zero—after averaging together the latent vectors from a few thousand items from the data set and using *that* as the center of the distribution, I get much better results when sampling!
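
(sketch of the trick, with encoder/decoder as placeholder names since the real model's interface will differ)

```python
import numpy as np

# encode a few thousand items from the training set and average their latent vectors
latents = np.array([encoder.predict(x[None, ...])[0] for x in dataset[:4000]])
center = latents.mean(axis=0)

# then sample around that empirical center instead of around zero
z = center + np.random.normal(0, 1, size=center.shape)
out = decoder.predict(z[None, :])
```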

auto-enjambed lines from wikipedia pages related to mirrors, arranged in chains of semantic similarity (given lines n and n+1 in the corpus, print n and another line similar in meaning to n+1, then repeat with the last line printed)
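
(the chaining rule sketched in python, with the same caveats as the earlier sketch: spaCy vectors standing in for whatever similarity measure, made-up filename)

```python
import random
import spacy

nlp = spacy.load("en_core_web_md")
lines = [ln.strip() for ln in open("mirror_lines.txt") if ln.strip()]
docs = [nlp(ln) for ln in lines]

def closest_to(idx, exclude):
    # index of the line most similar in meaning to lines[idx], skipping `exclude`
    target = docs[idx]
    return max((target.similarity(d), i) for i, d in enumerate(docs) if i not in exclude)[1]

n = random.randrange(len(lines) - 1)
printed = {n}
print(lines[n])
for _ in range(8):
    if n + 1 >= len(lines):
        break
    # find a line similar in meaning to the line that *follows* the last one printed
    n = closest_to(n + 1, printed | {n + 1})
    printed.add(n)
    print(lines[n])
```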

randomly selected items from a corpus of auto-enjambed lines, drawn from the text of wikipedia pages linked from "Mirror"

found haiku in Frankenstein. for a long time I have had a blanket ban on haiku generators in my classes, because what more is there to say about computer-generated haiku that wasn't already said 53 years ago rwet.decontextualize.com/pdfs/ but... I had never actually programmed the "find haiku in an existing text" thing before. I did have fun and learned a bit making it, whoops
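
(the gist of the haiku-finding step, reconstructed as a sketch: syllable counts via the pronouncing module; the real version handles punctuation and edge cases with more care, and "frankenstein.txt" is a stand-in filename)

```python
import pronouncing

def syllables(word):
    phones = pronouncing.phones_for_word(word.lower().strip(".,;:!?\"'()"))
    return pronouncing.syllable_count(phones[0]) if phones else None

def take_line(words, start, target):
    # greedily take words until exactly `target` syllables; None if we can't hit it
    count, i, taken = 0, start, []
    while count < target and i < len(words):
        s = syllables(words[i])
        if s is None:
            return None
        count += s
        taken.append(words[i])
        i += 1
    return (taken, i) if count == target else None

def find_haiku(text):
    words = text.split()
    for start in range(len(words)):
        pos, found = start, []
        for target in (5, 7, 5):
            result = take_line(words, pos, target)
            if result is None:
                break
            line, pos = result
            found.append(" ".join(line))
        else:
            yield "\n".join(found)

for haiku in find_haiku(open("frankenstein.txt").read()):
    print(haiku, "\n")
```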

whoops I left the debug thing in where I printed out any words that weren't in the expected length limits

the way I've been reviewing the output of this model is looking at the probability scores juxtaposed with the words, one by one, and checking for the highest scores (higher score = greater probability that a line break will directly follow this word). anyway, now I'm having a hard time not reading "Stopping By Woods on a Snowy Evening" in the Beastie Boys style with everyone shouting out the end rhymes
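
(the review loop is basically just this, with `model`, `features`, and `words` standing in for the real thing)

```python
for word, score in zip(words, model.predict(features).ravel()):
    marker = "  <-- probable line break" if score > 0.5 else ""
    print(f"{word:<16} {score:.3f}{marker}")
```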

training a quick neural network to predict where to add poetic line breaks in text, based on a large corpus of public domain poetry and taking into account phonetics and semantics. the goal is to be able to enjamb prose passages in a somewhat principled way—after just a handful of epochs, here's what it does to a passage on hyacinths from wikipedia:
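
(a minimal sketch of the setup, not the actual architecture: it assumes each word has already been turned into a feature vector combining a phonetic encoding and a semantic embedding, uses random stand-in data, and ignores sequence context, which the real model presumably doesn't)

```python
import numpy as np
from tensorflow import keras

# stand-in data: one feature vector per word, plus a 0/1 label for
# "a line break follows this word" derived from the poetry corpus
X = np.random.random((5000, 350)).astype("float32")
y = np.random.randint(0, 2, 5000)

model = keras.Sequential([
    keras.Input(shape=(X.shape[1],)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability that a break follows the word
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5)

# to enjamb a prose passage: featurize its words the same way, predict a score per
# word, and join words with "\n" where the score clears a threshold (else " ")
```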

yes, that... is the expected behavior when I click on a link

found this weird political compass in the scikit-learn documentation

I used scikit-image's measure.find_contours() function to calculate the width of the generated glyphs and was thereby able to easily crop out the empty space (since the characters are variable width but the images you train the GAN on have to be the same width). then a couple of chained interpolations through the latent space, using the width data to typeset the glyphs right next to each other instead of in (e.g.) a grid
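
(roughly how the width measurement and crop might go; a sketch, not the real notebook, assuming `img` is a 2D float array in 0..1 with one generated glyph in it)

```python
import numpy as np
from skimage import measure

contours = measure.find_contours(img, 0.5)           # outlines of the glyph
cols = np.concatenate([c[:, 1] for c in contours])   # column coordinates of every contour point
left, right = int(cols.min()), int(np.ceil(cols.max()))
glyph = img[:, left:right + 1]                       # cropped to the glyph's actual width

# the crops all share the same height, so a "line" of type is just np.hstack(crops)
```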

compositions with characters from a GAN trained on random glyphs from Noto Sans

if you use uniform random numbers (min=-8, max=8) instead of normal random numbers you get delightfully weird results!
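
(for the curious, the only change is the sampling line; `generator` and `latent_dim` here are placeholders for whatever the trained GAN exposes)

```python
import numpy as np

latent_dim, n_samples = 100, 16
z_normal = np.random.normal(0, 1, size=(n_samples, latent_dim))     # the usual sampling
z_uniform = np.random.uniform(-8, 8, size=(n_samples, latent_dim))  # the delightfully weird ones
glyphs = generator.predict(z_uniform)
```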

after a lot of poking and wrangling (and getting help from the authors), I'm finally able to sample from the latent space of the Magenta SVG-VAE model. it's pretty neat, more experiments to follow

(this is the model in question: magenta.tensorflow.org/svg-vae)
