beautiful Mirtha Dermisache exhibition catalog available for free download, for the asemic writing fans on here https://www.malba.org.ar/catalogo-mirtha-dermisache/ ("Descargar PDF" on the right side of the page to download) (most of it is essays and stuff in Spanish but there's a big chunk of reproductions of Dermisache's work in the middle)
obligatory tag yourself/celestial emporium of benevolent knowledge joke
another day, another VAE (nonsense words)
I got very helpful advice today on this, which is that the distribution the VAE learns might not be centered at zero—after averaging together the latent vectors from a few thousand items from the data set and using *that* as the center of the distribution, I get much better results when sampling!
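the fix is easy to sketch (numpy stand-in below; in practice the latent vectors would come from encoding a few thousand data set items with the trained VAE's encoder, and the shifted sample would go through the decoder):

```python
import numpy as np

# stand-in latent vectors: in practice these come from running a few
# thousand items from the data set through the trained VAE's encoder
rng = np.random.default_rng(0)
latents = rng.normal(loc=2.5, scale=1.0, size=(5000, 16))

center = latents.mean(axis=0)     # empirical center of the learned distribution
z = center + rng.normal(size=16)  # sample around that center, not around zero
# then decode z as usual
```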
found haiku in Frankenstein. for a long time I have had a blanket ban on haiku generators in my classes, because what more is there to say about computer-generated haiku that wasn't already said 53 years ago http://rwet.decontextualize.com/pdfs/morris.pdf but... I had never actually programmed the "find haiku in an existing text" thing before. I did have fun and learned a bit making it, whoops
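the core of the "find haiku in an existing text" thing is just scanning for runs of words whose syllable counts land exactly on the 5/7/5 line boundaries. a minimal sketch (the syllable counter here is a crude vowel-group heuristic, a stand-in for a proper pronunciation dictionary like CMUdict):

```python
import re

def count_syllables(word):
    # very rough heuristic: count groups of vowels; the real thing would
    # want a pronunciation dictionary (e.g. CMUdict) for accuracy
    w = re.sub(r"[^a-z]", "", word.lower())
    groups = re.findall(r"[aeiouy]+", w)
    n = len(groups)
    if w.endswith("e") and not w.endswith(("le", "ee")) and n > 1:
        n -= 1  # silent final "e"
    return max(n, 1)

def find_haiku(words):
    # from each starting position, accumulate syllables; a haiku is a run
    # whose totals hit exactly 5, 12, and 17 (i.e. lines of 5/7/5)
    found = []
    for start in range(len(words)):
        total, line, lines, targets = 0, [], [], [5, 12, 17]
        for w in words[start:]:
            total += count_syllables(w)
            line.append(w)
            if total == targets[len(lines)]:
                lines.append(" ".join(line))
                line = []
                if len(lines) == 3:
                    found.append(lines)
                    break
            elif total > targets[len(lines)]:
                break  # a word straddles a line boundary: not a haiku
    return found
```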
whoops I left the debug thing in where I printed out any words that weren't in the expected length limits
the way I've been reviewing the output of this model is by looking at the probability scores juxtaposed with the words, one by one, and checking for the highest scores (higher score = greater probability that a line break will directly follow this word). anyway, now I'm having a hard time not reading "Stopping By Woods on a Snowy Evening" in the Beastie Boys style, with everyone shouting out the end rhymes
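the review process is basically just zipping words with their scores and flagging the high ones. a sketch (the words are Frost's, but the scores below are invented for illustration, not actual model output):

```python
def review(words, scores, cutoff=0.5):
    # juxtapose each word with its predicted break probability and flag
    # the high scorers (higher score = break more likely to follow)
    out = []
    for w, s in zip(words, scores):
        flag = "  <-- likely break" if s > cutoff else ""
        out.append(f"{w:>8} {s:.2f}{flag}")
    return out

words = ["Whose", "woods", "these", "are", "I", "think", "I", "know"]
scores = [0.02, 0.11, 0.05, 0.03, 0.04, 0.08, 0.06, 0.93]  # invented scores
print("\n".join(review(words, scores)))
```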
it *almost* gets "How Doth the Little Crocodile" right:
training a quick neural network to predict where to add poetic line breaks in text, based on a large corpus of public domain poetry and taking into account phonetics and semantics. the goal is to be able to enjamb prose passages in a somewhat principled way—after just a handful of epochs, here's what it does to a passage on hyacinths from wikipedia:
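the shape of the task is per-word binary classification: "does a line break follow this word?" a toy numpy sketch on synthetic features (the real model's features are phonetic and semantic, and the labels come from the poetry corpus, none of which is reproduced here):

```python
import numpy as np

# toy version of the task: per-word binary "line break follows?" classifier.
# features and labels here are random stand-ins for illustration only
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                    # per-word feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic break labels

w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(500):  # logistic regression by plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == y).mean()
```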
I used scikit-image's measure.find_contours() function to calculate the width of the generated glyphs, which made it easy to crop out the empty space (the characters are variable width, but the images you train the GAN on all have to be the same width). then: a couple of chained interpolations through the latent space, using the width data to typeset the glyphs right next to each other instead of in (e.g.) a grid
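the width/crop step itself can be sketched with a plain ink threshold (a numpy stand-in for the contour-based version, so it runs without scikit-image):

```python
import numpy as np

def crop_glyph(img, threshold=0.5):
    # find the columns that contain "ink" and crop to their horizontal
    # extent, so each variable-width glyph loses its empty margins
    ink_cols = np.where((img > threshold).any(axis=0))[0]
    if ink_cols.size == 0:
        return img  # blank glyph: nothing to crop
    return img[:, ink_cols[0]:ink_cols[-1] + 1]

glyph = np.zeros((8, 8))
glyph[2:6, 3:5] = 1.0  # fake generated glyph occupying columns 3-4
cropped = crop_glyph(glyph)
```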
if you use uniform random numbers (min=-8, max=8) instead of normal random numbers you get delightfully weird results!
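for reference, the two sampling schemes side by side (the latent dimensionality here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                                  # latent dimensionality (assumption)
z_normal = rng.normal(size=d)           # the usual standard-normal sample
z_uniform = rng.uniform(-8, 8, size=d)  # way out in the tails: weird glyphs
```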
terpin' through that latent alphabet space
after a lot of poking and wrangling (and getting help from the authors), I'm finally able to sample from the latent space of the Magenta SVG-VAE model. it's pretty neat, more experiments to follow
(this is the model in question: https://magenta.tensorflow.org/svg-vae)
Poet, programmer, game designer, computational creativity researcher. Assistant Arts Professor at NYU ITP. she/her