training a quick neural network to predict where to add poetic line breaks in text, based on a large corpus of public domain poetry and taking into account phonetics and semantics. the goal is to be able to enjamb prose passages in a somewhat principled way—after just a handful of epochs, here's what it does to a passage on hyacinths from wikipedia:
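(roughly the shape of the setup, with placeholder features and layer sizes rather than the real ones: a per-word binary classifier that predicts whether a line break should follow each word)

```python
# sketch: per-word binary classifier for "does a line break follow this word?"
# the feature dimension, layer sizes, and placeholder data are illustrative only.
import numpy as np
from tensorflow import keras

def build_model(feature_dim):
    return keras.Sequential([
        keras.Input(shape=(feature_dim,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # P(line break after this word)
    ])

model = build_model(feature_dim=50)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X: one feature vector per word (phonetic + semantic features);
# y: 1 if the word ends a line in the poetry corpus, else 0
X = np.random.rand(1000, 50)
y = np.random.randint(0, 2, size=1000)
model.fit(X, y, epochs=5, batch_size=32)
```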

(there's a little bit of art in this—here I'm outputting a line break if the model's prediction was 0.25 or above. but I'm happy with the results so far!)
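(the line-breaking step looks something like this sketch; `predict_break_prob` stands in for the trained model's per-word output, and 0.25 is the cutoff)

```python
# sketch of the thresholding step: emit a line break after any word whose
# predicted probability is at or above the cutoff.
def enjamb(words, predict_break_prob, threshold=0.25):
    lines, current = [], []
    for word in words:
        current.append(word)
        if predict_break_prob(word) >= threshold:
            lines.append(" ".join(current))
            current = []
    if current:
        lines.append(" ".join(current))
    return "\n".join(lines)

# toy example with a made-up scoring function:
# enjamb("hyacinths are native to the eastern mediterranean".split(),
#        predict_break_prob=lambda w: 0.3 if w.endswith("s") else 0.1)
```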

hmm, weirdly the more I push the accuracy on the training set, the less it produces the result I want on arbitrary prose. (bc there are stray prose snippets throughout the corpus, I think it might actually be learning the difference between prose and verse, whoops!) gonna try training again with *only* phonetic information about each word, maybe that will help
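(by "only phonetic information" I mean something like the sketch below: represent each word by its CMU Pronouncing Dictionary phonemes via the `pronouncing` library, with no access to spelling or meaning; the exact features are still up in the air)

```python
# sketch of a purely phonetic word representation: CMU Pronouncing Dictionary
# phonemes via the `pronouncing` library, no spelling or meaning involved.
import pronouncing

def phonetic_features(word):
    pronunciations = pronouncing.phones_for_word(word.lower())
    if not pronunciations:
        return None  # out-of-dictionary word
    phones = pronunciations[0]
    return {
        "phones": phones.split(),                        # e.g. ['HH', 'AY1', ...]
        "syllables": pronouncing.syllable_count(phones),
        "rhyming_part": pronouncing.rhyming_part(phones),
    }

print(phonetic_features("hyacinth"))
```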

the way I've been reviewing the output of this model is by looking at the probability scores juxtaposed with the words, one by one, and checking for the highest scores (higher score = greater probability that a line break will directly follow this word). anyway, now I'm having a hard time not reading "Stopping By Woods on a Snowy Evening" in Beastie Boys style, with everyone shouting out the end rhymes
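(the review loop is more or less this sketch; the words and scores below are made up for illustration)

```python
# sketch of the review loop: print each word next to its predicted probability
# that a line break directly follows it. the (word, score) pairs are invented.
def review(scored_words, highlight=0.25):
    for word, prob in scored_words:
        marker = "  <-- likely break" if prob >= highlight else ""
        print(f"{prob:.3f}  {word}{marker}")

review([("whose", 0.02), ("woods", 0.05), ("these", 0.01), ("are", 0.03),
        ("I", 0.02), ("think", 0.04), ("I", 0.03), ("know", 0.58)])
```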

whoops, I left the debug thing in that printed out any words that weren't within the expected length limits

(and yes, I should probably train this on something other than my laptop, but then I have to make the code pretty so I can copy it over, and that takes more effort than just waiting. and it'll go faster once the results of finding the phonetic states are cached at the end of the first epoch)
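(the caching is just memoizing the per-word phonetic lookup, something like:)

```python
# sketch of the caching idea: memoize the per-word phonetic lookup so that
# after the first pass over the corpus, repeats hit the cache instead of
# recomputing. (the lookup itself here is illustrative.)
from functools import lru_cache
import pronouncing

@lru_cache(maxsize=None)
def cached_phones(word):
    pronunciations = pronouncing.phones_for_word(word.lower())
    return pronunciations[0] if pronunciations else None
```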

@aparrish

Hey Hey Hey
what would Bert say
'bout linking Frost and snow
