getting a language model to write lipograms by simply zeroing out the probability of any token in the vocabulary that has a particular letter in it (in this case, 'E')
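A minimal sketch of what that masking could look like with the Hugging Face transformers LogitsProcessor API (the model, prompt, and class name here are illustrative assumptions, not @aparrish's actual code): every vocabulary token whose text contains the banned letter gets its score set to negative infinity before sampling.

```python
# Illustrative sketch only: mask out every token containing a banned letter.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class BannedLetterProcessor(LogitsProcessor):
    """Zero out (set to -inf) the score of any token whose text contains a banned letter."""
    def __init__(self, tokenizer, banned_letters="e"):
        vocab_size = len(tokenizer)
        mask = torch.zeros(vocab_size, dtype=torch.bool)
        for token_id in range(vocab_size):
            text = tokenizer.decode([token_id]).lower()
            if any(ch in text for ch in banned_letters):
                mask[token_id] = True
        self.mask = mask

    def __call__(self, input_ids, scores):
        scores = scores.clone()
        scores[:, self.mask.to(scores.device)] = float("-inf")
        return scores

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Prompt chosen to already avoid 'e'; only newly generated tokens are constrained.
inputs = tokenizer("It was a dark night and", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    logits_processor=LogitsProcessorList([BannedLetterProcessor(tokenizer, "e")]),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```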

doesn't do so well at the inverse task, i.e., generating with the probabilities of any token containing a vowel letter OTHER than 'E' zeroed out
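With the same sketch, the inverse constraint just swaps the banned set: instead of 'e', ban the other vowel letters (a, i, o, u), which leaves far fewer usable tokens and, as the post notes, tends to fall apart.

```python
# Continuing the sketch above: ban every vowel letter except 'e'.
inverse = LogitsProcessorList([BannedLetterProcessor(tokenizer, banned_letters="aiou")])
out = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    logits_processor=inverse,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```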

@aparrish have you seen @GLOSSATORY? it also does the former and sometimes (randomly) the inverse on initial letters

@ranjit @GLOSSATORY I have, yep! I imagine it works along similar principles

@aparrish @ranjit yes, both it and @gravidum_cor do this, but at the letter level rather than the word level, so they generate a few neologisms. @GLOSSATORY@oulipo.social also has a couple of other sources: a model trained on inputs without "e", which generates babble, and posts from the main @GLOSSATORY@botsin.space that happen to be accidentally compliant.
