(1) sample words at random from a word list; (2) replace each word (in random order) with the most likely word for that context, using BERT (actually Hugging Face's DistilBERT)
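A minimal sketch of those two steps, assuming the `transformers` library is installed; `bert_rewrite` is a hypothetical name, and the fill-mask pipeline call is only illustrative:

```python
import random

def random_seed_words(word_list, n=10):
    # step (1): sample n words uniformly at random from a word list
    return [random.choice(word_list) for _ in range(n)]

def replacement_order(n):
    # step (2) visits every position exactly once, in random order
    order = list(range(n))
    random.shuffle(order)
    return order

def bert_rewrite(words, fill_mask):
    # hypothetical driver; fill_mask would be something like
    # transformers.pipeline("fill-mask", model="distilbert-base-uncased")
    for i in replacement_order(len(words)):
        masked = words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:]
        # take the model's single most likely fill for the masked slot
        words[i] = fill_mask(" ".join(masked))[0]["token_str"]
    return words
```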

same thing but starting with the word "hello" repeated twelve times. (also the procedure won't pick the word originally found at the random index, even if it would otherwise be the most likely token based on the model)

okay haha it's much better at this if I include the begin-string/end-string tokens ([CLS]/[SEP]). (first line is randomly selected words from a word list; in each subsequent line, one word is replaced by DistilBERT's prediction)

at each iteration, replace the token whose probability is lowest according to DistilBERT's prediction for the masked token at the same position
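That selection rule can be sketched as a pure function (the model calls are elided; `token_probs` is assumed to come from masking each position in turn and reading off the model's probability for the text's current token there):

```python
def least_likely_position(token_probs):
    # token_probs[i] = the model's probability for the text's current token
    # at position i when that position is masked; the position where the
    # model is most "surprised" is the one that gets rewritten next
    return min(range(len(token_probs)), key=lambda i: token_probs[i])
```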

okay I THINK I finally found a way of doing this that comes close to meeting all of my criteria for this project (i.e., each step shows visible and meaningful change; the change is gradual, but the result "converges" after relatively few steps): calculate the probability of each token in the source text vs. a token sampled from the mask-token distribution at that position, then find "peaks" of improbable tokens and replace them with the sampled tokens; stop when any output repeats
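The scoring, peak-finding, and stopping rule can be sketched without the model itself (all function names here are hypothetical; `p_source` and `p_sampled` are assumed to be per-position probabilities read off the mask distribution):

```python
def improbability(p_source, p_sampled):
    # score is high where the source token is much less likely than the
    # token sampled from the mask distribution at that position
    return [samp / max(src, 1e-12) for src, samp in zip(p_source, p_sampled)]

def peaks(scores):
    # indices that are strict local maxima of the improbability score;
    # only these positions get replaced with their sampled tokens
    return [i for i in range(1, len(scores) - 1)
            if scores[i] > scores[i - 1] and scores[i] > scores[i + 1]]

def converge(step, state):
    # iterate until any output repeats, per the stopping rule above
    seen = set()
    while tuple(state) not in seen:
        seen.add(tuple(state))
        state = step(state)
    return state
```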


"doing this" = using DistilBERT to gradually transform a sequence of words picked at random from a word list into text that appears to make sense

@aparrish it would be cool if you could feed it a context and it would help steer the algorithm. Like "this sentence should be about lasers". Very cool project!!

@mooog the project I'm riffing on does this explicitly: github.com/jeffbinder/visions-

for the thing I'm working on, I'm really just interested in the transformation from uniform unigram randomness back to somewhat coherent sentence, bringing out the texture of the language model along the way
