more progress! I ended up having to write my own skeleton tracing algorithm, which I thiiiink is working okay now—basically it does a flood fill starting with pixels having exactly one neighbor, and each connected pixel is a node in a graph; later I use Visvalingam-Whyatt line simplification on each segment between nodes w/3+ edges. this technique gives me nice long lines, clean intersections, & hopefully more elegant plotter gestures. (this is just a raster preview, will try an actual plot soon)
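the Visvalingam-Whyatt step above could look something like this minimal sketch—repeatedly dropping the point whose triangle with its neighbors has the smallest area (plain (x, y) tuples here; names and threshold are illustrative, not from the actual project):

```python
# Visvalingam-Whyatt line simplification: remove the interior point whose
# triangle (with its two neighbors) has the smallest "effective area",
# repeating until every remaining point's area exceeds a threshold.
def triangle_area(a, b, c):
    # half the absolute cross product of the two edge vectors
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def visvalingam_whyatt(points, min_area):
    pts = list(points)
    while len(pts) > 2:
        # effective area of each interior point
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = min(range(len(areas)), key=lambda i: areas[i])
        if areas[smallest] >= min_area:
            break
        del pts[smallest + 1]  # +1 because areas[] indexes interior points
    return pts
```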
first attempt at sending this to the plotter. I ended up cutting the plot short because I could see some aesthetic and technical problems that I want to fix, and I didn't want to bother waiting another 45mins for the plot to finish, haha. but the basic idea is there and I think it looks nice?
the delicious irony: creators of industrial language models are now worried about no longer being able to use the web as their "commons" (i.e. other people's labor that they appropriate and commercialize) because their own outputs are "polluting" it (via https://mailchi.mp/jack-clark/import-ai-266-deepmind-looks-at-toxic-language-models-how-translation-systems-can-pollute-the-internet-why-ai-can-make-local-councils-better)
weird idea, work in progress: (1) get DistilBERT hidden states (768 dimensions) for 768 sentences (of Frankenstein, in this instance) → stack vertically to form a 768x768 square → subtract the column-wise mean, normalize → lil bit of gaussian blur and threshold → "skeletonize" with skimage → "asemic" "writing"?
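the middle of that pipeline (center → normalize → blur → threshold) can be sketched with plain numpy—here random values stand in for the DistilBERT hidden states, a cheap 3x3 mean filter stands in for the gaussian blur, and the skimage skeletonize call is left as a comment, so none of the numbers below are from the actual piece:

```python
import numpy as np

# Stand-in for the real pipeline: the 768x768 matrix would come from
# DistilBERT hidden states (one 768-dim vector per sentence, stacked),
# and skeletonization would use skimage.morphology.skeletonize.
rng = np.random.default_rng(0)
states = rng.normal(size=(768, 768))       # pretend: one row per sentence

centered = states - states.mean(axis=0)    # subtract the column-wise mean
normed = (centered - centered.min()) / (centered.max() - centered.min())

# cheap 3x3 mean filter standing in for the gaussian blur
padded = np.pad(normed, 1, mode="edge")
blurred = sum(padded[i:i + 768, j:j + 768]
              for i in range(3) for j in range(3)) / 9.0

mask = blurred > blurred.mean()            # threshold to a binary image
# skeleton = skimage.morphology.skeletonize(mask)   # the "asemic" strokes
```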
computer-generated "recipes" that I made as an example in the workshop I'm teaching. the instructions are composed of random transitive verbs plus random direct objects from Bob Brown's _The Complete Book of Cheese_ https://www.gutenberg.org/ebooks/14293
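the recipe trick is basically just pairing random draws from two word lists; a tiny sketch (the verbs and objects below are made-up examples, not the actual lists extracted from the book):

```python
import random

# Hypothetical word lists -- the real version pulls transitive verbs and
# direct objects out of Bob Brown's _The Complete Book of Cheese_.
verbs = ["grate", "melt", "cube", "wrap", "sprinkle", "age"]
objects = ["the Cheddar", "a ripe Camembert", "some Gorgonzola",
           "the rind", "a wheel of Gouda"]

def recipe(n_steps=4, seed=None):
    rng = random.Random(seed)
    return [f"{i}. {rng.choice(verbs).capitalize()} {rng.choice(objects)}."
            for i in range(1, n_steps + 1)]

print("\n".join(recipe(3, seed=1)))
```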
doesn't do so well at the inverse task, i.e., generating with the probabilities of any token containing a vowel letter OTHER than 'E' zeroed out
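the constraint itself amounts to masking and renormalizing a distribution—a sketch over a toy distribution (the real thing operates on the markov model's predictions):

```python
# Zero out the probability of any continuation containing a vowel letter
# other than 'e', then renormalize whatever survives.
def mask_non_e_vowels(probs):
    banned = set("aiou")
    masked = {w: p for w, p in probs.items()
              if not (set(w.lower()) & banned)}
    total = sum(masked.values())
    if total == 0:
        return {}   # the inverse task can easily dead-end like this
    return {w: p / total for w, p in masked.items()}
```

part of why the inverse task goes badly: whole contexts can end up with *no* admissible continuation, which never happens when you're only banning 'e'.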
love this sorta disgusting visualization of a self-organizing map https://www.complexity-explorables.org/explorables/yo-kohonen/
logit biasing, markov chain style. here I'm doing it with phonetics—basically I check the possible outcomes for each context, and then artificially boost the probability of predictions that have certain phonetic characteristics. (in this case, more /k/ and /b/ sounds)
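a minimal sketch of the boosting idea—an order-2 word model whose continuation counts get multiplied up before sampling. (real phonetics would need a pronunciation dictionary like CMUdict; the spelling test for 'k'/'b' below is just a stand-in, as are the corpus and boost factor.)

```python
import random
from collections import defaultdict

# Build an order-n markov model: context tuple -> {next word: count}.
def build_model(words, order=2):
    model = defaultdict(lambda: defaultdict(int))
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])][words[i + order]] += 1
    return model

# Sample a continuation, artificially boosting the weight of predictions
# with the desired characteristic (here: spelling containing 'k' or 'b',
# a stand-in for a real phonetic lookup).
def biased_sample(model, context, boost=5.0, rng=random):
    options = model[tuple(context)]
    weights = [count * (boost if ("k" in w or "b" in w) else 1.0)
               for w, count in options.items()]
    return rng.choices(list(options), weights=weights, k=1)[0]
```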
I like having this extra setting to fiddle with! but based on my limited testing, the temperature doesn't really matter once the length of the ngram hits a certain limit, since most ngrams only have one or two possible continuations. like... with word 3-grams, it's pretty difficult to distinguish 0.35 from 2.5
generating with a markov chain using softmax sampling w/temperature (a la neural networks). this is an order 3 character model, and you can really see the difference between low temperature (instantly starts repeating itself) and high temperature (draws from wacky corners of the distribution) (if you've generated text with a markov chain before, it's probably using what amounts to a temperature of 1.0)
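temperature sampling over a markov model's counts can be sketched like this—divide the log-counts by the temperature before the softmax, so low temperatures sharpen the distribution and high ones flatten it (toy counts; the real model is an order-3 character model of Frankenstein):

```python
import math, random

# Softmax sampling with temperature: logits are log-counts divided by the
# temperature. Temperature 1.0 reproduces plain count-proportional sampling.
def sample_with_temperature(counts, temperature, rng=random):
    words = list(counts)
    logits = [math.log(counts[w]) / temperature for w in words]
    biggest = max(logits)
    exps = [math.exp(x - biggest) for x in logits]   # numerically stable
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(words, weights=probs, k=1)[0]
```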
here it is working on an oov ngram ("you ate books" is not an ngram that appears in Frankenstein; all of this is trained on Frankenstein, I guess I forgot to mention that)
another way to find similar ngram contexts: each context has an embedding derived from the sum of positional encoding (they're not just for transformers!) multiplied by "word vectors" (actually just truncated SVD of the transpose of the context matrix). then load 'em up in a nearest neighbor index
(this is cool because I can use it even on ngrams that *don't* occur in the source text, though all of the words themselves need to be in the vocabulary)
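the whole scheme might look something like this toy-sized sketch—word vectors from a truncated SVD of the transposed context matrix, sinusoidal positional encodings, each ngram embedded as the positionwise products summed, and brute-force nearest neighbors standing in for a real index (corpus, dimensions, and co-occurrence window are all illustrative):

```python
import numpy as np

# toy corpus and vocabulary
words = "the monster saw the lake and the monster wept".split()
vocab = sorted(set(words))
index = {w: i for i, w in enumerate(vocab)}

# context matrix: adjacency co-occurrence counts
co = np.zeros((len(vocab), len(vocab)))
for a, b in zip(words, words[1:]):
    co[index[a], index[b]] += 1
    co[index[b], index[a]] += 1

# "word vectors": truncated SVD of the transpose, keeping k dimensions
k = 4
u, s, vt = np.linalg.svd(co.T)
word_vecs = u[:, :k] * s[:k]

# sinusoidal positional encoding (they're not just for transformers!)
def positional_encoding(n_positions, dim):
    pos = np.arange(n_positions)[:, None]
    i = np.arange(dim)[None, :]
    angle = pos / (10000 ** (2 * (i // 2) / dim))
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

pe = positional_encoding(3, k)

def embed(ngram):
    # works even for ngrams absent from the corpus, as long as
    # every word is in the vocabulary
    return sum(pe[p] * word_vecs[index[w]] for p, w in enumerate(ngram))

def nearest(query, contexts):
    # brute-force stand-in for a real nearest-neighbor index
    q = embed(query)
    return min(contexts, key=lambda c: np.linalg.norm(embed(c) - q))
```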
someday I should develop a poetics where the success condition is something other than "yeahhh now it's giving me a good headache" but... today is not that day
hey all, here's a new computer thing I made! it's called the Nonsense Laboratory, and it's a series of weird little tools for manipulating the spelling and sound of words with machine learning: https://artsexperiments.withgoogle.com/nonsense-laboratory/
it's part of a series of projects launched yesterday showing the outcomes of the Artists + Machine Intelligence grant program, which you can learn more about here: https://experiments.withgoogle.com/ami-grants
Poet, programmer, game designer, computational creativity researcher. Assistant Arts Professor at NYU ITP. she/her