"i don't like thing / ok" re: the expanse (tv show) 

whoops, I forgot to put ❝real❞ ❝words❞ in giant scare quotes

I have new poems out in Andreas Bülhoff's sync² zine series today: here's the page for the series, or here's a direct link to the PDF:

I've posted prototypes of some of these poems here—"compass" shapes with real words on the points and interpolated words (generated from a seq2seq model I trained) between. a few excerpts:

medieval magic, livestock joke? 

literature, racism 

drug mention, aging, literature joke 

the main problem with reading this book is getting distracted by looking up all of the amazing old computers he writes about—like check out this Kaypro 2000, it's PERFECT

(today it feels like we've all culturally settled on two possible attitudes about computers: (a) "I love 'em, I think about nothing else" or (b) "they're everywhere, I don't even think about them, ugh, whatever"—so I like this reminder that there are eras and contexts where it's possible to care about computational tools without it being Your Whole Thing?)

in 1985 Ben Fong-Torres wrote an article about Amy Tan's Kaypro User Group, and I'm not sure why this fact is so amusing and surreal

(from Kirschenbaum's _Track Changes_, p. 65)

hope you're enjoying this account's new "all sator square jokes" format

just who exactly is this "Farmer Arepo" character and what angle are they working with all those wheels? in this new prestige podcast from the makers of this american life…

I really like using these byte-pair embeddings—using sentencepiece gives you a guaranteed fixed vocabulary size without having to worry about out-of-vocabulary problems, which *really* simplifies the model and the preprocessing code. and the embeddings (I think? I should experiment to be sure) seem to be giving the model a little head start on figuring out how english generally works
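(if "byte-pair" sounds mysterious: the core trick is just repeatedly fusing the most frequent adjacent pair of symbols, starting from individual characters—so any unseen word can always fall back to smaller pieces, and there's never an out-of-vocabulary token. here's a toy sketch of the idea—all the names are my own invention, and this is not sentencepiece itself or my actual preprocessing code:)

```python
from collections import Counter

def byte_pair_merges(corpus, num_merges):
    """Greedily learn BPE-style merge rules: repeatedly fuse the most
    frequent adjacent symbol pair. Because we start from characters,
    any new word still encodes as a sequence of smaller pieces--there
    is no out-of-vocabulary case."""
    # represent each word as a tuple of symbols (initially characters)
    words = Counter(tuple(w) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        # rewrite every word, fusing occurrences of the chosen pair
        rewritten = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            rewritten[tuple(out)] += freq
        words = rewritten
    return merges

def encode(word, merges):
    """Segment a word by replaying the learned merges in order."""
    symbols = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols
```

(the fixed-vocabulary property is just: characters + one new symbol per merge, so you pick the vocab size up front and every string is guaranteed to encode)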

RNN text generation to me always feels most "like itself" in these early epochs anyway, with the half-formed words and insistence of repetition—to anthropomorphize a bit, there's something sort of adorable in the underlying philosophy of the network here—"hey the loss is going down! maybe no one will notice that I just keep predicting 'the tarot cards' over and over again?"

experimenting with RNN text generation using pre-trained byte-pair embeddings—output below shows results at various temperatures after five epochs with a minimalist architecture on a small corpus (A.E. Waite's _Pictorial Key to the Tarot_). it's not gpt-2 but it's not bad for ten minutes of training on my macbook air either
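(for the curious, "temperature" here just means dividing the model's raw scores before softmax-sampling—low values make the model pick its favorite token almost every time, high values flatten the distribution toward weirdness. a minimal sketch of that one sampling step, with invented names, not my actual generation code:)

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores.
    Low temperature sharpens the distribution (safer, more repetitive
    output); high temperature flattens it (stranger output)."""
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()            # softmax
    return rng.choice(len(probs), p=probs)
```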

Oh, and I'm releasing my own fork of Mastodon based on my experiences running Friend Camp for a year. It's called Hometown, and you can read more about it here:

It's not fully documented yet but it will be. Experienced Mastodon admins should be able to switch over with a minor database migration and minimal fuss.

I finished my stint as a Mozilla Fellow and now I'm relaunching my Patreon with a focus on, well, fixing social media.

This means I'm going to continue my concerted work to make the fediverse a better place, in the form of best-practice guides for running instances, external advocacy, and technical tutorials and training so that more people can contribute software to the fediverse at large.

You can read a partial summary of my work so far at my new Patreon page!

religion, magic, mild body horror maybe? 

Hometown is adapted from Mastodon, a decentralized social network with no ads, no corporate surveillance, and ethical design.