Pinned post

(not sure exactly what I'm going to do with this yet, but so far it passes the "is it at least mildly interesting if you pick lines at random" test)


randomly selected items from a corpus of auto-enjambed lines, drawn from the text of wikipedia pages linked from "Mirror"

welp this is going to lead to some truly magnificent feedback loops youtube.com/watch?v=fZSFNUT6iY (when code that was written with the model leaks into the corpus that they're training the model on and all code on github or whatever starts to become a deep dream-esque fractal of language models completing themselves)

another day, another VAE 

(I would expect those samples to look more like plausible made-up words, at least as plausible as a markov chain or something trained on the same dataset. but it doesn't look like there's been posterior collapse, since the model still performs well otherwise? it's also possible I did the math bad somewhere? I guess the next step is to visualize the latent space and see if it does actually look like a normal distribution)
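(a rough sketch of what that latent-space check might look like, assuming a trained keras encoder model that returns z_mean and z_log_var for a batch of vectorized words in x_train — all of those names are placeholders, not the actual code:)

import numpy as np
import matplotlib.pyplot as plt

# encode a sample of the training words and see whether the aggregate
# posterior actually resembles a standard normal distribution
z_mean, z_log_var = encoder.predict(x_train[:2000])

# per-dimension statistics: roughly 0 and 1 if the prior is being matched
print("mean per dim:", z_mean.mean(axis=0))
print("std per dim:", z_mean.std(axis=0))

# compare the first two latent dimensions against draws from N(0, 1)
prior = np.random.normal(size=(len(z_mean), 2))
plt.scatter(prior[:, 0], prior[:, 1], s=4, alpha=0.3, label="N(0, 1)")
plt.scatter(z_mean[:, 0], z_mean[:, 1], s=4, alpha=0.3, label="encoded words")
plt.legend()
plt.show()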


another day, another VAE 

working on a keras/tf adaptation of this paper: arxiv.org/pdf/1911.05343.pdf on character data (words from cmu dict) and I ended up with very good reconstruction loss and bad (I think?) KL loss, like 0.08? the space seems to be smooth when I do interpolations:

moses
mosess
mosees
midsets
maddets
madderon
middleton
middleton
middletown

but sampling from a normal distribution is kinda garbage:

manina
kal
agruh
aar
urosh
'louic
cseq
gb
zani
ias
nsny
huinea
a's
om
ntioo
gante
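(for reference, a minimal sketch of the sampling step that produced those: draw z from a standard normal and greedily decode characters until the stop token. decoder, idx_to_char, start_idx, stop_idx, and the decoder's input layout are all assumptions here, not the paper's actual interface:)

import numpy as np

def sample_word(decoder, latent_dim, idx_to_char, start_idx, stop_idx, max_len=20):
    # draw a latent vector from the prior
    z = np.random.normal(size=(1, latent_dim))
    seq = [start_idx]
    for _ in range(max_len):
        # assumes the decoder takes (latent, characters-so-far) and returns a
        # softmax over the character vocabulary at each timestep
        probs = decoder.predict([z, np.array([seq])])[0, -1]
        next_idx = int(np.argmax(probs))
        if next_idx == stop_idx:
            break
        seq.append(next_idx)
    return "".join(idx_to_char[i] for i in seq[1:])

for _ in range(16):
    print(sample_word(decoder, 64, idx_to_char, start_idx, stop_idx))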

today in machine learning 

in other circumstances (i.e. when I'm doing it on purpose), a character RNN decoding after the stop token is one of my favorite things, it's just like "oh uh you want me to keep going? okay, um, ingisticanestionally...?"
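(same hypothetical greedy loop as the sketch a couple of posts up, but with the stop token's probability zeroed out so the model has to keep going — which is roughly where output like that comes from:)

import numpy as np

def decode_past_stop(decoder, z, idx_to_char, start_idx, stop_idx, total_len=30):
    seq = [start_idx]
    while len(seq) < total_len:
        probs = decoder.predict([z, np.array([seq])])[0, -1].copy()
        probs[stop_idx] = 0.0          # refuse to let it stop
        probs /= probs.sum()           # renormalize before sampling
        seq.append(int(np.random.choice(len(probs), p=probs)))
    return "".join(idx_to_char[i] for i in seq[1:])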


today in machine learning 

... then spent like six hours debugging my decoding function, which was always returning sequences of length 20... it was because I'd hardcoded "20" as the length of the time series in the vectorization function (which worked fine, of course, when all of the training data was padded to the same length)
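(a sketch of the kind of bug that was, with made-up names: the vectorizer pads everything to a hardcoded 20 timesteps, so nothing downstream ever sees any other length:)

import numpy as np

def vectorize_buggy(word, char_to_idx):
    vec = np.zeros((20, len(char_to_idx)))            # hardcoded 20
    for i, ch in enumerate(word[:20]):
        vec[i, char_to_idx[ch]] = 1.0
    return vec

def vectorize_fixed(word, char_to_idx, maxlen=None):
    maxlen = len(word) if maxlen is None else maxlen  # length follows the input
    vec = np.zeros((maxlen, len(char_to_idx)))
    for i, ch in enumerate(word[:maxlen]):
        vec[i, char_to_idx[ch]] = 1.0
    return vec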


today in machine learning 

instead of figuring out how to do fancy tricks with keras masking and tensorflow indexing, I just wrote a function to feed in batches where all of the data have the same number of timesteps. seems to work fine
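(a rough sketch of that kind of batching function, with made-up names — not the actual code: group the words by length so every batch has a uniform number of timesteps, and padding/masking never comes up:)

import random
from collections import defaultdict

def same_length_batches(words, batch_size=64):
    # bucket the words by length, then yield batches from one bucket at a time
    by_len = defaultdict(list)
    for w in words:
        by_len[len(w)].append(w)
    buckets = list(by_len.values())
    random.shuffle(buckets)
    for bucket in buckets:
        random.shuffle(bucket)
        for i in range(0, len(bucket), batch_size):
            yield bucket[i:i + batch_size]   # every word here has the same length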

Please enjoy the new online exhibit Post Hoc, with work by

Agnieszka Kurant
Christian Bök
Daniel Temkin
Derek Beaulieu
Forsyth Harmon
Lauren Lee McCarthy
Lilla LoCurto & Bill Outcault
Olia Lialina
Manfred Mohr
Mark Klink
Renée Green
Sly Watts
Susan Bee

responses by

Amaranth Borsuk
Craig Dworkin
Daniel Temkin
Fox Harrell
Mary Flanagan
Paul Stephens
Simon Morris & Valérie Steunou

It's at https://nickm.com/post/2020/05/post-hoc-an-online-art-show/

hey everyone, my PyCon 2020 tutorial session is now online: youtube.com/watch?v=yJ6iN5M42s and contains, well, pretty much everything I know about phonetics, machine learning and meter/rhyme/sound symbolism in poetry. tutorial code and notebooks here: github.com/aparrish/nonsense-v

For the next WordHack @babycastles on 5/21, 7pm Eastern, I've set up a virtual book table

You can buy new computer-generated books by featured readers Lillian-Yvonne Bertram & Jörg Piringer!

Purchases will please you and will also help nonprofit book distributor Small Press Distribution and the two nonprofit publishers

https://nickm.com/post/2020/05/wordhack-book-table/

emotions 

crying at the end of a dorktown again

my program's master's thesis presentations are happening right now and they've all been fantastic so far. you can watch them online here: itp.nyu.edu/shows/thesis2020/ (presentations are all just 10 mins long, so if you're not into whatever's on the screen when you tune in, just wait for a bit for something completely different, haha)

hey everyone, you can watch my !!con keynote talk about text and lines and lines of text in the recorded livestream here (along with all of the rest of yesterday's talks) youtu.be/EReoVpb9LJo?t=1297 featuring: a surprising number of photographs of medieval manuscripts, a hilbert curve, a tasteful amount of cannabis, shortcuts on the road not taken, enjambed zinnias

The Ghostbusters (2012) starring Michael Cera, Dave Sheridan, Jean Reno and Willie Garson (dir. Wes Anderson)

will be doing my !!con keynote in a little bit, you can watch here: bangbangcon.com/livestream.htm there will be... pictures of medieval manuscripts
