
Black-led LGBTQ services and activist groups to donate to:

- The Okra Project
theokraproject.com

- Black Trans Travel Fund
blacktranstravelfund.com

- SNaPCo
snap4freedom.org

- Black AIDS Institute
blackaids.org

- Trans Cultural District
transgenderdistrictsf.com

- LGBTQ Freedom Fund
lgbtqfund.org

- House of GG
houseofgg.org

- Trans Justice Funding Project
transjusticefundingproject.org

- Youth Breakout
youthbreakout.org

are you in portland? do you like cookies and hate when police murder black people? i'm selling cookies for pick-up in SE portland tomorrow with all proceeds going to bail funds and community orgs in minneapolis

ginascookies.square.site/

programming, tensorflow 

my Z key is a little stuck so instead of searching for something reasonable ("tensorflow squeeze") I accidentally searched for a strange enthusiasm ("tensorflow squeee")

another day, another VAE (nonsense words) 

I got very helpful advice today on this, which is that the distribution the VAE learns might not be centered at zero—after averaging together the latent vectors from a few thousand items from the data set and using *that* as the center of the distribution, I get much better results when sampling!
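(a minimal sketch of that re-centering trick, assuming a Keras-style `encoder`/`decoder` pair and a `data` array; all of those names are placeholders for whatever the notebook actually uses)

```python
import numpy as np

# encode a few thousand items and average their latent means
z_mean, _, _ = encoder.predict(data[:5000])
latent_center = z_mean.mean(axis=0)   # empirical center of the learned distribution
latent_scale = z_mean.std(axis=0)     # optionally match the spread per dimension too

# sample around that center instead of around the origin
samples = np.random.normal(
    loc=latent_center, scale=latent_scale, size=(16, z_mean.shape[1]))
decoded = decoder.predict(samples)
```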


excited that NarraScope is free and online and open to everyone this year: narrascope.org/pages/schedule. so much to look forward to, including an hour-long talk about Kentucky Route Zero by Aaron Reed??

programming 

the eternal question: just cut and paste code I want to reuse from last year's notebook, or take the time to actually make it a standalone package?

or the third option, toot about it to further procrastinate

auto-enjambed lines from wikipedia pages related to mirrors, arranged in chains of semantic similarity (given lines n and n+1 in the corpus, print n and another line similar in meaning to n+1, then repeat with the last line printed)

(not sure exactly what I'm going to do with this yet, but so far it passes the "is it at least mildly interesting if you pick lines at random" test)
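(for the curious, a rough sketch of the chaining procedure described above, assuming the lines have already been embedded with some sentence-embedding model; `lines` and `embeddings` are placeholders)

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def chain(lines, embeddings, start=0, length=10):
    # print line n, then find another line whose meaning is close to
    # line n+1's, then repeat the process from the line just printed
    out = [lines[start]]
    current, used = start, {start}
    for _ in range(length - 1):
        if current + 1 >= len(embeddings):
            break
        target = embeddings[current + 1]   # the meaning of the *next* line
        candidates = [(cosine(target, embeddings[i]), i)
                      for i in range(len(lines) - 1)
                      if i not in used and i != current + 1]
        if not candidates:
            break
        _, best = max(candidates)
        out.append(lines[best])
        used.add(best)
        current = best
    return out
```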


randomly selected items from a corpus of auto-enjambed lines, drawn from the text of wikipedia pages linked from "Mirror"

welp this is going to lead to some truly magnificent feedback loops youtube.com/watch?v=fZSFNUT6iY (when code that was written with the model leaks into the corpus that they're training the model on and all code on github or whatever starts to become a deep dream-esque fractal of language models completing themselves)

another day, another VAE 

(I would expect those samples to look more like plausible made-up words, at least as plausible as a markov chain or something trained on the same dataset. but it doesn't look like there's been a model collapse, since the model still performs well otherwise? it's also possible I did the math bad somewhere? I guess the next step is to visualize the latent space and see if it does actually look like a normal distribution)
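(one way to do that visualization step, sketched under the assumption of the same placeholder `encoder`/`data` names: project the encoded means to 2D and compare them visually against draws from a standard normal)

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# do the encoded means actually look like N(0, I)?
z_mean, _, _ = encoder.predict(data)
z_2d = PCA(n_components=2).fit_transform(z_mean)
prior_2d = np.random.normal(size=(len(z_2d), 2))

plt.scatter(prior_2d[:, 0], prior_2d[:, 1], s=2, alpha=0.3, label="N(0, I) samples")
plt.scatter(z_2d[:, 0], z_2d[:, 1], s=2, alpha=0.3, label="encoded means")
plt.legend()
plt.show()
```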


another day, another VAE 

working on a keras/tf adaptation of this paper: arxiv.org/pdf/1911.05343.pdf on character data (words from cmu dict) and I ended up with very good reconstruction loss and bad (I think?) KL loss, like 0.08? the space seems to be smooth when I do interpolations:

moses
mosess
mosees
midsets
maddets
madderon
middleton
middleton
middletown

but sampling from a normal distribution is kinda garbage:

manina
kal
agruh
aar
urosh
'louic
cseq
gb
zani
ias
nsny
huinea
a's
om
ntioo
gante
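(roughly how those two lists get made, as a sketch: encode two words, walk a straight line between their latent vectors, and decode each step; the prior samples are the same decode applied to random vectors. `encoder`, `decoder`, `vectorize`, and `devectorize` are hypothetical stand-ins for the character-level model and helpers in the notebook.)

```python
import numpy as np

# interpolation: straight line between two encoded words
z_a, _, _ = encoder.predict(vectorize(["moses"]))
z_b, _, _ = encoder.predict(vectorize(["middletown"]))
for t in np.linspace(0, 1, 9):
    z = (1 - t) * z_a + t * z_b
    print(devectorize(decoder.predict(z)))

# sampling: decode random vectors drawn from the standard normal prior
z_random = np.random.normal(size=(16, z_a.shape[-1]))
for word in devectorize(decoder.predict(z_random)):
    print(word)
```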

today in machine learning 

in other circumstances (i.e. when I'm doing it on purpose), a character RNN decoding after the stop token is one of my favorite things, it's just like "oh uh you want me to keep going? okay, um, ingisticanestionally...?"


today in machine learning 

... then spent like six hours debugging my decoding function, which was always returning sequences of length 20... it was because I'd hardcoded "20" as the length of the time series in the vectorization function (which worked fine, of course, when all of the training data was padded to the same length)
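(the shape of that bug and its fix, reconstructed hypothetically: a hardcoded timestep count works fine while every training example is padded to the same length, then silently truncates or pads everything at decode time. The fix is to take the length from the data or a parameter.)

```python
import numpy as np

def vectorize(seqs, vocab, max_len=None):
    # fix: derive the number of timesteps from the input (or a parameter)
    # instead of hardcoding the training-time padding length (20)
    if max_len is None:
        max_len = max(len(s) for s in seqs)
    out = np.zeros((len(seqs), max_len, len(vocab)))
    for i, seq in enumerate(seqs):
        for t, ch in enumerate(seq[:max_len]):
            out[i, t, vocab[ch]] = 1.0
    return out
```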


today in machine learning 

instead of figuring out how to do fancy tricks with keras masking and tensorflow indexing, I just wrote a function to feed in batches where all of the data have the same number of timesteps. seems to work fine
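(a sketch of that workaround, with hypothetical names: bucket the sequences by length and yield one bucket's worth at a time, so every batch has a uniform number of timesteps and no masking is needed)

```python
import random
from collections import defaultdict

def batches_by_length(seqs, batch_size=64):
    # group sequences by length, then yield fixed-length batches
    buckets = defaultdict(list)
    for seq in seqs:
        buckets[len(seq)].append(seq)
    batches = []
    for same_length in buckets.values():
        random.shuffle(same_length)
        for i in range(0, len(same_length), batch_size):
            batches.append(same_length[i:i + batch_size])
    random.shuffle(batches)  # don't feed the model lengths in order
    for batch in batches:
        yield batch
```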

Please enjoy the new online exhibit Post Hoc, with work by

Agnieszka Kurant
Christian Bök
Daniel Temkin
Derek Beaulieu
Forsyth Harmon
Lauren Lee McCarthy
Lilla LoCurto & Bill Outcault
Olia Lialina
Manfred Mohr
Mark Klink
Renée Green
Sly Watts
Susan Bee

responses by

Amaranth Borsuk
Craig Dworkin
Daniel Temkin
Fox Harrell
Mary Flanagan
Paul Stephens
Simon Morris & Valérie Steunou

It's at https://nickm.com/post/2020/05/post-hoc-an-online-art-show/

hey everyone, my PyCon 2020 tutorial session is now online: youtube.com/watch?v=yJ6iN5M42s and contains, well, pretty much everything I know about phonetics, machine learning and meter/rhyme/sound symbolism in poetry. tutorial code and notebooks here: github.com/aparrish/nonsense-v
