allison learns about... gans 

apparently the answer to "why isn't my gan working" is usually "well why didn't you put more batch normalization in there, hotshot"
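(for the record, batch norm itself is a simple trick: normalize each feature across the batch, then rescale and shift with learned parameters. a toy numpy sketch of the math, not the actual dcgan layers:)

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # normalize each feature over the batch dimension (axis 0),
    # then rescale/shift; gamma and beta would be learned in practice
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# a batch of 64 activations with 128 features, deliberately off-center
x = np.random.randn(64, 128) * 3.0 + 5.0
y = batch_norm(x)
# after normalization: per-feature mean ~0, std ~1
```

(the point for gans being that it keeps activations in a sane range so the generator and discriminator don't drift apart)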

(currently trying to hack the dcgan model I've been using so it generates images conditioned on labels, with only sputtering success, woo)
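(the usual conditioning trick, which is roughly what I'm attempting: glue a one-hot label vector onto the latent noise before it goes into the generator, so each style gets its own corner of the input space. toy numpy sketch, all names made up:)

```python
import numpy as np

def conditioned_latent(z, label, n_classes):
    # hypothetical helper: append a one-hot style label
    # (italics, all caps, ...) to the noise vector z
    onehot = np.zeros(n_classes)
    onehot[label] = 1.0
    return np.concatenate([z, onehot])

z = np.random.randn(100)                          # latent noise
zc = conditioned_latent(z, label=2, n_classes=5)  # e.g. style #2
# zc has 100 noise dims plus 5 label dims
```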

what do you call a gan that doesn't work 

a gan't


conditional dcgan progress 

this is so tantalizingly close to what I want. I'm training the GAN on images of words, conditioned on labels for different text styles (italics, all caps, title case, etc.), and you can clearly see many of the different styles in this sample (trained on about 100k images). I managed to avoid mode collapse, but the GAN unfortunately fails to converge: after 200k images, the generator just makes white noise


conditional dcgan progress 

I sorta gave up on having the same model produce different fonts. it just didn't work, and the samples across classes weren't similar for the same latent variable (which was the effect I was going for in the first place). HOWEVER, I am super pleased with the samples from the model I'm training on Garamond italics...

conditional dcgan progress 

@aparrish I wonder, do people ever do like. Median filter as output layer for this kind of thing?

conditional dcgan progress 

@aparrish replace multiscale iterative training with a median filter of random width on generator output. there's your paper idea. now someone get amazon to try it, because I certainly won't

(Wait can you even make that diffable)
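(probably, at least almost everywhere: the output of a median filter is always one of its inputs, so it's piecewise differentiable, much like max pooling. naive numpy sketch of the filter itself, not wired into any model:)

```python
import numpy as np

def median_filter(img, k=3):
    # naive sliding-window median over a 2D array (odd window size k);
    # each output pixel is just the median of its k x k neighborhood
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

noisy = np.zeros((5, 5))
noisy[2, 2] = 1.0              # one isolated speck of noise
smoothed = median_filter(noisy)
# a 3x3 median wipes out the lone speck entirely
```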

conditional dcgan progress 

@aparrish garamond notches another victory
