Given a training dataset, a GAN can generate new images or outputs that have never been seen before.
StackGAN can take a text description of an image, such as a bird, and generate a photo of the described bird. iGAN converts sketches to images. Pix2Pix does paired image-to-image translation: a blueprint of a building turns into a photo of the building, and #edges2cats turns doodles of cats into realistic cats. GANs can also be trained in unsupervised ways: CartoonGAN is trained on faces and on cartoons, but does not need to be trained on face/cartoon pairs; it learns how to convert without being explicitly told. It can also turn photos of day scenes into photos of night scenes. CycleGAN, from Berkeley, is especially good at unsupervised image-to-image translation. The best example is a video of a horse turned into a video of a zebra, where even the surroundings changed from grassland to savannah. See links to the networks below. GANs can also generate simulated training sets: Apple's example of turning synthetic (unreal) eye images into realistic-looking eyes to train models that learn where a user is looking. In imitation learning (reinforcement learning from expert data), GANs can help imitate the actions an expert would take. Finally, GANs can generate adversarial examples: images that look normal to humans but can fool neural networks.
StackGAN - https://arxiv.org/abs/1612.03242
iGAN - https://github.com/junyanz/iGAN
Other generative models
Fully visible belief networks: the output is generated one element at a time, for example, one pixel at a time. Also known as autoregressive models, and known since the 1990s.
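A minimal sketch of this one-element-at-a-time idea, using a toy hand-written "model" (all names hypothetical; a real autoregressive model such as PixelRNN would use a neural network for the conditional distribution):

```python
import random

def sample_pixel(previous_pixels):
    # Toy conditional distribution: the next pixel tends to stay
    # close to the last one. Stands in for a learned model.
    last = previous_pixels[-1] if previous_pixels else 128
    return max(0, min(255, last + random.randint(-20, 20)))

def generate_image(num_pixels=16, seed=0):
    random.seed(seed)
    pixels = []
    for _ in range(num_pixels):   # one element at a time
        pixels.append(sample_pixel(pixels))
    return pixels

img = generate_image()
print(len(img))  # 16
```

The key property is the sequential loop: each pixel is sampled conditioned on all the pixels generated so far, which is exactly what makes these models slow compared to one-shot generation.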
The breakthrough is to generate in one shot: GANs generate an entire image in parallel, using a differentiable function in the form of a neural network.
"Generator Network takes random noise as input, runs that noise through a differentiable function to transform the noise, reshape it so it have recognizable structure. " - Ian Goodfellow
The output of a generator network is a realistic image. The choice of the noise input determines which image will come out of the network. "The goal is to have these (output) images to be a fair sample of real image data" - Ian Goodfellow
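The noise-to-image transform can be sketched with an untrained toy generator (weights and sizes are made up for illustration): an affine map followed by a tanh squashing, reshaped into image form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a tiny one-layer generator (untrained).
W = rng.normal(scale=0.1, size=(64, 100))  # maps 100-d noise to 64 values
b = np.zeros(64)

def generator(z):
    # A differentiable transform of the noise: affine map plus tanh,
    # reshaped into an 8x8 "image" with recognizable structure.
    h = np.tanh(W @ z + b)
    return h.reshape(8, 8)

z = rng.normal(size=100)   # the noise input determines which image comes out
image = generator(z)
print(image.shape)  # (8, 8)
```

Feeding a different z produces a different image; training (below) is what makes those images look realistic.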
The generator network has to be trained, but the training process is very different from that of a supervised model; the generator is not supervised. "We just show it a lot of images and ask it to make more images that come from the same probability distribution."
The second network: the discriminator, a normal neural network classifier, guides the generator network. The discriminator is shown real images half of the time, and fake images the other half of the time. It classifies whether the image is real or not.
The generator network's goal is to produce images so convincing that the discriminator assigns them 100% probability of being real.
Over time, the generator has to produce realistic outputs, almost real replicas. The generator network takes in noise z and generates a sample x. Wherever the generator places more of its z samples, the density of generated x becomes higher. The discriminator outputs high numbers (higher probability) wherever the real data density is higher than the generated data density; the generator then shifts its output to catch up.
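The whole adversarial loop can be demonstrated end to end on a 1-D toy problem (this is a sketch with hand-derived gradients, not any course-provided code): real data is drawn from N(3, 1), the generator is a simple affine map of noise, and the discriminator is logistic regression. Alternating updates push the generated density toward the real one.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data: N(3, 1). Generator G(z) = a*z + c must learn to match it.
a, c = 1.0, 0.0          # generator parameters
w, b = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + b)
lr, n = 0.05, 64

for step in range(2000):
    # --- discriminator step: half real, half fake ---
    real = rng.normal(3.0, 1.0, n)
    fake = a * rng.normal(size=n) + c
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    # gradients of binary cross-entropy (real labeled 1, fake labeled 0)
    gw = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    gb = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * gw
    b -= lr * gb
    # --- generator step: make D assign high probability to fakes ---
    z = rng.normal(size=n)
    fake = a * z + c
    d_fake = sigmoid(w * fake + b)
    gx = -(1 - d_fake) * w   # gradient of -log D(fake) w.r.t. fake
    a -= lr * np.mean(gx * z)
    c -= lr * np.mean(gx)

print(round(float(c), 1))  # generator offset should drift toward the real mean of 3.0
```

This is the dynamic described above in miniature: wherever real density exceeds generated density the discriminator's output is high, and the generator's gradient moves its samples in that direction until the discriminator can no longer tell the two apart.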
Friday, March 8, 2019
GAN - Udacity Deep Learning Nanodegree Part 5