I am currently attempting to code a GAN that generates images of hand-drawn digits (the MNIST dataset) and am having little success. The discriminator and generator fail to converge, and the generator's output is pure noise.
After trying to solve the problem for a while, I dropped the discriminator entirely and instead trained the generator directly, using a sampled real image as the target output (i.e., minimizing a pixel-wise loss between the generator's output and real samples). Within minutes, the generator had learned to output very realistic images.
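To make the idea concrete, here is a toy sketch of what I mean (not my actual code; a tiny linear "generator" and a single fixed target image, trained with plain MSE and no discriminator):

```python
import numpy as np

# Toy sketch: a linear "generator" W mapping a noise code z to a
# flattened 28x28 image, trained by gradient descent on MSE against
# one fixed target image -- no discriminator involved.
rng = np.random.default_rng(0)

target = rng.random(784)                    # stand-in for one flattened MNIST image
W = rng.standard_normal((784, 64)) * 0.01   # generator weights
z = rng.standard_normal(64)                 # a fixed noise code

lr = 0.1
losses = []
for step in range(200):
    out = W @ z                             # generator forward pass
    err = out - target
    losses.append(float(np.mean(err ** 2)))
    # gradient of mean squared error w.r.t. W
    W -= lr * (2.0 / 784) * np.outer(err, z)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

The loss collapses toward zero almost immediately, which matches what I saw: with a fixed pixel-wise target the network is just doing regression.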
Why is this a bad idea? I haven't seen this method used anywhere online, and it is not how GANs are trained, yet it seems much simpler and works (for me) much better. Is the problem that it overfits, or is there something else wrong with it?