StyleGAN - Official TensorFlow Implementation (NVlabs/stylegan on GitHub).
Jun 17, 2020 · This new project, StyleGAN2, presented at CVPR 2020, uses transfer learning to generate a seemingly infinite number of portraits in an infinite variety of painting styles. The work builds on the team's previously published StyleGAN project.
This repository is an updated version of stylegan2-ada-pytorch, with several new features:
- Alias-free generator architecture and training configurations (stylegan3-t, stylegan3-r).
- Tools for interactive visualization (visualizer.py), spectral analysis (avg_spectra.py), and video generation (gen_video.py).
- Equivariance metrics (eqt50k_int, eqt50k_frac, eqr50k).
Videos 1a and 1b: FFHQ-U Cinemagraph. The following videos show interpolations between hand-picked latent points in several datasets. Observe again how the textural detail appears fixed in the StyleGAN2 result, but transforms smoothly with the rest of the scene in the alias-free StyleGAN3. Video 2: MetFaces interpolations.
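The interpolation videos above walk between hand-picked latent points. The mechanics of that walk can be sketched independently of any pretrained model; below is a minimal illustration of linear and spherical latent interpolation. The function names and the 512-dimensional latent size are assumptions for this sketch, not code from the NVlabs repositories:

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between two latent vectors."""
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t):
    """Spherical interpolation, often preferred for Gaussian latents
    because it stays closer to the typical set of the prior."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return lerp(z0, z1, t)  # vectors are (nearly) parallel
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Two random latents (StyleGAN uses 512-dim Gaussian latents) and 8 in-between frames.
rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(512), rng.standard_normal(512)
frames = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 8)]
```

In a real pipeline, each interpolated latent would be fed through the generator to render one video frame.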
StyleGAN is a generative adversarial network used for image generation, mainly of faces. Nvidia introduced it in December 2018 and released the source code in February 2019.
Abstract: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them.
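One of the architectural changes proposed in the StyleGAN2 paper to address those artifacts is replacing instance normalization with weight modulation and demodulation. A rough numpy sketch of that idea follows; the function name and tensor shapes are illustrative, not the repository's actual implementation:

```python
import numpy as np

def modulate_demodulate(weights, styles, eps=1e-8):
    """Scale conv weights by per-input-channel styles, then renormalize
    each output channel so the expected activation scale stays constant.

    weights: (out_ch, in_ch, k, k) convolution weights
    styles:  (in_ch,) style scales for one sample
    """
    w = weights * styles[None, :, None, None]            # modulate
    sigma = np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + eps)  # per-output-channel norm
    return w / sigma[:, None, None, None]                # demodulate

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 4, 3, 3))   # toy 3x3 conv, 4 -> 8 channels
s = rng.standard_normal(4)              # toy style vector
w2 = modulate_demodulate(w, s)
```

After demodulation, every output channel of `w2` has (approximately) unit L2 norm, which is the statistical effect instance normalization previously provided, but baked into the weights rather than applied to activations.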
StyleGAN2 is an improvement over StyleGAN, which was introduced in the paper "A Style-Based Generator Architecture for Generative Adversarial Networks". StyleGAN, in turn, builds on Progressive GAN from the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation".
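The style-based design that distinguishes StyleGAN from Progressive GAN begins with a mapping network: an MLP that transforms the input latent z into an intermediate latent w, which is then fed (via learned affine transforms) as a style to every synthesis layer. A toy sketch of that mapping step, with reduced dimensionality and random untrained weights; this is an assumption-laden illustration, not the official code:

```python
import numpy as np

rng = np.random.default_rng(42)

def leaky_relu(x, slope=0.2):
    # StyleGAN uses leaky ReLU activations with slope 0.2
    return np.maximum(x, slope * x)

def mapping_network(z, layer_weights):
    """MLP mapping z (latent space Z) to w (intermediate latent space W)."""
    x = z / np.sqrt((z ** 2).mean() + 1e-8)  # normalize the input latent
    for W in layer_weights:
        x = leaky_relu(x @ W)
    return x

dim = 64  # reduced from the paper's 512 for this sketch
layer_weights = [rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(8)]
z = rng.standard_normal(dim)
w = mapping_network(z, layer_weights)
# In the full model, w would now modulate the styles of each synthesis layer.
```

The point of the intermediate space W is disentanglement: because w need not follow the fixed Gaussian prior of z, factors of variation can map more linearly onto it.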