Recognizing Geographical Locations using a GAN-Based Text-To-Image Approach
Abstract
The goal of text-to-image generation (T2I) models is to produce photo-realistic images that align with given text descriptions, and advances in machine learning have made such models useful for visualizing those descriptions. Generative Adversarial Networks (GANs) can generate images directly from textual input, and recent GAN architectures have achieved remarkable gains over earlier T2I models. However, they still have limitations. The main goal of this study is to address these limitations in order to improve text-to-image generation models for location-based services. We build an attentional generative network, AttnGAN, that synthesizes high-quality images through a multi-stage process. The fine-grained image-text matching loss needed to train AttnGAN's generator is computed with our multimodal similarity model. Our AttnGAN model achieves an inception score of 4.81 and an R-precision of 70.61 percent on the PatternNet dataset. Because PatternNet contains only images, we added a textual description to each image so that the dataset can be used for text-to-image training. Extensive experiments show that the attention mechanisms proposed in AttnGAN, which are critical for text-to-image generation in complex scenes, are effective.
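To make the attention procedure concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the word-level attention an AttnGAN-style generator relies on: each image sub-region attends over the word embeddings from the text encoder to form a word-context vector that conditions the next generation stage. The tensor shapes, the assumption that word embeddings are already projected to the image-feature dimension, and the function name are choices made for this example only.

    # Illustrative sketch of AttnGAN-style word-level attention (assumed shapes).
    import torch
    import torch.nn.functional as F

    def word_attention(region_feats: torch.Tensor,
                       word_embs: torch.Tensor) -> torch.Tensor:
        """region_feats: (batch, channels, regions) image sub-region features
           word_embs:    (batch, channels, words)   word embeddings, already
                                                     projected to `channels`
           returns:      (batch, channels, regions) word-context vectors"""
        # Similarity between every image sub-region and every word.
        attn = torch.bmm(region_feats.transpose(1, 2), word_embs)   # (b, regions, words)
        # Normalize over words so each region's attention weights sum to 1.
        attn = F.softmax(attn, dim=2)
        # Weighted sum of word embeddings per region -> word-context vectors.
        context = torch.bmm(word_embs, attn.transpose(1, 2))        # (b, channels, regions)
        return context

    if __name__ == "__main__":
        b, c, r, t = 2, 256, 64, 12      # batch, channels, 8x8 sub-regions, words
        regions = torch.randn(b, c, r)
        words = torch.randn(b, c, t)
        ctx = word_attention(regions, words)
        print(ctx.shape)                 # torch.Size([2, 256, 64])

In the full model, these word-context vectors are concatenated with the image features and passed to the next-stage generator, which is what lets later stages refine regions according to the most relevant words in the description.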