Nowadays it is nearly impossible to browse the internet without stumbling upon some kind of AI-generated content. Models advance at an incredible rate, and it is therefore becoming increasingly difficult for a human to discern whether what they are looking at was created by AI. In fact, an ongoing study shows that the share of AI-generated content on websites has been increasing steadily since the release of GPT-3 in June 2020, with a drastic spike from March 2024 onward [1].
Generative AI has been accessible to the general public for a few years, and its malicious uses are severe and too many to count. AI-generated content is not always flagged as such or, even worse, is deliberately passed off as genuine to deceive users and elicit strong reactions. There have also been instances of AI-generated images being unknowingly picked as winners in art contests [2]. It is therefore critical that the tools used to detect this kind of content do not lag behind, and that they get deployed on as many websites and social media platforms as possible to help combat these harmful uses. Here too, machine learning can help: models can be trained to discern whether the creator was human or not.
There are many different approaches and models that have been developed for just this task, and there is one that, ironically enough, serves both purposes: generating images and distinguishing real ones from AI-generated ones. This framework is the GAN (Generative Adversarial Network), which has two parts, a generator and a discriminator. During training, the generator creates images that aim to fool the discriminator, which in turn has to classify them correctly as real or synthetic. Feedback then flows between the two parts so that each improves at its specific task; for detecting synthetic images, it is the trained discriminator that is put to use. Giving an image to a classification model and getting a result like “synthetic image” or “real image” might already be of help, but more information is needed to evaluate whether the model is looking at the right things.
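As a concrete, heavily simplified illustration of this adversarial loop, the sketch below trains a linear generator against a logistic-regression discriminator on 2-D points standing in for images. Every name, shape, and hyperparameter here is an illustrative assumption, not any particular detector's implementation; real GANs use deep convolutional networks on pixel data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: 2-D points clustered around a fixed mean (a stand-in for real images).
def sample_real(n):
    return rng.normal(loc=[2.0, 2.0], scale=0.3, size=(n, 2))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

# Generator: linear map from 2-D noise to 2-D "images" (weights + bias).
G_W, G_b = rng.normal(size=(2, 2)) * 0.1, np.zeros(2)
# Discriminator: logistic regression returning an estimate of P(real).
D_w, D_b = rng.normal(size=2) * 0.1, 0.0

lr = 0.05
for step in range(2000):
    real = sample_real(32)
    z = rng.normal(size=(32, 2))
    fake = z @ G_W + G_b

    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ D_w + D_b)
        grad = p - label                     # d(logistic loss)/d(logit)
        D_w -= lr * (x.T @ grad) / len(x)
        D_b -= lr * grad.mean()

    # --- Generator update: push D(fake) -> 1, i.e. fool the discriminator ---
    z = rng.normal(size=(32, 2))
    fake = z @ G_W + G_b
    p = sigmoid(fake @ D_w + D_b)
    grad_logit = p - 1.0                     # the generator wants the label 1
    grad_fake = np.outer(grad_logit, D_w)    # chain rule through the discriminator
    G_W -= lr * (z.T @ grad_fake) / len(z)
    G_b -= lr * grad_fake.mean(axis=0)

# After training, generated samples should have drifted toward the real cluster,
# while the discriminator has learned which region of the plane looks "real".
```

Note how the discriminator's gradient signal is reused twice: once to sharpen its own boundary, and once (through `grad_fake`) to steer the generator, which is exactly the feedback sharing described above.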
Machine learning algorithms are in fact often seen as “black boxes”: data goes in and an output comes out, without the full extent of the logic that led the model to a given result ever being visible. In this case, though, it is possible to get some insight thanks to a technique called CAM (Class Activation Mapping), which highlights the parts of the image that the model found most indicative of its verdict [3]. This kind of information is invaluable, and it has shown that on many occasions the details revealing whether an image is AI-generated would be extremely hard for the human eye to notice, especially an eye untrained and unaware of what to look out for. Some examples are surfaces showing extraneous patterns, oddly rendered details, and incorrect placement of lights and shadows [4].
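In its original formulation, a class activation map is just a weighted sum of the last convolutional layer's feature maps, using the classifier weights of the class of interest. The toy sketch below shows that arithmetic on small NumPy arrays; the shapes and weight values are illustrative assumptions rather than a real detector's parameters.

```python
import numpy as np

# Toy stand-ins: 4 feature maps of size 7x7 from the last conv layer, and the
# fully connected weights leading to the "synthetic image" class score.
rng = np.random.default_rng(0)
feature_maps = rng.random((4, 7, 7))             # (channels, height, width)
class_weights = np.array([0.8, -0.2, 0.5, 0.1])  # one weight per channel

def class_activation_map(features, weights):
    """Weighted sum of the conv feature maps, rectified and scaled to [0, 1]."""
    cam = np.tensordot(weights, features, axes=1)  # -> (height, width)
    cam = np.maximum(cam, 0)                       # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                      # normalize for display
    return cam

heatmap = class_activation_map(feature_maps, class_weights)
# The brightest cells mark the image regions the classifier relied on most;
# in practice the 7x7 map is upsampled and overlaid on the input image.
```

This is why CAM is cheap to compute for this kind of detector: no extra training is needed, only a re-reading of weights the model already has.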
The game of AI detection chasing generative AI’s progress will continue, and it is not wise to rely solely on AI solutions when the risk of false positives and false negatives is always present. Awareness of this problem needs to be raised, and rules to regulate and flag the use of AI need to become widespread. The best results might be achieved by a trained human eye assisted by AI tools, so that the best of both worlds can be combined.
References
- Amount of AI Content in Google Search Results – Ongoing Study. (n.d.). Originality.ai. Retrieved August 19, 2024, from https://originality.ai/ai-content-in-google-search-results
- Kuta, S. (2023, September 8). Art Made With A.I. Won a State Fair Last Year. Now, the Rules Are Changing. Smithsonian Magazine. Retrieved August 19, 2024, from https://www.smithsonianmag.com/smart-news/this-state-fair-changed-its-rules-after-a-piece-made-with-ai-won-last-year-180982867/
- Pham, A. (2023, September 25). Overview of Class Activation Maps for Visualization Explainability. arXiv. Retrieved August 24, 2024, from https://arxiv.org/abs/2309.14304
- Kamali, N., Nakamura, K., Chatzimparmpas, A., Hullman, J., & Groh, M. (2024, June 12). How to Distinguish AI-Generated Images from Authentic Photographs. arXiv. Retrieved August 24, 2024, from https://arxiv.org/abs/2406.08651