We may all have a certain awareness of how work produced with AI is perceived, both by us and by society, or at least by the people around us. These feelings can be particularly strong when it comes to tasks typically considered human-exclusive, like art. But is this a conscious choice? Do we actually feel different when we look at AI-generated artwork? Or are we simply afraid of its possibilities?
As we saw in a previous article of this series, on Meijer’s experiment setting up an online art gallery, labels can significantly influence the value we assign to artwork. Zhou and Kawabata (2023) also point to studies showing that the setting in which artwork is viewed (whether in a museum, for example) can influence both its evaluation and how it is remembered.
In their 2023 paper, “Eyes can tell: Assessment of implicit attitudes toward AI art”, these two researchers set out to discover, through objective measurements rather than surveys (which are more prone to subjectivity), whether people show any subconscious response when viewing AI- vs. human-made art, that is, any implicit bias.

Research Design
Zhou and Kawabata’s research tracked the eye movements of 34 Japanese participants, 22 of whom were women. All of the participants were undergraduates from universities in the Greater Tokyo Area with normal or corrected-to-normal vision, and they were unaware that some of the paintings shown were AI-generated. Participants were shown a total of forty paintings: half were generated with Disco Diffusion AI, while the other half were selected from The Vienna Art Picture System dataset (Fekete et al., 2022). All the pieces were representational (i.e. landscapes), as previous studies (Chamberlain et al., 2018; Gangadharbatla, 2022) suggest abstract work is more likely to be categorized as AI-made.
Before the viewing, each artwork was analysed to calculate its hue, saturation and brightness, as well as its entropy (the level of disorder of its pixels), in order to confirm that there was no significant difference between the human- and AI-made paintings (a minimal sketch of this kind of image analysis follows the task list below). Signatures were then removed from the human-made pieces, and participants were given three tasks while their eye movements were being tracked:
- A free-viewing task, where they were shown all forty paintings for twenty seconds each, in a random order, with a one-second blank between paintings.
- A subjective rating task, where they evaluated each painting for beauty, liking, valence, arousal, familiarity and concreteness, on a 0–100 scale.
- A categorization task where they had to decide whether each painting was AI- or human-made.
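As a rough illustration of the image-analysis step, the snippet below computes the mean hue, saturation and brightness of a painting, along with the Shannon entropy of its grayscale histogram. This is only a sketch of one plausible approach (using Pillow and NumPy), not the authors’ actual pipeline, and the file name is a placeholder:

```python
# A rough sketch of per-painting image statistics; one plausible
# approach, not the authors' actual pipeline.
import numpy as np
from PIL import Image

def image_stats(path):
    """Mean hue, saturation, brightness (0-1) and grayscale Shannon entropy."""
    img = Image.open(path).convert("RGB")

    # Mean of each HSV channel, rescaled from 0-255 to 0-1.
    hsv = np.asarray(img.convert("HSV"), dtype=np.float64)
    hue, saturation, brightness = hsv.reshape(-1, 3).mean(axis=0) / 255.0

    # Shannon entropy of the grayscale intensity histogram, one common
    # way to quantify the "level of disorder" of the pixels.
    gray = np.asarray(img.convert("L"))
    counts = np.bincount(gray.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())

    return {"hue": float(hue), "saturation": float(saturation),
            "brightness": float(brightness), "entropy": entropy}

# "painting_01.jpg" is a placeholder file name.
print(image_stats("painting_01.jpg"))
```

Running something like this over both sets of paintings and comparing the resulting statistics (e.g. with a t-test per feature) would be one straightforward way to check that the two sets do not differ at this low level.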
Results
First, they looked at eye movements during the free viewing. The differences in fixation count (the number of times a person’s gaze rests on a particular spot) and mean fixation duration were not significant, nor was the effect of actual authorship. However, they did find that the total fixation duration (the total dwell time on each painting) was 331 milliseconds longer for the paintings that were later chosen as human-made during the categorization task.
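To make those three measures concrete, here is a minimal sketch (with invented numbers, not data from the study) that computes fixation count, mean fixation duration and total dwell time from a list of already-detected fixations:

```python
# Illustrative only: the three gaze measures, computed from a list of
# already-detected fixations on one painting, given as (start_ms, end_ms).
fixations = [(0, 220), (260, 540), (590, 800)]

durations = [end - start for start, end in fixations]

fixation_count = len(durations)                           # number of fixations
mean_fixation_duration = sum(durations) / len(durations)  # average length (ms)
total_dwell_time = sum(durations)                         # total fixation duration (ms)

print(fixation_count, mean_fixation_duration, total_dwell_time)
# -> 3 236.66... 710
```

In practice, the fixations themselves would first be extracted from raw gaze samples by the eye tracker’s software, typically with a dispersion- or velocity-based detection algorithm.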
For the subjective rating task, they checked beauty, liking, valence (the emotional response, whether positive or negative), arousal, familiarity, and concreteness. As it turned out, neither categorization nor actual authorship had a significant effect on the ratings: AI- and human-made paintings did not differ in subjective evaluation.
Lastly, they moved on to the categorization task. Here, they measured accuracy when deciding whether each piece was AI- or human-made: human-made paintings were correctly identified 68% of the time, while AI-made paintings were correctly identified only 43% of the time, below the 50% chance level, meaning AI paintings were misclassified as human-made more often than not. This helped identify the implicit bias mentioned before: art the participants selected as human-made had been looked at for longer. However, this bias did not appear explicitly: in the subjective rating task, paintings did not get higher ratings for being considered human-made.
It may be important to note that although participants could not reliably identify authorship, human-made paintings were more likely to be categorized as such, whereas AI paintings were more likely to be misclassified. Finally, the authors suggest these results point to both AI- and human-made art having similar perceived aesthetic value, “at least for people who were naive to art criticism”.
In summary, although the sample size is rather small, Zhou and Kawabata’s study suggests that participants implicitly favoured (looked longer at) the paintings they assumed to be human-made, even though their explicit ratings showed no preference, and that they were more accurate at classifying human-made paintings as such.
This article belongs to the series ‘How do we perceive AI art?’.
References
Chamberlain, R., Mullin, C., Scheerlinck, B., & Wagemans, J. (2018). Putting the art in artificial: Aesthetic responses to computer-generated art. Psychology of Aesthetics, Creativity, and the Arts, 12(2), 177–192. https://doi.org/10.1037/aca0000136
Fekete, A., Pelowski, M., Specker, E., Brieber, D., Rosenberg, R., & Leder, H. (2022). The Vienna Art Picture System (VAPS): A data set of 999 paintings and subjective ratings for art and aesthetics research. Psychology of Aesthetics, Creativity, and the Arts, 17(5), 660–671. https://doi.org/10.1037/aca0000460
Gangadharbatla, H. (2022). The role of AI attribution knowledge in the evaluation of artwork. Empirical Studies of the Arts, 40(2), 125–142. https://doi.org/10.1177/0276237421994697
Zhou, Y., & Kawabata, H. (2023). Eyes can tell: Assessment of implicit attitudes toward AI art. i-Perception, 14(5), 20416695231209846. https://doi.org/10.1177/20416695231209846