
Perhaps the ever-increasing number of digital images, machine learning algorithms, and related technologies will continue to enhance life on Earth. But can most people distinguish between real and fake images? In experimental investigations, Bozkir et al. explore the consequences of human limitations in facial recognition and the social issues that follow.
The authors begin with a concise review of the literature. They then investigate the incidental impacts of dubious videos, images, and texts created by high school adolescents and adults worldwide. Perhaps, as the authors suggest, “eye tracking can support self-reported measures in investigating objective human visual behaviors when people encounter such content.” The authors shed new light on this timely issue.
But can humans differentiate computer-generated images from actual face images? In experiments, the authors investigate three hypotheses: (1) the extent to which humans can distinguish fake from real face images, (2) the impact of image fixation rates on face recognition, and (3) the impact of facial features such as hair and eyes on image recognition. Twenty-two computer-literate subjects (14 men and eight women), all with adequate vision and English competency, were randomly stratified into two groups by gender and computer skills.
The authors performed a two-phase experiment. First, the participants viewed a subset of real face images and assorted computer-generated face images. All face images were shown head-on, with identical positioning, sizes, and properties, to allow a valid comparison of eye movements across images. Next, the subjects who successfully completed the visualization activities rated each image by assessing the statement “I think this image is computer generated” on a 7-point Likert scale ranging from “completely disagree” to “completely agree.”
In the investigation, two small groups of subjects viewed different real and computer-generated face images. Accordingly, the authors appropriately applied (a) the Mann-Whitney U test for independent samples, (b) the Wilcoxon signed-rank test for paired samples, and (c) the Holm-Bonferroni correction for multiple comparisons to analyze the three research hypotheses (a minimal sketch of this analysis appears after the quoted results below). The experimental results are revealing:
While people are relatively better at identifying the truthfulness of real faces and faces generated by earlier machine learning algorithms with different gazing behaviors in viewing and rating phases, they perform less accurately when deciding the truthfulness of synthetic face images that are generated by newer algorithms.
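For readers who want to see how such an analysis is typically assembled, the following is a minimal sketch in Python; it is not the authors’ code. It assumes the 7-point Likert ratings are stored as NumPy arrays, and the variable names and simulated data are hypothetical. The Holm step-down procedure is used because it controls the family-wise error rate while being uniformly more powerful than a plain Bonferroni correction.

    import numpy as np
    from scipy.stats import mannwhitneyu, wilcoxon
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)

    # Hypothetical 7-point Likert ratings of the statement
    # "I think this image is computer generated"
    # (1 = completely disagree, 7 = completely agree).
    group_a_real = rng.integers(1, 8, size=11)  # group A, real face images
    group_b_real = rng.integers(1, 8, size=11)  # group B, a different set of real face images
    group_a_fake = rng.integers(1, 8, size=11)  # group A, computer-generated face images

    # (a) Mann-Whitney U test: independent samples,
    #     since the two groups viewed different images.
    res_between = mannwhitneyu(group_a_real, group_b_real, alternative="two-sided")

    # (b) Wilcoxon signed-rank test: paired samples,
    #     since the same subjects rated both real and fake images.
    res_within = wilcoxon(group_a_real, group_a_fake)

    # (c) Holm-Bonferroni correction across the family of hypothesis tests.
    reject, p_corrected, _, _ = multipletests(
        [res_between.pvalue, res_within.pvalue], alpha=0.05, method="holm"
    )
    print(reject, p_corrected)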
The authors also acknowledge the experiment’s weaknesses: its small sample size, as well as the unknown human reactions to images generated by emerging machine learning (ML) and artificial intelligence (AI) algorithms.
The use of deceptive synthetic images to spread false or misleading information (for example, fake news) continues to be a threat to society. Policymakers and lawmakers should read this insightful paper.