Asst. Prof. Burak Çeber noted that artificial intelligence learns patterns from the data it is trained on and generates new content, but in some cases the system can produce information that does not align with the real world. He said: “Among the causes of hallucination are insufficient training data, poorly constructed context, overgeneralization, and the system making guesses instead of saying ‘I do not know.’”
Stating that good media literacy is essential to detect fake content, Asst. Prof. Burak Çeber said: “Developing the habit of verifying content from different sources is of great importance. In addition, awareness of artificial intelligence applications is also necessary.”
Asst. Prof. Burak Çeber of the Department of Advertising at Üsküdar University’s Faculty of Communication evaluated the issue of artificial intelligence and information pollution.
AI tools sometimes produce misleading content
Stating that generative artificial intelligence applications have become an indispensable part of daily life in recent years, Asst. Prof. Burak Çeber said: “AI tools, used for different purposes at home, at school, at work, or while shopping, now touch everyone’s lives. While these tools can produce accurate results, they sometimes generate misleading content as well. This situation can be explained by the concept of ‘AI hallucination.’”
The system makes guesses instead of saying ‘I do not know’
Explaining that AI systems are trained with large datasets collected from various sources ranging from social media posts to websites, forums, and scientific articles, Dr. Çeber said: “AI learns the patterns in these datasets and generates new content. However, in some cases, the system may produce information that is not present in the training data or does not match the real world. These errors manifest as presenting something nonexistent as if it were real, making logical mistakes, or providing false information. Causes of hallucinations include insufficient training data, poorly constructed context, overgeneralization, and the system making guesses instead of saying ‘I do not know.’ In short, AI hallucinations resemble the human experience of imagining or misperceiving things, in the sense of producing information that does not exist in reality.”
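The trade-off Dr. Çeber describes between guessing and abstaining can be made concrete with a small sketch. The Python example below returns its best answer only when the model’s confidence clears a threshold and otherwise says “I do not know”; the labels, logits, and threshold are illustrative assumptions, not taken from any real system:

```python
# A minimal sketch of the "guess vs. abstain" trade-off: instead of always
# returning its best guess, a system can decline to answer when confidence
# is low. The logits below are made up for illustration.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def answer_or_abstain(logits: np.ndarray, labels: list[str],
                      threshold: float = 0.7) -> str:
    """Return the top label only if the model is confident enough."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "I do not know"   # abstain instead of guessing
    return labels[best]

# Confident case: one answer clearly dominates.
print(answer_or_abstain(np.array([4.0, 0.5, 0.2]), ["Ankara", "Izmir", "Bursa"]))
# Uncertain case: probabilities are spread out, so the system abstains.
print(answer_or_abstain(np.array([1.1, 1.0, 0.9]), ["Ankara", "Izmir", "Bursa"]))
```

Production systems use far more sophisticated calibration, but the principle is the same: a system tuned to always answer will sometimes answer with a confident fabrication.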
Artificial intelligence also inherits bias
Pointing out that human biases are also reflected in AI, Dr. Çeber continued: “This situation is called ‘algorithmic bias.’ Algorithmic bias can arise in two ways. First, the conscious or unconscious choices of the algorithm’s designer can be reflected in the model’s behavior. Second, social biases present in the datasets used to train AI can be reflected in the system’s outputs. That is, if there is error or bias in the dataset, the same biases may appear in the content generated by AI. For example, AI can discriminate against certain demographic groups during recruitment processes. In 2018, Amazon was criticized because its résumé-screening tool downgraded applications from female candidates. A similar example occurred with Apple Card, where the algorithm granted men higher credit limits than women, even when couples had the same income level.”
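One standard first check for the kind of bias Dr. Çeber describes is to compare a tool’s selection rates across demographic groups. The sketch below uses made-up decisions (not data from the Amazon or Apple Card cases) to compute the disparate-impact ratio that audits often start from:

```python
# A minimal sketch of a bias audit: compare selection rates across groups
# in a screening tool's decisions. The records are fabricated illustrations.
decisions = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male",   True), ("male",   True),  ("male",   True),  ("male",   False),
]

def selection_rate(group: str) -> float:
    outcomes = [selected for g, selected in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_f = selection_rate("female")   # 0.25
rate_m = selection_rate("male")     # 0.75

# Disparate-impact ratio; values well below 1.0 (the 0.8 "four-fifths rule"
# threshold is often cited) flag the tool for closer auditing.
ratio = rate_f / rate_m
print(f"female: {rate_f:.2f}, male: {rate_m:.2f}, ratio: {ratio:.2f}")
```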
Fake AI-generated content spreads rapidly
Dr. Çeber also stated that AI significantly amplifies the scale of disinformation compared to human-generated content and greatly accelerates its spread: “Content that would take a person days to prepare can be generated by AI in minutes or even seconds. This leads to misinformation going viral rapidly. For example, with deepfake technology, the face and voice of a leader or celebrity can be digitally imitated and quickly spread in video form.”
AI tools even shape advertising strategies
Emphasizing that as AI becomes integrated into business processes, advertising professionals have started to access and make sense of information they had never encountered before, Dr. Çeber said: “Advertising decisions are no longer based solely on the opinions of a group of experts but also on consumers’ conversations, digital behaviors, and the clues they leave behind. In this way, clearer ideas can be obtained about the type, content, and features of the advertisement to be prepared. Recently, AI has begun to produce concrete results in advertising. Technologies have been developed that can write ad copy, generate visuals and videos, and even determine strategies. While generative AI applications provide successful outcomes for advertisers, they also work to steer consumers toward consumption at all times and under all circumstances. This brings with it an insatiable approach to consumption. It can be compared to drinking seawater: a person who drinks seawater to quench their thirst only becomes thirstier and drinks more. Likewise, depending on how it is used, AI can become a tool that stokes or rekindles the thirst for consumption.”
Pointing out that AI applications based on data analytics can now collect not only demographic information or consumption habits but also data on people’s psychological profiles, Dr. Çeber added: “As we saw in the Cambridge Analytica scandal, such information can be used to influence people’s voting behavior.”
Media literacy is essential
Highlighting that good media literacy is necessary to detect fake content, Dr. Çeber said: “Developing the habit of verifying content from different sources is of great importance. In addition, awareness of AI applications is essential. When the capabilities and limitations of AI are known, it becomes much easier to distinguish between what is fake and what is real. AI is quite successful at surface-level consistency. When one looks past the surface and examines details and context in depth, fake content can be detected. In addition, methods such as digital watermarking, anomaly detection, and metadata analysis can also reveal AI-generated fake content.”
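Of the three methods Dr. Çeber lists, metadata analysis is the easiest to try firsthand. The Python sketch below uses the Pillow imaging library to scan an image’s embedded metadata for traces that some generators leave behind; the marker list and file name are illustrative assumptions, and a clean result proves nothing, since metadata is trivial to strip:

```python
# A minimal sketch of metadata analysis: inspect an image's embedded
# metadata for traces left by generators. Some tools write their name into
# the EXIF "Software" field or, for PNG files, into text chunks such as
# "parameters". The hint list below is an illustrative assumption.
from PIL import Image

GENERATOR_HINTS = ("stable diffusion", "midjourney", "dall", "firefly")

def scan_image_metadata(path: str) -> list[str]:
    findings = []
    img = Image.open(path)
    # PNG text chunks and similar key/value metadata land in img.info.
    for key, value in img.info.items():
        text = f"{key}={value}".lower()
        if any(hint in text for hint in GENERATOR_HINTS):
            findings.append(f"info chunk: {key}")
    # EXIF tag 305 is "Software"; generators sometimes identify themselves here.
    software = img.getexif().get(305, "")
    if any(hint in str(software).lower() for hint in GENERATOR_HINTS):
        findings.append(f"EXIF Software: {software}")
    return findings

print(scan_image_metadata("suspect.png"))  # hypothetical file path
```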
He further noted that when a new technology emerges, its conveniences and exciting aspects are usually highlighted first: “The case is no different with AI; the focus is often on its positive effects, such as efficiency, contributions to creativity, and faster business processes. However, as a natural result of this approach, fake content production, manipulation, and ethical risks may remain in the background. While it is understandable not to focus on negatives in the early stages of a technology, over time it becomes inevitable for industries, professional organizations, academia, and policymakers to address these risks, because the production of fake content is not merely a technical issue but one directly related to ethics, law, and social values.”
A balance based on human–AI collaboration: hybrid intelligence
Pointing out that the blurring of the line between real and fake predates the widespread adoption of AI, Dr. Çeber said: “However, with the development of AI technologies, we are witnessing a rupture in our relationship with reality and truth in different ways. Today, we encounter images, sounds, and videos that we find difficult to distinguish from reality. We often find ourselves asking: ‘Is this content real?’ Here, it becomes important that not all of a piece of content, but only part of it, may have been generated by AI. When a human produces content with AI, are they incorporating their own experiences, imagination, creativity, and thinking skills into the process? When solving a problem, are they using their knowledge, application skills, communication power, and human relationships? If the answer to these questions is ‘yes,’ then AI is not fully taking over but rather supporting the process. At this point, a balance based on human–AI collaboration (hybrid intelligence) can be established.”
Open-source AI is like a double-edged sword
“Open-source AI is like a double-edged sword. On the one hand, when it falls into the hands of malicious individuals, it can be used to produce fake content or deceive people,” said Dr. Çeber, and continued: “On the other hand, it provides a great advantage in terms of transparency and social oversight, because public accessibility enables academics, researchers, and civil society to monitor the process. At this stage, the issue should not be confined to the question ‘Is the spread of open-source AI models an opportunity or a risk?’ The real priority is to develop principles, safety measures, and ethical standards that will help us use this technology responsibly.”
All types of digital content should be subject to verification systems
Emphasizing that all types of digital content could be subjected to a kind of “digital identity” or verification system, Dr. Çeber said: “Already, major technology companies and some research institutions are working on digital watermarks and identification standards that will indicate the source of images, texts, or videos. In other words, this system is technically feasible and is being developed. However, this alone is not sufficient; ethical and legal regulations are also needed for a permanent solution.”
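The provenance idea Dr. Çeber points to can be illustrated with a deliberately simplified sketch: the creator publishes a signed fingerprint of the content, and anyone can later check that the content still matches it. Real standards such as C2PA are far more elaborate; the key and byte strings below are illustrative assumptions:

```python
# A deliberately simplified sketch of a content "digital identity": the
# creator signs a fingerprint of the content, and a consumer later verifies
# that the content is unchanged. Not a real provenance standard.
import hashlib
import hmac

SIGNING_KEY = b"demo-key-not-for-production"  # illustrative assumption

def sign_content(data: bytes) -> str:
    """Creator side: produce a keyed fingerprint of the content."""
    return hmac.new(SIGNING_KEY, hashlib.sha256(data).digest(),
                    hashlib.sha256).hexdigest()

def verify_content(data: bytes, signature: str) -> bool:
    """Consumer side: check the content against the published fingerprint."""
    return hmac.compare_digest(sign_content(data), signature)

original = b"video frame bytes ..."
tag = sign_content(original)
print(verify_content(original, tag))                 # True: untouched
print(verify_content(b"tampered frame bytes", tag))  # False: altered
```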
Üsküdar News Agency (ÜNA)