Noting that artificial intelligence learns patterns from collected data and generates content, but that in some cases the system can produce information that does not align with the real world, Assist. Prof. Dr. Burak Çeber said, “Among the reasons for hallucination are insufficient training data, incorrectly established context, overgeneralization, and the system making predictions instead of saying 'I don't know.'”
Stating that good media literacy is primarily required to detect fake content, Assist. Prof. Dr. Burak Çeber said, “Gaining the habit of verifying content from different sources is of great importance. Additionally, awareness of artificial intelligence applications is essential.”
Assist. Prof. Dr. Burak Çeber from the Department of Advertising, Faculty of Communication, Uskudar University, evaluated the issue of artificial intelligence and information pollution.
Artificial intelligence tools also occasionally produce misleading content
Stating that generative artificial intelligence applications have become an indispensable part of daily life in recent years, Assist. Prof. Dr. Burak Çeber said, “Artificial intelligence tools used for various purposes at home, school, work, or shopping now touch everyone's lives. AI tools frequently used in daily life, while often providing accurate results, can also occasionally produce misleading content. This situation can be explained with the concept of 'artificial intelligence hallucination'.”
The system makes predictions instead of saying 'I don't know'
Explaining that artificial intelligence systems are trained on large datasets collected from many sources, ranging from social media posts and websites to forums and scientific articles, Assist. Prof. Dr. Burak Çeber said, “Artificial intelligence learns patterns from this data and generates content. However, in some cases, the system can produce information that is not present in the training data or does not align with the real world. These errors manifest as presenting non-existent things as real, making logical mistakes, or providing incorrect information. The reasons for hallucination include insufficient training data, incorrectly established context, overgeneralization, and the system making predictions instead of saying 'I don't know'. In short, AI hallucinations can be likened to situations where humans dream or misperceive, in that they produce information that is not actually real.”
Artificial intelligence is also biased
Noting that human biases are also reflected in artificial intelligence, Dr. Çeber continued:
“This situation is called 'algorithmic bias'. Algorithmic bias can arise in two ways. First, the conscious or unconscious preferences of the person designing the algorithm can be reflected in the model's behavior. Second, societal biases present in the datasets used to train artificial intelligence can be reflected in the system's outputs. This means that if there is an error or bias in the dataset, the same biases can appear in the content produced by AI. For example, AI can discriminate against certain demographic groups in recruitment processes. In 2018, Amazon was criticized when its resume screening tool was found to de-prioritize applications from female candidates. A similar example occurred with Apple Card: the algorithm reportedly gave men higher credit limits than women in couples with the same income level.”
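The second source of bias described above, skew inherited from training data, can be illustrated with a minimal sketch. The data below is invented for demonstration; the point is only that a model which learns label frequencies from a skewed sample will reproduce that skew in its predictions.

```python
from collections import Counter, defaultdict

# Toy, invented "hiring" records: group_a is mostly hired, group_b mostly rejected.
# Any model that learns label frequencies from this sample inherits the skew.
training = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

counts = defaultdict(Counter)
for group, label in training:
    counts[group][label] += 1

def predict(group: str) -> str:
    # Predict the label most frequently seen for this group in training.
    return counts[group].most_common(1)[0][0]

print(predict("group_a"))  # -> hired
print(predict("group_b"))  # -> rejected
```

The "model" here is deliberately trivial, but the mechanism is the same one the interview describes: the bias is not programmed in, it is learned from the data.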
AI's fake content is spreading rapidly
Assist. Prof. Dr. Burak Çeber also stated that artificial intelligence significantly increases the scale of disinformation and drastically increases the speed at which it spreads compared to human-produced content, saying, “Artificial intelligence can produce content in minutes, even seconds, that a human might take days to prepare. This leads to the rapid virality of false information. For example, with deepfake technology, the face and voice of a leader or celebrity can be imitated digitally and spread rapidly as a video.”
Some AI tools even determine advertising strategies
Assist. Prof. Dr. Burak Çeber also stated that as artificial intelligence enters business processes, advertising professionals are starting to access and interpret types of information they had never encountered before. He said, “Now, advertising-related decisions can be made not just based on the opinions of a group of experts, but according to consumers' conversations, digital behaviors, and the clues they leave behind. This allows for clearer ideas about the type, content, and features of the advertisement to be prepared. Recently, artificial intelligence has also started to yield concrete results in advertising production. AI technologies capable of writing ad copy, producing visuals and videos, and even determining strategies have been developed. Although generative AI applications offer successful results to advertisers, they operate in a way that directs the consumer towards consumption at all times and under all conditions. This brings with it an insatiable understanding of consumption. We can liken this understanding to drinking seawater: a person who drinks seawater to quench their thirst becomes thirstier, and drinks more as they get thirstier. Depending on how it is used, artificial intelligence can likewise serve as a tool that stirs up or reawakens that thirst for consumption.”
Assist. Prof. Dr. Burak Çeber, stating that data analytics-based artificial intelligence applications can now collect data not only on demographic information or consumption habits but also on people's psychological profiles, said, “As we saw in the Cambridge Analytica scandal, such information can be used to influence people's voting behavior.”
Media literacy is essential
Emphasizing that good media literacy is the first requirement for detecting fake content, Assist. Prof. Dr. Burak Çeber said, “Gaining the habit of verifying content from different sources is of great importance. Additionally, awareness of artificial intelligence applications is essential. When the capabilities and limitations of artificial intelligence are known, it becomes much easier to understand what is fake and what is real. Artificial intelligence is quite successful at superficial consistency. When one moves beyond the surface and looks at details and context, fake content can be detected. In addition, fake content produced by artificial intelligence can be uncovered through methods such as digital watermarking, anomaly detection, and metadata analysis.”

Assist. Prof. Dr. Burak Çeber pointed out that when a new technology emerges, its conveniences and exciting aspects are usually highlighted first, stating, “The situation is no different with artificial intelligence: positive effects such as efficiency, contribution to creativity, and acceleration of business processes are often emphasized. However, as a natural consequence of this approach, fake content production, manipulation, and ethical risks can be sidelined. While it is understandable not to focus on the negatives in the early stages of a technology, over time it becomes inevitable for industries, professional organizations, academia, and decision-makers to address these risks, because fake content production is more than a technical issue; it is directly related to ethics, law, and social values.”
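One of the detection methods named above, metadata analysis, can be sketched in a few lines. Many image tools, including some generative ones, record a software or parameters tag in a PNG file's tEXt metadata chunks. The sketch below builds a tiny synthetic PNG in memory (the "Example image generator" tag is invented for demonstration) and then parses its tEXt chunks; metadata like this is easy to strip, so its absence proves nothing, but its presence can be a useful clue.

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Parse the tEXt chunks of a PNG byte stream into a key/value dict."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword and text separated by a NUL byte
            key, _, value = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + body + 4 CRC
    return chunks

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, body, CRC-32 over type+body."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a minimal synthetic 1x1 PNG carrying a generator tag (demo only).
ihdr = make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = make_chunk(b"tEXt", b"Software\x00Example image generator")
idat = make_chunk(b"IDAT", zlib.compress(b"\x00\x00"))
iend = make_chunk(b"IEND", b"")
png = b"\x89PNG\r\n\x1a\n" + ihdr + text + idat + iend

meta = png_text_chunks(png)
print(meta.get("Software"))  # -> Example image generator
```

Real-world checks would also look at EXIF/XMP data and emerging provenance standards; this only shows the basic mechanism of reading what a file says about its own origin.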
A balance based on collaboration between humans and artificial intelligence, hybrid intelligence…
Assist. Prof. Dr. Burak Çeber stated that the blurring of the line between real and fake predates the widespread adoption of artificial intelligence, saying, “However, with the development of AI technologies, we are witnessing our relationship with reality and truth being severed in new ways. Today, we encounter images, sounds, and videos that are difficult to distinguish from real ones. Often, we find ourselves asking, 'Is this content real?' Here, the possibility that some part of the content, if not all of it, might have been produced with artificial intelligence becomes important. When a human produces with artificial intelligence, do they also incorporate their own experiences, imagination, creativity, and thinking ability? When solving a problem they encounter, do they use their knowledge, application skills, communication power, and human relations? If the answer to these questions is 'yes', then artificial intelligence is not taking over the job entirely but is supporting the process. It is at this point that a collaborative balance, a hybrid intelligence, can be established between humans and artificial intelligence.”
Open-source artificial intelligence is like a double-edged sword
“Open-source artificial intelligence is like a double-edged sword. On one hand, when it falls into the hands of malicious individuals, it can be used to produce fake content or mislead people,” said Assist. Prof. Dr. Burak Çeber, adding:
“On the other hand, it provides a great advantage in terms of transparency and public oversight; because its accessibility to everyone allows academics, researchers, and civil society to monitor the process. At this stage, the issue should not be confined to the question of 'Is the proliferation of open-source AI models an opportunity or a risk?' What is truly important is to develop principles, security measures, and ethical standards that will help us use this technology responsibly.”
All types of content in the digital environment should be subject to a verification system
Assist. Prof. Dr. Burak Çeber noted that all types of content in the digital environment could be subject to a kind of “digital identity” or verification system. “Already, major technology companies and some research institutions are working on digital watermarks and identification standards that will show the source of images, texts, or videos. So, this system is technically feasible and is being developed. However, this alone is not enough; ethical and legal regulations are also needed for a permanent solution,” he concluded.
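The verification idea described above can be sketched with standard cryptographic primitives. The example below is a deliberately simplified stand-in for the watermarking and identification standards mentioned in the interview: a publisher tags content with an HMAC over its bytes, and a verifier recomputes the tag to confirm the content is unaltered. The key and content are invented for illustration; real provenance systems use public-key signatures and richer manifests rather than a shared secret.

```python
import hashlib
import hmac

# Illustrative shared secret; real systems would use public-key signatures.
SECRET_KEY = b"publisher-demo-key"

def sign_content(content: bytes, key: bytes = SECRET_KEY) -> str:
    """Compute an HMAC-SHA256 tag over the content bytes."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_content(content, key), tag)

article = b"Authentic press release text."
tag = sign_content(article)

print(verify_content(article, tag))         # -> True: untouched content verifies
print(verify_content(article + b"!", tag))  # -> False: any alteration breaks the tag
```

The point of the sketch is the one made in the interview: the technical machinery for binding content to an identity already exists; what remains is agreeing on standards and the ethical and legal framework around them.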