The rise of inappropriate content such as violence and obscenity in TikTok live streams, the failure to filter this content, and the inadequacy of age restrictions are drawing public backlash.
Evaluating the real-time moderation of live stream content on TikTok, Assoc. Prof. Dr. Aylin Tutgun Ünal stated, “The TikTok platform states that it uses both automatic and human evaluation to identify and remove harmful content and behaviors that violate Community Guidelines (support.tiktok.com). However, concerns and debate about the adequacy of these measures continue.”
Assoc. Prof. Dr. Aylin Tutgun Ünal emphasized that components such as advanced artificial intelligence and machine learning, real-time monitoring, user reporting systems, education and awareness campaigns, human moderators, and strong community rules and sanctions are critically important to prevent violent and abusive content on social media platforms.

Üsküdar University Faculty of Communication Lecturer Assoc. Prof. Dr. Aylin Tutgun Ünal evaluated the real-time moderation of live stream content on TikTok.
Moderation is possible with technologies including artificial intelligence and machine learning algorithms…
Noting that technologies used for real-time moderation of live stream content typically include artificial intelligence and machine learning algorithms, Assoc. Prof. Dr. Aylin Tutgun Ünal said, “Artificial intelligence and machine learning provide scalable solutions for real-time content processing, enabling platforms to detect and remove inappropriate content quickly and efficiently. In the past, manual content moderation relied on human moderators and was not suited to dealing with big data, as it did not offer scalable solutions. Today, real-time moderation of big-data platforms like social media is possible with technologies involving artificial intelligence and machine learning algorithms.”
How successful are AI-powered technologies?
Explaining that these technologies include audio content moderation, image content moderation, text content moderation, real-time monitoring and detection, and AI-powered communication, Assoc. Prof. Dr. Aylin Tutgun Ünal said, “To delve into the working principles of these technologies: AI algorithms that analyze the written text used during live streams can detect profanity, hate speech, or other inappropriate content. Audio content moderation, in turn, converts speech to text and analyzes the expressions and discourse used during live streams for appropriateness. AI can immediately determine whether content is appropriate in real time. This not only reduces the burden on human moderators but also allows for faster content management. Platforms like Twitch, YouTube Live, and Facebook Live use these technologies to protect viewers and provide a safer, more enjoyable experience. However, their degree of success remains a subject of debate.”
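As a purely illustrative sketch of the text-moderation step described above — not TikTok's or any platform's actual system — a minimal keyword-based filter for live-stream chat might look like the following. The pattern list and function name are hypothetical; real platforms rely on trained machine learning classifiers rather than static keyword lists.

```python
import re

# Hypothetical blocklist for illustration only; production systems
# use trained ML classifiers, not static keyword patterns.
BLOCKED_PATTERNS = [
    r"\bhate\s*speech\b",
    r"\bthreat(s|en\w*)?\b",
]

def moderate_text(message: str) -> bool:
    """Return True if the message appears inappropriate."""
    lowered = message.lower()
    return any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

# A live-stream pipeline would run every incoming chat message (or the
# transcript produced by speech-to-text) through a check like this and
# flag or hide matches in real time.
```

The same entry point could serve both text chat and transcribed audio, which is why the speech-to-text step mentioned above matters: once audio is text, the same analysis applies.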
TikTok states it uses both automatic and human evaluation
Assoc. Prof. Dr. Aylin Tutgun Ünal, also touching upon TikTok’s measures against violent broadcasts, stated, “Social media, e-commerce, publishing services, and online forums generate large amounts of user-generated content every day. With this increase in content, platforms face the challenge of ensuring that the material remains appropriate, safe, and compliant with community standards and legal regulations. Real-time moderation of this content brings to the fore a broad moderation ecosystem, ranging from the appropriate use of artificial intelligence and machine learning technologies to platform policies. The TikTok platform states that it uses both automatic and human evaluation to identify and remove harmful content and behaviors that violate Community Guidelines (support.tiktok.com). However, concerns and debate about the adequacy of these measures continue.”
What should be done for the measures to be adequate?
Assoc. Prof. Dr. Aylin Tutgun Ünal, also pointing out the necessity of establishing and implementing a more effective moderation system for the measures to be adequate, continued:
“The components of such a system can be listed as advanced artificial intelligence and machine learning, real-time monitoring, user reporting systems, education and awareness, human moderators, and strong community rules and sanctions. At this point, the Ministry of Family and Social Services plans to take more effective steps by holding discussions with TikTok and other social media platforms regarding broadcasts containing violence and abuse. Here, we can note the importance of the policies that platforms themselves set within their moderation mechanisms. Clearly defined community rules and serious sanctions for violations will encourage users to comply with these rules.”
What are the responsibilities of platforms like TikTok?
Assoc. Prof. Dr. Aylin Tutgun Ünal stated that social media platforms like TikTok have various responsibilities to prevent the spread of violent incidents and protect users, listing these responsibilities as follows:
“1- Community rules and policies: Content that promotes or contains violence must be prohibited from being shared.
2- Content moderation: Content should be moderated using both automatic evaluation systems and human moderators. When violent content is detected, it should be quickly removed, and necessary measures should be taken regarding infringing accounts.
3- Real-time monitoring and intervention: A real-time monitoring system should be used for instant detection and intervention in violent incidents. This is a critical step to ensure user safety.
4- User reporting systems: Users should be able to report violent content, and these reports should be evaluated quickly. This ensures that community members contribute to the platform's safety.
5- Education and awareness: Various campaigns and programs should be organized to educate users about the harms of violent content and to raise awareness.”
Work in this area should be continuously updated
Assoc. Prof. Dr. Aylin Tutgun Ünal concluded her remarks as follows:
“Social media platforms, especially TikTok, that fulfill these responsibilities can succeed in protecting their users and providing a safe online environment. However, work in this area needs to be continuously developed and updated.”