Is the moderation system inadequate in TikTok live streams?
The increase in inappropriate content such as violence and obscenity in TikTok live streams, together with the lack of filtering of this content and of age restrictions, has provoked a public backlash.
Commenting on the real-time moderation of live broadcast content on TikTok, Assoc. Prof. Aylin Tutgun Ünal stated that "The TikTok platform states that it uses both automated evaluation and human evaluation to identify and remove harmful content and behaviors that are against the Community Guidelines (support.tiktok.com). However, some concerns and debates remain about the adequacy of these measures."
Assoc. Prof. Aylin Tutgun Ünal emphasized that components such as advanced artificial intelligence and machine learning, real-time monitoring, user reporting systems, training and awareness studies, human moderators, and strong community rules and sanctions are critical to prevent violent and abusive content on social media platforms.
Assoc. Prof. Aylin Tutgun Ünal, a faculty member at Üsküdar University's Faculty of Communication, evaluated the real-time moderation of live broadcast content on TikTok.
Auditing is possible with technologies that include artificial intelligence and machine learning algorithms
Noting that the technologies used to moderate live broadcast content in real time generally include artificial intelligence and machine learning algorithms, Assoc. Prof. Aylin Tutgun Ünal stated that "Artificial intelligence and machine learning provide scalable solutions for real-time content processing, enabling platforms to detect and remove inappropriate content quickly and efficiently. In the past, manual content moderation relied on human moderators and was not feasible because it did not offer scalable solutions for dealing with big data. Today, real-time control of mechanisms containing big data such as social media is possible with technologies that include artificial intelligence and machine learning algorithms."
How successful are AI-powered technologies?
Explaining that there are technologies such as audio content moderation, image content moderation, text content moderation, real-time monitoring and detection, and artificial intelligence-supported communication, Assoc. Prof. Aylin Tutgun Ünal said, "As for the working principles of these technologies: AI algorithms analyze the written text used during a live broadcast to detect profanity, hate speech, or other inappropriate content, while audio content moderation converts speech into text and analyzes the expressions used during the broadcast to check whether they are appropriate. Artificial intelligence can detect in real time whether content is appropriate. This not only reduces the burden on human moderators but also allows content to be moderated faster. Platforms such as Twitch, YouTube Live, and Facebook Live use these technologies with the aim of protecting viewers and providing a safer and more enjoyable experience. How successful they are is a matter of debate."
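The working principle described above for text content moderation, checking the words used during a broadcast against a list of disallowed expressions, can be sketched in a few lines. The term list, function name, and verdict labels below are illustrative assumptions for the sake of the example; real platforms rely on trained machine-learning classifiers rather than a fixed keyword list, and this is not TikTok's actual system:

```python
# A minimal rule-based sketch of text moderation for live-chat messages.
# BLOCKED_TERMS stands in for a real lexicon or ML classifier (assumption).
BLOCKED_TERMS = {"profanity1", "hate_term", "threat_term"}

def moderate_message(text: str) -> str:
    """Return 'remove' if the message contains a blocked term, else 'allow'."""
    # Normalize each word: strip common punctuation and lowercase it.
    tokens = {word.strip(".,!?").lower() for word in text.split()}
    return "remove" if tokens & BLOCKED_TERMS else "allow"

# Each incoming chat message would be checked before reaching viewers.
print(moderate_message("hello everyone"))           # -> allow
print(moderate_message("this contains hate_term!")) # -> remove
```

Audio content moderation works the same way one step later: speech is first transcribed to text, and the transcript is then passed through a check like the one above.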
TikTok states that it uses both automated evaluation and human evaluation
Referring to the measures taken by TikTok against violent broadcasts, Assoc. Prof. Aylin Tutgun Ünal explained that "Social media, e-commerce, streaming services, and online forums produce large amounts of user-generated content every day. With this increase in content, platforms face the challenge of ensuring that material remains appropriate, safe, and compliant with community standards and legal regulations. Real-time moderation of this content involves a wide ecosystem of controls, from properly deploying technologies such as artificial intelligence and machine learning algorithms to setting policies. The TikTok platform states that it uses both automated review and human review to identify and remove harmful content and behavior that goes against its Community Guidelines (support.tiktok.com). However, some concerns and debates remain about the adequacy of these measures."
What should be done to ensure that the measures are sufficient?
Pointing out that a more effective audit system should be established for the measures taken to be sufficient, Assoc. Prof. Aylin Tutgun Ünal continued her remarks as follows:
"The components that should be in this system are as follows: advanced artificial intelligence and machine learning, real-time monitoring, user reporting systems, education and awareness, human moderators, and strong community rules and enforcement. At this point, the Ministry of Family and Social Services plans to take more effective steps by negotiating with TikTok and other social media platforms on violent and abusive posts. Here we can also speak of the importance of the policies that platforms themselves set within their control mechanisms. Clear community guidelines and strict enforcement of violations will encourage users to follow them."
What are the responsibilities of platforms like TikTok?
Stating that social media platforms such as TikTok have various responsibilities to prevent the spread of violent incidents and to protect users, Assoc. Prof. Aylin Tutgun Ünal listed these responsibilities as follows:
"1- Community guidelines and policies: Sharing content that incites violence or is violent should be prohibited.
2- Content moderation: Content should be moderated using both automated review systems and human moderators. When violent content is detected, it should be removed immediately, and appropriate action should be taken against the violating accounts.
3- Real-time monitoring and response: A real-time monitoring system should be used to detect and intervene in violent incidents immediately. This is a critical step to ensure the safety of users.
4- User reporting systems: Users should be able to report violent content, and these reports should be evaluated quickly, so that community members contribute to the safety of the platform.
5- Education and awareness: Various campaigns and programs should be organized to educate users and raise awareness about the harms of violent content."
Studies in this area should be constantly updated
Assoc. Prof. Aylin Tutgun Ünal concluded her remarks as follows:
"By fulfilling these responsibilities, social media platforms, especially TikTok, can succeed in protecting their users and providing a safe online environment. However, studies in this area need to be constantly developed and updated."
Üsküdar News Agency (ÜNA)