Experts warn that artificial intelligence-powered technologies can deepen social isolation by reducing face-to-face interaction, and that AI algorithms on social media platforms can foster echo chambers in which individuals encounter only information and opinions that confirm their existing beliefs.
Prof. Dr. Burhan Pektaş noted that artificial intelligence-powered algorithms can be used to manipulate public opinion, spread misinformation, and amplify harmful content. He said, “This situation can erode trust in information sources and lead to societal divisions and confusion.”
Prof. Dr. Burhan Pektaş, Head of the Computer Engineering (English) Department at the Faculty of Engineering and Natural Sciences (MDBF) of Üsküdar University, evaluated the ways in which artificial intelligence can interfere with people's lives.
“AI-powered technologies facilitate social isolation”
Prof. Dr. Burhan Pektaş stated that AI-powered technologies such as virtual assistants and social media algorithms have the potential to facilitate social isolation by reducing face-to-face interactions and encouraging dependence on digital communication. Pektaş said, “AI algorithms used on social media platforms can contribute to the formation of echo chambers where individuals are exposed only to information and opinions consistent with their existing beliefs. This can exacerbate social polarization and hinder constructive dialogue between different groups.”
“It can lead to societal divisions and confusion”
Prof. Dr. Burhan Pektaş noted that AI-powered algorithms can be used to manipulate public opinion, spread misinformation, and amplify harmful content. He said, “This situation can erode trust in information sources and lead to societal divisions and confusion. To mitigate these dangers, it is important to develop and implement robust ethical guidelines, regulations, and accountability mechanisms for the responsible design, deployment, and use of AI in social contexts.”
“Promoting awareness of potential risks associated with AI technologies is important”
Prof. Dr. Burhan Pektaş also explained that promoting digital literacy, critical thinking skills, and awareness of the potential risks associated with AI technologies can empower individuals to navigate social interactions in an increasingly AI-driven world, stating the following:
“Striking a balance between artificial intelligence and human life requires a careful evaluation of the benefits and risks of AI technologies, as well as the implementation of strategies to ensure that AI best serves the interests of humanity. Accordingly, ethical considerations must be prioritized in the design, development, and deployment of AI systems. This includes ensuring transparency, fairness, accountability, and respect for human rights throughout the AI lifecycle.
“Users should be included in the design process of AI technologies”
On the other hand, we must implement regulatory frameworks and standards to govern the responsible use of AI technologies. These frameworks should address issues such as data privacy, algorithmic bias, autonomous systems, and the ethical implications of AI applications. Human needs, values, and preferences should be prioritized in the design of AI systems. It is important to involve all users in the design process to ensure that AI technologies are intuitive, user-friendly, and aligned with human values and goals.”
“Designers have a moral responsibility to consider societal impacts”
Prof. Dr. Burhan Pektaş also emphasized that artificial intelligence increasingly plays a role in decision-making processes with ethical consequences. He said, “Designers, developers, and engineers responsible for creating AI systems have a moral responsibility to ensure that these systems are designed ethically and with potential societal impacts in mind.”
“Steps should be taken to mitigate potential harms such as privacy breaches and unintended consequences”
Prof. Dr. Burhan Pektaş stated that this also includes addressing issues such as bias, fairness, transparency, and accountability in the design process. He added, “Furthermore, those who deploy and use AI technologies bear the moral responsibility for the consequences of their actions. This involves ensuring that AI systems are used responsibly and ethically, and taking steps to mitigate potential harms such as discrimination, privacy breaches, and unintended consequences.”
“Who will assume moral responsibility for the actions of these systems?”
Finally, Prof. Dr. Burhan Pektaş noted that as AI systems become more autonomous and capable of making decisions without human intervention, questions arise as to who will assume moral responsibility for the actions of these systems. He concluded his remarks as follows:
“It is crucial to establish clear lines of accountability and ensure that appropriate mechanisms are in place to address issues of responsibility and liability. AI systems often make decisions with ethical consequences in areas such as healthcare, criminal justice, and autonomous vehicles. Individuals and organizations involved in the development and deployment of AI systems have a moral responsibility to ensure that these systems make decisions consistent with ethical principles and values.
In general, the intervention of AI in human life raises complex ethical questions related to moral responsibility; it requires careful consideration and collaboration among various stakeholders to ensure that AI technologies are developed and used in a manner consistent with ethical principles and that promotes the well-being of individuals and society as a whole.”