Artificial Intelligence could pose a real threat if its rapid development is not controlled!
Prof. Burhan Pektaş, Head of the Department of Computer Engineering at Üsküdar University, stated that the possibility of artificial intelligence (AI) gaining control over humans or causing harm cannot be entirely excluded, saying, "It is difficult to claim that such dystopian scenarios are completely impossible. However, current AI systems work based on predefined goals and cannot create their own objectives."
Emphasizing the need for global cooperation and faster development of ethical standards to prevent the potential threats of AI technology, Prof. Burhan Pektaş said, "Although most of these risks are hypothetical, if AI's rapid development is not controlled, it could pose a real threat in the long term."
Prof. Burhan Pektaş, who is Head of the Department of Computer Engineering at the Faculty of Engineering and Natural Sciences of Üsküdar University, evaluated the possibility of AI destroying humanity.
What are the risks of Artificial General Intelligence (AGI)?
Prof. Burhan Pektaş assessed the warning by Geoffrey Hinton, widely known as the godfather of AI, that the likelihood of AI technology destroying humanity within the next 30 years is increasing. Pektaş said, "Geoffrey Hinton's warning reflects the concern that control mechanisms might fall short during the development of AI. This concern relates in particular to the possibility of systems known as 'Artificial General Intelligence' (AGI), which have learning and decision-making capacities similar to humans. The risks are as follows:
Autonomous control: AI setting its own goals and disregarding human interests to achieve them.
Arms race risk: The development of autonomous weapons and the dangerous use of AI in the wrong hands.
Information manipulation: The generation of fake information, which could manipulate elections and social decision-making processes.
Economic control: The monopolization of AI by large companies, increasing income inequality."
Physical, Economic, Social, and Cultural Threats
Prof. Burhan Pektaş emphasized that the threats of AI are not limited to physical destruction but also include social, economic, and cultural transformations. He listed the following threats:
Physical threats: Autonomous weapon systems and AI applications with security vulnerabilities.
Economic threats: Widespread unemployment caused by AI replacing the workforce.
Social threats: Digital inequality, the destruction of privacy, and the formation of surveillance societies.
Cultural threats: Erosion of human values and originality in competition with AI systems.
Are ethical and security standards sufficient?
Prof. Burhan Pektaş also provided information regarding the development of ethical rules and security standards in AI technologies:
"Although some progress has been made in this area (such as the European Union's AI Act), the development of global standards is still moving slowly. The main reasons for this are: Lack of international cooperation: Conflicting interests between different countries. Technological speed: The rate at which AI is developing exceeds the pace at which regulations can be implemented. Influence of companies: Lobbying activities of large tech companies."
Noting that lobbying by large technology companies is slowing this process down, Prof. Burhan Pektaş said, "Globally binding rules should be established. Independent regulatory bodies should be set up. Ethical awareness should be increased through educational programs."
What are the proposed solutions?
Prof. Burhan Pektaş also outlined the steps that need to be taken to prevent AI from going out of control. His suggestions are as follows:
Transparency: The workings of all AI models should be open to the public.
Security tests: Independent tests should be conducted to guarantee that AI systems do not cause harm to humans.
International cooperation: Binding agreements should be established among all countries.
Ethical oversight: Companies developing AI should undergo regular ethical audits.
Kill switch mechanisms: Technical solutions should be implemented to stop AI if it leads to an undesired situation.
Could dystopian scenarios become reality?
Prof. Burhan Pektaş stated that the possibility of AI gaining control over humans or causing harm cannot be entirely excluded and made the following remarks: "It is difficult to say that such dystopian scenarios are completely impossible. However, current AI systems work based on predefined goals and cannot create their own objectives. Nevertheless, risky scenarios do exist: autonomous systems running on faulty algorithms, or systems misused by malicious actors, could pose a danger. While many of these risks are hypothetical today, if the rapid development of AI is not controlled, it could become a real threat in the long term."
Üsküdar News Agency (ÜNA)