Former Google CEO Eric Schmidt warned that current guardrails implemented by AI companies are insufficient to control the potential harm that AI capabilities might pose to humanity in the next five to ten years. Speaking at Axios’ AI+ Summit in Washington, D.C., Schmidt drew parallels between the development of AI and the introduction of nuclear weapons at the end of World War II.
Schmidt expressed concern about the rapid evolution of AI, emphasizing the urgency of addressing the dangers that arise once computers can make independent decisions, especially decisions involving access to weapons. He also noted the risk of AI systems providing inaccurate information, making it harder to discern the truth. Two years ago, this critical point was estimated to be 20 years away; Schmidt said some experts now believe it could arrive in as little as two to four years.
To tackle this issue, Schmidt proposed creating a global organization modeled on the Intergovernmental Panel on Climate Change (IPCC). Such a body would supply accurate information to policymakers, helping them grasp the urgency of the situation and take appropriate action.
Despite these concerns, Schmidt remains optimistic about AI's potential positive impact on humanity. He highlighted prospective benefits, such as AI-powered doctors and tutors, which he believes will contribute positively to the world. However, he stressed the need for proactive measures to ensure that AI development aligns with ethical and safety considerations.