Superintelligence is coming sooner than expected.
In a recent post, OpenAI proposed that within the next decade, Artificial Intelligence (AI) systems could exceed expert skill levels in most domains and match the productive output of today's largest corporations. The post framed the advent of superintelligence as more impactful than any technology humanity has previously navigated, and highlighted the potential for an extraordinarily prosperous future if the associated risks are adequately managed, drawing parallels to the dangers of nuclear energy and synthetic biology.
Superintelligence, defined as a form of AI that surpasses human intelligence, is the next frontier in AI development. It represents tremendous opportunities for progress alongside equally significant dangers. Given that some of these risks could be existential, a proactive approach to managing them is essential.
In navigating the development of superintelligence, the post suggested three core strategies:
Global Coordination: A unified approach among leading developers would help ensure safety and the smooth integration of superintelligence into society. This could involve major world governments jointly establishing a project that leading efforts become part of, or the formation of a collective agreement to moderate the pace of AI development.
International Regulatory Body: Much like the International Atomic Energy Agency (IAEA) for nuclear energy, the post recommends an international authority to oversee superintelligence efforts. This body would conduct audits, enforce safety standards, and place restrictions on degrees of deployment and levels of security. As a voluntary first step, companies could begin complying with the standards such an agency might eventually set.
Technical Capability: The third point is the technical ability to make superintelligence safe. This remains an open research problem, one that multiple entities worldwide are actively working on.
However, these measures aren't intended for every AI development effort. Projects and models below a significant capability threshold should continue under less restrictive regulation, much like other internet technologies. The stricter oversight is reserved for superintelligence because its power would far surpass that of any current technology.
Public participation in deciding the bounds and default settings of AI systems is a priority. Democratic mechanisms for making such decisions about superintelligence have yet to be designed, but the post treats their creation as essential.
But why are we moving towards a superintelligent future despite these risks and complexities?
Two main reasons underpin this pursuit. First is the belief that superintelligence will lead to a world far more advanced and prosperous than we can currently envision. We are already seeing AI's positive impact in areas like education, creative work, and personal productivity, and the technology is expected to be crucial in addressing global challenges and improving societies. The resulting economic growth and gains in quality of life could be remarkable.
Second, the post argues that halting the creation of superintelligence would be practically impossible, and perhaps even riskier than proceeding. The enormous potential benefits, the falling cost of development, the growing number of capable actors, and the intrinsic momentum of technological progress all make its emergence effectively inevitable. Preventing it would require something like a global surveillance regime, which would not be guaranteed to work and is ethically questionable besides. The focus, therefore, should be on getting the approach and the safety measures right.