OpenAI Claims Alignment Can Govern Superintelligence

Superintelligence: OpenAI Aims to Deploy "Automated Alignment Researcher" in 4 Years

In recent months, OpenAI has been at the forefront of AI research and development, largely for releasing ChatGPT, which is among the most popular large language models and one of the fastest-growing consumer applications in history. The company's work on advanced AI models like ChatGPT has garnered significant attention and raised important questions about regulation and AI safety.

OpenAI recognizes the potential risks associated with the rapid progress of AI and aims to prevent the negative consequences that could arise if the technology is not properly regulated and controlled. After heavy lobbying of governments around the world, the company is announcing that it has a solution, outlined in a startlingly brief white paper published on its website.

Sam Altman. Credit: Reuters/Elizabeth Frantz

Sam Altman, CEO of OpenAI, says the company is committed to the safe and responsible advancement of AI technology. It recognizes the need to rein in the potential power of superintelligent AI, even though no one pretends we are close to that level of technology yet. The implications for humanity of runaway AI are potentially apocalyptic.

"The vast power of superintelligence could ... lead to the disempowerment of humanity or even human extinction," OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."

OpenAI is "Doubling Down" on Safety

OpenAI believes that the vast power of superintelligence could either help us solve many of the world's problems or lead to the disempowerment of humanity or even human extinction. To address this, they advocate for the careful regulation and management of AI technologies to ensure their safe development and deployment.

Humans will need better techniques than those currently available to control a superintelligent AI, hence the need for breakthroughs in so-called "alignment research," which focuses on ensuring AI remains beneficial to humans by aligning it with human values.

OpenAI believes in the need to balance the benefits and risks associated with AI, as the technology has the potential to greatly impact various industries and the future of humanity. Yet the proposal to do this through so-called alignment research reads like an experimental pitch to study something; it lacks even a simple explanation of how it might plausibly counter all the bad intentions in the world.

There is no mention of how to counter bad actors or how to deal with the situation if an AI "goes rogue," as they love to say, almost foreshadowing how they will respond when it happens. Read their paper and see if your anxiety is assuaged by their plan.

The organization's chief scientist, Ilya Sutskever, along with Jan Leike and the rest of the team, actively contributes to AI research and safety measures. OpenAI has the ambitious goal of developing AI systems that are not only highly capable but also guaranteed to stay aligned with 'human values.'

Why is OpenAI Lobbying Against Regulation While Demanding It?

Most people believe the harm of AI will come from people, organizations, and governments with less-than-noble intentions. As governments around the world consider how to regulate AI, Altman has lobbied for relief from the EU AI Act, a rudimentary risk-based framework like the one in place for internet communications. Big Tech is lobbying like never before, and OpenAI, which began as a non-profit, is among them.

OpenAI can no longer pretend to be a non-profit while eyeing a valuation of $30 billion. Its announcement sends a message, but not one of safety. Altman toured the globe warning of the risks of AI in one world capital after another, calling for regulation while at the same time lobbying to water it down. This should make everyone think twice. DeepMind and another company employed the same tactic, and it succeeded. The CEO of Alphabet met with Congress, and Altman reportedly had meetings with at least 100 lawmakers.

Alphabet CEO Sundar Pichai and OpenAI CEO Sam Altman heading to a meeting with the Vice President. Credit: AP

The plan, which has obvious weaknesses, doesn't explain much of anything. However, if you consider the amount of lobbying and campaigning preceding it, the effort was a success. All they want is everyone's undivided attention. Democratic and Republican lawmakers will have to grapple with the daunting task of learning about a rapidly developing technology that even the CEO doesn't understand, and with the fact that even AI experts disagree about what AI regulations should look like.

Whether following through with any of what they propose is useful in preventing the existential collapse of civilization is another question that, conveniently, doesn't have an answer.

The explanation for overcoming superintelligence is incoherent, and the claim that AI needs nuclear-weapons-style regulation makes little sense coming from the mouth of the bomb builder. What does make sense is that OpenAI sees itself as the gatekeeper and yardstick of AI progress, and that any company will do what it can to entrench itself in the industry and prevent new entrants into the market.

"We plan to share the fruits of this effort broadly and view contributing to alignment and safety of non-OpenAI models as an important part of our work," the company writes.

They propose creating a superintelligence (within a decade) and also controlling it by allocating 20% of their compute toward building an automated alignment researcher in four years. The white paper for the "Squid Game" crypto token, which turned out to be a scam, is more plausible than this white paper, which is full of imaginative terms and holes. It lacks seriousness and is shorter than most book reports.

The Solution to Superintelligence: Meet "Superalignment"

OpenAI has initiated the Superalignment project, led by Ilya Sutskever and Jan Leike, to develop scientific and technical breakthroughs for steering and controlling AI systems that surpass human intelligence.

If this sounds absurd to you, you aren't alone. They aim to align superintelligent AI systems with human intent and values within four years, which means building a superintelligent system and proving it works and is safe. Once it is built, the time for tests is over, isn't it?

They resort to the argument that "since someone will make it, it might as well be us," which is more of a rationalization. We went from thought leaders proposing a moratorium on GPTs for at least six months to OpenAI claiming to ensure safety by aiming for an apocalyptic ETA of four years. To set such a timeline with no regard for impending regulations that might prevent their plans is somewhat curious.

Is This Really A Step Forward?

While alignment may be the next big thing in machine learning, other experts say it won't do anything if AI systems attain AGI; all of it can be undone even at human-level stages.

"...to bring AI systems in line with human values."

Most people don't need reminding, but some of the most abhorrent, evil beings ever to walk this planet are human. We can't even grasp the problem, and these terms need defining. Is there anything approaching universal human values?

Such statements are beyond subjective and cringeworthy. Whatever those human values may be, OpenAI wants you to believe it understands the problem well enough to name a solution.

How Did an AI Startup Become the Arbiter of Human Values?

OpenAI is actively recruiting ML researchers and engineers to join the team and emphasizes the need for new institutions and governance to manage the risks associated with superintelligence. In other words, this move for safety once again places OpenAI at the center of attention, signals virtue, and creates a standard others have to live with, even if it's merely a distraction.

Whether superalignment is a valid gauge of risk or not, if governments follow their prescriptions, OpenAI will have established control over the industry and its rate of growth with this talk. Mainstream news stories about fake images of a Pentagon explosion on Twitter signaled the danger of AI, but all of that was the work of pranksters. Superalignment does nothing about those.

The Governance of Superintelligent AI

Presupposing such a thing is even possible: controlling something exponentially more intelligent than you, which is in turn connected to several others like itself, is hard to imagine. Predicting what it will do is nearly impossible.

While the new "superalignment team" may be a starting point for something, it is far more likely that this is how they are framing their push toward no regulation at all. Regardless of their stated intentions, by diverting the public's attention to grandiose things that may never materialize, they create a smokescreen. It is a common way of making people ignore what can practically be done. The public should approach these safety declarations with skepticism and caution.

By focusing public interest on a threat like superintelligence, a hypothetical risk that may never materialize or will take longer than is reasonable, organizations like OpenAI push the responsibility of regulation toward an unattainable horizon. That increases the likelihood that nothing gets done as far as regulation goes, an outcome they lobbied for.

Conclusion:

There are far more important, real-world issues about AI and its impact. The company needs to address other concerns: for example, its sweatshop labor practices in Kenya, the biases ChatGPT shows when asked the same questions in different languages (e.g., Russian and Ukrainian), and copyright issues. Policymakers have to tackle those now, but lobbying and scare tactics help push them aside.

Like the hyperbolic nuclear-arms analogy they use to illustrate the need for regulation, it conflates real issues with an imaginary scenario. A company demanding government intervention in its own nascent industry is uncommon and should sound alarms. Claiming that this route is the answer and that these three individuals are the solution is scarier than it is reassuring. Claiming to have a solution for a threat this looming is unrealistic regardless of the plan, and this one is just underwhelming.

OpenAI would have done more to allay fears of superintelligence by saying it isn't pursuing it at all. Making headlines about something as flimsy as "superalignment" implies something else. To me, this is hype: OpenAI is announcing that it has lobbied its way to being unregulated for four years and has some ideas moving forward.
