
4 Reasons Why I Won’t Sign the “Existential Risk” New Statement

Opinion

Fueling fear is a dangerous game

Rafe Brena, Ph.D.
Towards Data Science

Some weeks ago, I published my pro and con arguments for signing that very well-known open letter by the Future of Life Institute — in the end, I signed it, though there were some caveats. A few radio and TV hosts interviewed me to explain what all the fuss was about.

More recently, I got another email from the Future of Life Institute (FLI from here on) asking me to sign a declaration: this time, a short statement by the Center for AI Safety (CAIS) focused on the existential threats posed by recent AI developments.

The statement goes as follows:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Very concise indeed; how could there be a problem with this?

If the previous FLI statement had weaknesses, this one doubles down on them instead of correcting them, making it impossible for me to support it.

In particular, I have the following four objections, which will certainly run longer than the declaration itself:

The new statement is essentially a call to panic about AI: not panic about the tangible consequences we can already see playing out, but about hypothetical risks raised by random people who give very vague risk estimates like a “10 percent risk of human extinction.”

Really? A 10% risk of human extinction? Based on what? The survey respondents were not asked to justify or explain their reasons, but I suspect many were thinking of “Terminator-like” scenarios. Horror films are meant to scare you so that you go to the movies; translating that message into reality is not sound reasoning.

The supposed threat to humanity assumes a capability to destroy us that has never been explained, as well as an agency: the willingness to erase humankind. Why would a machine want to kill us when devices don’t have any feelings, good or bad? Machines don’t “want” anything.

The real dangers of AI we see playing out right now are very different. One of them is the capability of Generative AI to fake voices, pictures, and videos. Can you imagine what you’d do if you received a phone call in your daughter’s voice (cloned by AI) asking you to come rescue her?

Another one is public misinformation backed by fake evidence, such as counterfeit images and videos. The fake image of the Pope was relatively innocent, but soon Twitter will be flooded with false declarations, images of events that never occurred, and so on. And have you considered that the US elections are approaching?

Then there is the exploitation of human-made content: AI algorithms mine it from all over the internet to produce their “original” images and text, and the humans’ work is taken without any financial compensation. In some instances, the reference to human work is explicit, as in “make this image in the style of X.”

If the FLI letter of a month ago merely hinted at a “man vs. machine” mindset, this time it is made very explicit. “Extinction from AI,” they call it, nothing less.

In the real world where we live, not in apocalyptic Hollywood movies, it’s not the machines that damage us or threaten our existence. It’s some humans (as it happens, the powerful and rich ones, the owners of big companies) who leverage powerful new technology to increase their fortunes, often at the expense of the powerless: we have already seen how the availability of computer-generated graphics has shrunk the small businesses of graphic artists on platforms like Fiverr.

Further, the assumption that advanced machine intelligence would try to dethrone humans has to be questioned; as Steven Pinker wrote:

“AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.”

Yann LeCun—the famous head of AI research at Meta—declared:

“Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct… Those drives are programmed into our brain, but there is absolutely no reason to build robots that have the same kind of drives.”

No, machines gone rogue will not become our overlords or exterminate us: other humans, who are currently our overlords, will increase their domination by leveraging the economic and technological means at their disposal—including AI if it’s fitting.

I get that the FLI mentioned pandemics to connect their statement with something we just lived through, something that left an emotional scar on many of us, but it’s not a valid comparison. Conspiracy theories aside, the pandemic we emerged from was not a technology; the vaccines were. How does the FLI assume catastrophic AI would spread? By contagion?

Of course, nuclear bombs are a technological development, but in the case of a nuclear war, we know precisely how and why the bomb would destroy us: it’s not speculation, as it is in the case of “rogue AI.”

One last item that drew my attention was the list of people signing the statement, starting with Sam Altman. He is the leader of OpenAI, the company that, with the release of ChatGPT in November 2022, set in motion the frantic AI race we live in. Even the mighty Google struggled to keep pace in this race; didn’t Microsoft’s Satya Nadella say he wanted to “make Google dance”? He got his wish, at the cost of accelerating the AI race.

It doesn’t make sense to me that people at the helm of the very companies fueling this AI race are also signing this statement. Altman can say that he’s very worried about AI developments, but if his company keeps charging ahead at full speed, his concern looks empty and incongruous. I don’t intend to moralize about Altman’s declarations, but taking his support at face value undermines the statement’s credibility, even more so when we consider that leading the race is essential to his company’s financial bottom line.

It’s not that machines are going rogue. It’s the use that capitalist monopolies and despotic governments make of AI tools that could damage us, and not in a dystopian Hollywood future, but in the real world where we are today.

I won’t endorse a fear-fueled vision of machines that is, in the end, hypocritical, because it is promoted by the very companies trying to distract from their profit-seeking ways of operating. That’s why I’m not signing this new statement endorsed by the FLI.

Further, I suspect that wealthy and influential leaders can afford to dwell on imaginary threats because they don’t have to worry about more “mundane” real ones, like the shrinking income of a freelance graphic artist: they know very well they will never struggle to make ends meet at the end of the month, and neither will their children or grandchildren.

