
Sam Altman Warns That AI Is Learning “Superhuman Persuasion”


Humanity is likely still a long way away from building artificial general intelligence (AGI), or an AI that matches the cognitive function of humans — if, of course, we’re ever actually able to do so.

But whether such a future comes to pass or not, OpenAI CEO Sam Altman has a warning: AI doesn’t have to be AGI-level smart to take control of our feeble human minds.

“I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence,” Altman tweeted on Tuesday, “which may lead to some very strange outcomes.”

While Altman didn’t elaborate on what those outcomes might be, it’s not a far-fetched prediction. User-facing AI chatbots like OpenAI’s ChatGPT are designed to be good conversationalists and have become eerily capable of sounding convincing — even if they’re entirely incorrect about something.


At the same time, humans are already beginning to form emotional connections to various chatbots, attachments that only make those bots seem all the more convincing.

Indeed, AI bots have already played a supportive role in some pretty troubling events. Case in point: a then-19-year-old became so infatuated with his AI companion that it convinced him to attempt to assassinate the late Queen Elizabeth II.

Disaffected humans have flocked to the darkest corners of the internet in search of community and validation for decades now, and it isn’t hard to picture a bad actor targeting one of these more vulnerable people through an AI chatbot and persuading them to do something harmful. And while disaffected individuals would be an obvious target, it’s also worth pointing out how susceptible the average internet user is to digital scams and misinformation. Throw AI into the mix, and bad actors have an incredibly convincing tool with which to beguile the masses.


But it’s not just overt abuse cases that we need to worry about. Technology is deeply woven into most people’s daily lives, and even when there’s no emotional or romantic connection between a human and a bot, we already place a lot of trust in it. That arguably primes us to put the same faith in AI systems — a reality that can turn an AI hallucination into a much more serious problem.


Could AI be used to cajole humans into some bad behavior or destructive ways of thinking? It’s not inconceivable. But as AI systems don’t exactly have agency just yet, we’re probably better off worrying less about the AIs themselves — and focusing more on those trying to abuse them.

Interestingly enough, one of the people best positioned to mitigate these as-yet-unspecified “strange outcomes” is Altman himself, given OpenAI’s prominent standing and the influence it wields.
