
SAN FRANCISCO: An internal fight over whether Google built technology with human-like consciousness has spilled into the open, exposing the ambitions and risks inherent in artificial intelligence that can feel all too real.

The Silicon Valley giant last week suspended an engineer who argued that the firm’s AI system LaMDA seemed “sentient,” a claim Google officially rejects.

Several experts told AFP they were also highly skeptical of the consciousness claim, but said human nature and ambition could easily confuse the issue.

“The problem is that… when we encounter strings of words that belong to the languages we speak, we make sense of them,” said Emily M. Bender, a linguistics professor at the University of Washington.

“We are doing the work of imagining a mind that’s not there,” she added.

LaMDA is a massively powerful system, trained on over 1.5 trillion words, that mimics how people communicate in written chats.

The system was built on a model that observes how words relate to one another and then predicts what words it thinks will come next in a sentence or paragraph, according to Google’s explanation.
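To make that idea concrete, here is a minimal sketch of next-word prediction in Python. It uses a toy bigram counter of my own devising, not anything from LaMDA itself, which is a vastly larger transformer-based model; the sketch only mirrors the basic principle Google describes of predicting what word comes next.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which words follow
# which in a tiny corpus, then suggest the most frequent continuation.
# This is a bigram model, not LaMDA's actual architecture.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat" (it follows "the" twice above)
```

Scaled up from a dozen words to trillions, and from raw counts to learned statistical weights, this same predict-the-next-word objective is what produces the fluent chat responses that can feel so human.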

“It’s still at some level just pattern matching,” said Shashank Srivastava, an assistant professor in computer science at the University of North Carolina at Chapel Hill.

“Sure you can find some strands of really what would appear meaningful conversation, some very creative text that they could generate. But it quickly devolves in many cases,” he added.

Still, assigning consciousness gets tricky.

It has often involved benchmarks like the Turing test, which a machine is considered to have passed if a human holds a written chat with it and cannot tell that it is not another person.

“That’s actually a fairly easy test for any AI of our vintage here in 2022 to pass,” said Mark Kingwell, a University of Toronto philosophy professor.


“A tougher test is a contextual test, the kind of thing that current systems seem to get tripped up by, common sense knowledge or background ideas – the kinds of things that algorithms have a hard time with,” he added.
