DeepMind researcher claims new AI could lead to AGI, says ‘game is over’

Humanity is seemingly on the verge of solving artificial general intelligence (AGI) within our lifetime. At least, according to DeepMind researcher Nando de Freitas.

In response to an opinion piece written by yours truly, the scientist posted a thread on Twitter that began with perhaps the boldest statement we’ve seen from anyone at DeepMind regarding the company’s current progress toward AGI:

My opinion: It’s all about scale now! The game is over!

Here is the full text of de Freitas’ thread:

Someone’s opinion piece. My opinion: It’s all about scale now! The game is over! It’s about making these models bigger, safer, more computationally efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N

Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them 2/n

Last but not least, [OpenAI co-founder Ilya Sutskever] @ilyasut is right [cat emoji]

Rich Sutton is right too, but the AI lesson is not bitter but rather sweet. I learned it from [Google researcher Geoffrey Hinton] @geoffreyhinton ten years ago. Geoff predicted the predictable with stunning clarity.

There’s a lot to unpack in this thread, but “it’s all about scale now” is a pretty hard statement to misinterpret.

How did we get here?

DeepMind recently published a research paper and blog post about its new multimodal AI system. Dubbed “Gato,” the system is capable of performing hundreds of different tasks ranging from controlling a robot arm to writing poetry.

The company dubbed it a “generalist” system, but didn’t go so far as to say it was in any way capable of general intelligence – you can read more about what that means here.

It’s easy to confuse something like Gato with AGI. The difference, however, is that a general intelligence could learn to do new things without prior training.

In my opinion piece, I compared Gato to a game console:

Gato’s ability to multitask is more like a video game console that can store 600 different games than a game you can play 600 different ways. It’s not a general AI, it’s a bunch of neatly grouped pre-trained narrow models.

That’s not a bad thing, if that’s what you’re looking for. But there’s simply nothing in the research paper accompanying Gato to indicate that this is even a nudge in the right direction for AGI, let alone a stepping stone.

Doctor de Freitas disagrees. That’s not surprising, but what I found shocking was the second tweet in his thread, quoted above.

The bit dealing with “philosophy about symbols” could have been written in direct response to my opinion piece. But, as surely as the criminals of Gotham know what the Bat Signal means, those who follow the AI world know that mentioning symbols and AGI together is a surefire way to summon Gary Marcus.

Enter Gary Marcus

Marcus, a world-renowned scientist, author, and the founder and CEO of Robust AI, has spent the past few years advocating for a new approach to AGI. He believes the whole field needs to change its basic methodology to build AGI, and has written a bestselling book to that effect, “Rebooting AI,” with Ernest Davis.

He’s debated and discussed his ideas with everyone from Facebook’s Yann LeCun to the University of Montreal’s Yoshua Bengio.

And, in the inaugural edition of his newsletter on Substack, Marcus took on de Freitas’ statements in what amounted to a fiery (but respectful) rebuttal.

Marcus dubs the hyper-scaling of AI models as a perceived path to AGI “Scaling Uber Alles,” and refers to these systems as attempts at “alt intelligence” – as opposed to artificial intelligence that tries to imitate human intelligence.

On the subject of DeepMind’s pursuit, he writes:

There’s nothing inherently wrong with pursuing Alt Intelligence.

Alt Intelligence represents an intuition (or more accurately, a family of intuitions) about how to build intelligent systems, and since no one yet knows how to build a type of system that matches the flexibility and ingenuity of human intelligence, it’s certainly fair game for people to pursue multiple different hypotheses about how to get there.

Nando de Freitas is about as blunt as it gets in defending this hypothesis, which I’ll call Scaling-Uber-Alles. Of course, that name, Scaling-Uber-Alles, isn’t quite right.

De Freitas knows full well (as I’ll explain below) that you can’t just scale models and hope for success. People have done a lot of scaling lately and achieved great successes, but also hit some roadblocks.

Marcus goes on to describe the problem of incomprehensibility that plagues the AI industry’s giant models.

Essentially, Marcus seems to be saying that no matter how awesome and amazing systems like OpenAI’s DALL-E (a model that generates custom images from descriptions) or DeepMind’s Gato are, they’re still incredibly fragile.

He writes:

DeepMind’s newly unveiled star, Gato, is capable of cross-modal feats never before seen in AI, but when you look at the fine print, it remains stuck in the same land of unreliability: fitful moments of brilliance coupled with absolute incomprehension.

Of course, it’s not uncommon for deep learning advocates to argue that humans make mistakes too.

But anyone who is candid will agree that these kinds of errors reveal that something is, at the moment, deeply amiss. If either of my kids were routinely making mistakes like these, I would, no exaggeration, drop everything else I’m doing and take them to the neurologist right away.

While it’s certainly worth a laugh, there’s a serious undertone there. When a DeepMind researcher declares that “the game is over,” it conjures a vision of an immediate or near-term future that doesn’t make sense.

AGI? Really?

Neither Gato, DALL-E, nor GPT-3 is robust enough for unfettered public consumption. Each of them requires hard filters to keep it from tilting toward bias and, worse, none of them is capable of consistently producing solid results. And that’s not just because we haven’t found the secret sauce to coding AGI, but also because human problems are often difficult and don’t always have a single, trainable solution.

It’s unclear how scaling, even coupled with revolutionary logic algorithms, could solve these problems.

That doesn’t mean the giant models aren’t useful or worthy endeavours.

What DeepMind, OpenAI and similar labs are doing is very important. It’s cutting-edge science.

But declaring the game over? Insinuating that AGI will be born from a system whose distinguishing contribution is the way it serves up models? Gato is amazing, but that feels like a stretch.

There is nothing in de Freitas’ fiery rebuttal to change my opinion.

Gato’s creators are obviously brilliant. I’m not pessimistic about AGI because Gato isn’t stunning enough. Quite the opposite, in fact.

I fear AGI is decades away – centuries, perhaps – because of Gato, DALL-E and GPT-3. They each demonstrate a breakthrough in our ability to manipulate computers.

It’s nothing short of miraculous to see a machine pull off Copperfield-esque feats of misdirection and prestidigitation, especially when you realize that said machine is no smarter than a toaster (and demonstrably dumber than the dumbest mouse).

To me, it’s obvious that we’re going to need more than just… more… to take the modern AI equivalent of “is this your card?” to the Gandalfian sorcery of AGI we’ve been promised.

As Marcus concludes in his newsletter:

If we’re going to build AGI, we’re going to need to learn something from humans – how they reason and understand the physical world, and how they represent and acquire language and complex concepts.

It is sheer hubris to believe otherwise.

