Has GPT-4 really crossed the astonishing threshold of human-level artificial intelligence? It depends

By George Siemens, Co-Director, Professor, Center for Change and Complexity in Learning, University of South Australia


The recent public interest in tools like ChatGPT has raised an old question in artificial intelligence: is artificial general intelligence (in this case, human-level AI) possible?

One preprint posted online this week has added to the hype, suggesting that the latest advanced large language model, GPT-4, is in the early stages of artificial general intelligence (AGI) as it showcases "sparks of intelligence".

OpenAI, the company behind ChatGPT, has unashamedly declared its aspiration to AGI. Meanwhile, a large number of researchers and public intellectuals have called for an immediate pause in the development of these models, citing "profound risks to society and humanity". These calls to pause AI research are theatrical and unlikely to succeed—the allure of advanced intelligence is too provocative for humans to ignore, and too rewarding for businesses to pause.

But are the worries and hopes about AGI justified? How close is GPT-4, and AI more generally, to general human intelligence?

Read more:
Evolution not revolution: why GPT-4 is remarkable, but not groundbreaking

If human cognitive capacity is a landscape, AI has indeed increasingly taken over large parts of this territory. It can now perform many separate cognitive tasks better than humans in the areas of vision, image recognition, reasoning, reading comprehension and games. These AI skills could potentially result in a dramatic rearrangement of the global labor market in less than ten years.

But there are at least two ways to look at the AGI question.

The uniqueness of humanity

First, AI will eventually develop learning skills and abilities that match those of humans and reach AGI level. The expectation is that the unique human capacity for ongoing development, learning, and the transfer of learning from one domain to another will eventually be duplicated by AI. This is in contrast to current AI, where training in one domain, such as detecting cancer in medical images, does not transfer to other domains.

So the worry felt by many is that AI will someday exceed human intelligence, and then quickly eclipse us, making us appear to future AIs as ants appear to us now.

The likelihood of AGI is questioned by several philosophers and scientists, who point out that current models are largely ignorant of their outputs (that is, they do not understand what they are producing). Nor do they have any prospect of achieving consciousness, because they are primarily predictive—automating what comes next in text or other outputs.

Rather than being intelligent, these models simply recombine and duplicate the data they have been trained on. Consciousness, the essence of life, is missing. Although AI base models continue to advance and complete more sophisticated tasks, there is no guarantee that consciousness or AGI will emerge. And if it did appear, how would we recognize it?

Read more:
Futurists predict a point where humans and machines become one. But will we see it coming?

Ever-present AI

The usability of ChatGPT and GPT-4's capacity to master certain tasks as well as or better than a human (such as bar exams and academic olympiads) gives the impression that AGI is close. This impression is reinforced by the rapid performance improvement with each new model.

There is no doubt that AI can now outperform humans in many individual cognitive tasks. There is also growing evidence that the best model for interacting with AI may well be one of human/machine pairing—where our own intelligence is augmented, not replaced, by AI.

Screenshot of an example where GPT-4 analyses visual input—a photo of eggs, flour, milk and cream—together with the question of what can be cooked with them, and offers several ideas such as pancakes.
GPT-4 is also “multimodal” – it can take visual input and answer questions based on it.

Signs of such pairing are already emerging with announcements of workplace copilots and AI pair programmers for writing code. It seems almost inevitable that AI will be pervasively and persistently present in our future of work, life and learning.

By that measure, it is plausible to see AI as intelligent, but this remains a contested space and many have spoken out against the idea. The famous linguist Noam Chomsky has said that the day of AGI "may come, but its dawn is not yet breaking".

Smarter together?

The other angle is to consider the idea of intelligence as it is practiced by people in their daily lives. According to one school of thought, we are intelligent mainly in networks and systems rather than as solitary individuals. We keep knowledge in networks.

Until now, these networks have been primarily human. We may take insight from someone (like the author of a book), but we do not treat them as an active “agent” in our cognition.

But ChatGPT, Copilot, Bard and other AI-powered tools can become part of our cognitive network – we engage with them, ask them questions, they restructure documents and resources for us. In this sense, AI does not need to be sentient or have general intelligence. It simply needs the capacity to be embedded in and part of our knowledge network to replace and augment many of our current jobs and tasks.

The existential focus on AGI overlooks the many possibilities that current models and tools give us. Sentient, conscious or not – all these attributes are irrelevant to the many people who already use AI to co-create art, structure writings and essays, develop videos and navigate life.

The most relevant or pressing concern for humans is not whether AI is intelligent on its own, disconnected from people. It can be argued that, as of today, we are more intelligent, more capable and more creative with AI that advances our cognitive capacity. Right now, it seems the future of humanity may be one of human-AI teaming—a journey that is already well underway.

Read more:
Bard, Bing, and Baidu: how tech's AI race will transform search—and all computing


George Siemens does not work for, consult with, own stock in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Originally published in The Conversation.
