Six different visions for the future of generative AI

Grasping the nature and consequences of generative AI is an exercise in speculation. But we can speculate with method.

In the following, I use “ChatGPT”, “LLM” and “generative AI” interchangeably, because ChatGPT and Large Language Models (LLMs) are the leading examples of generative AI.

The consequences of generative AI - why do we care?

For the sake of keeping this blog post short, I’ll just mention that Geoffrey Hinton, who is a key contributor to generative AI, compared it to the invention of electricity, or even to the invention of the wheel.

So we should probably pay attention and, as the interviewer said, “let’s buckle up”.

How to explore the consequences of a radically new, fundamental technology? Caveats

I must say I lack the experience that academic colleagues possess on exactly this topic, such as Philippe Silberzahn on facing disruptive innovations or Thomas Gauthier on conducting far-reaching prospective studies. I will simply list and organize the different influential visions for generative AI that I have read and encountered in the past months.

The goal is very modest: to provide a panorama of visions by people whose opinions are usually well informed. I do it to develop my own thoughts on the consequences of ChatGPT, and hopefully you will find it useful too. Of course, let us not forget that each of them has a stake in the debate, and that their voice is performative: by articulating a vision, they contribute, even modestly, to making this vision a reality.

The visions (in no particular order)

1. Noam Chomsky: generative AI is not true intelligence

Who?

Noam Chomsky, a key contributor to linguistics in the second half of the 20th century and a public intellectual.

Source

Op-ed in the New York Times: “Noam Chomsky: The False Promise of ChatGPT”

Date: March 8, 2023 (full version in pdf).

Argument

ChatGPT is good at mimicking discourse. True intelligence, reasoning, remains qualitatively different from what ChatGPT does. Chomsky elaborates:

Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.

It seems that ChatGPT (and Bard too, though I didn’t check) is well able to display such reasoning. This is the transcript of my attempt on ChatGPT today:

Suppose you are holding an apple in your hand. Now you let the apple go. What will happen and why?

If I were a physical entity capable of holding an apple, and I were to let it go, the apple would fall to the ground due to the force of gravity. Gravity is a fundamental force of nature that pulls objects with mass towards each other. In this case, the Earth’s gravity would pull the apple towards its center. When you release the apple, there are no other forces counteracting gravity (assuming we are neglecting air resistance in this scenario). Therefore, the gravitational force will cause the apple to accelerate towards the Earth at a rate of approximately 9.81 meters per second squared (on the surface of the Earth). This acceleration continues until the apple hits the ground or encounters some other obstacle that might affect its motion. [the answer continues with a precise description of the law of universal gravitation, which I skip here for brevity].

My personal opinion

I tend to flatly disagree with Chomsky here, which is embarrassing, because who am I to disagree with such a great mind? I would grant that generative AI is not the same kind of intelligence as human intelligence. But that does not prove generative AI is incapable of reasoning. It can achieve reasoning through different (non-biological) means, namely the gigantic number of statistical associations stored in the model.

2. Andrej Karpathy: generative AI as “the kernel process of a new Operating System”

Who?

Andrej Karpathy was a Director of AI at Tesla and as of today, he is working at OpenAI (the company which released ChatGPT).

Source

A tweet from his personal account

Date: September 28, 2023

Argument

GPT-4, which is the generative AI behind ChatGPT, is not merely a “model that produces good chatbots”. If we consider that the text it generates is not just passive reading material for humans, but text which can then be fed as “instructions” or “commands” to other computing systems such as apps (including to… itself 🤯), then a better description of GPT-4 is that it is an operating system (à la Windows, iOS, Android, Linux), or better yet, the central orchestrator of such an operating system (technically called a “kernel”).
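
To make this more concrete, here is a minimal sketch, of my own invention (not Karpathy’s code), of an LLM acting as a “kernel”: its text output is parsed as a command, dispatched to “apps”, and the result is fed back to the model, like a system call returning. The app names and the canned model behavior are hypothetical stand-ins.

```python
import json

def call_llm(messages):
    # Stand-in for a real call to GPT-4 (or any LLM API). Here it issues
    # one canned "system call", then answers in plain text.
    if not any(m["content"].startswith("Result:") for m in messages):
        return json.dumps({"app": "calendar.read", "args": {}})
    return "You have one meeting today, at 10:00."

# The "apps" the kernel can dispatch to (hypothetical examples).
APPS = {
    "calendar.read": lambda args: "Meeting at 10:00",
    "email.send": lambda args: f"Sent to {args['to']}",
}

def kernel_loop(user_request):
    messages = [{"role": "user", "content": user_request}]
    while True:
        output = call_llm(messages)
        try:
            # The model's output is treated as a command for another program...
            command = json.loads(output)
        except json.JSONDecodeError:
            # ...unless it is plain text: the final answer for the human.
            return output
        result = APPS[command["app"]](command["args"])
        # Feed the app's result back to the model, like a kernel
        # returning from a system call.
        messages.append({"role": "assistant", "content": output})
        messages.append({"role": "user", "content": f"Result: {result}"})

print(kernel_loop("What is on my agenda today?"))
```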

My personal opinion

The development of plugins and apps for ChatGPT (since April 2023) and Bard (since September 2023) indeed points to this model. The immense stream of revenue that replicating the logic of an app store for generative AI would generate gives companies the incentive to push in this direction. Karpathy does point out that this analogy, by necessity, sticks to known mental models (what a kernel is, what an app store is), and that we should expect unexpected developments beyond these models. I also agree with this: we don’t know the magnitude of the changes that such a technology will enable. In my view, the change is comparable to the move from analog to digital systems.

3. Mustafa Suleyman: containment is needed

Who?

Mustafa Suleyman was a cofounder of DeepMind, which developed AlphaGo, the system that beat one of the world’s best Go players in 2016. He then became vice president of AI product management and AI policy at Google. He is now co-founder and CEO of Inflection AI, which develops personal assistants with generative AI.

Source

“The Coming Wave”, a book praised by a vast array of scientists, entrepreneurs and key opinion leaders.

Date: published in September 2023

Argument

Generative AI will cause major disruptions at the societal level in the short term (5 to 10 years). Not because it will become superhuman (AGI), though it might, but because it is already an ACI (Artificial Capable Intelligence). Generative AI enables the development of intelligent agents at low cost and en masse. The dilemma: this will probably create many good things, but it will be very hard to contain its nefarious uses, because software is harder to trace than, say, nuclear material.

My personal opinion

Suleyman’s argument is of course richer than the summary above, and includes descriptions of the risk of cheap synthetic biology, too. I personally agree with the arguments of the book, even if they sound dramatic. It is pretty pessimistic in the sense that it does not offer the comfort of an easy solution to the problems (the catastrophes, really) that it predicts. “Containment” is the general direction he advocates, but containing AI is either impossible or, if done at full force, would come at the price of our personal liberties (states tightly controlling what individuals do with IT).

4. Jony Ive: towards the new iPhone?

Who

Jony Ive was Apple’s lead designer from 2015 to 2019, and contributed centrally to the design of the iPhone series.

Source

“OpenAI and Jony Ive in talks to raise $1bn from SoftBank for AI device venture”, published by the Financial Times.

Date: September 28, 2023

Argument

ChatGPT opens a new era of user interactions, which feel different from the interactions on mobile phones we have been used to: typing a prompt, reading the answer and acting on it is quite different from interacting with apps at the tap of our fingers. This is still very tentative: Jony Ive is reportedly in talks with OpenAI and SoftBank about the development of a device that would actually fit this new type of user interaction, and would deliver a better user experience than the textual chat interface which currently dominates our interactions with generative AI.

My personal opinion

Yes, there is definitely a need for a new type of interface. It feels so quaint to type long questions on the keyboard and read the textual response. Speaking a prompt would seem the natural way: quicker, less formal than writing, and more precise, because we could hesitate and rephrase while prompting. Choosing the format of the output would seem natural, too: voice, text, writing to a file, searching for a document on the phone and displaying it, or executing a command (opening and using an app, checking an agenda, sending an email, etc.). A couple of random thoughts:

  • personal conversational assistants, such as Siri or Alexa, seem like the natural starting point for such an interface
  • a difficult issue is the use of voice in public spaces: we all text while commuting, for instance, but I am pretty sure we are not ready to interact with our phones by speaking aloud on the bus or the subway. Thinking about it freely, “whispering as an interface” might then become a thing: you would whisper or murmur to your phone when interacting with it in public spaces. It paints a grim picture of our social interactions 😬 but it could solve the issue of voice interactions.
  • SoftBank being involved as an investor in this potential project sounds like the kiss of death 😅.

5. Clément Delangue: no tsunami, lots of localized AIs

Who

Clement Delangue is co-founder & CEO at Hugging Face, the company which made it so much easier to train, host and share AI models - including generative ones.

Source

These posts on LinkedIn:

And a repost by Clément Delangue of this post by Yann LeCun, Chief AI Scientist at Meta:

Date: September - October 2023

Argument

The claims of an existential risk to humanity (posed by generative AI) are greatly exaggerated. Exaggerated as well are the claims that AI, even when not AGI, is bound to cause severe troubles. Instead, what the future holds, if we choose it, is an economy where companies and organizations develop and use special-purpose AI models (generative or not), trained on datasets which reflect the particularities of their local context. The sharing of open-source AI models is one guarantee that AI becomes available to all and stays under the scrutiny of all.

My personal opinion

I tend not to be convinced by this argument, for at least two reasons.

The first is that there are scenarios of generative AI getting super dangerous which can’t be dismissed by the argument above. Autonomous weapons on the battlefield? That seems pretty likely. Exploitation of the attention and emotions of customers, for the sake of generating ad revenue, with content / apps / bots powered by generative AI? Seems likely to me as well. Clement Delangue actually seems worried too:

The second reason I am not convinced by this alternative vision, of companies each training localized versions of smaller, open-source LLMs instead of using one general-purpose, expensive and private GPT-4: the argument sounds like a straightforward defense of the business model of Hugging Face, which is about hosting a large number of models, as opposed to one hegemonic Bard or ChatGPT. In my personal experience with ChatGPT and Bard, it is very hard to scale down to less powerful models once you have experienced the quality of their seemingly near-perfect human interaction.

So we’ll see where things go. The terms of the argument (“specialized models trained on local datasets” vs. general models) might well evolve: what if OpenAI offered versions of GPT-4 that you could then train and specialize on extra datasets (if they don’t already)? That would void the alternative (either ChatGPT or less powerful models on Hugging Face).
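
Note that OpenAI already allows something like this for some models: since August 2023, gpt-3.5-turbo can be fine-tuned on custom data (GPT-4 fine-tuning was still experimental as of this writing). Here is a minimal sketch of the flow with the openai Python SDK as it exists today (pre-1.0); “my_dataset.jsonl” is a hypothetical file of chat-formatted training examples.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Upload the training data: one JSON chat example per line.
training_file = openai.File.create(
    file=open("my_dataset.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job on a model that supports it.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# Poll the job until it completes and yields the name of your
# specialized model, which you then query like any other model.
print(openai.FineTuningJob.retrieve(job.id).status)
```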

6. Yuval Noah Harari: existential threat to humanity

Who?

Yuval Noah Harari, author of the book Sapiens (2011).

Source

Several interviews and debates, e.g.:

“Yuval Noah Harari argues that AI has hacked the operating system of human civilisation” (The Economist)

Date: April - October 2023

Argument

A generative AI model is capable of getting out of the hands of the humans who created it. If that happens, in the worst-case scenario this model could pursue its own objectives, which would not necessarily be compatible with human welfare.

My personal opinion

I would tend to side with this opinion. People arguing that this is overly pessimistic, dramatic and overall ill-founded tend to bring several arguments forward:

  1. people making the argument for an existential risk are not engineers, they use sloppy arguments and imprecise vocabulary, they don’t know what they are talking about
  2. some of those making these claims have an agenda: to close up AI and keep it in the hands of a handful of monopolies (the GAFAMs)
  3. those who know about LLMs know that they are far from truly intelligent, purposeful, self-reflexive AGIs (this is a variation on argument 1.)

To which I would reply:

  1. the argument about who is or is not a true engineer seems pretty weak. I give Yuval Harari full credit for being able to express a sensible opinion on LLMs. Mustafa Suleyman, who expresses similar opinions (see above), is a cofounder of Google’s DeepMind: good enough credentials.
  2. It is true that (anti-)competitive dynamics are very strong. Microsoft, Google and a couple of others probably dream of seeing the generative AI ecosystem closed up to a small club composed of… themselves, reproducing the walled gardens they are used to enjoying. That said, it does not follow that generative AI is innocuous.
  3. AGI or not? The term “AGI” is so poorly defined that the debate is moot. Suleyman introduced the term “Artificial Capable Intelligence” to distinguish between today’s LLMs and a superintelligent AI. To clarify these issues, I have written a short blog post trying to list the features defining an AGI. LLMs check most of them.

Next

The goal of this blog post was simply to gather in one place some influential opinions on the consequences of ChatGPT, LLMs and generative AI in general, as of October 2023. What did we learn?

  • a dire absence of women’s voices, and of other minorities. That reflects the biases of my sources of information.
  • a lack of consensus on the effects of generative AI: dramatic or not
  • the involvement of the GAFAMs and a recent entrant (Hugging Face) in the debates: competitive dynamics are clearly at play, with the topic of AI regulation being perceived by new entrants as a potential form of regulatory capture used to clip their wings. At least that is what I read implicitly in the debates about the necessity to keep AI open or to contain it.

About me

I am a professor at emlyon business school where I conduct research in Natural Language Processing and network analysis applied to social sciences and the humanities. I teach about the impact of digital technologies on business and society. I build nocode functions 🔎, a point-and-click web app to explore texts and networks. It is fully open source. Try it and give me some feedback, I would appreciate it!

Date: October 12, 2023
