With ChatGPT, we are crossing a threshold as important as the one from analog to digital.

Why ChatGPT is not just another exceedingly good AI model

If you don’t know about ChatGPT, please Google it and see what it does before coming back here.

This is an attempt to grasp the significance of large language models, aka foundation models. ChatGPT is the latest release of this sort of model, a family which also encompasses image generation models such as MidJourney, DALL·E and Stable Diffusion. I’ll use the name “ChatGPT” in the following as this might be the most familiar to readers.

In three points, why is ChatGPT different from other models in AI?


Not trained for a specific task

ChatGPT is different because it is not trained to perform a specific task, like ‘is this email spam or not?’ or ‘translate this French sentence into English’. Instead, it is trained to guess ‘what comes next’. If you train it on a text that says ‘I love chocolate’, it will learn that ‘I love’ can be followed by ‘chocolate’ (and, less often, by ‘spinach’, maybe?).
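This ‘guess what comes next’ idea can be sketched in a few lines of Python. Everything below (the corpus, the counts, the function names) is invented for illustration; real models learn probabilities with neural networks over hundreds of billions of words, not raw counts, but the training signal is the same:

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees raw text.
corpus = "i love chocolate . i love chocolate . i love spinach ."

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def guess_next(word):
    """Return the most frequent continuation seen during training."""
    return follows[word].most_common(1)[0][0]

print(guess_next("love"))  # 'chocolate': seen twice, vs. 'spinach' seen once
print(follows["love"]["chocolate"] / sum(follows["love"].values()))  # ≈ 0.67
```

Scale this up from one sentence to a sizeable chunk of everything humans have ever written, and you get a sense of where the capabilities come from.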

Learns implicit relations

The result of the training is not a simple reflection of what the model has been trained on (“after ‘I love’, there is a 1% chance that the following word is ‘chocolate’”). No. Instead, the result of the training is a giant table of numbers (a ‘model’) representing the relations between all the pieces of text you fed it. A classic example is that it can store, and hence “understand”, the following relation: the word ‘king’, minus the word ‘man’, plus the word ‘woman’, is equivalent to ‘queen’. Following the same logic, it can also learn and reproduce ‘styles’: slang, formal writing, dialects, but also styles of painting, etc.
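The ‘king minus man plus woman’ relation can be sketched with hand-made toy vectors. The dimensions and values below are invented for illustration (real models learn hundreds of opaque dimensions from data rather than interpretable ones like these):

```python
import math

# Toy embeddings with made-up dimensions: [royalty, maleness, concreteness].
vectors = {
    "king":  [0.9,  0.8, 0.1],
    "queen": [0.9, -0.8, 0.1],
    "man":   [0.1,  0.8, 0.1],
    "woman": [0.1, -0.8, 0.1],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    zero = [0.0] * len(a)
    return dot / (math.dist(a, zero) * math.dist(b, zero))

# king - man + woman lands closest to... which word?
target = add(sub(vectors["king"], vectors["man"]), vectors["woman"])
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # 'queen'
```

Subtracting ‘man’ removes the maleness component while keeping royalty; adding ‘woman’ puts femaleness back in, and the nearest remaining vector is ‘queen’.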

A true deluge of data

In the last few years, it has been trained on an immeeeeeeeense set of textual documents covering every imaginable aspect of our human culture. All English pages of Wikipedia. Tens of thousands of digitized books. Tons of websites. All of GitHub. And more, more, more.

The results are astonishing. Here I hesitate to give examples because that’s not the point of this piece. Let’s just say that ChatGPT can emulate a computer (a virtual machine), simply because someone tried asking it to behave like one. That is… a very, very strong capability.

Ok, what should we think of it?

My impression is that with ChatGPT, we are crossing a threshold as important as the one we passed from analog systems to the digital world.

When digital systems emerged, they freed operations from the constraint of being carried out through material circuits that are mechanical analogs of those operations. Like, when you listen to music today, the sound you hear is stored and transmitted as zeros and ones. No need to store the sound on an engraved disk whose grooves are analogs of the sound waves to be reproduced.

The move from analog to digital was transformative and impactful because you could emulate any analog system with computers: cheaper, faster, smaller, and easier to evolve.

Mechanical calculators had to be operated by hand, and skilled professionals took minutes or hours to get results out of them. In the digital era, electronic calculators emerged, and kids could do trigonometry with them in junior school. Film cameras became digital cameras. Newspapers became online magazines. The transmission of voice, image, and data started shifting from analog wave systems to digital forms around the 1980s.

This transition was mostly over by the late 2000s. It allowed a tremendous increase in the volume and accuracy of the data being transmitted, paving the way for the Internet-as-content-broadcasting-and-consumption that we experience today.

Just one more illustration:

I love this comparison showing the evolution of the control systems aboard the cockpits of human-controlled rocket modules. If you have an analog control system, like in the old Soyuz, what happens if you want to modify, add, or remove a button? That’s going to be difficult and costly, because you will need to materially change panels and knobs. In a digital system, you just redesign an interface on a screen. Moving bits is easier than moving atoms.

With ChatGPT, we experience a breakthrough of a similar magnitude. The argument goes like this:

  • in digital systems, operations are still described and enacted through rules, defined by the lines of code that programmers write.
  • the typical example would be the “for loops” or “if then else” instructions (and their equivalents) found in any program. They describe precisely what the program should do. You want the program to perform a new task? Then you will need to rewrite these lines of code.
  • in this respect, digital systems are very much like analog systems: faster, easier to modify, but still following a mechanical way of designing and implementing operations.
  • with ChatGPT, there are no operations defined, written, or even described anywhere. There is just a latent space (the huge table of numbers I discussed above), which can be triggered to produce an output. The operations that produce the output are discovered by ChatGPT at the moment you ask; they were not stored or explicitly described anywhere beforehand.
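To make the contrast concrete, here is a hypothetical rule-based program in the classical style these bullets describe. The task, words, and rules are invented for illustration; the point is that every behavior is spelled out, line by line, before the program ever runs:

```python
# A toy rule-based spam check: the programmer enumerates every rule
# in advance. To handle a new kind of spam, a human must edit this list.
SUSPICIOUS_PHRASES = ["lottery", "free money", "click here"]

def looks_like_spam(email_text):
    """Apply the hand-written rules, one by one."""
    lowered = email_text.lower()
    for phrase in SUSPICIOUS_PHRASES:   # an explicit "for" loop
        if phrase in lowered:           # an explicit "if" rule
            return True
    return False

print(looks_like_spam("You won the LOTTERY, click here!"))  # True
print(looks_like_spam("Meeting moved to 3pm"))              # False
```

A ChatGPT-like system has no such list anywhere: nobody wrote down the rules it applies, which is exactly the break with the digital-but-still-mechanical way of working.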

The example I gave above should be enough: ChatGPT can emulate a computer even if it has not been specially designed for this task. There is no line of code, no instructions in ChatGPT that tell it how to behave like a computer if we ask it to. Yet it does it very well.

What are the consequences?

Things I am 95% sure of:

➡ Resistance by all of us, first

The same way the oldest among us might prefer interacting with analog systems rather than digital ones (reading print rather than on-screen, anyone?), I expect we will all feel more comfortable interacting with digital or analog systems than with ChatGPT-like systems. Just as people who grew up with analog systems appreciate the trust they can place in good old plain mechanics, we will all feel that the latent spaces of ChatGPT are too nebulous to be fully trusted, even given their mind-blowing performance.

➡ Followed and mixed with rapid adoption

Why would I prefer scrolling through and reading a piece of content created by “a real human” or a “real computer program” when ChatGPT can create one that is 10x more inspiring? With time, individuals exposed to ChatGPT will see no reason to prefer the old ways of creating content, which will feel so ‘analog’ and quaint: slow, mechanical, imperfect, uninventive.

➡ Rapid shift from content creation to “content supervision”

Any text, image, movie, or sound produced by humans will soon be produced realistically by ChatGPT, at a speed and degree of inventiveness that no human can match. I am convinced that this will lead to a rapid integration of ChatGPT in all industries, not just creative ones. Jobs will be lost and new jobs will emerge, especially those related to supervising ChatGPT: setting an editorial line, prompting, redrafting the results, quality control, integration with other systems.

➡ Increased isolation of individuals

There is a classic book in sociology entitled ‘Bowling Alone’, which describes how individuals in the United States “have become increasingly disconnected from family, friends, neighbors, and our democratic structures”.

The title of the book is great and tragic: bowling is a typically social activity, and yet bowling alone became not uncommon. The book was published… in 2000. With the emergence of social media and smartphones in the years that followed, we as individuals became even more isolated from each other. ChatGPT will most probably increase this tendency. After all, it will be able to become your best teacher (1) (2) (3), and from there why not your shrink / friend / lover (not necessarily in that order). While it is hard to guess how this will impact our daily life and the fabric of society, one thing is pretty clear: individuals will spend even more time interacting with screens and devices than with fellow humans.

➡ Profound and unexpected changes in the fabric of society

In the medium to long term (aka in 10 years or more), I would expect ChatGPT to introduce changes analogous in scope to what the Internet caused (1), and maybe more (2).

(1) Business-wise, I would expect profound transformations of professions and industries, just as the GAFAs emerged with the Internet. ChatGPT will allow us to interact with a system that is seemingly omniscient; that has to open new sources of productivity, new markets, new channels of communication, new expectations from consumers.

(2) Until 2020, I would just shake my head in disbelief when discussing “Strong AI” in class. Strong AI is an AI that is self-conscious and sets its own goals, independently of human direction. Very much akin to a life form. With ChatGPT, the demonstration is now made that an AI does not need to be truly self-conscious to be perfectly human-like when interacting with us*. An AI simulating consciousness can achieve the same result. So give it 3, 5, 10, or 20 more years of development, and ChatGPT could well become an autonomous agent. Actually, I’d bet on it.

*many acknowledge that “true self-consciousness” is too high a bar: we humans are far from being truly and completely self-conscious.

About me

I am a professor at emlyon business school, where I conduct research in Natural Language Processing and network analysis applied to the social sciences and humanities. I teach about the impact of digital technologies on business and society. I also build nocode functions 🔎, a point-and-click web app to explore texts and networks. It is fully open source. Try it and give some feedback, I would appreciate it!

Written on December 23, 2022