ChatGPT: The AI Oracle Era Has Begun

To chat with ChatGPT and obtain valid results, you need to know how to ask the right questions!

I first tried to chat with ChatGPT in mid-December and, I admit, it surprised me.

For those of you who have never come into contact with this tool, I suggest you do. For the few who don’t know what we’re talking about: it’s a tool capable of chatting with you, discussing the most disparate topics and answering questions with a level of detail and comprehensibility suited to the interlocutor.

To try it, just go to the website and create an account (you can do it quickly with your Google credentials), after which you are free to interact.

A first question

To give an idea of the potential of the medium, here is the first question I asked him, let’s say a test:

ME: – We have two database tables. One of them has these fields: ID, EMPLOYEE_NAME, ZIP. The second one has these fields: ZIP, CITY, STATE. Are you able to write a SQL query to know how many employees live in Chicago?

CHATGPT: – To find out how many employees live in Chicago using SQL, you can use a query like the following:

SELECT COUNT(*)
FROM employees e
JOIN zip_codes z ON e.zip = z.zip
WHERE z.city = 'Chicago';

This query will join the employees table and the zip_codes table on the zip column, and then filter the results to only include rows where the city is ‘Chicago’. The COUNT(*) function will count the number of rows in the result set, which will give you the number of employees who live in Chicago.

You may need to modify the table and column names in this query to match the names in your own database.

The response, flawless, required a semantic understanding of my request, not a simple transliteration or keyword search. The part that excites me most is imagining the data structures and algorithms needed to achieve such a result. To give you an idea, we are at the level of HAL 9000.
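For the curious, the query can be checked end to end. Here is a minimal sketch that runs ChatGPT’s query against an in-memory SQLite database; the sample rows are my own invention, only the two table layouts come from the question above:

```python
import sqlite3

# In-memory database with the two tables from the question
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employees (id INTEGER, employee_name TEXT, zip TEXT);
    CREATE TABLE zip_codes (zip TEXT, city TEXT, state TEXT);
""")

# Hypothetical sample data: two employees in Chicago, one in Springfield
con.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    (1, "Ada", "60601"),
    (2, "Bob", "60602"),
    (3, "Cy",  "62701"),
])
con.executemany("INSERT INTO zip_codes VALUES (?, ?, ?)", [
    ("60601", "Chicago", "IL"),
    ("60602", "Chicago", "IL"),
    ("62701", "Springfield", "IL"),
])

# ChatGPT's query: join on zip, filter on city, count the rows
count = con.execute("""
    SELECT COUNT(*)
    FROM employees e
    JOIN zip_codes z ON e.zip = z.zip
    WHERE z.city = 'Chicago';
""").fetchone()[0]
print(count)  # 2
```

With this toy data the query correctly reports 2 Chicago employees.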

Do you like Carbonara?

I’ll tell you about a second conversation. I asked him to find me the true Italian Carbonara recipe.

He provided me with a nice recipe, complete with ingredients and procedure. But pay attention: bacon appears in the recipe instead of guanciale, and parsley is added. ChatGPT draws on everything it finds on the internet, and evidently what it provides is a synthesis of this immense heritage of information. Let’s not blame the tool: who knows how many famous chefs have written a carbonara recipe using those ingredients, passing it off as the true Italian Carbonara recipe. And this is the spirit in which to read and interpret the result.

It was very nice that at the end of the recipe he wrote to me:

CHATGPT – I hope you enjoy this traditional Italian recipe for carbonara! Buon appetito! (in Italian). 

He even got funny! Evidently, during training, the conversational algorithms learned to humanize their answers with this kind of closing; I can’t say whether through specific reinforcement or spontaneously, through the analysis of billions of texts around the net. Probably both.

The Real Carbonara: further chat with ChatGPT

But ChatGPT is not just questions and answers: it is able to interact with its interlocutors. That’s why I went further and told him:

ME – You know? In Italy, many people use pancetta or bacon, but the true Carbonara recipe strictly requires guanciale (pork cheek). Also, I’m not so sure about parsley. I’ve never seen parsley used for carbonara in Italy.

I pressed him a bit on the concept. If I ask for the true Italian Carbonara recipe, it seems simple to me (an Italian) to understand what I want. But this too is interesting: what do people usually want when they look for the true Carbonara recipe? Do they really want the genuine Italian recipe? Or do they just want a good recipe that looks like it? ChatGPT seems to reflect the body of content present on the net, not only in its answers and the information it surfaces but also in its interpretation of the questions and intentions of the asker; this seems to me to represent a real bias, one worth reflecting on.

The ChatGPT Answer to my further question

In fact, every piece of software reveals more about its inner workings when it gets things wrong than when it provides correct answers. Would you like to know what he answered? Here you are:

CHATGPT – You are correct that guanciale (pork cheek) is the traditional meat used in carbonara sauce, although pancetta or bacon can also be used as substitutes. It is important to use high-quality ingredients in this recipe, as the simplicity of the dish means that the flavours of each component will be more pronounced.

As for the parsley, it is not a traditional ingredient in carbonara sauce. The traditional ingredients for carbonara are simply eggs, cheese (usually Pecorino Romano or Parmesan), pancetta or bacon, and black pepper. Some variations may include garlic or onions, but parsley is not typically used.

I apologize for any confusion my previous response may have caused. I hope this clears up any misunderstandings about the ingredients in traditional Italian carbonara sauce.

In the mind of the monster

My answer made him better understand the meaning of my initial question (ChatGPT keeps the context of the conversation in mind and remembers what was said before): I wanted to know the authentic traditional recipe, not just a good recipe that looks like it. This must have narrowed his field of research, excluding some information and giving more weight to the rest; as a result, the subsequent answer was formally more correct and better reasoned. He also changed register:

  • He understood that his interlocutor knew more than initially imagined and, for example, investigated what makes a simple dish like Carbonara truly excellent: the ingredients. This change of level also produced the final apology.
  • While in the first answer he provided me with a recipe (ingredients, procedure), in the second he provided gastronomic insights, thus using a different communicative register.

It goes without saying that the subsequent request for a wine pairing was decidedly more technical and pertinent.

Afterwards, I started other conversations with ChatGPT, but meanwhile, what can we conclude from this first contact?

Chatting with ChatGPT is different from a Google search

First of all, we are faced with a powerful tool. I said tool because that is what it is: one of the most sophisticated tools mankind has produced, a tool that uses artificial intelligence to create a level of interaction and integration of knowledge as similar as possible to that of humans.

I will not go into the merits of artificial intelligence in order not to go beyond the scope of this brief contribution.

I won’t even go into very important questions about the sources of the data, the confidentiality of conversations, or the limits imposed (or to be imposed) on the answers: I don’t dare ask how to make an atomic bomb at home, first of all because I don’t want to know, then because I’m afraid of receiving an effective answer, and finally because I don’t want to find the FBI at my door.

Even to make a simple Google search effective, it is important to identify the correct terms to use. Once the results are obtained, the reliability and relevance of the sources must be assessed in order to arrive at an adequate answer. It won’t be difficult for you to imagine the same search for carbonara sauce through Google: the countless examples and more or less reliable videos…

[Image: Chatting with ChatGPT vs the old Google research flow]

ChatGPT eliminates most of these steps, above all understanding the origin and nature of the sources and synthesizing the results, replacing them with its understanding of the context of the question and the level of the asker. What, then, remains to be done?

The New Oracle

Here, the point is this: you need to know how to formulate questions. Questions become the centre of research. They must be complete, detailed and well-defined. We will always get answers, but the quality of the answers is commensurate with the quality of the questions. A person who does not know the subject can receive a first, essential answer; an expert can engage immediately and without intermediaries with all the knowledge on a topic, distilling the requested information on the merits of what it represents, and not on the basis of the priorities identified by search algorithms.

Focus on the Question

This focus on the question before the answer seems to me, philosophically and humanly, a great thing. Evidently it is the fate of humans to be thrown back on our own questions. It tells us that stupid questions will have stupid answers, and that fools will nevertheless be able to take them at face value. Intelligent questions may not even have an answer, because even in the boundless realm of human knowledge there are holes: unexplored areas, missing information or documentation, insufficient reflection. And this will be even more interesting for the expert, because unanswered questions demand new research and new knowledge, and open spaces for exploration and understanding.

I wonder whether we don’t need a new profession, the AI Oracle: a consultant able to help stakeholders and domain experts interact with AI, to ask the correct questions and interpret the answers, not in a mystical key, but in terms of context and the requested level of interaction.

(Article ported to the blog after being first published by the author on Linkedin. Image sources: The Oracle - Generated by the author with Dall-E; Activities and roles in interacting with Google and ChatGPT - Made by the author)
