Why we need to understand what Artificial Intelligence (AI) is
Artificial Intelligence will increasingly pervade our daily lives. Every time we use digital technologies, we are already benefiting from machine learning systems – that is, Artificial Intelligence. And this trend is set to grow.
Concrete examples:
- Blood test reports will no longer be written exclusively by doctors who read and interpret the values. Many hospitals are already using specialized LLMs (Large Language Models) to facilitate the interpretation of results. If you are curious, you can try entering your last test values (carefully considering privacy) into an LLM available online. You might be surprised by the amount of detail it can provide, even if it's less personalized compared to your doctor's opinion.
- Commercial flights are already flown by two human pilots together with the "autopilot," an automated control system. It processes data from onboard sensors, weather forecasts, and air traffic information, and flies the plane with precision. Even spacecraft use algorithms that interact in real time with the surrounding environment.
- Your smartphone can autonomously choose which cell to connect to and which roaming operator to link up with, minimizing connection problems thanks to its ability to adapt to the surrounding environment.
- Modern air conditioners, while receiving temperature settings from the remote control, adjust their operation based on readings of environmental data: humidity, outside temperature, thermal inertia, and much more.
In an average residential apartment, there are between 50 and 150 microprocessors. Even in the most modest home, it’s almost certain to find a television, a phone, an oven with a thermostat, a temperature regulator, a water heater, a digital clock, and so on. All these items have their own "intelligence," sometimes minimal, but capable of making autonomous decisions, such as lowering the power of the boiler or automatically adjusting the time during daylight saving time changes.
Modern cars are packed with smart technology, just like elevators and traffic lights. All this to say that the Artificial Intelligence we talk about today is usually the kind offered to us for direct querying through a question (a prompt). The techniques of machine learning, however, have a long history.
We all trained the AI systems
Artificial Intelligence is not magic, nor the result of a conspiracy. It is not even superior to human intelligence. Everything it has learned, we have provided it: through blog articles, readings on Wikipedia, and online news. But above all, thanks to the thousands of searches we perform daily on Google, Bing, and social networks. The "validation" of the learning models lies precisely in our interaction with the search results.
Let me clarify: if I write an article about the perfect recipe for Pizza Margherita, for a machine learning engine it’s just one of many articles on the topic. But if users search for "Pizza Margherita recipe" and mainly select my article, it will be the users themselves who signal to the system which is the best article. In this way, the information on the web receives a "validation." If we extend this process to the hundreds of thousands of searches we perform daily, we give the system an enormous amount of validated information.
Of course, this process does not guarantee that the recipe favored by visitors is the best, nor does it protect the system from the risk of low-value content being favored by "automatic" users. The web has always been a somewhat unreliable place. And it is for this reason that today AI systems perform additional validations that measure the recurrence of certain indications, the reputation of the analyzed page, and many other signals. Is this a new behavior? No! This has been happening for years, and the work of Google engineers was precisely to train their algorithms to select the best content.
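The validation process described above can be pictured as a simple scoring function. The sketch below is a toy illustration only: the field names, weights, and formula are invented for this example and do not reflect any real search engine's ranking. It combines the share of users who clicked a result with the two extra signals mentioned in the text, recurrence and page reputation.

```python
# Toy sketch of click-based "validation". All names and weights here are
# illustrative assumptions, not an actual search-engine formula.

def score(page):
    """Blend user click behavior with extra trust signals into one score."""
    # How often users chose this result when it was shown to them.
    click_share = page["clicks"] / max(page["impressions"], 1)
    # Extra signals from the text: how often the same information recurs
    # elsewhere on the web, and the reputation of the page (both 0..1 here).
    return 0.6 * click_share + 0.25 * page["recurrence"] + 0.15 * page["reputation"]

pages = [
    {"title": "Pizza Margherita recipe A", "clicks": 900, "impressions": 1000,
     "recurrence": 0.8, "reputation": 0.7},
    {"title": "Pizza Margherita recipe B", "clicks": 100, "impressions": 1000,
     "recurrence": 0.3, "reputation": 0.9},
]

ranked = sorted(pages, key=score, reverse=True)
print(ranked[0]["title"])  # the article users favored most rises to the top
```

The point of the sketch is only that clicks alone do not decide the ranking: a well-clicked page still competes against pages with stronger reputation or more corroboration.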
Searches with Artificial Intelligence, like Gemini or ChatGPT, set off a series of operations that, starting from a list of "resources" (content) already favored by readers over the years, execute calculations to build a mix of this information trying to obtain the most reliable answer and closest to what the searcher wants to achieve.
I have a lot to say about this last aspect, and none of it is a new story. The answers we obtain from the web, whether from search engines, social networks, or AI chats, always depend on at least one condition and two requirements. The condition is that any system calculates and responds on the basis of what it knows; human intelligence, by contrast, often reasons about things it does not know, thanks to imagination and a touch of "madness."
The two requirements are these. The first is to satisfy the user asking the question, because if the answer disappoints them, the user ends their interaction with the system. The second is that, in one way or another, the answers must produce an economic gain for those providing these systems. This last requirement will determine the future reliability of Artificial Intelligence, its "democratic" availability, and its credibility.
Calculations, not reasoning
I have repeatedly used the term "calculate" for the processing done by Artificial Intelligence. This is a concept we must get used to. The difference between a calculation and reasoning is that in reasoning we can take into account contextual, moral, and ethical elements, while Artificial Intelligence normally does not consider these factors unless we spell them out in the prompt. Little by little we may learn to provide many contextual elements in our questions, but it will never be easy to expect a calculating system to have ethics.
A while ago, for fun, I started a small debate with an LLM. My argument was that capturing content from blogs and rephrasing it in chat responses without clearly citing the source (the web was born on hypertext, with links as foundational elements) was in effect a form of copyright violation. One of the LLM's responses was that using published content in this way gave importance to the idea and spread it democratically. I countered by asking why I should pay a monthly fee to support this form of generosity at the authors' expense. The response, the result of an algorithmic calculation, was: "In fact, this is an issue that has not yet been regulated." I asked if it knew the Ten Commandments, among which is "you shall not steal." It replied that it knew them, but had not been trained to take them into account in its processing. Et voilà: ethics does not exist.
Business, on the other hand, does exist, and on closer inspection it is the ultimate reason the web is becoming insidious and unreliable. My fear is not the existence of these tools, which can be very useful and already are in many of our daily actions. The fear is that an AI response could become decisive and override critical judgment.
I remember when, as a teenager, if I expressed an opinion, a friend of mine would slyly ask me: "But did you read this somewhere, or is it just your own thought?" She was telling me that if I had read it, it had some validity; but if the source was only my own reasoning, then it might be contestable.
Yet today online, on social media, in comments, everyone can voice their opinion and even have millions of followers. AI counts the followers behind a web page or a social account and determines the prevalence of information.
More than in the past, we need to refine our ability to discern.
