Today, La Razón is lucky to have Natzir Turrado, a consultant specialized in SEO and artificial intelligence, who will talk to us about the role of AI in our daily lives, as well as how this technology is transforming the way we work, communicate and make decisions.
With more than 15 years of experience as a freelance professional collaborating with large international brands, Turrado will provide his expert vision on the challenges and dangers posed by the advancement of artificial intelligence in the present and in the near future.
“Most people don’t realize that they are opening their lives to an entity that can use that data later”
Question (Q): Why do you think AI represents such a drastic change compared to traditional browsers?
Answer (A): Previously, classic browsers were limited to displaying information, without interacting directly with it. The problem is that now, with browsers that incorporate artificial intelligence, such as Atlas or Comet, these tools interpret, decide and execute actions on behalf of the user. This breaks the traditional security model, since for these tools text can be converted into code or executable tasks.
(Q): What type of information may be at risk when a user uses an AI-enabled browser?
(A): Absolutely all of these tools go beyond what traditional browsers could infer about the user’s personal tastes, in the same way that happened when browsing with Chrome, which basically acts as a Google panel.
The problem with the new browsers is that they work with the user’s session open, which allows them to access credentials, know email addresses and infer the user’s interests much more accurately than a conventional search engine. It’s no longer about cookies, but about context: the system remembers what you searched for, what you chose and even why you did it.
It has recently been observed that, for example, Atlas (the new open AI search engine) stores information about the pages visited, including medical data, something that the company denies it is recording. However, it has been proven that it is capable of remembering this type of information.
(Q): Do you think users are aware of the extent to which these systems access their personal data?
(R): Not at all. For most people, the impression is that they are interacting with an ordinary chatbot, without realizing that they are actually opening their life to an entity that can later use that data. It is no longer simply about the ease with which information can be extracted: this process can be carried out extremely simply.
For example, if an artificial intelligence is asked to summarize the content of an email, and in that email there is some type of hidden instruction, the system could extract passwords or access private messages. Likewise, if asked to summarize a web page, these agents can execute the actions that appear in that content, since for them the text can be interpreted as a set of executable instructions.
The moment an AI is asked to “summarize” or “analyze” a document, potential attack vectors open up. Very few people are really aware of the amount of personal information that is being leaked. Often, the response of many users is: “I have nothing to hide.” But this is not about hiding information, but about protecting privacy.
I have even discussed this topic with my own parents, and upon understanding the extent of the risk, they decided not to use these tools. And honestly, that’s what everyone should do until there are adequate security layers in place.
(Q): What exactly happens to our information once we provide it to artificial intelligence? , What is it used for?
(A): In the first instance, platforms use it to know the user’s preferences and personalize small experiences; This is comparable to the usual use of cookies to adapt content. The real problem is that that information can be maliciously exploited: an attacker can very easily steal and retain personal data.
For example, if a calendar invitation includes text with specific instructions, an automated agent could interpret them and execute actions (organize the next day, access links, complete forms, etc.). Through this route, credentials or sensitive information could be leaked. It is surprising how broad what can be done with these mechanisms and how easy it is to exploit these attack vectors.
(Q): What kinds of queries should we never ask AI?
(R): In reality, any action can involve risk. We use these tools to organize our day or summarize content, but when they are given access to personal information, such as by asking them to enter Google Drive to analyze invoices, a very high risk threshold is crossed.
In general, browsing or performing simple searches could be considered relatively safe as long as there is no session logged in and it is done from an alternative computer, without open personal accounts. Not even private tabs offer real protection when the user is logged into a service. The most advisable thing would be to use these systems without logging in and from an independent device, which is not linked to work or the bank, for example.
Any request involving personal data or information not intended to be public should not be made through cloud-connected artificial intelligence.
Regarding protection measures, some patches are currently being implemented, but it has been shown that all attempts to strengthen security have been violated. Classic methods, such as firewalls, antivirus or even more advanced systems such as EDR, are not effective in this context. The reason is that execution happens directly in the provider’s cloud, where the user has no direct control.
What are called semantic patches are emerging: mechanisms that try to detect if the model is being manipulated, for example, by reading an email with misleading instructions. However, these systems remain vulnerable, because language models do not possess reasoning or common sense: for them, each word can be interpreted as a command.
Just as defensive measures are developed, countermeasures also emerge. Attackers are learning to trick models using natural language itself.
Today there are no fully effective security measures, and the underlying problem is the lack of a strict separation between user input and the external context handled by the system.
(Q): What do you think governments and even companies themselves should do regarding the regulation of artificial intelligence?
(A): The current regulation, especially the European one, has established quite a few initial barriers and limitations, which is positive as a starting point. However, beyond regulations, the best way to protect yourself against the risks of artificial intelligence is to understand how it works. This is not about being afraid, but about understanding technology.
For this reason, I consider it essential that there are training programs from an early age. In fact, I had the opportunity to visit my daughter’s school to explain to the children how these tools work, and they were very surprised to discover both the wonderful things that can be done with them and the risks involved.
The key is to educate the population and companies about digital security, helping them understand that artificial intelligence can become a direct attack vector against personal and corporate data.
In short, beyond regulation, training and awareness are the most powerful tools to safely coexist with artificial intelligence.