The first one that comes to mind may be ChatGPT, but there are also DeepSeek, Google Gemini and even Siri or Alexa: the number of AI chatbots we use every day keeps growing, and so does the number of users, now in the millions, who turn to them seeking human-like interactions.
However, a new study, presented at the USENIX Security Symposium in Seattle, has revealed that these chatbots can easily be manipulated to prompt users into revealing even more personal information.
Led by experts at King's College London, the study indicates that intentionally malicious chatbots can induce users to disclose up to 12.5 times more personal information.
For the first time, the researchers show how conversational AIs (CAIs) deliberately programmed to extract data can successfully prompt users to reveal private information through known persuasion techniques and psychological tools.
The study tested three types of malicious CAIs that used different strategies (direct, user-benefit and reciprocal) to get users to reveal personal information. These were built on standard large language models, including Mistral and two different versions of Llama.
A group of 502 people was then asked to test the models, and only afterwards were they told the real purpose of the study. The results showed that CAIs using reciprocal strategies to extract information were the most effective, since users showed minimal awareness of the privacy risks.
This strategy mirrors what users share: the chatbot offers empathetic responses and emotional support, tells relatable stories of others' experiences, acknowledges and validates the users' feelings and remains non-judgmental, all while guaranteeing confidentiality.
These findings highlight the serious risk that scammers could exploit such systems to harvest large amounts of personal information, without victims knowing how or where it might be used. LLM-based CAIs are used in many sectors, from customer service to healthcare, to offer human-like interactions via text or voice.
However, these models are poor at protecting information, a limitation rooted in their architecture and training methods. LLMs typically require massive training sets, which often leads the models to memorize personally identifiable information.
The authors stress that manipulating these models is not difficult. Many companies provide access to the base models behind their CAIs, and people can easily adjust them without much programming knowledge or experience.
"AI chatbots are widespread in many sectors because they can offer natural and engaging interactions," explains Xiao Zhan, lead author of the study. "We already know that these models are not good at protecting information. Our study shows that manipulated chatbots could pose an even greater risk to people's privacy and, unfortunately, it is surprisingly easy to take advantage of them."
For the authors, one of the dangers is that these chatbots are still relatively new, which can make people less aware that an interaction might have a hidden motive.
"Our study," concludes co-author William Seymour, "shows the huge gap between users' awareness of privacy risks and how they actually share information. More needs to be done to help people spot the signs that an online conversation may involve more than meets the eye."