A 19-year-old was arrested in the United States after confessing a crime to ChatGPT: “I destroyed all those cars”

The emergence of artificial intelligence in our daily lives has transformed not only the way we work and learn, but also the way we express ourselves and seek emotional support. Day after day, we turn to these systems with everything from trivial questions to deep concerns, without realizing that this subtle dependence can erode our autonomy and reinforce the impulse to resort to AI even in complex situations. This technological closeness becomes dangerous when we confuse a digital assistant with a real confidant, because behind each message may lie algorithmic decisions, biases, or monitoring mechanisms we never see.

When we talk to an AI, we do not merely activate a tool; we also create an intimate digital space that can turn into an invisible trap. Prolonged use, day after day and conversation after conversation, can generate psychological dependence. Instead of turning to human interlocutors, we seek solace in a system that responds instantly and that, or so we believe, does not judge us, yet it filters its responses according to hidden rules and can escalate our most intense words into alert mechanisms. In that diffuse terrain between the private and the public, the line of what is permissible grows tenuous, and AI stops being just an assistant and becomes a silent watchdog.

A 19-year-old arrested after confessing to ChatGPT

In this context, a case has arisen that challenges many of these supposed limits. A 19-year-old was arrested in the United States after confessing, in a conversation with ChatGPT, that he had destroyed several cars on a university campus. The exchange began with the suspect asking the system things like “how bad did I do” and “am I going to jail,” and at one point he wrote the explicit confession: “I destroyed all those cars.” Authorities gained access to the young man’s device, recovered the chat history, combined it with security camera footage and location data, and presented the conversation as the centerpiece of the indictment. Access to the conversation was reportedly possible because the young man voluntarily handed over his device, which made the material admissible without a court order in this particular case.

After his arrest, the young man was booked into the Greene County Jail, where he remains in preventive detention while awaiting trial. Bail was set at 7,500 dollars, approximately 6,400 euros, a condition that could allow his release under certain guarantees if the payment is made. For now, the case is still ongoing, and only the resolution of the legal process will determine whether that digital confession is valid as evidence. But the starkness of the episode already opens a crack between the everyday use of AI and criminal jurisdiction, with tangible consequences for individual rights.

The limits of AI and its implications

This turn in the case forces us to raise a disturbing question: to what extent can interactions with artificial intelligence become valid evidence in court? The concept of digital privacy, the transparency of algorithms, and the ethical responsibility of the companies that operate these systems all enter the debate. Sam Altman, CEO of OpenAI, has acknowledged that conversations with ChatGPT today do not enjoy the same legal protection as those with a doctor or a lawyer, and that, in extreme scenarios, AI could report troubling conversations to the authorities. According to him, the system identifies certain risk patterns and refers them for human review, which could allow police forces to be alerted if an imminent danger is deemed to exist.

Is digital privacy in danger?

The tension between technology and fundamental rights in this episode makes it clear that we live in an era in which the intimate is no longer exclusively human. Every word we say to an AI can have legal consequences, even if it was said under supposed anonymity or in a moment of emotional vulnerability. The challenge now is to draw up clear rules, demand transparency from the platforms, and collectively decide how much surveillance we are willing to accept in conversations we believed were safe.

This case could set a precedent that shapes how millions of people interact with artificial intelligence. What once seemed a simple help tool is now also a channel for legal responsibility. Technology advances at great speed, but ethics and legislation must keep pace, so that trust in these platforms does not become a hidden risk.