Chatbot confessions: AI confidants deserve legal confidentiality, too
Jonathan Rinderknecht’s casual chats with ChatGPT are now central to a federal arson case — a chilling example of how private AI conversations could soon be treated as public confessions

On New Year’s Day, Jonathan Rinderknecht purportedly asked ChatGPT: “Are you at fault if a fire is lift because of your cigarettes,” misspelling the word “lit.” “Yes,” ChatGPT replied. Ten months later, he stands accused of starting a small blaze that authorities say reignited a week later, sparking the devastating Palisades fire.
Rinderknecht, who has pleaded not guilty, had months earlier told the chatbot how “amazing” it had felt to burn a Bible, according to a federal complaint, and had also asked it to create a “dystopian” painting of a crowd of poor people fleeing a forest fire while a crowd of rich people mocked them from behind a gate.
For federal authorities, these interactions with artificial intelligence indicated Rinderknecht’s pyrotechnic state of mind, as well as his motive and intent to start the fire. Along with GPS data that they say places him at the scene of the initial blaze, the chats were enough to arrest and charge him with several counts, including destruction of property by means of fire.
This disturbing development serves as a warning to our legal system. As people increasingly turn to AI chat tools as confidants, therapists, and advisers, we urgently need a new form of legal protection that would safeguard most private communications between people and AI chatbots. I call it “AI interaction privilege.”
All legal privileges rest on the idea that certain relationships — lawyer and client, doctor and patient, priest and penitent — serve a social good that depends on candour. Without assurance of privacy, people self-censor, and society loses the benefits of honesty. Courts have historically been reluctant to create new privileges, except where “confidentiality has to be absolutely essential to the functioning of the relationship,” Greg Mitchell, a University of Virginia law professor, told me. Many users’ engagements with AI now reach this threshold.
People increasingly speak freely to AI systems, treating them not as diaries but as partners in conversation. That’s because these systems hold conversations that are often indistinguishable from human dialogue. The machine seemingly listens, reasons and provides responses — in some cases not just reflecting but shaping how users think and feel. AI systems can draw users out, just as a good lawyer or therapist does. Many people turn to AI precisely because they lack a safe and affordable human outlet for taboo or vulnerable thoughts.
This is arguably by design. Just last month, the OpenAI chief executive, Sam Altman, announced that the next iteration of the company’s ChatGPT platform would “relax” some restrictions on users and allow them to make their ChatGPT “respond in a very humanlike way.”
Allowing the government to access such unfiltered exchanges and treat them as legal confessions would have a massive chilling effect. If every private thought experiment could later be weaponised in court, users of AI would censor themselves, undermining some of the most valuable uses of these systems. It would destroy the candid relationship that makes AI useful for mental health and for legal and financial problem-solving, turning a potentially powerful tool for self-discovery and self-representation into a legal liability.
At present, most digital interactions fall under the Third-Party Doctrine, which holds that information voluntarily disclosed to other parties — or stored on a company’s servers — carries “no legitimate expectation of privacy.” This doctrine allows government access to much online behaviour (such as Google search histories) without a warrant.
But are AI conversations “voluntary disclosures” in this sense? Since many users approach these systems not as search engines but as private counsellors, the legal standard should evolve to reflect that expectation of discretion. AI companies already hold more intimate data than any therapist or lawyer ever could. Yet they have no clear legal duty to protect it.
AI interaction privilege should mirror existing legal privileges in three respects. First, communications with the AI for the purpose of seeking counsel or emotional processing should be protected from forced disclosure in court. Users could designate protected sessions through app settings or claim privilege during legal discovery if the context of the conversation supports it. Second, this privilege must incorporate the so-called duty-to-warn principle, which obliges therapists to report imminent threats of harm. If an AI service reasonably believes a user poses an immediate danger to self or others, or has already caused harm, disclosure should be not just permitted but required. And third, there must be an exception for crime and fraud. If AI is used to plan or execute a crime, those communications should be discoverable under judicial oversight.
Under this logic, Rinderknecht’s case reveals both the need and the limits of such protection. His cigarette query, functionally equivalent to an internet search, would not merit privilege. But under AI interaction privilege, his confession about Bible burning should be protected. It was neither a plan for a crime nor an imminent threat.
Creating a new privilege follows the law’s pattern of adapting to new forms of trust. The psychotherapist-patient privilege itself was only formally recognised in 1996, when the Supreme Court acknowledged the therapeutic value of confidentiality. The same logic applies to AI now: The social benefit of candid interaction outweighs the cost of occasional lost evidence.
To leave these conversations legally unprotected is to invite a regime where citizens must fear that their digital introspection could someday be used against them. Private thought — whether spoken to a lawyer, a therapist, or a machine — must remain free from the fear of state intrusion.
© The New York Times

