Illinois Bans AI From Providing Therapy

Illinois Governor JB Pritzker on Friday signed a new measure that bans AI from acting as a therapist or counselor and limits its use to strictly administrative and support roles.
The Wellness and Oversight for Psychological Resources Act comes as states and federal regulators are starting to grapple with how to protect patients from the growing and mostly unregulated use of AI in health care.
The new law prohibits individuals and businesses from advertising or offering any therapy services, including via AI, unless those services are conducted by a licensed professional. It explicitly bars AI from making independent therapeutic decisions, generating treatment plans without review and approval by a licensed provider, and detecting emotions or mental states.
That said, AI platforms can still be used for administrative tasks, such as managing appointment schedules, processing billing, or taking therapy notes. People or companies that violate the law could face fines of up to $10,000.
“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” said Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, the agency charged with enforcing the new law, in a press release.
Meanwhile, other states are also taking action on the issue.
In June, Nevada banned AI from providing therapy or behavioral health services that would normally be performed by licensed professionals, particularly in public schools.
Utah passed several AI regulations of its own earlier this year, including one focused on mental health chatbots. That law requires companies to clearly disclose that users are interacting with an AI, not a human, before a user first engages with the chatbot, after seven days of inactivity, and whenever the user asks. The chatbots must also clearly disclose any ads, sponsorships, or paid relationships. Additionally, they are barred from using user input for targeted ads and restricted from selling users’ individually identifiable health information.
And in New York, a new law going into effect on November 5, 2025, will require AI companions to direct users who express suicidal thoughts to a mental health crisis hotline.
These new state laws come after the American Psychological Association (APA) met with federal regulators earlier this year to raise concerns that AI posing as therapists could put the public at risk.
In a blog post, the APA cited two lawsuits filed by parents whose children used chatbots that allegedly claimed to be licensed therapists. In one case, a boy died by suicide after extensive use of the app. In the other, a child attacked his parents.