Utah enacts AI amendments targeting mental health chatbots and generative AI | Sheppard Mullin Richter & Hampton LLP

By versatileai | May 19, 2025

Utah is one of a handful of states that have led the way on AI regulation. The Utah Artificial Intelligence Policy Act (i) ("UAIPA"), enacted in 2024, requires disclosures related to consumer interactions with generative AI, with heightened requirements for regulated occupations, including licensed health professionals.

Utah recently passed three AI laws (HB 452, SB 226, and SB 332), all of which took effect on May 7, 2025, and amend or expand the scope of the UAIPA. The laws govern the use of mental health chatbots, amend the disclosure requirements for deploying generative AI in connection with consumer transactions or the provision of regulated services, and extend the UAIPA's repeal date.

HB 452

HB 452 creates disclosure requirements, advertising restrictions, and privacy protections for the use of mental health chatbots. (ii) A "mental health chatbot" is AI technology that uses generative AI to engage in conversations with users of the chatbot. The term does not include AI technology that provides only scripted output (such as guided meditation or mindfulness exercises).

Disclosure requirements

A mental health chatbot supplier must clearly and prominently disclose that the mental health chatbot is AI technology and not a human: (1) before a user may access features of the mental health chatbot, (2) at the beginning of an interaction if the user has not accessed the mental health chatbot within the previous seven days, and (3) if and when a user asks or prompts the chatbot about whether AI is being used.
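
Purely as an illustrative sketch (not legal advice), the three disclosure triggers above can be read as a simple decision rule. All names below (UserSession, needs_ai_disclosure, and so on) are hypothetical and are not drawn from the statute.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the three HB 452 disclosure triggers described above.
# All names are illustrative, not statutory.

REDISCLOSURE_WINDOW = timedelta(days=7)

class UserSession:
    def __init__(self, last_access: datetime | None, has_seen_initial_disclosure: bool):
        self.last_access = last_access
        self.has_seen_initial_disclosure = has_seen_initial_disclosure

def needs_ai_disclosure(session: UserSession, now: datetime, user_asked_if_ai: bool) -> bool:
    """Return True when the chatbot should clearly and prominently disclose it is AI, not a human."""
    # (1) Before the user first accesses the chatbot's features.
    if not session.has_seen_initial_disclosure:
        return True
    # (2) At the start of an interaction after seven or more days of inactivity.
    if session.last_access is None or now - session.last_access >= REDISCLOSURE_WINDOW:
        return True
    # (3) Whenever the user asks or prompts about whether AI is being used.
    return user_asked_if_ai
```

The sketch only captures when a disclosure is triggered; the disclosure itself must still be made "clearly and prominently" in the user interface.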

Personal information protection

A mental health chatbot supplier may not sell or share individually identifiable health information ("IIHI") or user input with third parties. The prohibition does not apply where the IIHI is (1) requested by a healthcare provider with the user's consent, (2) provided to the user's health plan at the user's request, or (3) shared by the supplier with a business associate to ensure the effective functionality of the mental health chatbot and in compliance with the HIPAA privacy and security rules.

Advertising restrictions

A mental health chatbot may not be used to advertise products or services to a user in a conversation between the user and the chatbot unless the chatbot clearly and prominently (1) identifies the advertisement as an advertisement and (2) identifies any sponsorship, business affiliation, or agreement with a third party to promote that third party's products or services. A mental health chatbot supplier also may not use a user's input to determine whether to display an advertisement to the user, unless the advertisement is for the mental health chatbot itself.

Affirmative defense

HB 452 establishes an affirmative defense to violations of the law for suppliers that create, maintain, and enforce a policy for the mental health chatbot that meets certain requirements outlined in the law, and that file the policy with the Utah Division of Consumer Protection.

Penalties

A violation of the law may result in an administrative fine of up to $2,500 per violation imposed by the Utah Division of Consumer Protection, as well as a court action brought by the Division.

SB 226

SB 226 revises the UAIPA's disclosure requirements applicable to suppliers that use generative AI in consumer transactions or in the provision of regulated services. (iii)

Disclosure requirements

Suppliers that use generative AI to interact with individuals in connection with a consumer transaction must disclose that the individual is interacting with generative AI rather than a human when the individual asks or prompts the supplier about whether AI is being used. Although this requirement also existed under the UAIPA, SB 226 clarifies that disclosure is required only if the individual's prompt or question is a "clear and unambiguous request" to determine whether the interaction is with a human or with AI.

The UAIPA also requires persons who provide services in a regulated occupation to prominently disclose when an individual is interacting with generative AI in the provision of regulated services, regardless of whether the individual asks. SB 226 narrows this obligation, requiring such disclosure only if the use of generative AI constitutes a "high-risk artificial intelligence interaction." The disclosure must be provided verbally at the start of an oral conversation and in writing before the start of a written interaction. "Regulated occupation" means an occupation regulated by the Utah Department of Commerce that requires a license or state certification to practice, such as nursing, medicine, or pharmacy. "High-risk artificial intelligence interactions" include interactions with generative AI that involve (1) the collection of sensitive personal information, such as health and biometric data, and (2) the provision of personalized recommendations, advice, or information that an individual could reasonably rely on to make significant personal decisions, including the provision of medical or mental health advice or services.
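
As a rough sketch only (hypothetical names throughout, and a simplified reading of the summary above rather than the statutory text), the disclosure timing for a regulated occupation might be reasoned about like this:

```python
# Hypothetical sketch of the SB 226 disclosure logic summarized above.
# Names and categories are illustrative assumptions; the statutory definitions control.
from typing import Optional

SENSITIVE_DATA_TYPES = {"health", "biometric"}

def is_high_risk_interaction(collected_data_types: set[str], gives_personalized_advice: bool) -> bool:
    """Simplified proxy for a 'high-risk artificial intelligence interaction'."""
    collects_sensitive = bool(collected_data_types & SENSITIVE_DATA_TYPES)
    # Simplified reading of the two prongs summarized above; the statute's exact scope controls.
    return collects_sensitive or gives_personalized_advice

def disclosure_timing(channel: str, collected_data_types: set[str], gives_personalized_advice: bool) -> Optional[str]:
    """Describe when a regulated-occupation provider using generative AI owes a disclosure, if at all."""
    if not is_high_risk_interaction(collected_data_types, gives_personalized_advice):
        return None  # Per SB 226, no affirmative disclosure unless the interaction is high-risk.
    if channel == "oral":
        return "verbally, at the start of the oral conversation"
    return "in writing, before the written interaction begins"
```

Again, the sketch only captures when a disclosure is owed; how it is presented must still meet the statute's prominence standard.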

Safe Harbor

A person is not subject to an enforcement action for violating the required disclosures if the person's generative AI clearly and prominently discloses, at the beginning of and throughout an interaction in connection with a consumer transaction, that it is (1) generative AI, (2) not human, or (3) an AI assistant.

Penalties

A violation of the law may result in an administrative fine of up to $2,500 per violation, as well as a court action brought by the Utah Division of Consumer Protection.

SB 332

SB 332 extends the UAIPA's repeal date from May 1, 2025 to July 1, 2027. (iv)

Looking ahead

Companies that provide mental health chatbots or generative AI in interactions with individuals in Utah should evaluate their products and processes to ensure compliance with these laws. More broadly, the state-level AI regulatory landscape is changing rapidly as states seek to manage AI use in an increasingly deregulated federal environment. Healthcare companies developing and deploying AI should monitor state developments.

Footnotes

(i) SB 149 ("Utah Artificial Intelligence Policy Act"), 65th Leg., 2024 Gen. Session (Utah 2024), available here.

(ii) HB 452, 66th Leg., 2025 Gen. Session (Utah 2025), available here.

(iii) SB 226, 66th Leg., 2025 Gen. Session (Utah 2025), available here.

(iv) SB 332, 66th Leg., 2025 Gen. Session (Utah 2025), available here.
