Texas Attorney General Ken Paxton secured a first-of-its-kind settlement with Pieces Technologies, a Dallas-based artificial intelligence healthcare technology company¹. The agreement is the first settlement under a state consumer protection act involving generative artificial intelligence used in medical settings², and it marks the first major legal enforcement action targeting AI in healthcare.
The September 2024 settlement addresses allegations that Pieces Technologies deceived hospitals about its AI products’ accuracy. At least four major Texas hospitals had been providing their patients’ healthcare data in real time to Pieces so that its generative AI product could “summarize” patients’ conditions and treatment for hospital staff¹. The company claimed its tools had error rates of less than one in 100,000 uses, but state investigators found these metrics were likely false.
“AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use. Anything short of that is irresponsible and unnecessarily puts Texans’ safety at risk,” said Attorney General Paxton¹. The landmark case sets a critical precedent as AI in healthcare rapidly expands across medical facilities nationwide, with the global market valued at $26.57 billion in 2024 and projected to reach $187.69 billion by 2030, by one industry estimate³.
The settlement requires Pieces Technologies to provide clear disclosure about AI accuracy, train healthcare workers on proper AI use, and submit to ongoing state monitoring. While no financial penalties were imposed, the five-year compliance agreement establishes new standards for transparency in AI in healthcare applications that could influence regulations nationwide.

Texas Leads National AI in Healthcare Enforcement
State regulators are moving faster than federal agencies to address AI in healthcare safety concerns. In June 2024, Paxton announced the launch of a dedicated team housed within his office’s Consumer Protection Division focused on “aggressive enforcement of Texas privacy laws”⁴, which he called the largest such unit in the United States.
The Texas enforcement approach demonstrates how existing consumer protection laws can effectively regulate AI in healthcare when no AI-specific legislation exists. State attorneys general are increasingly turning to privacy and consumer protection laws to police the technology as it proliferates⁴, rather than waiting for new federal frameworks.
The Texas OAG alleged that Pieces advertised and marketed the accuracy of its AI technology using metrics of a “critical hallucination rate” and a “severe hallucination rate” of less than 0.001%², equivalent to fewer than one error per 100,000 outputs. In AI terminology, a “hallucination” occurs when an artificial intelligence system generates false or misleading information while presenting it as factual. In healthcare applications, hallucinations can lead to incorrect patient diagnoses or treatment recommendations.
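To put the advertised figure in perspective, the short Python sketch below works through what a sub-0.001% rate implies. The arithmetic is illustrative only; the daily volume is a hypothetical number, not a figure from the settlement.

```python
# Illustrative arithmetic only: the volume figure below is hypothetical,
# not taken from the settlement or the AG's filings.

claimed_rate = 0.001 / 100      # "less than 0.001%" -> 1e-05, i.e. 1 in 100,000

summaries_per_day = 10_000      # hypothetical hospital-wide volume
expected_per_day = claimed_rate * summaries_per_day
print(f"Expected hallucinations/day at the claimed rate: {expected_per_day:.1f}")
# -> 0.1, i.e. roughly one every ten days

# Verifying such a rate empirically requires an enormous review sample:
# to expect to observe ~3 events, reviewers need about 3 / rate outputs.
sample_needed = 3 / claimed_rate
print(f"Outputs to review to expect ~3 events: {sample_needed:,.0f}")
# -> 300,000
```

A practical consequence is visible in the last line: a rate this low is very difficult for a hospital customer to verify independently.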
The investigation revealed serious concerns about how AI in healthcare companies market their products to hospitals. The OAG determined these metrics were likely inaccurate and deceptive despite Pieces’ denial of misrepresentation⁵. State officials worried that hospitals were making critical patient care decisions based on AI tools that performed worse than advertised.
“Hospitals and other healthcare entities must consider whether AI products are appropriate and train their employees accordingly,” Paxton emphasized⁶. The settlement establishes specific requirements that extend beyond Texas, as many healthcare AI companies operate across multiple states and could face similar scrutiny from other attorneys general.
States Race to Regulate AI in Healthcare Applications
Beyond enforcement actions, multiple states are enacting comprehensive AI in healthcare regulations that will reshape how medical AI systems operate. Colorado’s Artificial Intelligence Act takes effect February 1, 2026, becoming the first comprehensive state AI law with significant healthcare implications⁷.
Colorado’s groundbreaking legislation specifically targets “high-risk AI systems” used in healthcare decisions. Healthcare providers must evaluate AI across patient care, administrative functions, and financial operations, with particular attention to algorithmic discrimination in billing and claims processing⁷. The law mandates impact assessments, governance frameworks, and disclosures for any AI system making consequential healthcare decisions.
The Colorado approach addresses growing concerns about bias in AI in healthcare applications. Studies have shown that AI systems can perpetuate or amplify existing healthcare disparities, particularly affecting minority and low-income patients. One algorithm used to guide healthcare decisions was found to be biased against Black patients, who had to be much sicker than white patients before being recommended for extra care⁸.
Texas passed the Texas Responsible Artificial Intelligence Governance Act in June 2025, making it the fourth state with AI-specific legislation⁹. The new law establishes substantial penalties for violations: civil penalties of $10,000 to $12,000 per curable violation and $80,000 to $200,000 per uncurable violation⁹.
Several other states are developing similar frameworks. The requirements they are implementing typically mandate that companies document in detail how their AI systems work, conduct regular assessments for potential bias or discrimination, and maintain ongoing monitoring of system performance in real-world healthcare settings; a minimal sketch of what such monitoring might look like follows the list below.
Key state regulatory requirements include:
- Transparency in AI decision-making processes
- Regular bias and discrimination assessments
- Performance monitoring and reporting
- Clear disclosure of AI limitations to healthcare providers
- Training requirements for medical staff using AI tools
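As a concrete illustration of the monitoring and reporting duties above, here is a minimal, hypothetical Python sketch. The class name, severity categories, and report structure are assumptions for illustration, not requirements drawn from any statute.

```python
# A minimal, hypothetical sketch of ongoing performance monitoring for a
# clinical AI tool. Actual obligations are defined by each state's statute
# and the deploying organization's policies.
from dataclasses import dataclass, field

@dataclass
class AIPerformanceMonitor:
    """Tracks human-reviewed outputs and flagged errors for periodic reporting."""
    reviewed: int = 0
    flagged: dict = field(default_factory=lambda: {"critical": 0, "severe": 0})

    def record_reviews(self, n: int = 1) -> None:
        self.reviewed += n

    def record_flag(self, severity: str) -> None:
        self.flagged[severity] += 1

    def report(self) -> dict:
        # Observed error rate per severity class -- the kind of metric the
        # disclosure and reporting requirements above contemplate.
        return {
            severity: (count / self.reviewed if self.reviewed else 0.0)
            for severity, count in self.flagged.items()
        }

monitor = AIPerformanceMonitor()
monitor.record_reviews(50_000)
monitor.record_flag("critical")
print(monitor.report())  # {'critical': 2e-05, 'severe': 0.0}
```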
Federal AI in Healthcare Regulation Accelerates
Federal agencies are simultaneously advancing their own AI in healthcare oversight frameworks, though industry experts note that state action is moving faster. On January 6, 2025, the FDA published the draft guidance “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations”¹⁰, the agency’s first comprehensive guidance for AI-enabled medical devices.
As of May 2025, the US Food and Drug Administration had authorized 950 AI/ML-enabled medical devices for marketing in the US¹¹. The majority of these authorizations are in medical imaging: 76% of all AI-enabled medical devices are used in radiology¹¹, for applications such as analyzing X-rays, MRIs, and CT scans.
However, the federal regulatory approach faces significant pushback from the AI in healthcare industry. The FDA is taking a tougher stance on AI tools in medicine, with the health tech industry arguing regulators are overstepping¹² by attempting to regulate clinical decision support tools as medical devices under existing frameworks.
The Clinical Decision Support Coalition filed a petition asking the FDA to withdraw its guidance on regulating AI tools used for clinical decisions. “FDA is not listening to Congress,” said Bradley Merrill Thompson, the coalition’s general counsel. “They are not following the statute that Congress wrote”¹², referring to the 21st Century Cures Act, which exempts certain clinical decision support tools from medical device regulation.
In May 2025, the U.S. Food and Drug Administration announced that all of its centers would begin deploying artificial intelligence internally immediately, with full integration targeted for June 30, 2025¹³, demonstrating the agency’s own embrace of AI technology even as it develops regulatory frameworks. This internal adoption gives the FDA practical experience with AI capabilities and limitations that could inform future regulations.
The FDA is requesting public comment on the draft guidance until April 7, 2025¹⁰, and will hold a public webinar in February 2025 to discuss the recommendations. The guidance addresses critical issues including performance monitoring, algorithmic bias, and transparency requirements that relate directly to the concerns raised in the Texas enforcement action.
Explosive Market Growth Drives Safety Concerns
The rapid expansion of AI in healthcare is creating both enormous opportunities and significant safety risks that regulators are scrambling to address. By another industry estimate, the global AI in healthcare market was valued at $29.01 billion in 2024 and is projected to reach $504.17 billion by 2032, a compound annual growth rate (CAGR) of 44.0%¹⁴.
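For readers checking the arithmetic, CAGR is the constant yearly growth rate that carries a starting value to an ending value over a fixed horizon. The sketch below treats the 2024 valuation as the base year, which is an assumption; market reports often anchor the quoted CAGR to the following year’s estimate, which is why this computation lands slightly below the reported 44.0%.

```python
# CAGR: the constant annual growth rate linking a start and an end value.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# 2024 base year is an assumption (see the note above the block).
print(f"{cagr(29.01, 504.17, 2032 - 2024):.1%}")  # -> 42.9%
```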
This unprecedented growth is driven by several factors that make AI increasingly essential for modern healthcare delivery. According to the DATCON index, healthcare data volumes will exceed 10 trillion gigabytes (10 zettabytes) in 2025¹¹, making AI analysis crucial for processing quantities of patient information that would be impossible for humans to review manually.
However, public trust in AI in healthcare remains a significant challenge. Data from the Pew Research Center shows that 60% of Americans would feel uncomfortable if their healthcare provider relied on AI for their medical care¹⁵. This skepticism reflects legitimate concerns about AI accuracy and safety that the Texas settlement directly addresses.
Research studies validate these concerns about AI reliability in healthcare settings. In one evaluation of AI-generated summaries of detailed medical notes, GPT produced 21 summaries with incorrect information and 50 with overgeneralized information, while Llama produced 19 errors and 47 generalizations¹⁶.
“I think where we are with generative AI is it’s not transparent, it’s not consistent and it’s not reliable yet,” Dr. John Halamka, president of the Mayo Clinic Platform, told Healthcare IT News. “So we have to be a little bit careful with the use cases we choose”¹⁶. The Mayo Clinic has developed its own risk-classification system to evaluate AI algorithms before external use.
The challenge extends beyond individual AI tools to systemic healthcare integration. North America dominates the AI in healthcare market, accounting for over 54% of revenue as of 2024³, but this rapid adoption often outpaces safety protocols and regulatory oversight.
Industry Compliance and Future Outlook
The Texas settlement establishes practical compliance requirements that AI in healthcare companies nationwide should expect to become standard practice. Companies must be aware of the reasonably foreseeable risks and impacts their AI systems pose, and they cannot simply blame the developer if something goes wrong⁴.
Healthcare organizations should anticipate similar enforcement actions as state attorneys general gain experience with AI-related cases. The FTC has developed guidance for companies employing AI products and advertising their capabilities, warning against exaggerating what an AI product can do and noting that claims that lack scientific support could be considered deceptive⁴.
The regulatory landscape will continue evolving rapidly as both state and federal AI in healthcare frameworks mature. Under the new Texas law, the attorney general holds exclusive enforcement powers, including the ability to issue civil investigative demands to obtain training data, purpose documentation, and metrics related to AI systems⁹.
Healthcare organizations deploying AI systems should immediately establish governance frameworks, conduct regular impact assessments, and ensure transparent communication about AI capabilities and limitations. The Texas precedent demonstrates that states will not wait for federal action to protect patients from potentially harmful AI applications, making proactive compliance preparation essential for any healthcare entity using artificial intelligence technology.
References
1. Attorney General Ken Paxton Reaches Settlement in First-of-its-Kind Healthcare Generative AI Investigation | Office of the Attorney General
2. Texas Attorney General Settles with Healthcare AI Firm Over False Claims on Product Accuracy and Safety | Privacy World – https://www.privacyworld.blog/2024/09/texas-attorney-general-settles-with-healthcare-ai-firm-over-false-claims-on-product-accuracy-and-safety/
3. AI In Healthcare Market Size, Share | Industry Report, 2030 – https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-healthcare-market
4. Takeaways From Texas AG’s Novel AI Health Settlement | Troutman Pepper Locke – https://www.troutman.com/insights/takeaways-from-texas-ags-novel-ai-health-settlement.html
5. Novel Settlement Reached in Generative AI Deceptive Trade Practices Healthcare Investigation | Holland & Knight – https://www.hklaw.com/en/insights/publications/2024/09/novel-settlement-reached-in-generative-ai-deceptive-trade-practices
6. Ken Paxton settles with healthcare tech company over Generative AI software – https://www.foxsanantonio.com/newsletter-daily/ken-paxton-settles-with-healthcare-tech-company-over-generative-ai-software-local-news-near-me-imaging-scanning-medicare-medicaid
7. The Colorado AI Act: Implications for Health Care Providers | Foley & Lardner LLP – https://www.foley.com/insights/publications/2025/02/the-colorado-ai-act-implications-for-health-care-providers/
8. AI Meets the Law: Colorado First State to Regulate AI in Healthcare – https://compliancy-group.com/colorado-artificial-intelligence-act/
9. Texas Legislature Passes Comprehensive AI Governance Act | Regulatory Oversight – https://www.regulatoryoversight.com/2025/06/texas-legislature-passes-comprehensive-ai-governance-act/
10. FDA Issues Comprehensive Draft Guidance for Developers of Artificial Intelligence-Enabled Medical Devices | FDA – https://www.fda.gov/news-events/press-announcements/fda-issues-comprehensive-draft-guidance-developers-artificial-intelligence-enabled-medical-devices
11. AI in Healthcare Statistics: 20+ Key Facts for 2025-2029 – https://binariks.com/blog/artificial-intelligence-ai-healthcare-market/
12. The FDA plans to regulate far more AI tools as devices. The industry won’t go down without a fight – https://www.statnews.com/2023/02/23/fda-artificial-intelligence-medical-devices/
13. US FDA centers to deploy AI internally, following experimental run | Reuters – https://www.reuters.com/business/healthcare-pharmaceuticals/us-fda-centers-deploy-ai-internally-immediately-2025-05-08/
14. AI in Healthcare Market Size, Share | Growth Report [2025-2032] – https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-in-healthcare-market-100534
15. 50+ AI in Healthcare Statistics 2024 · AIPRM – https://www.aiprm.com/ai-in-healthcare-statistics/
16. Texas AG settles with clinical genAI company | Healthcare IT News – https://www.healthcareitnews.com/news/texas-ag-settles-lawsuit-clinical-genai-company