Why Tennessee Must Act on AI…Now!

This week, the artificial intelligence industry experienced what may be its defining moment, and most Americans are only beginning to understand what it means for them.

On February 27, 2026, President Donald Trump ordered every federal agency to immediately stop using products made by Anthropic, the AI company that makes Claude, one of the most capable and widely used AI systems in the world. Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk to national security.” Hours later, rival company OpenAI announced it had struck a deal with the Pentagon to deploy its AI models inside the military’s classified network.

The reason this happened? Anthropic refused to remove safety guardrails that prevented its technology from being used for mass domestic surveillance of Americans and fully autonomous weapons that kill without human oversight.

Read that again.

The federal government’s demand that triggered the blacklisting of a $380 billion AI company was the removal of protections against surveilling American citizens and deploying weapons that kill people without a human deciding to pull the trigger.

This is not fearmongering or a science fiction scenario; this is breaking news in the United States of America.

What Just Happened and Why It Should Terrify Every American

Anthropic CEO Dario Amodei made a public statement refusing the Pentagon’s ultimatum. He wrote: “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

The Defense Department’s response was to threaten to invoke the Defense Production Act, a Cold War-era emergency law designed to commandeer industrial production, to force a private technology company to remove its own safety limits so that its technology could be turned AGAINST American citizens.

Over 330 employees from Google and OpenAI signed an open letter in solidarity with Anthropic’s stand, writing: “We hope our leaders will stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”

OpenAI CEO Sam Altman publicly sided with Amodei, even as his company was simultaneously negotiating its own deal with the Pentagon.

To recap: Anthropic (maker of Claude) stood firm against the Pentagon by refusing to allow its AI platform to be used for mass domestic surveillance of Americans and for fully autonomous weapons, meaning weapons with no human control or direction. In retaliation for Anthropic’s insistence on ethical boundaries and safeguards, President Trump canceled Anthropic’s government contracts and effectively blacklisted the company from future federal contract opportunities.

All the while, OpenAI took up the mantle and agreed to work with the Pentagon without insisting on the same safety guardrails.

The United States, alongside Israel, began a military operation against Iran, and it appears the government used the Claude AI platform in some of those operations. AI is extremely effective at achieving many goals, but it is equally dangerous without proper safeguards.

The problem is that no legislative mechanism exists to ensure AI is not being used in the very manner the Pentagon initially demanded of Anthropic, the demand to which Anthropic said NO.

Congress has passed no comprehensive AI legislation that addresses this and a host of other issues pertaining to AI use in the United States. In particular, there is no mechanism for public oversight of how the most powerful AI systems on the planet are being deployed, whether by the government or by the private sector. Worse, Trump’s America’s AI Action Plan is designed to eliminate oversight and safeguards at the state level by threatening to sue states that try to protect their own citizens.

Tennessee can be an innovative leader in AI without sacrificing necessary safeguards and protections for Tennesseans.

AI Is Currently Operating in Tennessee without Sufficient Safeguards

The Pentagon showdown is the dramatic headline. But it is only one dimension of an AI landscape that poses daily, immediate risks to ordinary Tennesseans: risks that have nothing to do with military weapons systems but are equally disastrous.

Right now, in Tennessee:

AI is making hiring decisions that determine whether you get a job interview — and there is no law requiring those systems to be tested for racial or gender bias before they are deployed.

AI is scoring criminal defendants before trial, influencing bail and sentencing decisions, without defendants having any right to know what data was used or how to contest the result.

Social media platforms are using AI recommender systems that silently suppress your posts if they contain certain words or express certain political, religious, or social viewpoints, without you ever knowing this is happening.

Deepfake AI systems are generating synthetic media of real people in fabricated scenarios designed to deceive. Tennessee’s ELVIS Act protects musicians from AI voice cloning, but it does not protect your neighbor running for school board.

Foreign-government-controlled AI systems, including systems built by companies operating under the direction of the Chinese Communist Party, have been assessed by Tennessee’s own state security evaluators as posing serious data and national security risks. Those systems are not banned from operating in Tennessee today.

Children and youth are daily targets of AI systems specifically engineered to maximize their time on screens through psychological exploitation such as variable reward loops, social comparison algorithms, or infinite scrolling designed to make them lose track of time. There is no state law stopping any of it.

Why Tennessee Is Already Ahead, And Why That Matters

Tennessee has more AI governance infrastructure than many other states in the country, but we are still incredibly vulnerable to the risks of unregulated AI integration.

The ELVIS Act, signed in March 2024 and effective July 2024, made Tennessee the first state in the nation to pass AI-specific legislation. It is nationally recognized as the gold standard for AI likeness protection, and its core principles have been cited in proposed federal legislation.

The Tennessee Information Protection Act (TIPA) established consumer data rights that lay the foundation for AI accountability: the right to access, correct, and delete your personal data.

The STS Enterprise AI Policy (200-POL-007), developed by Tennessee’s Strategic Technology Solutions division, already requires state agencies to comply with the NIST AI Risk Management Framework, the same technical standard adopted by the world’s leading AI governance frameworks. This policy established an AI Review Committee, a Standard Products List for approved AI tools, and a prohibition on unapproved AI on state systems.

The Tennessee AI Advisory Council, operating under the Governor’s charter, has produced detailed Action Plans identifying six priority areas for AI governance: workforce, cybersecurity, public trust, civil rights, election integrity, and economic competitiveness.

Tennessee’s own security evaluators conducted a formal assessment of DeepSeek AI in 2025, identifying it as a security risk. This prompted a ban for state employees via executive order, but the ban lacks broader legislative backing and does not reach private use or vendors.

Tennessee has built a foundation, but it must now follow through with comprehensive legislation that safeguards Tennesseans as AI grows more powerful and more precarious.

Tennessee has a golden opportunity to craft comprehensive AI legislation that positions the Volunteer State as a national leader, combining cutting-edge innovation, creativity, robust consumer privacy, and strong protections in a way that generates real economic wins, puts Tennessee on the map, and keeps everyday Tennesseans safe from emerging risks.

Dr. Bonds’ Proposal for Comprehensive AI Legislation for Tennessee

I am proposing the Tennessee Responsible and Unified Standards for Technology (TRUST) Act.

This is a comprehensive framework, drawing on efforts already in place, grounded in the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF 1.0) and related standards, and built directly on Tennessee’s existing statutory and administrative infrastructure.

In the interest of transparency and responsible use of technology, I used artificial intelligence tools to assist with research organization, comparative analysis, and structural drafting while developing this comprehensive AI legislative proposal. All policy positions, legal interpretations, and final conclusions reflect my independent judgment and review.

Overview of my proposed TRUST Act:

Risk Classification: AI systems in Tennessee are classified into four tiers based on their potential for harm. The most dangerous practices – social scoring, domestic surveillance AI, manipulation, foreign-government-controlled AI on state systems – are prohibited outright. High-stakes AI systems used in criminal justice, hiring, healthcare, and education face rigorous oversight requirements. Social media recommender systems and chatbots face transparency and disclosure obligations.
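To make the four-tier structure concrete, here is a minimal sketch of how the classification described above could be represented. The tier names, the fourth (minimal-risk) tier’s examples, and the function name are hypothetical illustrations, not statutory text from the proposal:

```python
# Illustrative mapping of the TRUST Act's four-tier risk classification.
# Tier labels and the minimal-risk examples are hypothetical; the
# prohibited and high-risk examples come from the proposal itself.

TIERS = {
    1: {"label": "prohibited",
        "examples": ["social scoring", "domestic surveillance AI",
                     "manipulation", "foreign-government-controlled AI"]},
    2: {"label": "high_risk",
        "examples": ["criminal justice", "hiring", "healthcare", "education"]},
    3: {"label": "transparency",
        "examples": ["recommender systems", "chatbots"]},
    4: {"label": "minimal",
        "examples": ["spam filters", "spellcheck"]},  # hypothetical examples
}

def tier_for(use_case: str) -> int:
    """Return the strictest tier whose example list matches the use case."""
    for tier in sorted(TIERS):
        if use_case in TIERS[tier]["examples"]:
            return tier
    return 4  # anything unlisted defaults to the minimal-risk tier

print(tier_for("hiring"))          # 2
print(tier_for("social scoring"))  # 1
```

The point of the tiering is that obligations scale with harm: tier 1 uses are banned outright, while lower tiers carry progressively lighter oversight and disclosure duties.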

Human Oversight Requirements: No Tennessean should have their life materially changed by a machine alone. Codifying what Tennessee already requires for its own state AI systems (under ISC Gen AI Policy 3.00), the TRUST Act requires human review before any AI-automated decision with serious consequences takes effect, such as job terminations, bail determinations, benefit denials, and healthcare decisions.

The Right to Know and Contest: Extending the rights already established in TIPA/HB1181, Tennesseans would have the right to know when AI made a decision affecting them, to receive a plain-language explanation, and to contest the result before a human being.

Algorithmic Bias Mitigation: AI systems making consequential decisions must be tested for racial, gender, age, and disability bias before deployment, monitored annually, and audited by independent evaluators every two years. This operationalizes the Tennessee Human Rights Act for the AI era.

Generative AI Training Data Transparency: Any developer that commercially deploys a large-scale generative AI system in Tennessee must publicly disclose a high-level summary of their training data: what categories of sources were used, what date ranges they cover, what demographic groups are underrepresented, and whether any data originated from foreign-government-controlled systems.

Viewpoint Discrimination and Platform Accountability: Social media platforms using AI to suppress or amplify content based on viewpoint, political opinion, or religious expression, without disclosing that they are doing it, are engaging in deceptive practices. The TRUST Act requires platforms to document their content moderation criteria, disclose their keyword filtering parameters publicly, and carry the burden of proving their enforcement is viewpoint-neutral. Before introduction, the Office of Legal Services will prepare a constitutional analysis specifically documenting why this provision is a commercial deception regulation, not a speech regulation.

Election Integrity: During the 90 days before any Tennessee election, Very Large Online Platforms cannot change their algorithmic parameters for election content without 30 days’ public notice. AI-generated synthetic media depicting candidates in false scenarios must carry prominent disclosure. Any government request to suppress election content must be reported within 72 hours.

Child and Adolescent Protection: Addictive AI design targeting minors, behavioral profiling of children for commercial purposes, and collection of sensitive child data for AI training are prohibited.

The Tennessee Artificial Intelligence Commission: A nine-member independent regulatory body, extending and formalizing the existing AI Advisory Council, with rulemaking authority, investigation powers, and civil penalty enforcement.

Enforcement with a Calibrated Cure Period: Civil penalties up to $250,000 per violation for prohibited practices, trebled for willful conduct. Class D felony criminal liability for the most egregious violations. A private right of action for Tennesseans who suffer real harm. Prohibited-practice violations carry no cure period and are immediately actionable. High-risk AI compliance failures get a 60-day cure window for first offenses. Cures are not self-executing: the Commission determines whether a submitted cure is genuine, and cosmetic or terminological fixes that leave the underlying harm in place do not qualify.
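The penalty arithmetic above can be sketched in a few lines. The dollar ceiling, trebling rule, and 60-day cure window come from the proposal; the function names and the example violation counts are purely illustrative:

```python
# Illustrative sketch of the TRUST Act penalty schedule described above.
# Only the $250,000 ceiling, treble-damages rule, and 60-day cure window
# come from the proposal; function names and scenarios are hypothetical.

MAX_PENALTY = 250_000  # per-violation ceiling for prohibited practices
TREBLE_MULTIPLIER = 3  # applied to willful conduct
CURE_WINDOW_DAYS = 60  # first-offense cure period for high-risk systems

def max_exposure(violations: int, willful: bool) -> int:
    """Upper bound on civil penalties for prohibited-practice violations."""
    per_violation = MAX_PENALTY * (TREBLE_MULTIPLIER if willful else 1)
    return violations * per_violation

def cure_period(tier: str, first_offense: bool) -> int:
    """Days allowed to cure before penalties attach, per the proposal."""
    if tier == "prohibited":
        return 0  # immediately actionable, no cure period
    if tier == "high_risk" and first_offense:
        return CURE_WINDOW_DAYS
    return 0

# Example: 4 willful prohibited-practice violations
# can expose a company to 4 * 250,000 * 3 = $3,000,000 in civil penalties.
print(max_exposure(4, willful=True))   # 3000000
print(cure_period("high_risk", True))  # 60
```

The design choice worth noting is the asymmetry: prohibited practices are never curable, so the trebled ceiling applies from day one, while good-faith compliance failures in high-risk deployments get one chance to fix the problem before penalties attach.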

NIST AI RMF Integrated into TCA: The TRUST Act is the only state framework in the country that elevates the NIST AI Risk Management Framework to an explicit statutory baseline for all Tier 2 systems, directing the Commission to develop a Tennessee-specific NIST AI RMF Compliance Profile within 18 months, a sector-by-sector mapping of exactly which NIST subcategories apply to which deployment contexts in Tennessee. This turns a statutory reference into an operational compliance standard.

Frontier AI Safety: The highest-capability AI systems, those trained on compute resources exceeding 10²⁶ floating-point operations (FLOPs), must publish a Frontier AI Safety Framework addressing catastrophic risk thresholds, third-party safety evaluations, cybersecurity of model weights, and a 72-hour incident notification protocol.
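For a sense of scale, the 10²⁶ FLOP threshold can be sanity-checked with the widely cited "6 × parameters × tokens" approximation for dense-transformer training compute. This heuristic, and the model sizes below, are illustrative assumptions, not part of the proposal:

```python
# Back-of-envelope check of the 10^26 FLOP frontier threshold using the
# common ~6 * N * D approximation for training compute, where N is the
# parameter count and D the number of training tokens. All model sizes
# here are hypothetical examples.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * params * tokens

# A hypothetical 500-billion-parameter model trained on 40 trillion
# tokens lands just above the threshold and would trigger the rule:
flops = training_flops(500e9, 40e12)
print(f"{flops:.1e}")               # 1.2e+26
print(flops > THRESHOLD_FLOPS)      # True

# A hypothetical 70B-parameter model on 15T tokens stays well below it:
print(training_flops(70e9, 15e12) > THRESHOLD_FLOPS)  # False
```

In other words, the threshold is calibrated so that only the very largest frontier training runs, not ordinary commercial models, carry the safety-framework and incident-reporting obligations.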

Interstate AI Governance Compact: The Commission is authorized to negotiate a multi-state AI governance compact with other states to establish mutual recognition of conformity assessments and harmonized definitions.

IMPLEMENTATION-READY PROPOSAL

Because AI is not waiting for the legislative process to move at its normal pace, the TRUST Act framework includes five recommendations that can be deployed immediately, some requiring no new legislation at all:

Recommendation 1 – Governor’s Executive Order: Using existing authority under T.C.A. § 4-1-201 and STS policy authority, the Governor can immediately mandate statewide compliance with 200-POL-007, ban Foreign-Adversary AI (including DeepSeek) from all state systems, require NIST AI RMF compliance for all state AI procurement, and direct the Attorney General to issue enforcement guidance on AI deception as an existing TCPA violation. Zero new legislation required.

Recommendation 2 – Omnibus AI Safety Amendment Act: A single fast-track bill amending three already-enacted Tennessee statutes – TCPA (§ 47-18-101), TIPA/HB1181 (§ 47-18-2101), and the Election Code (§ 2-19-143) – to add AI-specific protections. This is the fastest legislative path because it builds on laws that already completed the legislative process.

Recommendation 3 – AI Advisory Council Formalization Act: A short standalone bill elevating the Governor’s AI Advisory Council from an administrative body to a statutory commission with rulemaking authority, locking in the Council’s work product as durable institutional infrastructure regardless of future administrations.

Recommendation 4 – AG and Human Rights Commission Enforcement Directive: Immediate guidance from the Tennessee Attorney General clarifying that AI manipulation and viewpoint-discriminatory algorithmic suppression are existing TCPA violations. No legislation required, as these are unfair and deceptive practices under law already on the books.

Recommendation 5 – Child Safety and Mental Health AI Act: A standalone bill targeting AI harms to children and adolescents.

Tennessee can be the state that proves responsible AI leadership is not only possible, but necessary. The foundation is already laid, the framework is already written, the moment is now.
