Tennessee wants to pioneer AI, but its leaders are throwing Tennesseans to the wolves in the process. I support technological advancement and Tennessee’s economic growth, but the Tennessee legislature’s approach, mirroring gutted federal protocols, puts us in the path of chaos, unpredictability, and consumer AI safety risks.
Despite talk of “responsible AI,” Tennessee advances development while leaning on those same hollowed-out federal protocols, leaving Tennesseans exposed to systems that favor the wealthy over the vulnerable.
This is not speculation…it is a documented policy drift.
I’m not anti-technology, anti-Trump, or anti-Tennessee; my concern is AI implementation that prioritizes politicians’ profits and media/AI companies’ access while sacrificing safeguards for Tennesseans. This unfolds quietly while President Trump “floods the zone” with headlines about kidnapping foreign leaders, threatening to invade Greenland, and removing the United States from the World Health Organization (WHO), all while supporting the Department of Justice (DOJ) as it disregards the Epstein Transparency Act, shielding wealthy pedophiles who exploited children through the Epstein Sex Trafficking Ring.
Genesis Mission Recklessness: Trump’s Manhattan Project Redux
The Genesis Mission is a United States Department of Energy project that President Trump quietly implemented via Executive Order on November 24, 2025, during the holiday season and amid various breaking news stories, and it continues to progress behind the scenes. Through the “Big Beautiful Bill,” Republicans in Congress cut funding to the following major programs in order to fund this AI endeavor:
- Medicaid: $800–$990 billion
- Supplemental Nutrition Assistance Program (SNAP, food stamps): $187 billion
- Student loans / Higher education programs: elimination of Grad PLUS loans, caps on Parent PLUS and unsubsidized Stafford loans (starting July 2026)
These cuts strip benefits that were once earmarked for:
- ~37 million children (720K Tennessee kids)
- ~15.4 million disabled (288K Tennesseans)
- ~7.7 million seniors (144K Tennesseans)
- SNAP Food Benefits for 42 million Americans (1.44M Tennesseans)
Tennessee is a central player in the Genesis Mission through Oak Ridge National Laboratory (ORNL), which Trump has essentially commissioned to drive AI at high speed, but without a seat belt or airbags! We are on a collision course headed for disaster!
Alongside the Genesis Mission, Trump revoked Biden’s previous AI Executive Order (Executive Order 14110) via Executive Order 14179, and without getting overly technical, several safeguards for Americans that were in place have been sacrificed to maximize speed and profitability:
- No Congressional oversight
- No legislative safeguards
- AI consumer protections ignored
- Bias, civil rights, and discrimination protections reversed
Trump is sacrificing the most vulnerable Americans, including Tennesseans, through his approach to advancing the United States’ footing in the AI realm, an approach driven by greed and a dictatorial hunger for power. This is evident in the lack of congressional consultation and support, even though Trump has majorities in the House and Senate and a friendly Supreme Court. Additionally, the individuals and businesses who have committed to financially supporting the Genesis Mission have what amounts to a quid pro quo relationship with Trump and, by association, the federal government. Practically, this means that because entities such as OpenAI (ChatGPT), xAI (Elon Musk and X/Twitter), and Microsoft (Copilot) have made financial contributions to Donald Trump, he is giving them broad access to federal government systems, which include military plans, intelligence reports, and even Americans’ private data.
That is not governance…it’s a voluntary hostile takeover at the expense of Americans.
Artificial Intelligence (AI) Safeguards Were In Place But Trump Revoked Them
The National Institute of Standards and Technology (NIST) is a U.S. federal agency that develops technical standards and guidance to support innovation, security, and reliability in areas like Cybersecurity and Artificial Intelligence (AI). In the area of AI, NIST created the AI Risk Management Framework (AI RMF 1.0), a voluntary blueprint that helps organizations identify, measure, and manage risks such as bias, security vulnerabilities, and unreliable model behavior across the AI lifecycle. The framework’s core goal is to promote trustworthy AI by encouraging practices that improve transparency, fairness, accountability, robustness, and alignment with legal and societal expectations, while still being flexible and non‑binding for industry.
NIST’s AI RMF 1.0, released in 2023, was built around those goals, organizing real risk work into four functions – Govern, Map, Measure, Manage – meant to ensure institutions actually identify, monitor, and mitigate harms before AI systems are deployed on the public. Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked Biden’s AI safety order and launched a new America’s AI Action Plan designed to remove “burdens” on AI development. Trump’s plan explicitly instructs NIST to revise the AI RMF to eliminate references to DEI, misinformation, and climate-related risks, precisely the categories that most directly protect vulnerable communities, democratic discourse, and environmental safety.
The State of Tennessee’s Strategic Technology Solutions (STS) Roadmap for Generative AI explicitly commits the state to alignment with the NIST AI Risk Management Framework (AI RMF 1.0), which is backed by research and appropriate consumer safeguards, but that commitment directly conflicts with Trump’s Winning the Race – America’s AI Action Plan, Executive Order 14179, and the Genesis Mission. Tennessee state officials are not only producing conflicting statements and plans for protecting Tennesseans, but also implying that federal oversight will provide sufficient protection as Tennessee explores AI integration, including the State’s active communication with OpenAI and xAI about integrating ChatGPT and/or Grok into State systems, all without any legislative safeguards in place.
Tennessee government officials are basically tossing Tennesseans out of a proverbial plane without a parachute and saying, “Just trust us…you’ll be okay.” THIS IS UNACCEPTABLE! Neither the federal government nor the Tennessee government has provided sufficient legislative safeguards for venturing into the AI realm, yet Trump has secured government contracts for leaders in AI and related fields who have been his financial supporters.
As it stands, there are few safeguards or laws in place that inform and protect the public from nefarious use:
- The White House can post an AI-modified image of an individual that gives a false impression of what actually happened (Source)
- The Department of Defense (the Pentagon) may be incorporating xAI and Elon Musk’s Grok into its systems. (Source) Elon Musk or xAI developers could program Grok to classify individuals or entire communities as “biological threats” based on health records, travel history, or genetic data. (Source)
- Tennessee has been actively meeting with OpenAI representatives, which means ChatGPT could be integrated into Tennessee systems; Grok and ChatGPT could then be deciding who gets evacuated or aided first in a weather or nuclear crisis, embedding programmed biases into life-or-death decisions. (Source)
- Consider how effective an actual Matrix-style simulation would be if weaponized against a group of people or an entire population. AI has the capability to create quantum simulations that produce hyper-realistic illusions for psychological operations, blurring the line between reality and state-crafted deception. (Source)
- A foreign or domestic operative could use AI-integrated controls to trigger selective blackouts of zip codes or smart homes flagged for “non-compliant” energy use or online activity. (Trump has alluded to the United States having the ability to control power grids…they could target any zip code and restrict services for any reason they deem justified.) (Source)
- It is also possible that AI could engineer contagious diseases that selectively harm vulnerable groups, exploiting disparities to devastating effect. (Source)
These aren’t just worst-case scenarios; they are actual possibilities today, because there are no legal or structural safeguards in place to mitigate these concerns and protect Americans from AI exploitation and weaponization. We voted to send lawmakers to state and national offices not to boost their egos and financial standing; we voted for them to advocate for and create a state and country where we can grow, thrive, and be safe.
Today is January 23, 2026, the deadline by which the next phase of the Genesis Mission was to be submitted per the instructions of the Genesis Mission Executive Order. Backdoor governing is not what our founding fathers set up through the Constitution, which is FOR THE PEOPLE.
Today, I filed multiple Freedom of Information Act (FOIA) requests with federal agencies, as well as a Public Records Request with the State of Tennessee, seeking information from specific government departments.
Offices/Departments I sent requests to:
- Federal Trade Commission
- United States Department of Commerce – National Institute of Standards and Technology (NIST)
- United States Department of Energy
- United States Department of Homeland Security – Cybersecurity and Infrastructure Security Agency (CISA)
- United States Department of Justice – Office of Information Policy (OIP)
- United States Department of Justice – Federal Bureau of Investigation (FBI)
- United States Department of State
- The White House – Office of Management and Budget (OMB)
- Tennessee Department of Finance and Administration
I have received confirmations from each of these that my request(s) were received.
Summary of Records Requested
Requested Under the Tennessee Public Records Act (Tenn. Code Ann. § 10‑7‑503 et seq.)
AI Risk Framework Implementation
- Documents showing how Tennessee operationalizes the NIST AI Risk Management Framework across state government.
- Materials detailing use of the Govern, Map, Measure, and Manage functions.
- Internal procedures for AI risk identification, mitigation, monitoring, governance, bias management, incident response, and accountability.
- Policies tied to the STS Roadmap for Generative AI, Enterprise AI Policy, Enterprise GenAI Policy, and the November 2025 AI Advisory Council Action Plan.
Assessments and Evaluations
- Internal or external audits, gap analyses, or compliance reviews assessing Tennessee’s alignment with NIST AI RMF 1.0 or the NIST Generative AI Profile.
- Algorithmic impact assessments, bias or error reviews, readiness evaluations, and pilot program outcome reports.
- Any formal evaluations referenced or implied in the STS Roadmap or related AI policies.
Internal and Interagency Communications
- Emails, meeting minutes, agendas, presentations, notes, and correspondence discussing:
- AI RMF implementation status
- Identified gaps or challenges
- Decisions to delay, defer, or partially implement AI risk controls
- Reliance on pilots, legacy security policies, or future revisions instead of full operational plans
- Explicit exclusion: AI Advisory Council meeting minutes already publicly posted (specific dates listed); the request targets unposted, supplemental, or internal records not in the public archive.
AI Governance Roles and Accountability
- Records defining AI risk management roles within cabinet-level agencies.
- Designation of AI risk officials or governance leads.
- Training materials, reporting templates, and accountability structures referenced in the November 2025 AI Advisory Council Action Plan or related governance documents.
Response to Federal AI Policy Changes
- Documents discussing Tennessee’s response to federal AI policy changes, including:
- Records addressing potential removal of risk concepts such as DEI, misinformation, or climate change from AI governance.
- Any analysis of consistency or conflict between revised federal guidance and Tennessee AI policy.
Federal Coordination and Partnerships
- Correspondence and coordination with federal entities on AI governance and risk management, including:
- Department of Energy
- NIST
- Oak Ridge National Laboratory (Genesis Mission)
- White House Office of Science and Technology Policy
- Records where NIST frameworks are referenced in federal–state AI discussions or agreements.
Requested Under the Freedom of Information Act (5 U.S.C. § 552)
Government–Platform Communications on Content Control and AI Access
(Requested from State Department, DHS/CISA, FBI, FTC)
- All communications between federal agencies and major technology or AI platforms, including:
- Meta
- X (Twitter)
- Google / YouTube
- TikTok
- OpenAI
- xAI
- etc.
- Records concerning:
- Content moderation, labeling, or downranking
- Removal or suppression of posts or accounts
- Government access to APIs, models, or tools used to monitor or shape political, civic, or election-related discourse
Federal Advisory Roles and AI Governance Influence
(Requested from State, DHS/CISA, FBI, FTC)
- Records identifying advisory boards, task forces, or working groups where:
- AI or social-media executives participated
- Topics included AI governance, election security, misinformation, or content moderation
Government Use of Generative AI for Narrative or Influence Operations
(Requested from State, DHS/CISA, FBI, FTC)
- Records describing use, testing, or planned use of generative AI systems for:
- “Narrative management”
- “Counter-disinformation”
- “Influence operations” (domestic or international)
- Supporting documentation requested:
- Risk assessments
- Ethical guidelines
- Guardrail and governance documentation
DOJ Use of AI in FOIA Processing and Sensitive Request Handling
(Requested from U.S. Department of Justice and OIP)
- Records detailing DOJ or component use of AI tools in FOIA processing, including:
- Auto-redaction
- Technology-assisted review
- Machine learning systems
- Requested documentation includes:
- Vendor contracts and procurement records
- Policies, SOPs, and training materials
- Performance audits, error-rate analyses, and bias or fairness evaluations
- Inspector General or OIP reviews
- Guidance or training materials addressing:
- FOIA requests deemed “politically sensitive”
- Application of Exemption 5 and “foreseeable harm”
- Requests involving high-profile officials, major corporate partners, or high-media-interest topics
FBI Communications and AI Use for Information Control
(Requested from FBI)
- Communications with AI and social-media companies regarding:
- Content moderation or suppression
- Monitoring or shaping online political or civic discourse
- Records on FBI use or testing of generative AI for:
- Narrative management
- Counter-disinformation
- Influence operations
- Associated risk, ethics, and guardrail documentation
Federal Trade Commission Involvement in Content and AI Governance
(Requested from FTC)
- Communications with AI and platform companies on:
- Content moderation and suppression
- API or tool access for discourse monitoring
- Advisory participation involving AI governance, elections, or misinformation
- FTC use or evaluation of generative AI for narrative or influence-related purposes, including safeguards
Executive Order 14110 Revocation and Deregulatory Impact
(Requested from Office of Management and Budget)
- Internal deliberations and analyses following revocation of EO 14110 (“Safe, Secure, and Trustworthy AI”), including:
- Communications and consultations with private companies regarding:
- AI governance shifts
- Deregulatory actions post-revocation
- Records identifying private companies given consultative or implementation authority in AI policy matters
NIST and Commerce AI Safety Definitions and Policy Shifts
(Requested from NIST / Department of Commerce)
- Internal documents defining or operationalizing:
- “Misinformation”
- “Deception”
- “Harmful content”
- “Safety” for government-funded or evaluated AI models
- Analyses assessing how EO 14110’s revocation affected:
- AI benchmarks
- Red-teaming practices
- Safety guidance and safeguards
- Records related to AI RFIs or public consultations, including:
- All submissions received
- Internal summaries and decision memos
- Records of non-publication or selective publication of comments
- Use of AI tools in FOIA processing at Commerce or NIST, including audits and bias/error assessments
Department of Energy and the Genesis Mission
(Requested from DOE)
- Contracts, MOUs, and data-sharing agreements related to the Genesis Mission, including partnerships with private AI and technology firms.
- Risk, privacy, and governance documentation addressing:
- FOIA and transparency obligations
- Privacy and data protection
- Civil-rights and anti-discrimination safeguards
- Export control and national-security constraints
- Intellectual property and data-sharing rules
- Records tied to AI RFIs under Pub. L. 119-21, including:
- Full submissions and scoring matrices
- Decisions not to post certain comments publicly
- Deviations from standard notice-and-comment practices
- Internal evaluations of consumer privacy, safety, civil-rights, and fraud risks arising from DOE AI initiatives.
- DOE use of AI tools in FOIA processing, including:
- Vendor contracts
- User manuals
- Performance and bias evaluations