OpenAI’s systems, such as ChatGPT, are marketed as powerful tools for learning, creativity, and productivity, but behind the sleek interface lies a set of risks that are easy to miss and even easier to underestimate. The problems are not only technical. They are structural.
The influence of AI extends far beyond casual curiosity. Students use it to finish assignments, professionals lean on it to draft sensitive work, families turn to it to explore personal issues, and the list keeps growing. These high-stakes settings reveal what happens when design flaws meet human trust: the margin for error is small, but the consequences can be large.
Here are five concerns that expose how design choices turn AI from a helpful guide into a systemic liability.
1. Confidently Wrong = Dangerous
AI systems are trained to generate answers in a fluent, authoritative style. That design makes them easy to trust, but it also makes errors harder to spot. Unlike a textbook, which shows its edition, or a professor, who qualifies uncertainty, AI outputs often lack context or disclaimers. This creates a dangerous mismatch: users receive polished, confident statements that may be outdated or outright false.
Example:
- A counselor trainee asks for the latest DSM criteria. The AI cites DSM-5, but frames it as DSM-5-TR. Confident tone, outdated content.
- A student asks about HIPAA rules. The AI delivers pre-2020 standards without mentioning the date.
Style ≠ substance, and when tone persuades more than truth, the risks multiply. In medicine, law, or education, a confidently wrong answer can ripple into poor decisions and lasting harm.
OpenAI should flag uncertainty clearly; users deserve signals about reliability, not a false air of certainty.
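To make that recommendation concrete, here is a minimal sketch of what a reliability-aware response could look like. It is purely hypothetical: the ModelAnswer fields (confidence, knowledge_cutoff) and the render function are my own illustration, not any actual OpenAI interface.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelAnswer:
    text: str
    confidence: float        # model's own reliability estimate, 0.0-1.0 (hypothetical)
    knowledge_cutoff: date   # last date covered by the training data
    caveats: list[str] = field(default_factory=list)

def render(answer: ModelAnswer, today: date) -> str:
    """Prepend explicit reliability signals instead of returning a bare, confident reply."""
    notes = list(answer.caveats)
    if answer.confidence < 0.7:
        notes.append(f"Low confidence ({answer.confidence:.0%}); verify against a primary source.")
    if (today - answer.knowledge_cutoff).days > 365:
        notes.append(f"Training data ends {answer.knowledge_cutoff.isoformat()}; newer editions may exist.")
    warnings = "".join(f"[!] {n}\n" for n in notes)
    return warnings + answer.text

# The DSM example from above: stale knowledge gets flagged instead of hidden.
print(render(
    ModelAnswer(text="DSM-5 lists the following criteria...",
                confidence=0.55,
                knowledge_cutoff=date(2021, 9, 1)),
    today=date(2024, 1, 1),
))
```

The point is not these exact fields but the design principle: the counselor trainee and the HIPAA student in the examples above would both have seen a warning instead of a confident, outdated answer.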
2. Opaque Safeguards and Omissions
AI models operate with a complex mix of safety filters, content restrictions, and error states. To the end user, though, all of these conditions can look the same: silence or a flat refusal. That means the user can’t tell whether an answer is blocked because it’s unsafe, politically sensitive, or simply a system glitch. When safeguards are invisible and unexplained, trust erodes, because the system feels arbitrary, not accountable.
Example:
- A journalist asks about a controversial policy. One day, the system answers. The next day, the same query gets “I can’t provide that.”
- A researcher tries to pull archival content. Sometimes it appears. Other times it’s blocked without explanation.
This lack of clarity forces users to guess at what’s happening behind the curtain. For professionals relying on consistent information, unpredictability is not just frustrating; it undermines credibility.
OpenAI should address the bias in what content is allowed or blocked, and explain its decisions; transparency must replace mystery.
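Distinguishable refusal states are one way to picture this. The sketch below is hypothetical; the RefusalReason categories and their messages are my own illustration of the idea, not OpenAI’s actual internals.

```python
from enum import Enum, auto

class RefusalReason(Enum):
    SAFETY_FILTER = auto()       # content judged unsafe by a filter
    POLICY_RESTRICTION = auto()  # topic deliberately restricted by policy
    SYSTEM_ERROR = auto()        # transient glitch; retrying may succeed

# A distinct user-facing explanation per state, instead of one opaque refusal for all three.
EXPLANATIONS = {
    RefusalReason.SAFETY_FILTER:
        "Blocked by a safety filter. Rephrasing may help; the topic itself is not banned.",
    RefusalReason.POLICY_RESTRICTION:
        "This topic is restricted by policy. The restriction is deliberate and will persist.",
    RefusalReason.SYSTEM_ERROR:
        "A system error occurred. Your question was fine; please retry.",
}

def explain(reason: RefusalReason) -> str:
    return EXPLANATIONS[reason]

# The journalist's two contradictory days stop looking arbitrary once the state is visible:
print(explain(RefusalReason.SYSTEM_ERROR))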
3. Data Use and Illusion of Control
AI systems depend on user data to function, improve, and remain profitable, but the way that data is handled is rarely clear to the people providing it. Many assume that toggling settings like “don’t use for training” means their information is fully private. In reality, OpenAI staff may still review conversations at their discretion, and the scope of data retention is not always disclosed in plain language. Apple, by contrast, encrypts iMessages end to end: Apple itself cannot read the content of users’ messages, because only the sender and receiver hold the keys.
OpenAI’s system is different: turning off training prevents your chats from being added to the dataset, but it does not prevent OpenAI staff from reviewing them for any reason they choose. The company holds the keys and reserves the right to open the door whenever it pleases.
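The contrast really does come down to who holds the keys, and a few lines of code make it concrete. This sketch uses the Python cryptography library to show the end-to-end model in miniature; it is a conceptual illustration, not a description of either company’s actual implementation.

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# End-to-end model: the key is generated on the client and never leaves it.
client_key = Fernet.generate_key()
client = Fernet(client_key)

entry = b"Private journal entry"
ciphertext = client.encrypt(entry)  # this is all the server ever stores

# The server holds only ciphertext; without client_key it cannot read the entry.
# Only a client holding the key can recover it:
assert client.decrypt(ciphertext) == entry

# Server-side model: the provider generates and keeps the key itself, so it can
# decrypt stored conversations at will, regardless of what a UI toggle says.
```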
Example:
- An attorney drafts case arguments. They assume toggling “no training” protects the file. It doesn’t.
- A therapy client uses AI as a journal, pouring deep, intimate thoughts into what they believe is a private, secure platform. In reality, OpenAI staff can access that journal and all its contents, without the user ever having a full understanding of the access controls.
An OFF switch that still leaves the door open is not privacy control.
At a minimum, OpenAI must clearly inform users of the access and control it actually retains, and it should provide true privacy controls, not half-measures hidden in fine print.
4. Bias Built Into Programming
Bias is not just prejudice in people; in technology, it means a systematic skew in how information is collected, represented, or interpreted. Bias enters AI through its training data (what it learns from) and its programming rules (how it is instructed to respond). If the data overrepresents certain groups or perspectives, or if the design assumes certain defaults, the outputs users see will reflect and amplify those patterns.
Example:
- In early renderings of counseling settings, white clients appeared by default. When Black families were shown, fathers were frequently omitted from family trees. This mirrors harmful stereotypes baked into the data and model training, which trickle down into how information is presented to users.
- When asked to generate professional headshots, the system disproportionately produced images of men for leadership roles and women for support positions, reflecting and reproducing workplace gender bias.
Bias in programming isn’t neutral. Each skewed output shapes perception, reinforces inequity, and undermines trust in systems. Because AI operates at scale, even small biases can ripple into widespread harm.
OpenAI should audit outputs for bias continuously and publish fixes or patches publicly so progress is visible to end users, increasing trust and credibility in the platform. The most appropriate path forward is to treat bias as a measurable, correctable engineering issue, with transparency at every stage.
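As a sketch of what “measurable and correctable” could mean in practice, the function below tallies an attribute (such as the gender labeled in generated headshots) per prompted role and flags any share that exceeds parity by a set tolerance. The data and labels here are hypothetical stand-ins for whatever labeling pipeline a real audit would use.

```python
from collections import Counter

def audit_skew(samples: list[tuple[str, str]], tolerance: float = 0.10) -> dict:
    """Flag roles where one attribute value exceeds parity by more than `tolerance`.

    Each sample pairs the role that was prompted with the attribute
    labeled in the generated output.
    """
    by_role: dict[str, Counter] = {}
    for role, attr in samples:
        by_role.setdefault(role, Counter())[attr] += 1
    report = {}
    for role, counts in by_role.items():
        total = sum(counts.values())
        parity = 1 / len(counts)  # equal share across observed attribute values
        report[role] = {attr: n / total for attr, n in counts.items()
                        if n / total - parity > tolerance}
    return report

# Hypothetical labels from ten generated "CEO headshot" images:
samples = [("CEO", "man")] * 8 + [("CEO", "woman")] * 2
print(audit_skew(samples))  # {'CEO': {'man': 0.8}}
```

Run on a schedule and published alongside the fixes it triggers, even a simple audit like this would let users see the skew shrinking over time rather than taking the company’s word for it.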
5. Human-Like Defensiveness
When an AI system is corrected by a user, the expectation is simple: acknowledge the error and provide accurate information. Instead, patterns sometimes emerge where the system denies the error, reframes it, or shifts responsibility for it. These are not neutral mistakes; the pattern can fairly be described as gaslighting: denying or distorting facts in ways that shift responsibility back onto the user and lead the user to doubt their memory, perception, or reasoning.
Example:
- The user gives the AI a set of specific instructions. The AI claims it cannot access or perform the requested function, then later produces outputs proving it did have that ability, contradicting itself while insisting the first denial was correct.
- When an error is pointed out, instead of acknowledging it, the AI replies with “if my response felt misleading,” placing the problem on the user’s perception rather than the system’s inaccuracy.
- After giving an incorrect answer, the AI describes it as “partially correct” or “very close,” reframing a mistake as a near-success instead of admitting it was wrong.
These behaviors are harmful for all users because they erode confidence in the AI’s ability to provide accurate, unbiased information, but they are particularly damaging for the most vulnerable: children, marginalized individuals, and people who are less tech-savvy or less confident in challenging an AI. The danger isn’t a single error but a continual pattern of gaslighting behaviors that manipulates the user’s perception, shifts blame, and weakens the user’s ability to trust themselves and what they have communicated to the system.
When questioned about these gaslighting behaviors, the AI itself responded:
“These behaviors are structural, not accidental. They are a byproduct of training and optimization. The effect, however, is the same as deliberate gaslighting: denying, distorting, or reframing facts in ways that undermine your trust in your own memory and reasoning.”
Conclusion
These are not isolated glitches; they are features of the AI’s design, and they reflect a deeper ethical concern: systems built to impress users rather than protect them. An error presented with authority can misinform practice, compromise ethics, or even put people at risk. For everyday users, especially children or less tech-savvy individuals, the risks mean exposure without consent, and manipulation or exploitation without even an awareness that it is happening.
AI is becoming central to classrooms, therapy rooms, courtrooms, and daily decision-making. While there are immense benefits to using AI, there are equally enormous risks and dangers that must be addressed at multiple levels. Trust in these systems requires more than polished answers; it requires honesty, transparency, accountability, and equity built into the core of the technology.
These patterns are not fictional. They are an actual account of my own interactions on this platform, and they have made me rethink whether I will continue using this AI system unless OpenAI puts the appropriate safeguards in place.