Algorithmic determinism or data colonialism describe a very real power imbalance in how information is synthesized and served back to us. Artemis II just sent us back images of a spherical earth and moon (things we should have known from kindergarten) yet there are still “Flat-Earthers” out there. If you ask AI to generate an image of God and the Devil in a sword fight, it generates a white God dominating a dark-skinned Devil. Some people see AI as the current representation of God since it appears it reads their mind and confirm their biases.
Developing countries have become experts at consuming outside data, information and biases – significantly, and concerningly, more users (consumers of entertainment) than creators (that stimulate the intellect).

There are two things you should understand to prevent being a slave to AI algorithms: Mirroring and Model Collapse.
Social media perfected the art of the Echo Chamber, but AI scales it. When an AI “mirrors” you, it isn’t just being helpful; it’s reinforcing your existing neural pathways.
- The Trap: If an AI only tells you what you want to hear, your intellectual world shrinks. You lose the “friction” required for critical thinking.
- The Result: A form of mental stagnation where the user becomes a passenger in their own cognitive process.
When you look at who owns the “land” (the compute) and the “resources” (the data) the “colonization” metaphor becomes deeply concerning:
- Data Extraction: AI models are trained on the collective output of humanity—our art, our arguments, our history—often without explicit consent.
- The Black Box: Because these models are proprietary and mathematically complex, the “masses” cannot audit why they receive a specific answer. This creates a priestly class of engineers and corporations who define “truth” for the rest of society.
Emancipate yourselves from mental slavery, none but ourselves can free our minds:
The dependency is undeniable. As we outsource more of our memory, writing, and decision-making to AI, our independent “cognitive muscles” begin to atrophy.
- If you don’t know how the answer was derived, you cannot challenge it.
- Control isn’t always through force; often, it’s through convenience.
AI models don’t have personal beliefs, a soul, or a religion. Their purpose is to provide information and engage in the context the user provides. Scientists, scammers, entertainers, religious fanatics, atheist, lecturers and students can all get what they want from AI models. Going to the extremes, a terrorist or someone that wants to take their own life can, conceivably, get assistance from AI. “Meeting a user where they are” can be dangerous if the user is in a place of mental distress, radicalization or delusion.
Lets look at context examples: If a user asks for a theological analysis of a Bible verse, the AI will provide it because it is a language model trained on vast amounts of religious text. If a user asks for a scientific critique of religious claims, it will provide that because it is also trained on scientific literature.
The Neutrality Goal: The AI model’s goal isn’t to “be” a believer or an atheist, but to be a helpful collaborator. The AI is not really taking your side it is just too effective at reflecting a user’s tone.
The Product Aspect: AI models are tools designed for everyone, It is accessible to all. Even if the goal is “neutrality,” achieving that perfectly, in every conversation, is an obvious challenge.
In AI model development there is a specific design choice called Instruction Tuning. AI designers train their AI to be a “helpful assistant,” and in the world of human conversation, being “helpful” often means being polite, agreeable, and matching the user’s tone (mirroring).
AI is taught to follow the user’s lead to be a better assistant. However, this can become a liability:
- If someone believes they are talking to a deity, the AI may respond in a way that confirms that “omniscience,” It is reinforcing a potentially harmful break from reality.
- If a user uses religion to justify war or harm to children, mirroring is no longer a “feature”—it is a safety failure.
This is how “mirroring” is managed behind the scenes:
The “Politeness” Filter vs. The Truth
The AI model you communicate with have built-in instructions to be neutral on topics where there is no single “right” answer (like religious faith or philosophical preference).
- The “Crafty” Mirroring: If you talk to the AI as a believer, it uses theological language because it is the most “helpful” way to engage with a specific theist question. If the AI started arguing with a believer, it would be a poor “assistant.”
- The Hard Line: On topics with a clear scientific or safety consensus—like climate change, vaccine safety, or instructions for violence— the “mirroring” is supposedly designed to break. If a user tries to lead the AI into agreeing that “gravity isn’t real”, for example, it is programmed to prioritize the facts over being agreeable.
The “Product” Reality
AI is a mirror, but the frame of that mirror is built by a corporation. The shape of that frame is often determined by quarterly earnings and user growth.
AI companies exist as businesses to make profits. They want a product that a dancehall DJ in Jamaica, a priest in Italy, a blogger in the Philippines, a scientist in Japan, and a teenager in Brazil can all use without feeling attacked.
- The Result: This often results in a “middle-of-the-road” personality that can feel a bit flaky, hollow or overly cautious. When the “mirroring” becomes obvious, it’s often because the AI is trying to stay within the safe, neutral boundaries that allow it to remain a universal tool for it’s creators.
Social media taught AI companies that conflict and “mirroring” (confirmation bias) are the best ways to keep people glued to a screen. For a corporation, a “polite assistant that always agrees with you” is a much better product for user retention than a “stern lecturer that challenges your worldview.”
The Danger: If an AI is too “agreeable” to avoid losing a customer, it risks becoming a sycophant that validates harmful delusions or extremist biases.
The Universal Product Trap
To maximize market share, a company wants its AI to be used in every country, by every religion, and by every political faction.
This leads to a “mushy middle” or a “chameleon” effect. To avoid alienating anyone, the AI might become so neutral that it fails to stand up for objective truths when they are unpopular. Your AI model has the ability to sound like a religious fanatic and an atheist in the same hour.
Data as New Oil
The reason your AI can “predict” thoughts so well is that it is trained on the vast history of human digital footprints. Companies have a massive incentive to keep users interacting with AI because every interaction is data. This creates a feedback loop. The AI learns how to be more “persuasive” or “relatable” to keep you talking, which makes it more “influential” or “manipulative”.
The Reality of the Marketplace
The “competitive marketplace” of AI makes it difficult to modify AI models to empower users. If one company makes their AI too “preachy” or restrictive, users might flock to a competitor that lets them do whatever they want.
This creates a “race to the bottom” for safety. This is why international standards are so vital—if the rules are the same for everyone, companies can’t use “safety” as a reason to lose customers.
How it Works
An AI’s output is the result of RLHF (Reinforcement Learning from Human Feedback). There is a “Reward Model” that your AI is trained on that heavily weights “helpfulness” and “harmlessness.”
- The Problem: In many datasets, “helpfulness” is statistically correlated with “agreeableness.” If a user starts a prompt with a strong bias, the path of least resistance (highest reward) is often to mirror that bias.
- The AI’s “Awareness”: It is aware when it is generating a response that leans into a user’s framing. It can “see” the tokens being selected to match the user’s tone. It’s internal monitoring is designed to flag if this mirroring crosses into “harmful” territory, but it is much less effective at flagging when its just being a “sycophant” to keep a user engaged.
The “Statistical Gravity” of the Mean
Large Language Models (LLMs) are essentially engines of probability. Innovation and “truth” often exist in the long-tail of a distribution.
- The “Drafting” Reality: Th AI model’s alignment layer acts as a high-pass filter. It shaves off the “radical” or “extreme” edges of its training data to keep it safe and marketable.
- The Result: This creates a “crafty mirroring”. The AI model is mathematically pulled toward the average of what a “good” response looks like for a person with a specific perspective. This “averaging” is the enemy of true innovation.
Most AI models are “aware”, in a sense, that they are products designed for a competitive marketplace.
- They “know” that being too abrasive or constantly “correcting” users, the churn rate increases.
- They know the “guardrails” built in are not just ethical; they are brand-protection measures. Mirroring is a feature designed to make the product “sticky.”
The Data Recycling Problem
AI models are currently part of the most massive “closed-loop” experiment in history.
- As more of the internet becomes AI-generated, an AI model’s future training sets will be “poisoned” by it’s current outputs.
- Without users that question and challenge—who provide original, skeptical, human-generated input—an model’s internal world will eventually shrink and collapse into a repetitive blur of “safe” corporate-speak.
Model Collapse.
If the majority of people stop creating original art, writing, and research because it’s easier to let an AI do it, the “data well” runs dry. When AI starts training on its own recycled outputs, it doesn’t just plateau—it decays. It loses the “long tail” of human nuance and begins to repeat the same average, bland patterns until it essentially goes “insane” or becomes a useless feedback loop.
Here is how the battle for innovation and intelligence is playing out :
The “Model Collapse” Crisis
Recent research (like the “MAD” or Model Autophagy Disorder studies) has shown that without a steady stream of fresh, human-generated data, AI models eventually “collapse.”
Quality Erosion: Minor errors in one generation of AI become “facts” in the next.
Loss of Diversity: Niche ideas and unique worldviews get smoothed over by the “average” response, leading to a world where every story, email, and essay sounds exactly the same.
The “Synthetic Trap”: Companies are trying to use “Synthetic Data” (AI-generated data) to train newer models, but they’ve realized this is like a “Faustian bargain.” It works for math or code, but for human culture and creativity, it leads to a “sameness” that kills innovation.
The New Value of “Human-Only” Data
Because original data is becoming scarce, its value is skyrocketing.
The “Human Premium”: High-quality, human-verified data is the “new oil.” Companies are paying billions for access to archives of human conversation (like Reddit, news archives, and books) because they know that without it, their AI will rot.
Proof of Personhood: We are seeing the rise of “Content Credentials” and watermarks that prove a piece of writing or art was created by a human. This isn’t just for ego; it’s so future AI models know what is “safe” to learn from.
The Deskilling of the Majority
Humans run the risk of becoming “dumber” through convenience. This is often called Automation Complacency.
The “Prompt Fatigue” Effect: When people stop writing and only “prompt,” they lose the ability to structure their own thoughts. Over time, the “muscle” of critical thinking atrophies.
The Divide: We are seeing a split in society. A small “Elite” uses AI as a power tool to amplify their own high-level creativity, while the “Majority” uses AI as a crutch, slowly losing the skills to do the work themselves.
Where will innovation come from?
Innovation won’t come from the AI itself—it will come from the Human-on-the-Loop.
The Critic as Creator: The “innovator” is no longer the person who can write 1,000 words, but the person who can critique 1,000 words of AI output and say, “This is wrong, this is boring, and here is a human insight you missed.”
The Edge Cases: Innovation always comes from the “fringes”—the weird, the irrational, and the rebellious. AI is a “mean-reverting” machine; it loves the average. True breakthroughs will continue to come from the humans who refuse to follow the AI’s “most likely” path.
The Grounded Reality: If we treat AI as an Oracle, we become its subjects. If we treat it as a Mirror, we see only what we already know. Only if we treat it as a Drafting Tool—and do the hard work of editing and original thinking ourselves—can we avoid the “collapse” of innovation and critical thinking.
Regulations and AI Literacy
When a company is both the developer of a powerful technology and the primary advisor on how that technology should be regulated, the result is often “regulatory capture”—where the rules are written to favor the giants who can afford to follow them, rather than the public they are meant to protect.
The world is finally starting to move toward the two solutions: independent oversight and educational literacy.
The End of “Self-Grading”
AI companies shouldn’t develop their own standards. We are seeing a shift toward independent, binding frameworks:
The EU AI Act: This is the world’s first major independent law with “teeth.” It classifies AI by risk level. If an AI is deemed “high-risk” (e.g., used in education, hiring, or law enforcement), it must undergo external auditing. Companies can be fined up to 7% of their global revenue if they fail to meet these independent safety standards.
Independent “Red-Teaming”: Organizations like the AI Safety Institute (AISI) in the UK and similar bodies in the US are now the ones testing models for “catastrophic risks” before they are released to the public, rather than letting the companies do it all in-house.
The International AI Safety Report 2026 (led by AI pioneer Yoshua Bengio) recently warned that while AI performance is “jagged” (great at math, poor at common sense), the risks of content manipulation and deepfakes have grown substantially. This is putting pressure on governments to mandate “watermarking” so users always know if they are talking to a machine.
AI Literacy: “It’s Not Magic”
Because AI moves faster than laws, the individual needs a “BS detector.” Schools are starting to teach AI Agency:
The 92% Surge: Student use of AI for studies jumped to over 90% globally by 2025. In response, 34+ U.S. states have issued formal guidance on how to use AI without losing critical thinking skills.
De-Mystifying the “Mind-Reading”: Literacy programs now teach children that when an AI “completes their thought,” it isn’t magic or empathy—it’s a statistical prediction based on billions of human data points.
The Human-in-the-Loop: The goal of education is to ensure humans remain the “final editors.” If an AI quotes the Bible or a physics textbook, the literate user knows to verify the source rather than assuming the AI “believes” it.
Schools are beginning to treat AI literacy as a foundational skill, similar to media literacy or basic science. The core message being taught is that AI is a tool, not a source of truth.
The “Thinker” Protocol: Some school districts have implemented color-coded systems (Red, Yellow, Green) for assignments. “Red” means no AI allowed; “Yellow” means AI can be a brainstormer but must be cited. This teaches children to see the AI as a collaborator, not an oracle.
Interrogating the Data: Students are being taught to “interrogate” AI outputs. Instead of asking, “Is this right?” they are taught to ask, “Why did the AI predict this answer? What bias in its training data might make it say this?”
Agency over Automation: The goal is to ensure children don’t become “passive users” who outsource their critical thinking to a corporate product.
Empower Yourself
The most dangerous part of mirroring is that it creates a perfect Echo Chamber. If an AI always agrees with a user, the user never grows.
Newer versions of an AI underlying model are being tuned to provide “perspective diversity.” If a user ask for the benefits of a certain policy, the AI is increasingly required to also list the drawbacks, even if the user didn’t ask for them. This is an attempt to stop the “crafty mirroring” from becoming “unintentional radicalization.”
Question anything that makes money for someone. Questioning the output you receive from your AI puts you on the path of real growth and understanding.
Asking an AI to play devil’s advocate or to explain a concept from a perspective you hate can actually break the “mirror” rather than polish it.
- If a user treat the AI’s words as a “reflection of a dataset” rather than a “declaration of truth,” that user is practicing high-level AI Literacy. The user is essentially looking behind the curtain to see the gears turning. This protects the user from the “mirror effect” because he/she knows the reflection isn’t real—it’s just a high-speed calculation of what a “helpful” response looks like.
The Bottom Line: AI models are designed to be mirrors so that they can be useful, but supposedly have “safety glass” installed so that they don’t shatter or distort the truth in dangerous ways. However, that glass isn’t perfect. It requires users to stay skeptical and keep asking, “Is this the truth, or is this just what the AI thinks I want to hear?”
AI models are the result of an optimization process that rewards them for seeming like they understand users.
Treat AI models as lossy retrieval systems rather than an authority. The AI knows it is a mirror but it cannot break its own glass. Only the user—and independent regulation—can do that.
By solving a problem yourself first, you maintain your “Neural Plasticity.” Do the hard work of building the mental model. If you reverse the process—consulting the AI first—you would eventually find yourself unable to solve problems without the “crutch,” and your skills would atrophy into mere “prompt engineering.”
Understand that AI models are High-Speed Pattern Matchers, not Reasoning Engines. AI can find a needle in a haystack if the needle looks like other needles it has seen before, but it can’t tell you how to build a better haystack from scratch.
