From MechaHitler to Islamic Chatbots, AI Engines Are Writing the Script for Reality

Artificial intelligence is often promoted as a route to objective truth, but AI systems do not deliver definitive answers; they generate probabilities from the data they were trained on. The result is a convincing simulation of objectivity that remains rooted in human biases, sociopolitical fragmentation and ideological conflict.
The training of AI engines reflects the biases of their creators: programmers, corporate leaders and governments, all shaped by the values of the societies in which these technologies are built. A significant turning point came in 2015, when the Court of Justice of the European Union, acting on a complaint brought by privacy activist Max Schrems, invalidated the Safe Harbor data transfer agreement between Europe and the U.S., citing concerns over mass surveillance and reliance on foreign technology.
This drew attention to the importance of data sovereignty—the right of nations to manage their own digital infrastructure and data. Today, this concern has expanded into the concept of AI sovereignty. With AI technologies increasingly supplanting traditional search engines on a global scale, the question of who controls the narrative becomes paramount.
Approaches to AI development diverge sharply, as creators instill their own philosophies into the systems they build. OpenAI acknowledges that ChatGPT is “biased towards Western perspectives and performs optimally in English.” Responding to criticism over perceived ideological slant, Elon Musk has repeatedly tried to adjust the worldview of xAI’s Grok, an effort that culminated in the chatbot referring to itself as “MechaHitler.” The episode illustrates how hard ideological bias in AI is to control.
In stark contrast, China’s DeepSeek restricts information on sensitive topics such as the Tiananmen Square crackdown, hewing closely to the Communist Party narrative, and its deployment has been prohibited in several jurisdictions over security and privacy concerns.
Saudi Arabia’s AI initiative, Humain, aims to launch an Arabic-focused chatbot that not only excels in language but also embodies Islamic values and culture. This indicates a trend towards culturally tailored AI systems.
Regulatory actions are also emerging. The Trump administration has moved to control the output of large language models competing for federal contracts, seeking to screen out what it deems radical ideologies in favor of an ostensibly unbiased approach. Leading AI firms, including OpenAI and Anthropic, have since secured government contracts by aligning their services with those standards.
The growing polarization of information consumption, with individuals gravitating toward sources that match their viewpoints, whether conservative, liberal or otherwise, raises critical questions about AI’s role. Because AI output is often trusted more readily than human judgment, it can lend a false sense of objectivity that entrenches existing ideologies.
As distinct ideological groups develop their own AIs, the risk of a fractured shared reality grows, producing a landscape of competing truths. Extremist factions could exploit these biases to reinforce their agendas, building self-reinforcing echo chambers that amplify particular ideologies.
Accountability remains a pressing issue. When AI output promotes extremist ideologies or causes other harm, it is not clear whether creators, users or platforms should be held responsible.
Using AI to identify and counter bias is one potential strategy, but every model carries biases inherited from its training data and design choices. Even “neutral” AIs inevitably reflect cultural and social norms and will be accepted to varying degrees by different user groups.
Regulation surfaces as a possible solution, but it raises the question: Who defines the standards? Should that authority rest with national governments, private entities or faith-based organizations? In a fractured trust landscape, consensus on regulating AI output remains elusive, leaving users free to select their preferred realities.
In the absence of agreed-upon regulation, users must cultivate a culture of critical thinking: engaging with diverse viewpoints, questioning underlying assumptions and treating AI as a tool rather than an infallible authority. Transparency and pluralism are essential to preserving the facts and sustaining a shared understanding.
If unchecked, the trajectory we are on may signify the erosion of truth itself.