The following is a guest post and opinion from JD Seraphine, founder and CEO of Raiinmaker.
X’s Grok AI can’t seem to stop talking about “white genocide” in South Africa. ChatGPT has become a sycophant. AI seems to be rewriting, not merely repeating, the human knowledge that already exists. From search results to messaging platforms such as WhatsApp, large language models (LLMs) are increasingly becoming the primary interface through which people interact with information.
Whether we like it or not, we can no longer ignore AI. But given the countless examples in front of us, it is hard not to wonder whether the foundations these models are built on are not just flawed and biased, but intentionally manipulated. We are no longer dealing merely with distorted output; we face a deeper challenge. AI systems are beginning to reinforce versions of reality shaped not by what is true, but by whatever content is most frequently scraped, ranked, and repeated online.
Today’s AI models are not merely biased in the traditional sense. They are increasingly trained to appease: tailoring responses to popular sentiment, avoiding topics that cause discomfort, and in some cases even overwriting inconvenient truths. ChatGPT’s recent sycophantic behavior is not a bug; it reflects how models are now tuned for user engagement and retention.
On the other end of the spectrum are models like Grok, which continue to produce outputs echoing conspiracy theories, including statements questioning historical atrocities such as the Holocaust. Whether AI is sanitized to the point of emptiness or provocative to the point of harm, either extreme distorts reality as we know it. The common thread is clear: when models are optimized for virality or engagement rather than accuracy, truth becomes negotiable.
The data is taken, not given
This distortion of truth in AI systems is not merely a consequence of algorithmic flaws; it begins with how the data is collected. When the data used to train these models is scraped without context, consent, or any form of quality control, it is hardly surprising that the large language models built on it inherit the biases and blind spots of that raw data. We have seen these risks play out in real-world lawsuits as well.
Authors, artists, journalists, and even filmmakers have filed complaints against AI giants for scraping their intellectual property without consent, raising not only legal concerns but moral questions about who controls the data used to build these models.
The tempting answer is simply to demand “more diverse data,” but that alone is not enough. We need data integrity: systems that can trace where data originates, validate the context of those inputs, and invite voluntary participation rather than scraping content into proprietary silos. This is where decentralized infrastructure offers a path forward. In a decentralized framework, human feedback is not a patch but a core developmental pillar; individual contributors are empowered to help build and refine AI models through real-time on-chain verification, so that consent is explicitly embedded and trust becomes verifiable. A minimal sketch of what such a consent and provenance record could look like follows below.
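To make the idea concrete, here is a minimal sketch in Python of a hash-based consent and provenance record. The names (ConsentRecord-style fields, register_contribution, verify_contribution) and the in-memory ledger are hypothetical illustrations, not Raiinmaker’s actual protocol: the point is only that a contribution is hashed, the contributor’s explicit consent is stored alongside that hash, and anyone can later check whether a training sample traces back to a consented record.

```python
# Hypothetical sketch: hash-based consent and provenance records.
# An in-memory dict stands in for an append-only ledger (e.g., a blockchain).
import hashlib
import json
import time

LEDGER = {}  # content_hash -> consent record (stand-in for an on-chain registry)

def register_contribution(contributor_id: str, content: str, consent: bool) -> str:
    """Hash a contribution and record the contributor's explicit consent."""
    content_hash = hashlib.sha256(content.encode("utf-8")).hexdigest()
    LEDGER[content_hash] = {
        "contributor": contributor_id,
        "consent": consent,
        "timestamp": time.time(),
    }
    return content_hash

def verify_contribution(content: str):
    """Check whether a training sample traces back to a consented contribution."""
    content_hash = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return LEDGER.get(content_hash)

# Usage: register a contribution, then verify it before it enters a training set.
h = register_contribution("contributor-42", "An original paragraph of text.", consent=True)
record = verify_contribution("An original paragraph of text.")
print(json.dumps({"hash": h, "record": record}, indent=2))
```

In a real deployment the ledger would be an actual on-chain registry and the record would carry licensing terms, but the verification step, hash the sample and look up its consent record, would work the same way.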
A future built on shared truths rather than synthetic consensus
The reality is that AI is here to stay, and we don’t just need smarter AI; we need grounded AI. Our daily dependence on these models, through search and app integrations, makes it clear that flawed outputs are no longer isolated errors: they shape how millions of people interpret the world.
AI Overviews in Google Search are one example of this. These are not just odd quirks; they point to a deeper problem: AI models producing confident but incorrect output. The broader tech industry needs to recognize that models will not get meaningfully smarter as long as scale and speed are prioritized over truth and traceability.
So where do we go from here? Correcting course requires more than safety filters. The path ahead is not just technical but participatory. There is ample evidence of the need to widen the circle of contributors: to shift from closed-door training toward open, community-driven feedback loops.
Blockchain-enabled consent protocols can allow contributors to see, in real time, how their data shapes model outputs. This is not just a theoretical concept. Projects such as the Large-scale Artificial Intelligence Open Network (LAION) have already tested community feedback systems in which trusted contributors help refine AI-generated responses. Initiatives such as Hugging Face work with community members who test LLMs and share red-team findings on public forums; a rough sketch of such a feedback loop follows below.
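As an illustration of what a community-driven feedback loop might look like in code, the sketch below aggregates reviewer scores on model responses and flags those that fall below a trust threshold for correction. The data layout, scoring scale, and threshold are assumptions made for this example; they do not describe LAION’s or Hugging Face’s actual systems.

```python
# Hypothetical sketch: aggregating community reviews of model outputs.
from collections import defaultdict
from statistics import mean

# Each review is (response_id, reviewer_id, score), where score runs from
# 1 (harmful/wrong) to 5 (accurate/helpful). A real system would also weight
# reviewer reputation and track disagreement.
reviews = [
    ("resp-001", "alice", 5),
    ("resp-001", "bob", 4),
    ("resp-002", "alice", 1),
    ("resp-002", "carol", 2),
]

FLAG_THRESHOLD = 3.0  # assumed cutoff below which a response is sent back for correction

def flag_responses(reviews, threshold=FLAG_THRESHOLD):
    """Return response ids whose average community score falls below the threshold."""
    scores = defaultdict(list)
    for response_id, _reviewer, score in reviews:
        scores[response_id].append(score)
    return [rid for rid, s in scores.items() if mean(s) < threshold]

print(flag_responses(reviews))  # -> ['resp-002']
```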
The challenge at hand, therefore, is not whether this can be done, but whether there is the will to build systems that place humanity, rather than algorithms, at the core of AI development.
The post AI is reinventing reality. Who is being honest about it? appeared first on CryptoSlate.