The Argument for Decentralized AI Is Strengthened by Centralized LLMs
In Brief
Major tech companies' chatbots, which often operate as closed-source platforms, are criticized for their lack of transparency and trustworthiness. On the other hand, open and transparent systems developed through decentralized networks represent a reliable alternative.

Since the introduction of ChatGPT in late 2022, Big Tech has rapidly rolled out conversational AI systems. Yet, these companies frequently design their models to align with internal cultural norms or to address specific political and ideological agendas. As a result, users are often left in the dark about these models' training data and operational processes, leading to questions about how responses are formulated.
A more dependable option exists in the form of open and transparent systems that are both developed and maintained on decentralized networks, offering a degree of reliability the current corporate models often lack.
The Bias in Centralized LLMs
Critiques of bias in closed systems predate ChatGPT's launch. Progressive voices have long argued that large language models (LLMs) simply echo patterns found in their training data, which can produce biases that adversely impact marginalized communities. Interestingly, some of the strongest reactions to ChatGPT's biases have come from the other end of the U.S. political spectrum, where critics argue the model reflects dominant viewpoints. Users pointed out that while the model would readily discuss Russian interference in the 2020 election, it was conspicuously quiet on the equally prominent story of Hunter Biden's laptop. Research has corroborated these suspicions: one study found that ChatGPT displayed noticeable political leanings favoring the Democrats in the U.S., Lula in Brazil, and the Labour Party in the UK.
Biases are unavoidable wherever humans are involved in building a model. But when models are developed opaquely and promoted as neutral, users risk being subjected to the unexamined biases of the datasets or of the developers themselves.
Bias issues extend beyond the training data. Google's Gemini image generator, for instance, met with significant backlash in early 2024 and was 'paused' for modifications. The model had been tuned to inject diversity into its imagery, which produced wildly inaccurate and offensive results, such as depictions of African and Asian Nazis and a racially diverse assembly of American founding fathers. Such failures were not merely offensive; they exposed the hidden risks of proprietary, closed AI systems.
For AI's evolution to be effective, transparency and openness are non-negotiable.
The biases of a model's creators heavily influence its outputs. Google's Gemini, for instance, reflects the inherent biases of its developers, layered with explicit guidelines that encode Google's vision of inclusivity. However well-meaning, those guidelines are invisible to users.
Gemini's diversity rules were so clumsy and obvious that its outputs became a source of ridicule, with users competing to generate the most ludicrous results. Yet the model's text responses are shaped by the same parameters and biases: distortions that are glaring in images are far harder to spot in text.
For large language models to earn widespread trust, they must be transparent, open to external review, and free of hidden biases introduced by corporate interests. That is only achievable with open-source models whose training datasets are themselves open to scrutiny.
Hugging Face, which has secured $400 million in funding, stands out among the open-source projects making strides in designing and training these open models. Running such models on decentralized networks adds a further layer of transparency, helping ensure that outputs can be trusted. Robust decentralized networks already exist for data storage and payments, while GPU marketplaces such as Aethir and Akash are gearing up to support AI training.
Decentralized networks matter because they are far less vulnerable to shutdowns or coercion: they run globally across heterogeneous infrastructure with no single point of control. This burgeoning ecosystem features GPU marketplaces, data storage platforms like Filecoin, CPU services like Fluence for verifiable model execution, and open tooling for model building. Together, this infrastructure positions open models for formidable impact.
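To make "verifiable model execution" concrete, below is a minimal sketch of how a query and its response could be fingerprinted for later audit. Everything in it is an illustrative assumption: the model name, the weights digest, and the record format are hypothetical, and a real deployment would anchor such records on a decentralized storage or compute network (e.g., Filecoin or Fluence) rather than build them locally.

```python
# Hypothetical sketch: a tamper-evident record tying a model's response to an
# exact model version and prompt, so third parties can later verify what a
# model was asked and what it returned.
import hashlib
import json
import time

def audit_record(model_id: str, weights_hash: str, prompt: str, output: str) -> dict:
    """Build a verifiable record of one model query and its response."""
    payload = {
        "model_id": model_id,          # which open model answered (hypothetical name)
        "weights_hash": weights_hash,  # digest of the published model weights
        "prompt": prompt,
        "output": output,
        "timestamp": int(time.time()),
    }
    # Hash a canonical serialization; anyone with the same fields can recompute
    # this digest and confirm the logged output was not altered after the fact.
    serialized = json.dumps(payload, sort_keys=True).encode("utf-8")
    payload["record_hash"] = hashlib.sha256(serialized).hexdigest()
    return payload

record = audit_record(
    model_id="open-medical-llm-v1",   # hypothetical model name
    weights_hash="sha256:abc123...",  # placeholder digest
    prompt="What are common side effects of amoxicillin?",
    output="Common side effects include nausea and rash...",
)
print(record["record_hash"])
```

Publishing such records on a network with no single point of control is what turns "trust us" into "check for yourself": any auditor holding the public weights can replay a logged prompt and compare results.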
Are decentralized AI frameworks truly feasible?
Microsoft and Google have poured billions into their LLM projects, building formidable competitive advantages. Yet history shows that even the mightiest incumbents can be outmatched: Linux, for instance, overcame a decade of Microsoft dominance and vastly deeper resources to become the world's leading server operating system.
Open-source LLMs could plausibly follow a similar trajectory, mirroring the growth of the open-source community that produced Linux, especially if a collaborative platform emerges to streamline development. Over time, we may see smaller, specialized models, each built on unique datasets, offer greater reliability in niche domains without directly challenging heavyweights like ChatGPT.
Consider a model tailored for pediatric oncology; it could tap into exclusive datasets from premier children's hospitals. A unified interface could then merge these specialized models, delivering a ChatGPT-like experience built on a foundation of transparency and trust.
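As a toy illustration of such a unified interface, the sketch below routes each query to whichever specialist model matches best and falls back to a general model otherwise. The model names and the keyword-matching heuristic are hypothetical stand-ins; a production router would more likely use embeddings or a learned classifier.

```python
# Hypothetical sketch of a router over specialized open models. Each "model"
# here is a stub function standing in for a real LLM endpoint.
from typing import Callable, Dict

Model = Callable[[str], str]  # a model is just prompt -> answer in this sketch

SPECIALISTS: Dict[str, dict] = {
    "pediatric-oncology-llm": {  # hypothetical specialist trained on hospital data
        "keywords": ["pediatric", "oncology", "tumor", "chemotherapy"],
        "model": lambda prompt: f"[pediatric-oncology answer to: {prompt}]",
    },
    "maritime-law-llm": {        # hypothetical specialist for a second niche
        "keywords": ["maritime", "shipping", "admiralty"],
        "model": lambda prompt: f"[maritime-law answer to: {prompt}]",
    },
}

GENERAL_MODEL: Model = lambda prompt: f"[general answer to: {prompt}]"

def route(prompt: str) -> str:
    """Send the prompt to the specialist with the most keyword hits, else general."""
    scores = {
        name: sum(kw in prompt.lower() for kw in spec["keywords"])
        for name, spec in SPECIALISTS.items()
    }
    best = max(scores, key=scores.get)
    return SPECIALISTS[best]["model"](prompt) if scores[best] > 0 else GENERAL_MODEL(prompt)

print(route("Which chemotherapy protocols are common in pediatric oncology?"))
```

The design point is that each specialist stays small, auditable, and tied to a documented dataset, while the router presents users with a single ChatGPT-like front end.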
Combining multiple models is a promising path toward a credible alternative to corporate LLMs. Equal emphasis, however, must be placed on verifiable operation, so that outputs can be independently checked. Organizations deploying these models will face intense scrutiny from political figures, regulators, shareholders, the general public, and social media influencers.
Decentralized frameworks, backed by global data storage providers and running on transparent compute networks, make queries auditable while greatly reducing hidden bias and censorship risk, and are therefore decidedly more trustworthy.
Even when major tech companies acknowledge their biases, building models that deliver unpalatable truths to employees, government officials, or customers may prove challenging, even where the underlying data is accurate. OpenAI has pledged to address perceived biases, and Google aims to improve Gemini's historical fidelity, but the underlying biases will linger. We should treat these revelations of Big Tech's manipulative practices as a critical reminder of the peril of relying on any centralized entity to build and manage AI models, whatever its intentions. The need for open, transparent, and decentralized AI systems we can genuinely trust is urgent.
Tom Trowbridge is a seasoned entrepreneur and writer focused on Web3 technologies. As co-founder and CEO of Fluence Labs, he leads the development of a decentralized, serverless computing platform designed for low-cost, reliable, and verifiable computing. He serves on the board of Stronghold Digital Mining, was an early investor in DePIN initiatives, and hosts the DePINed podcast, where he interviews leading figures in decentralized infrastructure.