In a surprising revelation, 4,000 individuals unknowingly engaged in AI-led psychotherapy sessions.
In Brief
Koko connects individuals seeking mental health assistance with trained volunteers who can help guide them through difficult times.
Users interact with the Koko bot by answering pre-set multiple-choice questions, which gives the exchange a structured format.
Approximately 4,000 individuals received psychological support from OpenAI's GPT-3 without being told beforehand.
Remarkably, the AI's therapeutic responses proved to be at least as effective as those written by humans.
The non-profit mental health initiative leverages platforms like Discord and Telegram to create a supportive community for those in need. Teenagers and adults looking for mental healthcare are paired with volunteers through Koko. Users join a Discord server dedicated to Koko Care, where the Koko bot walks them through multiple-choice questions such as ‘What’s your biggest concern right now?’ The bot anonymizes these worries and relays them to another user on the server, who can respond anonymously.
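Koko has not published its bot's code, but the flow described above (structured intake, anonymization, relay to a volunteer) can be sketched roughly as follows. Every name, field, and question here is illustrative, not Koko's actual implementation:

```python
import uuid
from dataclasses import dataclass

# Illustrative sketch only: Koko's real bot and data model are not public.

INTAKE_QUESTION = "What's your biggest concern right now?"
CHOICES = ["Anxiety", "Loneliness", "Relationships", "Something else"]

@dataclass
class HelpRequest:
    ticket_id: str   # random ID that replaces the requester's identity
    concern: str     # the selected multiple-choice answer
    details: str     # optional free-text elaboration

def anonymize(user_id: str, concern: str, details: str) -> HelpRequest:
    """Strip the requester's identity before the message travels on.

    Note that user_id is accepted but deliberately never stored; only a
    random ticket ID accompanies the message from here.
    """
    return HelpRequest(ticket_id=uuid.uuid4().hex, concern=concern, details=details)

def relay_to_volunteer(request: HelpRequest) -> str:
    """Format the anonymized request for a volunteer elsewhere on the server."""
    return (f"[Ticket {request.ticket_id[:8]}] Concern: {request.concern}\n"
            f"Details: {request.details}")

# Example: a user picks an option and adds a sentence of context.
req = anonymize("discord_user#1234", CHOICES[0], "I can't stop worrying about exams.")
print(relay_to_volunteer(req))
# A volunteer's reply would be routed back by ticket ID,
# keeping both sides of the conversation anonymous.
```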
This peer-to-peer approach has helped bridge the gap between people seeking mental health support and those ready to provide it.

Rob Morris, one of Koko’s founders, decided to test AI chatbot therapy within this volunteer setup. According to a tweet by Morris, around 4,000 people received psychological support composed with OpenAI's GPT-3. Notably, his findings indicated the AI's therapeutic effectiveness was on par with that of human responders.
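Koko has not detailed its integration, but at the time a GPT-3 draft reply would have been requested through OpenAI's completions API along these lines. The prompt wording, model choice, parameters, and the draft_supportive_reply helper are assumptions for illustration, not Koko's actual code:

```python
import openai  # pre-1.0 openai SDK, current when this experiment ran

openai.api_key = "sk-..."  # placeholder; supply your own API key

def draft_supportive_reply(concern: str) -> str:
    """Ask GPT-3 to draft an empathetic peer-support reply.

    The prompt and sampling parameters are illustrative guesses;
    Koko has not published the ones it actually used.
    """
    prompt = (
        "You are a supportive peer on a mental health platform. "
        "Write a short, empathetic reply to someone who says:\n\n"
        f'"{concern}"\n\nReply:'
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model available at the time
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

print(draft_supportive_reply("I can't stop worrying about exams."))
```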
“Responses generated by AI received significantly higher satisfaction ratings than those crafted by humans, and response times were cut in half,” Rob Morris points out.
Users, it seemed, genuinely valued the psychological assistance GPT-3 offered. Yet after the initial buzz, Morris promptly pulled the bot from Telegram and Discord. The rationale? Once people realized they were conversing with an AI, the therapeutic effect collapsed; the simulated empathy felt artificial and hollow.
There may be another angle to this, too. Morris’s experiment has faced backlash on social media, with some labeling it unethical. After all, participants were misled and never gave informed consent; they were unwitting subjects of a psychological study.
In conventional scientific research, participants are informed about the procedures involved and can withdraw consent up to the point of publication. In this instance, a significant breach of trust occurred: people who sought help for mental health issues believed they were being supported by another human when they were actually talking to an AI. Even if the quality of the help itself was adequate, trust is crucial for positive therapeutic outcomes, and deception of this kind could undermine users' faith in a platform that once showed so much promise.
The Koko experiment serves as a stark reminder of how susceptible individuals can be to deception within social media networks, raising alarms about the potential misuse of technology for spreading fake news and misinformation. Although this was done with good intentions, one has to ponder what might happen next time.