ElevenLabs Steps Out of Beta with Revolutionary AI Speech Model Supporting 28 Languages
In Brief
The voice AI platform ElevenLabs has transitioned out of beta.
Alongside the launch, the platform has unveiled Eleven Multilingual v2, a new foundational deep learning model that supports 28 languages.

The new model promises to deliver 'emotionally rich' AI-generated audio across all 28 languages, ElevenLabs said as it announced its exit from beta.
ElevenLabs reports that the model, developed through extensive in-house research, has undergone an 18-month development cycle. During this time, the team delved into the complexities of human dialogue, devising new methods for the model to interpret context and convey emotion in audio output, as well as creating fresh, unique vocal profiles.
Initially available only in English, Polish, German, Spanish, French, Italian, Hindi, and Portuguese, the model now extends its capabilities to include Chinese, Korean, Dutch, Turkish, Swedish, Indonesian, Filipino, Japanese, Ukrainian, Greek, Czech, Finnish, Romanian, Danish, Bulgarian, Malay, Slovak, Croatian, Classical Arabic, and Tamil.
According to ElevenLabs, this increase in language support will empower creators to generate localized audio content targeted at audiences across diverse global markets including Europe, Asia, and the Middle East.
Users can generate speech in any of the supported languages by simply entering text into the text-to-speech platform powered by Eleven Multilingual v2.
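For developers, the same capability is exposed through the ElevenLabs REST API. The sketch below shows how a text-to-speech request selecting the Eleven Multilingual v2 model might be assembled; the voice ID, API key, and helper function are placeholders for illustration, and exact parameter names should be checked against the current API documentation.

```python
import json
import os

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(text: str, voice_id: str, api_key: str):
    """Hypothetical helper: return (url, headers, body) for a
    text-to-speech call using the Eleven Multilingual v2 model."""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {
        "xi-api-key": api_key,          # account API key
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        # model_id selects the 28-language multilingual model
        "model_id": "eleven_multilingual_v2",
    }
    return url, headers, body

# Example: Japanese input with a placeholder voice ID and key.
# Sending this (e.g. requests.post(url, headers=headers, json=body))
# would return the generated audio bytes.
url, headers, body = build_tts_request(
    "こんにちは、世界", "VOICE_ID", os.environ.get("ELEVEN_API_KEY", "demo-key")
)
print(url)
print(json.dumps(body, ensure_ascii=False))
```

Because the model preserves the speaker's voice across languages, the same `voice_id` can be reused for any of the supported languages by changing only the `text` field.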
Moreover, whether users choose a synthetic or a cloned voice, the company assures that the speaker's distinctive vocal characteristics will remain consistent across all languages, maintaining the original accent. This versatility allows a single voice to produce speech in all 28 supported languages.
Mati Staniszewski, CEO and co-founder of ElevenLabs, shared, "Our text-to-speech tools level the playing field, providing high-quality spoken audio capabilities to all creators. These advantages now extend to multilingual applications in nearly 30 languages. Ultimately, we aim to broaden our reach to include even more languages and voice options through the power of AI, working to dismantle linguistic barriers in content creation."
The launch of Eleven Multilingual v2 follows the recent public availability of Professional Voice Cloning, which allows users to create an accurate digital replica of their voices. With this latest enhancement, the tool now enables users to seamlessly translate their voice audio into any of the newly incorporated languages.
Since kicking off its beta phase in January, ElevenLabs claims to have attracted over one million registered users across sectors such as creative arts, entertainment, and publishing. The company announced that it had successfully completed a $19 million Series A funding round in June, led by former GitHub CEO Nat Friedman, ex-Y Combinator partner Daniel Gross, and investors from Andreessen Horowitz.
ElevenLabs also recently partnered with D-ID, the generative AI video content platform, to integrate their respective AI capabilities.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page should not be construed as legal, financial, investment, or any other form of advice. It's crucial to only invest what you can afford to lose and to seek independent financial counsel if you have any uncertainties. For additional details, we recommend reviewing the terms and conditions as well as the help and support resources provided by the issuer or advertiser. MetaversePost is dedicated to delivering accurate, unbiased reporting, but market conditions may fluctuate without prior notice.