TwinMind, an artificial intelligence startup founded by former Google X scientists, has announced a $5.7 million seed funding round. Concurrently, the company has released an AI-powered application designed to passively capture and structure spoken information, along with Ear-3, its new AI speech model.
The TwinMind application, available on Android and iOS, captures ambient speech with user permission and builds a personal knowledge graph from it. This allows the app to generate AI-powered notes, to-do lists, and answers drawn from meetings, lectures, and other conversations. Key technical features, according to the founders, include on-device transcription for offline operation, continuous audio capture for 16-17 hours without significant battery drain, and real-time translation across more than 100 languages.
A core differentiator highlighted by CEO Daniel George is the iOS application's native implementation in pure Swift, which enables sustained background audio capture. Many competitors instead rely on cloud-based processing and cross-platform frameworks such as React Native, which face Apple's restrictions on extended background operation.
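TwinMind has not published its audio pipeline, but a minimal Swift sketch illustrates how an iOS app can keep microphone capture running in the background, assuming the app declares the "audio" background mode and microphone usage description; the `AmbientRecorder` type and `handleBuffer` step here are illustrative placeholders, not TwinMind's code:

```swift
import AVFoundation

// Hypothetical recorder illustrating sustained background audio capture.
// Assumes Info.plist declares the "audio" UIBackgroundModes entry and
// NSMicrophoneUsageDescription; names are illustrative, not TwinMind's.
final class AmbientRecorder {
    private let engine = AVAudioEngine()

    func start() throws {
        // A .record session combined with the audio background mode lets
        // capture continue after the app leaves the foreground.
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .default)
        try session.setActive(true)

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Stream microphone buffers to an on-device transcriber;
        // handleBuffer(_:) stands in for that step.
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            self.handleBuffer(buffer)
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }

    private func handleBuffer(_ buffer: AVAudioPCMBuffer) {
        // Placeholder: feed audio to local transcription / knowledge-graph code.
    }
}
```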
Further expanding its contextual intelligence capabilities, TwinMind offers a Chrome extension. The extension uses vision AI to scan open browser tabs and interpret content from platforms such as email, Slack, and Notion, folding browser activity into the user's knowledge graph. The company internally used the extension to shortlist interns from more than 850 applications, according to George.
In parallel with its application development, TwinMind has introduced Ear-3, the successor to its Ear-2 speech model. Ear-3 supports more than 140 languages, with a reported word error rate of 5.26% and a speaker diarization error rate of 3.8%. The model, a fine-tuned blend of open-source technologies, will be made available to developers and enterprises through an API priced at $0.23 per hour. Ear-3 runs in the cloud; when the internet connection drops, the application automatically falls back to the on-device Ear-2 model.
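TwinMind has not described how the app decides between the cloud and on-device models; one plausible sketch, using Apple's Network framework for reachability and hypothetical `Ear3Client` and `Ear2Local` transcriber types that stand in for the real API, could look like this:

```swift
import Network
import Foundation

// Hypothetical cloud/on-device fallback; Ear3Client and Ear2Local are
// placeholder types, not TwinMind's actual API.
protocol Transcriber {
    func transcribe(_ audio: Data) async throws -> String
}

struct Ear3Client: Transcriber {          // cloud Ear-3 model via API
    func transcribe(_ audio: Data) async throws -> String { /* HTTPS call */ "" }
}

struct Ear2Local: Transcriber {           // on-device Ear-2 fallback
    func transcribe(_ audio: Data) async throws -> String { /* local inference */ "" }
}

final class TranscriptionRouter {
    private let monitor = NWPathMonitor()
    private var online = false

    init() {
        // Track connectivity in the background so routing never blocks.
        monitor.pathUpdateHandler = { [weak self] path in
            self?.online = (path.status == .satisfied)
        }
        monitor.start(queue: DispatchQueue(label: "net.monitor"))
    }

    // Prefer the cloud Ear-3 model; fall back to on-device Ear-2 when offline.
    func transcriber() -> Transcriber {
        online ? Ear3Client() : Ear2Local()
    }
}
```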
The seed round, which values TwinMind at $60 million post-money, was led by Streamlined Ventures, with participation from Sequoia Capital and investor Stephen Wolfram. The startup reports more than 30,000 users, roughly 15,000 of them active monthly, a significant portion of whom are professionals. To address privacy concerns, TwinMind states that its models are not trained on user data, that audio recordings are deleted on the fly, and that only transcribed text is stored locally within the application.