A new AI music transcription startup is making it easier for musicians to turn audio into sheet music, no music degree required. Songscription, a tool built for both professionals and hobbyists, launched last week with an AI model that can transcribe a song into notation in just a few minutes.
Founded by Stanford MBA/MA student Andrew Carlins, Songscription works on a freemium model and aims to be accessible for everyone—from solo artists to school music teachers. At launch, the tool supports multiple instruments, but piano transcriptions are currently the most accurate.
“We want to make learning and playing music more joyful,” Carlins said. He envisions a world where a high school band teacher in rural Nebraska can easily access custom-arranged sheet music—tailored for their band’s instruments and the skill levels of each student.
From YouTube to Sheet Music: Transcription Made Simple
Songscription lets users upload audio or even paste in YouTube links to generate sheet music. It also produces piano rolls, giving musicians who don't read notation a visual way to follow along on a virtual keyboard display. This is especially useful for self-taught players or learners who rely more on visual cues than on traditional notation.
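For readers unfamiliar with the format, a piano roll is essentially a time-versus-pitch grid, with one row per key and one column per time slice. The short Python sketch below is purely illustrative (it is not Songscription's code) and assumes note events given as MIDI pitch numbers with start and end times in seconds.

```python
import numpy as np

def notes_to_piano_roll(notes, frames_per_sec=100, n_pitches=128):
    """Turn (midi_pitch, start_sec, end_sec) note events into a binary
    time-versus-pitch grid: one row per pitch, one column per time frame."""
    if not notes:
        return np.zeros((n_pitches, 0), dtype=np.uint8)
    n_frames = int(np.ceil(max(end for _, _, end in notes) * frames_per_sec))
    roll = np.zeros((n_pitches, n_frames), dtype=np.uint8)
    for pitch, start, end in notes:
        roll[pitch, int(start * frames_per_sec):int(end * frames_per_sec)] = 1
    return roll

# A C-major chord (MIDI pitches 60, 64, 67) held for one second.
chord = [(60, 0.0, 1.0), (64, 0.0, 1.0), (67, 0.0, 1.0)]
print(notes_to_piano_roll(chord).shape)  # (128, 100)
```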
And for musicians who write their own songs, the tool offers a time-saving advantage: just upload your recording and skip the manual transcription step entirely.
While users must confirm they have the rights to transcribe uploaded files, the system relies on an honor system: a simple checkbox. That raises potential copyright concerns, especially when users upload popular tracks. Still, Carlins argues the tool falls into a legal gray area, much like transcribing a song by ear, as long as the result is for personal use and not for profit.
Unlike generative AI platforms that create new compositions, Songscription is positioned as an assisted transcription tool, helping users create and edit their own scores more efficiently.
AI Built by Musicians, for Musicians
The tech powering Songscription was developed by co-founder Tim Beyer, whose research paper with Angela Dai helped shape the model’s core architecture. The startup sources its training data from a mix of public domain sheet music, donated or purchased piano performances, and a large set of synthetically generated audio. To make the AI more robust, the team adds realistic conditions like background noise and reverb.
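Songscription hasn't published its augmentation pipeline, but adding noise and reverb to clean training audio is a standard technique in audio machine learning. The Python sketch below is a generic illustration of that idea rather than the startup's actual code: it adds white noise at a chosen signal-to-noise ratio and fakes reverb by convolving the waveform with a short synthetic impulse response.

```python
import numpy as np

def augment(audio, sample_rate, snr_db=20.0, rng=None):
    """Simulate 'real-room' conditions for a clean float waveform (values
    roughly in [-1, 1]): add white noise at a target SNR, then convolve with
    a short decaying impulse response as a crude stand-in for reverb."""
    rng = rng if rng is not None else np.random.default_rng()

    # 1. Additive white noise, scaled so the mix sits at the requested SNR.
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noisy = audio + rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)

    # 2. Synthetic reverb: a ~300 ms exponentially decaying noise tail acts as
    #    a fake room impulse response; the leading 1.0 keeps the direct sound.
    ir_len = int(0.3 * sample_rate)
    tail = rng.normal(0.0, 0.05, ir_len) * np.exp(-np.linspace(0.0, 6.0, ir_len))
    impulse = np.concatenate(([1.0], tail))
    return np.convolve(noisy, impulse)[: len(audio)]

# Example: augment one second of a 440 Hz sine tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
dirty = augment(clean, sr)
```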
Even in its early days—just seven months since founding—Songscription has gained strong support. It raised pre-seed funding from Reach Capital and joined the Stanford StartX accelerator, setting it on a promising path for future development.
As the platform evolves, Carlins says the team plans to improve transcription quality beyond piano and add outputs like guitar tabs and full-band arrangements. The goal isn't just automation; it's helping musicians spend more time creating and less time notating.