Jul 03

Annotation tools – market overview 


Since so many companies trust us to handle their data processing for them, we thought it would be interesting to give you a little tour of what goes on behind the scenes. In an era characterized by exponential growth in artificial intelligence (AI), the demand for high-quality speech data and annotations has skyrocketed. Speech annotation, the process of transcribing and labeling spoken language, has emerged as a vital component in training and fine-tuning speech recognition systems, natural language processing algorithms, voice recognition and virtual assistants. Accurate and comprehensive annotation of vast amounts of audio data is critical to these technologies’ success and also paves the way for future groundbreaking advancements. 

That said, manual annotation of speech data is labor-intensive and requires significant time, effort, and expertise. We’ve noticed a new trend in the market: innovative speech annotation platforms are emerging as companies attempt to revolutionize the processing and labeling of speech. These platforms vary in their features but ultimately share the same goal: to streamline the annotation process and improve efficiency. 

In this article we’re going to delve into the speech annotation market, explore the functionalities, benefits, and drawbacks of different platforms, and investigate their impact on the industry. We will examine how these platforms enable companies to unlock the true potential of their audio data by providing accurate and reliable annotations at scale, presenting their key features, capabilities, and their unique approaches to resolving the challenges associated with manual annotation procedures. 

History of speech annotation 

In some sense, speech annotation as a concept can be traced back to the earliest days of written language, around 3200 BC, with proto-literate symbols such as early pictographs. These evolved into more sophisticated phonetic notation over time, and examples can be found in ancient Egyptian hieroglyphs. While the logographic hieroglyphs, symbols used to represent words, are perhaps better known, the Egyptians also had an entire system of symbols to represent specific sounds. With time this evolved into new systems of writing, but it wasn’t until the late 19th century that we witnessed further evolution and the more concrete emergence of the field. 


The emerging field was linguistics and phonetic research. As scholars sought to comprehend the intricacies of speech production, the importance of accurate transcription came to light, and they set out to create systems that would allow us to analyze spoken words and sounds more accurately. This resulted in the International Phonetic Alphabet (IPA), developed in 1888 and still used to this day for phonetic and phonemic transcription of any language. You can explore the IPA and the symbols for different types of sounds here. 

As linguists started to explore using machines to record and analyze speech, new inventions hit the market. For example, Thomas Edison invented the phonograph in 1877, which had two needles – one for recording speech, and another for playback. Transcription happened manually during playback, which allowed for more accurate checking of the speech and closer analysis of the properties of the speech sounds themselves. 

With the advent of electronic technology, the field of speech analytics expanded rapidly. Linguists started to use spectrograms, visual representations of the frequency content of speech over time. Spectrograms allowed linguists to study the acoustic properties of speech and identify different phonetic units, accents, and patterns in spoken language. Today, spectrograms can be used with recurrent neural networks for speech recognition. 
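Conceptually, a spectrogram is just a short-time Fourier transform of the waveform: the signal is cut into overlapping windowed frames, and each frame's frequency content is computed. The NumPy sketch below illustrates the idea (the frame and hop sizes are arbitrary choices for illustration, not any standard):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Compute a magnitude spectrogram via a short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    # Slice overlapping frames and apply the window to each
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One-sided FFT magnitude per frame: shape (n_frames, frame_len//2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: one second of a 440 Hz tone sampled at 8 kHz
fs = 8000
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (time frames, frequency bins)
```

Each row is one time slice, and the column with the largest magnitude corresponds to the dominant frequency, which is how phonetic units show up visually as patterns of energy across frequency bands.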

As accents and speech styles were brought to the forefront of linguistics, it became apparent that some accents and dialects were evolving over time. Linguists and anthropologists agreed on the importance of preserving linguistic diversity and cultural heritage, and started to work more closely together to record and transcribe different languages. Speech annotation played a significant role in documenting endangered languages and dialects. This field is known as Language Documentation and Conservation, and you can find out more about it here. 

In more recent decades, speech annotation has proved crucial for training automatic speech recognition (ASR) systems and developing natural language processing (NLP) algorithms. Large, annotated speech datasets are used to train machine learning (ML) models to recognize and process spoken languages. This enables applications like virtual assistants and speech-to-text transcription tools to “understand” speech in a sense and therefore deliver high-quality results. 


The emergence of the internet and the subsequent development of crowdsourcing platforms have facilitated speech annotation at scale. Researchers, linguists, and enterprises have started to collect speech data from online sources, allowing for the creation of extensive speech corpora for research and development purposes. Crowdsourcing allows companies and individuals to access annotations from contributors globally, speeding up the process considerably. 

Today speech annotation is established as a critical foundational element in multiple fields from linguistics to speech technology and ML. It enables linguists to study, preserve and understand spoken languages, improves speech recognition systems’ functioning, and advances our own comprehension of human communication. 

Overview of different platforms 

Today a plethora of platforms exists for annotating speech. The process of commercial speech annotation typically involves segmentation and transcription of an audio clip. This means that the audio clip is segmented into sections of speech such as sentences or paragraphs according to predefined criteria.  These segments are then transcribed into sentences representing what the speakers are saying. These transcriptions can also be annotated with details such as background noise, coughs, non-human noises, and so on. 

There are a number of basic features that are expected as standard from annotation platforms. Those are: 

  •  Data collection: scripted data, unscripted data, conversational data… 
  •  Speech labeling: applying labels such as “low/mid/high noise” etc. 
  •  Speech segmentation: marking segments of an audio recording as different types of content (e.g., speech segments, overlapping speech segments of two or more people speaking simultaneously, noise segments, and music segments). 
  •  Speech transcriptions: most companies have their own requirements. 
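The features above typically come together in segment-level records for each audio clip. As a minimal sketch (the field names are our own illustrative choices, not any particular platform's schema):

```python
import json

# Hypothetical segment-level annotations for one audio clip.
# start/end are in seconds; type, noise label, and transcript follow
# the kinds of criteria listed above.
segments = [
    {"start": 0.0, "end": 3.2, "type": "speech", "speaker": "A",
     "noise": "low", "transcript": "Hello, thanks for calling."},
    {"start": 3.2, "end": 4.1, "type": "noise", "speaker": None,
     "noise": "high", "transcript": ""},
    {"start": 4.1, "end": 7.8, "type": "overlapping_speech",
     "speaker": "A+B", "noise": "mid", "transcript": "Sorry, go ahead."},
]

print(json.dumps(segments, indent=2))
```

Real platforms differ mainly in which of these fields they expose, how fine-grained the labels are, and what export formats they support.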

Here's our brief rundown of the main platforms available today, which cover at least the above criteria. 


Labelbox: 

Labelbox is a commercial speech annotation platform that provides tools for creating high-quality annotations on speech data. As a commercial platform, the first consideration should be price, and indeed some users report that Labelbox is too expensive, and may not be the best value for startups or smaller businesses. 

The user interface receives mixed feedback, with reports that navigation is unintuitive, making the platform difficult to understand and preventing efficient completion of multiple tasks. As yet, it does not offer support for speech segmentation or annotation, but it does handle transcription of single, shorter files. 

As a cloud-based platform, users relinquish a certain level of control over their data when using Labelbox. Companies concerned about data privacy or regulatory compliance should carefully evaluate their data management policies and ensure that Labelbox aligns with their specific requirements before proceeding. Companies should also remember that stable internet connection is needed for accessing and using Labelbox features effectively, so in situations where low-latency requirements exist, dependency on internet access might pose challenges and impact productivity.   

Of note is also the dependency on external services – some of the features in Labelbox such as using external data storage services or integrating with ML frameworks can require additional setup and dependencies on third-party tools or services. This can introduce complexities, potential compatibility issues, and unwanted additional costs. 

As yet, it entirely lacks AI assistance. Users express a need for improvements in Labelbox technical support, as well as less downtime due to maintenance. 

Read more: Why the leading enterprises use partners to obtain training data

Label Studio:  

Label Studio is an open-source data annotation tool developed by Heartex. Using Label Studio might require significant technical expertise, especially for customization and integration with existing workflows. Users with limited programming or software development experience might find it challenging to fully leverage the platform.   

Label Studio supports various annotation types related to speech, speaker diarization, and sentiment analysis. The platform provides real-time collaboration features, enabling teams to annotate projects together.   

It appears to lack support for suitable segmentation, and its performance on transcription remains murky at best. Data output is only available in one format, and users need to write their own scripts to process the data into their own format. Scalability seems limited – while it might be suitable for small-scale annotation projects, it might face challenges in handling larger-scale or more complex annotation tasks. 

As Label Studio is open-source, users are responsible for setting up and maintaining the platform themselves. This includes installing and configuring the necessary software components, ensuring compatibility with their own environment, and handling updates and bug fixes on their own. This means it may require additional time and resources to establish and maintain the infrastructure for using Label Studio effectively. Like Labelbox, it currently lacks AI-assisted features. 

Because it is open-source, users are also responsible for ensuring data security and privacy themselves. Sensitive data needs to be protected throughout the whole process.

Read more: StageZero's guide and checklist to privacy and AI 


Annotation Pro:  

Annotation Pro actually runs as an installed app and functions exclusively on Windows PCs, so its usage is unfortunately quite restricted. If their website is up to date, it seems that updates are few and far between. 

As yet, it has no AI-assisted features and basically no customization options. Users will have to write their own separate scripts to parse the output into their required formats – indeed, the output alternatives have definite room for improvement.   

Our annotators describe its performance as “mediocre at best”. It covers the same features as everyone else in this ecosystem, but without any remarkable unique selling point. As with the other open-source platforms on this list, it also comes with the risks associated with setup, maintenance, technical compatibility, and data privacy. 

It was difficult to find information about the company behind Annotation Pro, and upon closer searching it seems that this is a tool made by a professor of Applied Linguistics, so the fact that her tool made a list of professional tools built by multinational companies certainly deserves an honorary mention. 


Apptek: 

We expected this to be the highest performer on our list, but unfortunately it was very difficult – indeed impossible – to get hold of. This is a commercial platform, but after 10 days and 31 messages, they still failed to put our rep in touch with a relevant salesperson. We were therefore unable to test the platform, but instead leveraged our own ecosystem to find out more about Apptek’s solution. 

The feedback was mostly positive, and it’s possible to make some customizations, although the annotation interface itself cannot be tailored to your own requirements. The user interface is relatively intuitive. Using it out of the box can be difficult while users familiarize themselves with the interface and features, but the learning curve wasn’t as steep as might be expected with other platforms. 

Apptek’s annotation platform lets users handle segmentation, timestamping, speaker identification and metadata creation, among other features. Sounds great, but the commercial aspects are severely lacking. Despite multiple attempts we could not reach a salesperson by telephone, website, or email. Finding an employee on LinkedIn resulted in a lengthy but fruitless conversation. They do not offer demos or trials, and the price their representative quoted during LinkedIn conversations was a 102 000 USD annual commitment, which puts it firmly out of reach for most SMEs. 

Otherwise seemingly a great platform, just beware of the hassle of begging them to let you buy it – costly in time, and eventually potentially very costly in dollars too.

Introducing the StageZero Audio Annotation platform 

At StageZero we work with a network of over 110 million native speakers, and accurate annotation is a pillar of our daily work. We need to segment, transcribe, label and more, with absolute accuracy and speed. Like many in the industry, our annotators were frequently frustrated by the lack of quality tools available on the market – so we decided to develop our own. 

The StageZero Audio Annotation platform is the first of its kind. This transcription and segmentation platform (TSP) is web-based, with no set-up required. Its intuitive interface means you won’t need a demo – users can get started immediately, although our team is available to help when needed. Integrated AI means users automatically increase their accuracy scores and speed up their segmentation and transcription times, saving up to 66% of the related costs. 


The integrated AI includes AI-assisted segmentation and AI-assisted transcription. The tool preprocesses the audio recordings, then segments them automatically by speaker and by non-human sounds. With one mouse click, users can modify the types of segments and move the start and end points of segments if needed. The tool supports overlapping segments too, and annotators can click on segments to replay specific sections or to jump to a specific segment. Our users save up to three hours of time for each recorded audio file. It’s also possible to use the AI-assisted transcription without segmentation. The StageZero Audio Annotation tool will suggest transcriptions in all major languages and some rarer languages too. In specific use cases, you can provide a list of industry-specific terminology for our AI to recognize. 

The outcome is that the tool helps users to generate perfect audio or speech recognition training data, quickly, in just four easy steps: 

  1. Create a project and upload the relevant audio files from Microsoft Azure cloud. 
  2. The StageZero Audio Annotation tool will create segmentations and/or transcriptions immediately. 
  3. Adjust the suggested segmentations and/or transcriptions as necessary. This is extra easy thanks to the tool’s intuitive interface. 
  4. Download the finished output in JSON and CSV formats or get direct delivery. 
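As an illustration of the final step, a JSON segment list like the one an annotation tool might produce (the field names here are hypothetical examples, not StageZero's actual output schema) can be flattened into CSV rows with nothing but the Python standard library:

```python
import csv
import io
import json

# Hypothetical JSON output: a list of annotated segments.
raw = json.loads("""[
  {"start": 0.0, "end": 2.5, "speaker": "spk1", "transcript": "good morning"},
  {"start": 2.5, "end": 5.0, "speaker": "spk2", "transcript": "hello there"}
]""")

# Write the same records as CSV, one row per segment.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["start", "end", "speaker", "transcript"])
writer.writeheader()
writer.writerows(raw)
print(buf.getvalue())
```

Having both formats available means the JSON can feed ML training pipelines directly, while the CSV drops straight into spreadsheets for review.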

Contact us today to request your free trial license. 



Palkkatilanportti 1, 4th floor, 00240 Helsinki, Finland
©2022 StageZero Technologies