Online video streaming has grown rapidly over the last several years, and the global video streaming market is expected to reach 184.27 billion USD by 2027.
As technology evolves and we rely on it more and more, we must include people with disabilities, including those with hearing loss, by making this content both accessible and legally compliant. One way to do so is by providing live captioning on streaming video.
What is Live Captioning?
Live captioning refers to providing captions, or time-synchronized text, in real time. Live captions can be provided for a number of different mediums, including virtual events, meetings, online courses, and performances.
There are several techniques for live captioning, including live automatic captioning and live human captioning. Let’s explore each of these techniques and how they compare.
Understanding Live Automatic Captioning Quality
3Play Media’s new white paper, Understanding Live Automatic Captioning Quality, was written in collaboration with Speechmatics to address the changing landscape of live captioning. In the white paper, you will learn about the differences between live automatic captioning and live human-enabled captioning, what makes captioning high quality, and which speech recognition features are critical to the live captioning space. The white paper also includes findings from our most current State of Automatic Speech Recognition research, which will be released as a full report in early 2021.
A Look Inside
With the continual improvement of ASR technology, it is now possible to use it for live captioning, especially for online video content. This white paper will help you understand how the quality of live automatic captioning is determined.
Read more by downloading the full white paper for free!