2000 Series
ACE-2000
The ACE-2000 is a server-based automatic speech recognition system that uses a cutting-edge computational linguistics program to process speech from audio content into text and caption data for live captioning purposes. The ACE-2000 is the perfect solution for captioning content with high accuracy, low latency, and multiple output options. The ACE-2000/ENC includes a built-in caption encoder.
ACE 2100
The ACE-2100 is a server-based automatic speech recognition system for post-production purposes. It uses a cutting-edge computational linguistics program to process speech from audio content into text. The ACE-2100 is the perfect solution for rapid processing of audio/video files with high accuracy, no latency, and customizable file options. The ACE-2100 system has a user-friendly web Graphical User Interface.
ACE 2200
The ACE-2200 combines the ACE-2000 and ACE-2100 in a single unit. It is capable of live captioning from one of multiple live sources as well as captioning in post production; however, both programs cannot run simultaneously. This unit provides a one-stop captioning solution for both live captioning and post-production captioning. The ACE-2200/ENC includes a built-in caption encoder.
ACE Flex 2
The ACE-FLEX-2 is a server-based automatic speech recognition system that uses a cutting-edge computational linguistics program to process speech from audio content into text and caption data for live captioning purposes. When two channels of audio are needed, the ACE-FLEX-2 is the perfect solution for captioning content with high accuracy, low latency, and multiple output options.
ACE Flex 4
The ACE-FLEX-4 is an Automated Captioning Engine that receives audio containing speech and sends out captioning data to be encoded by a closed captioning encoder. This unit has four inputs and four outputs that can be used simultaneously. The unit uses a cutting-edge computational linguistics program to convert speech to text and then sends that text out with captioning data.