Behind the Curtain
The fundamental building blocks of our technology include several machine-learning (ML) techniques:
- Hidden Markov models, and
- Graphical models.
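To make the first of these building blocks concrete, here is a minimal sketch of the forward algorithm for a discrete hidden Markov model. The two hidden states, the observation alphabet, and all probabilities below are invented purely for illustration; they do not reflect any model we actually ship.

```python
# Minimal forward algorithm for a discrete hidden Markov model.
# States, observations, and probabilities are illustrative only.

states = ["calm", "agitated"]            # hypothetical hidden speaker states
start = {"calm": 0.6, "agitated": 0.4}   # initial state probabilities
trans = {                                # state transition probabilities
    "calm":     {"calm": 0.7, "agitated": 0.3},
    "agitated": {"calm": 0.4, "agitated": 0.6},
}
emit = {                                 # observation likelihoods per state
    "calm":     {"quiet": 0.8, "loud": 0.2},
    "agitated": {"quiet": 0.3, "loud": 0.7},
}

def forward(observations):
    """Return P(observations) by summing over all hidden state paths."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {
            s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
            for s in states
        }
    return sum(alpha.values())

print(forward(["quiet", "loud", "loud"]))  # likelihood of this sequence
```

The same recursion underlies decoding tasks such as speech recognition, where the hidden states are linguistic units and the observations are acoustic features.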
We use these building blocks to analyze and extract data from media files. We call the information derived from this data Insights.
Today, people refer to what we do as AI or Cognitive Computing. In our quest to commercialize state-of-the-art research at an affordable cost, we simply use the right tool for each job. Specifically, we use:
- Conditional random fields, neural networks, and word vector spaces (with and without word2vec) for natural-language processing (NLP),
- Support vector machines and multi-layer perceptrons (neural networks) for classification,
- Deep learning for speech recognition, image recognition, video processing, and emotion recognition, and
- Traditional signal processing techniques to operate on the raw signals.
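As a toy illustration of the classification techniques above, here is a perceptron (the simplest neural building block, a single-layer relative of the multi-layer perceptron) trained from scratch. The two-dimensional features and labels are invented for demonstration and are not drawn from our actual pipeline.

```python
# Toy perceptron classifier. Data, features, and labels are invented.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear decision rule sign(w . x + b)."""
    dim = len(samples[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 or -1
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical 2-D features (e.g. speech rate, loudness) with class labels.
X = [(0.2, 0.1), (0.4, 0.3), (0.9, 0.8), (0.7, 0.9)]
y = [-1, -1, 1, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [-1, -1, 1, 1]
```

In practice one would reach for an established library (e.g. a support vector machine or a deep network) rather than hand-rolled training loops; the point here is only the shape of the technique.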
How we work
We support our customers in any way necessary to ensure that they get maximum value out of their recordings. We can:
- support a team of developers,
- build widgets / reports, or even
- develop custom software.
We develop custom Insights tailored to client requirements.
Although our primary deployment is cloud-based, we can deploy and support our platform on-premises.
This is a list of Insights we currently produce, grouped by type. Not all Insights will be applicable to all media. Note that we are constantly building new Insights based on customer feedback.
For more information about any of these Insights, see our technical documentation.
Language †
- Spoken words
- Percentage speech
- Speech speed
- Language detection
- Indexing / search
- Named entities
- Topics ††
- Related words

†. Many language functions can be customized for particular domains.
††. Can use the customer's knowledge graph.
Conversation
- Conversation segments
- Crosstalk / interruptions
- Emotional tone (beta)
- PCI data detection
- Script adherence (beta)
Speaker Data (alpha)
- Approximate age
- Personality type
Aggregate
- Any insight across a set of recordings
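To illustrate how insights such as Percentage speech and Speech speed can be derived, here is a sketch that works over hypothetical timestamped transcript segments. The segment format `(start_seconds, end_seconds, text)` and the sample numbers are assumptions for this example, not our actual API.

```python
# Sketch: deriving two speech insights from hypothetical transcript
# segments of the form (start_seconds, end_seconds, text).

segments = [
    (0.0, 2.5, "hello thanks for calling"),
    (4.0, 7.0, "how can I help you today"),
    (9.5, 12.0, "let me check that for you"),
]
recording_length = 15.0  # total length of the recording, in seconds

spoken = sum(end - start for start, end, _ in segments)      # seconds of speech
words = sum(len(text.split()) for _, _, text in segments)    # crude word count

percentage_speech = 100.0 * spoken / recording_length
speech_speed = words / (spoken / 60.0)  # words per minute over spoken time

print(f"{percentage_speech:.1f}% speech, {speech_speed:.0f} wpm")
```

Real transcripts would come from the speech-recognition stage described above, and production word counts would need tokenization beyond whitespace splitting.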