NLP: Unlock the Hidden Business Value in Voice Communications

Today, organizations capture an enormous amount of information in spoken conversations, from routine customer service calls to sophisticated claims-processing interactions in finance and healthcare. But most of this information remains hidden and unused because of the difficulty of turning these conversations into meaningful data that Natural Language Processing (NLP) can analyze effectively.

Simply applying speech recognition software to voice conversations often results in unreliable data. State-of-the-art speech recognition systems still have trouble distinguishing between homophones (words with the same pronunciation but different meanings) and between proper names (e.g. people, products) and ordinary words. They also struggle to identify domain-specific terms accurately. As a result, in most cases, speech recognition software alone doesn't produce data accurate enough for reliable NLP.
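
To make the pipeline concrete, here is a minimal sketch of the kind of speech-to-NLP workflow described above: transcribe a recorded call, then run entity extraction over the raw transcript. The audio filename, the Google Web Speech recognizer, and the spaCy model are assumptions for illustration, not tools named in the article; the comments mark where homophones, proper names, and domain terms typically degrade the results.

```python
# Minimal sketch of an ASR -> NLP pipeline (assumed stack: SpeechRecognition + spaCy).
# pip install SpeechRecognition spacy && python -m spacy download en_core_web_sm
import speech_recognition as sr
import spacy

recognizer = sr.Recognizer()

# "support_call.wav" is a hypothetical recording of a customer service call.
with sr.AudioFile("support_call.wav") as source:
    audio = recognizer.record(source)

# Off-the-shelf transcription: homophones ("principal" vs. "principle"),
# product names, and domain-specific terms are where transcripts tend to degrade.
transcript = recognizer.recognize_google(audio)

# Any NLP run on the raw transcript inherits every recognition error above.
nlp = spacy.load("en_core_web_sm")
doc = nlp(transcript)
for ent in doc.ents:
    # Misrecognized proper names usually surface here as missing or wrong entities.
    print(ent.text, ent.label_)
```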

How to Conduct Accessibility Testing on Android Devices

According to recent research by the World Health Organization, roughly 15% of the world's population lives with some form of disability. Developers building mobile applications for everyone need to keep this 15% in mind. Beyond creating dedicated applications for people with disabilities, every developer has a responsibility to make sure their existing applications are accessible, so that all users can use them with ease.

To accelerate this process, dedicated tools check whether applications conform to the Web Content Accessibility Guidelines (WCAG). This is where accessibility testing comes into the picture. Accessibility testing identifies defects in the software and functions on your systems and mobile phones that prevent an application from being accessible to people with disabilities, including visual, auditory, physical, speech, cognitive, language, learning, and neurological disabilities.
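
As one concrete illustration, the sketch below uses the Appium Python client (an assumed tool choice, not one named above) to flag images that lack a content description, a common Android accessibility defect corresponding to WCAG's non-text-content guideline. The APK path and server URL are placeholders.

```python
# Minimal sketch: scan a screen for ImageViews missing a contentDescription.
# Assumes an Appium 2 server running locally and the Appium Python client installed.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.platform_name = "Android"
options.app = "/path/to/your-app.apk"  # placeholder path to the app under test

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Images without a content description are invisible to screen readers
    # (WCAG 1.1.1, "Non-text Content").
    images = driver.find_elements(AppiumBy.CLASS_NAME, "android.widget.ImageView")
    for image in images:
        if not image.get_attribute("content-desc"):
            print("Missing contentDescription on element:", image.id)
finally:
    driver.quit()
```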

Tutorial: How to Build a Progressive Web App (PWA) with Face Recognition and Speech Recognition

This is a follow-up to my second tutorial on PWAs, but you can also follow along if you haven't read the second one, or my first tutorial about PWAs. This time, we are going to focus on some new Web APIs for face detection and speech recognition.

We will add these APIs to our existing PWA for taking 'selfies.' With face detection, we can predict your emotion, gender, and age.
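
The tutorial itself runs in the browser, but the underlying idea (running a pretrained face model over a selfie to estimate emotion, gender, and age) can be sketched in a few lines. The snippet below uses the Python DeepFace library as a stand-in for the browser stack; the library choice and the image filename are assumptions for illustration only.

```python
# Illustrative stand-in for the tutorial's in-browser face analysis,
# using the DeepFace library (pip install deepface) instead of Web APIs.
from deepface import DeepFace

# "selfie.jpg" is a placeholder for a photo captured by the PWA's camera page.
result = DeepFace.analyze(
    img_path="selfie.jpg",
    actions=["age", "gender", "emotion"],
)

# DeepFace reports the estimated age, gender, and dominant emotion per detected face;
# the exact return structure varies between library versions, so simply print it here.
print(result)
```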

How Neural Networks Recognize Speech-to-Text

Speech to text

Gartner analysts predicted that by 2020, businesses would automate conversations with their customers. Statistics also show that companies lose up to 30% of incoming calls because call center employees either miss the calls or lack the competence to communicate effectively.

To process incoming requests quickly and efficiently, modern businesses use chatbots. Conversational AI assistants are replacing standard chatbots and IVR systems, and they are especially in demand among B2C companies, which rely on websites and mobile apps to stay competitive. Convolutional neural networks are trained to recognize human speech and automate call processing, helping companies stay in touch with customers 24/7 and simplifying the handling of typical requests.
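
As a rough sketch of the kind of convolutional model alluded to above, the snippet below defines a small Keras CNN that classifies fixed-size log-mel spectrograms into a handful of command classes. The input shape and class count are illustrative assumptions, not values from the article, and a production acoustic model would be considerably larger.

```python
# Minimal sketch of a CNN speech classifier over log-mel spectrograms (Keras).
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10           # e.g. ten short voice commands (assumed)
INPUT_SHAPE = (98, 40, 1)  # ~1 s of audio as 98 frames x 40 mel bins (assumed)

model = tf.keras.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    # Convolutions learn local time-frequency patterns (phoneme-like cues).
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```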