November 16, 2016 — A poster presentation at the American Heart Association Scientific Sessions suggests that voice analysis may help identify the presence of coronary artery disease (CAD). The study found a strong association between certain voice characteristics and CAD, a first for the field, and the finding could improve continuous monitoring and reduce the cost of remote healthcare related to CAD.
CAD is one of the leading causes of cardiovascular mortality worldwide. Patients being screened for the disease would benefit from simple, noninvasive tests that could also improve the accuracy of risk estimation models. Beyond Verbal (www.beyondverbal.com) has previously identified voice signal characteristics associated with conditions such as autism and Parkinson’s disease, and partnered with Mayo Clinic to investigate the same concept in CAD patients.
In the double-blind study, which included 120 patients referred for elective coronary angiography and corresponding control subjects, one voice feature was associated with a 19-fold increased likelihood of CAD. After adjustment for age, gender, and cardiovascular risk factors, this feature remained independently associated with a significant 2.6-fold increased likelihood of CAD. This is the world’s first study to suggest a link between voice characteristics and CAD, and it holds the potential to assist physicians in estimating the pre-test probability of CAD among patients with chest pain.
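The release does not describe the underlying statistical model, but the basic arithmetic of combining a pre-test probability with the reported 2.6-fold adjusted odds ratio can be illustrated with a minimal sketch. The pre-test probability of 20% below is a hypothetical example, and applying the odds ratio directly to the pre-test odds is a simplification (an odds ratio is not strictly a likelihood ratio), used here only to show how such a voice feature might shift a risk estimate.

```python
# Illustrative sketch only: how a reported adjusted odds ratio (2.6) could
# shift a hypothetical pre-test probability of CAD. The study's actual
# model, coefficients, and voice feature are not public.

def updated_probability(pretest_prob: float, odds_ratio: float) -> float:
    """Convert a pre-test probability to odds, apply the odds ratio,
    and convert the result back to a probability."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * odds_ratio
    return posttest_odds / (1.0 + posttest_odds)

if __name__ == "__main__":
    # Hypothetical patient with chest pain and a 20% pre-test probability of CAD
    pretest = 0.20
    print(f"Pre-test probability:  {pretest:.0%}")
    print(f"Post-test probability: {updated_probability(pretest, 2.6):.0%}")  # ~39%
```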
“We are excited by the potential in finding correlation between voice features and CAD condition. This may open the door to other studies to assess the association between voice features and other health conditions,” said Amir Lerman, M.D., of Mayo Clinic.
“A patient’s voice is the most readily available, easy to capture, and rich outputs the body offers,” said Yuval Mor, CEO of Beyond Verbal. “We are very excited to be able to work with Mayo Clinic on such a breakthrough research, studying the potential of using the human voice in healthcare monitoring and specifically CAD.”
Since its launch in 2012, Beyond Verbal has used voice-driven emotions analytics to change the way emotions can be detected and health monitored simply by analyzing the human voice. The company’s technology builds on ongoing research into the science of emotions that began in 1995. By combining its patented technology with proprietary machine learning-based algorithms and AI, Beyond Verbal focuses on understanding emotions and discovering vocal biomarkers. Over the past 21 years, the company has honed its technology through multiple internal tests and independent external validations. Over time, Beyond Verbal has collected more than 2.5 million emotion-tagged voice samples in more than 40 languages and secured its technology with multiple granted patents.
For more information: www.beyondverbal.com