Google’s AI Taps into Bioacoustics to Predict Early Signs of Disease

Google’s AI division is exploring bioacoustics, a field at the intersection of biology and sound, to learn how the presence of pathogens changes the sounds the human body makes. A recent Bloomberg report revealed that Google has developed an AI model called HeAR (Health Acoustic Representations) that uses sound signals to predict early signs of disease. The technology could be particularly useful in areas where access to quality healthcare is difficult, since it requires nothing more than a smartphone’s microphone.
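At a high level, the workflow described amounts to a simple pipeline: record a short clip with a phone microphone, hand the audio to a trained acoustic model, and read off a risk score. The sketch below is purely illustrative under those assumptions; the function names and the model object are hypothetical stand-ins, not Google’s implementation.

```python
# Illustrative sketch only: a hypothetical end-to-end flow for acoustic
# health screening on a phone recording. None of these names come from
# Google's HeAR; they stand in for whatever the real system uses.
import numpy as np
import soundfile as sf  # reads WAV files captured by the phone app

def load_clip(path, expected_sr=16000):
    """Load a short microphone recording and check its sample rate."""
    audio, sr = sf.read(path)
    if sr != expected_sr:
        raise ValueError(f"expected {expected_sr} Hz audio, got {sr} Hz")
    return np.asarray(audio, dtype=np.float32)

def score_clip(audio, model):
    """Hypothetical scoring step: the model maps audio to a risk score."""
    return model.predict(audio)  # e.g. probability of a TB-like cough

# Usage (assuming 'cough.wav' was recorded on the phone and 'model'
# is a trained acoustic classifier):
# audio = load_clip("cough.wav")
# risk = score_clip(audio, model)
```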

How does Google’s bioacoustics AI work?

HeAR was trained on 300 million two-second audio samples that include coughs, sniffles, sneezes, and breathing patterns. The clips were sourced from non-copyrighted, publicly available content on platforms like YouTube, including videos recorded at a hospital in Zambia where patients came in for tuberculosis screenings. Notably, around 100 million of the samples were cough sounds, which is what allows HeAR to detect tuberculosis.
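The article does not detail HeAR’s preprocessing, but acoustic models of this kind typically work from spectrogram-like representations of short, fixed-length clips. As a minimal sketch, assuming a standard log-mel spectrogram front end (an illustration, not Google’s published pipeline):

```python
# Minimal sketch: turning a two-second audio clip into a log-mel
# spectrogram, a common input format for acoustic models. This is an
# assumption for illustration, not HeAR's documented preprocessing.
import librosa
import numpy as np

def clip_to_logmel(path, sr=16000, duration=2.0, n_mels=64):
    """Load a fixed-length clip and convert it to a log-mel spectrogram."""
    audio, _ = librosa.load(path, sr=sr, duration=duration)
    # Pad short clips so every example has the same shape.
    target_len = int(sr * duration)
    if len(audio) < target_len:
        audio = np.pad(audio, (0, target_len - len(audio)))
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# Each training example (cough, sniffle, breath, ...) would be reduced
# to a fixed-size array like this before being fed to the model:
# features = clip_to_logmel("cough_0001.wav")
```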

Bioacoustics: Revealing Subtle Signs of Illness

Bioacoustics has the potential to offer “near-imperceptible clues” that reveal subtle signs of illness, aiding health professionals in diagnosing patients. Google’s AI model can detect minute differences in patients’ cough patterns, enabling it to spot early signs of an illness’s progression or regression. This technology could revolutionize healthcare by enabling early detection and intervention, especially in underserved areas with limited access to medical resources.
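One way to make “minute differences in cough patterns” concrete is to compare representations of the same patient’s coughs recorded at different times: if the representation drifts, that change can be surfaced to a clinician. The snippet below is a hedged illustration of that idea using a hypothetical embed() function; it is not how Google reports tracking progression.

```python
# Hedged illustration: tracking how a patient's cough representation
# changes over time. 'embed' is a hypothetical function standing in for
# whatever acoustic embedding model is used (e.g. a HeAR-like encoder).
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def progression_signal(baseline_clip, followup_clip, embed):
    """Compare a follow-up cough against a baseline cough.

    A similarity well below 1.0 suggests the cough has changed and may
    merit clinical review; the 0.8 threshold here is arbitrary.
    """
    sim = cosine_similarity(embed(baseline_clip), embed(followup_clip))
    return {"similarity": sim, "flag_for_review": sim < 0.8}

# Example (assuming 'embed' maps a clip to a fixed-length vector):
# result = progression_signal("visit1_cough.wav", "visit2_cough.wav", embed)
```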

Partnership with Salcit Technologies: Improving Accuracy for Tuberculosis Screening

Google has partnered with Salcit Technologies, an AI healthcare startup based in India, to improve the accuracy of HeAR for tuberculosis and lung health screening. Salcit’s own AI model, called Swaasa (meaning “breath” in Sanskrit), is being used in conjunction with HeAR. Salcit’s mobile app lets users submit a 10-second cough sample, and the model identifies disease with a reported accuracy of 94 percent. The auditory test costs only $2.40, significantly cheaper than a spirometry test at an Indian clinic, which runs around $35.
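The workflow the report describes is straightforward: a user records roughly ten seconds of coughing in the app, the sample is analysed, and a screening result comes back at a fraction of the cost of clinic spirometry. A hypothetical client for such a service might look like the sketch below; the endpoint, field names, and response format are invented for illustration and are not Salcit’s actual API.

```python
# Purely illustrative client for a cough-screening service. The URL,
# payload fields, and response keys are hypothetical, not Swaasa's API.
import requests

SCREENING_URL = "https://example.invalid/screen"  # placeholder endpoint

def submit_cough_sample(wav_path):
    """Upload a ~10-second cough recording and return the screening result."""
    with open(wav_path, "rb") as f:
        response = requests.post(
            SCREENING_URL,
            files={"audio": ("cough.wav", f, "audio/wav")},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"tb_risk": 0.12, "lung_health": "normal"}

# result = submit_cough_sample("cough_10s.wav")
```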

Challenges and Future Prospects

While Google’s bioacoustics-based AI model shows promise, challenges remain. One is handling audio samples with excessive background noise, an aspect of the technology that Google and Salcit are actively working to improve.

It is important to note that Google’s bioacoustics AI model is still in development and not yet ready for market. However, the idea of combining AI with sound in the medical field is undeniably innovative and holds great potential for improving healthcare outcomes.

In conclusion, Google’s foray into bioacoustics and its partnership with Salcit Technologies exemplify the potential of AI in the medical field. By leveraging sound signals, this technology could provide early detection and diagnosis of diseases, particularly in areas with limited access to healthcare. As research and development continue, we can expect further advancements in this groundbreaking field, ultimately transforming the way we approach healthcare.