Researchers from Tsinghua University and Shanghai Jiao Tong University School of Medicine have devised an intelligent, wearable, graphene-based artificial throat (AT) that is sensitive to human speech and to vocalization-related motions.
By perceiving mixed modalities, both acoustic signals and mechanical motions, the AT can acquire signals with a low fundamental frequency while remaining resistant to noise. Using an ensemble AI model, it recognized everyday words vaguely spoken by a patient who had undergone a laryngectomy with over 90 percent accuracy. The recognized content was then synthesized into speech and played back through the AT, helping to rehabilitate the patient's vocalization.
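The article does not detail the ensemble model's architecture. As a minimal illustration of the general idea behind ensemble recognition, the sketch below combines several hypothetical word recognizers by majority vote; the model functions, feature format, and vocabulary here are placeholders, not the authors' actual pipeline.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model predictions by majority vote (ties go to the first seen)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical stand-ins for trained recognizers operating on a small
# feature vector; the paper's real models and features are not shown here.
model_a = lambda x: "water" if x[0] > 0.5 else "help"
model_b = lambda x: "water" if x[1] > 0.5 else "help"
model_c = lambda x: "water" if sum(x) > 1.0 else "help"

def recognize(features):
    """Return the word predicted by a majority of the ensemble's members."""
    votes = [m(features) for m in (model_a, model_b, model_c)]
    return majority_vote(votes)

print(recognize([0.8, 0.7]))  # -> water
```

Majority voting is only one simple way to ensemble; weighted voting or stacking are common alternatives, and the actual method used in the study may differ.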
The research team concludes that there is still ample room for optimization in areas such as sound quality, volume, and voice diversity.