I was working on a file the other day when my iPhone popped up with a message: "A sound that may be a ringtone was recognized." Sure enough, a doorbell had just rung.

This is one of a new collection of accessibility notifications for people who are deaf or hard of hearing. Apple has added many such features lately, and Google's Android has followed suit.

In fact, the iPhone listens for quite a few sounds: fire alarms, sirens, smoke detectors, cats and dogs, appliances (though I'm not sure which appliances), car horns, doorbells, door knocks, breaking glass, kettles, running water, crying babies, coughing, and shouting. Oddly, you must turn off "Hey Siri" voice commands while the phone is listening for these other sounds. It's not clear why; if the phone is already listening, why not just add the "Hey Siri" phrase to the list of sounds it listens for?

But what if this sound recognition could be adapted to perform basic business and operational tasks? Think of it as an option to customize your phone to hear sounds specific to your business. Like the classic machine learning example, could the phone hear a sound in a work area and report, "It sounds like the XYZ component of this huge machine is overheating"?
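The core idea can be sketched in a few lines. The snippet below is a toy illustration, not Apple's actual pipeline: it reduces an audio clip to a coarse spectral "fingerprint" and matches it against stored reference sounds with a nearest-neighbor comparison. The sample rate, band count, and reference tones are all invented for the example.

```python
import numpy as np

SAMPLE_RATE = 16_000  # assumed sample rate, Hz

def fingerprint(clip: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Reduce a mono clip to a normalized, coarse log-magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(clip))
    bands = np.array_split(spectrum, n_bands)      # group FFT bins into bands
    energy = np.log1p(np.array([b.mean() for b in bands]))
    return energy / (np.linalg.norm(energy) + 1e-12)

def classify(clip: np.ndarray, references: dict) -> str:
    """Return the label of the closest reference fingerprint."""
    fp = fingerprint(clip)
    return min(references, key=lambda label: np.linalg.norm(fp - references[label]))

# Hypothetical reference sounds: a low machine hum vs. a high-pitched alarm.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
references = {
    "machine hum": fingerprint(np.sin(2 * np.pi * 120 * t)),
    "overheat alarm": fingerprint(np.sin(2 * np.pi * 3000 * t)),
}

unknown = np.sin(2 * np.pi * 2900 * t)  # spectrally close to the alarm tone
print(classify(unknown, references))    # → overheat alarm
```

A production system would of course use a trained classifier over many labeled recordings rather than a single tone per class, but the shape of the problem, turning sound into features and matching them to known categories, is the same.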

Or maybe the feature could do something even more useful, like detecting when a specific person is coming down the hall. "Alert! Ken from Legal is approaching. Hide now." Or you could place the phone near an open window to listen for the sound of your boss's car pulling in.

It could also become an evil management tool, alerting someone if no keyboard clicks are detected for a predetermined period of time. How about a useful identifier? When caller ID is unavailable, could the phone be trained on the voices of all employees so it can announce who is calling? (A diabolical version would identify employees who call an anonymous tip line.)
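Technically, the sinister keyboard-monitoring idea is trivial: measure sound energy in fixed windows and flag a long enough quiet streak. Here is a minimal sketch; the window size, energy threshold, and streak length are all made-up values for illustration.

```python
import numpy as np

SAMPLE_RATE = 16_000    # assumed sample rate, Hz
WINDOW_SECONDS = 1.0    # length of each analysis window
SILENCE_RMS = 0.01      # hypothetical "no typing" energy threshold
MAX_QUIET_WINDOWS = 5   # alert after this many silent windows in a row

def quiet_streak(audio: np.ndarray) -> int:
    """Longest run of consecutive below-threshold windows in the recording."""
    win = int(SAMPLE_RATE * WINDOW_SECONDS)
    longest = current = 0
    for start in range(0, len(audio) - win + 1, win):
        rms = np.sqrt(np.mean(audio[start:start + win] ** 2))
        current = current + 1 if rms < SILENCE_RMS else 0
        longest = max(longest, current)
    return longest

def should_alert(audio: np.ndarray) -> bool:
    return quiet_streak(audio) >= MAX_QUIET_WINDOWS

# Simulated feed: 3 s of "typing" noise followed by 6 s of near-silence.
rng = np.random.default_rng(0)
typing = rng.normal(0, 0.1, 3 * SAMPLE_RATE)
silence = rng.normal(0, 0.001, 6 * SAMPLE_RATE)
print(should_alert(np.concatenate([typing, silence])))  # → True
```

The hard part of such a tool wouldn't be the signal processing; it would be distinguishing typing from every other office noise, which is exactly where a trained sound classifier would come in.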

Take that up a notch and a smartphone could be customized to identify whatever sounds would help your business. We already know that video conferencing systems are always listening, even when you've muted the microphone, but what if your phone could help identify who is actually speaking? Some systems offer this now, but it's not universal and doesn't work consistently even on systems that claim to have it.

Have you ever worked with a fast talker? What if your phone could listen and relay a slower, clearer rendition to your headset? Yes, it could also display a real-time transcript on the screen, but it's hard to stare at a screen constantly without being noticed. Audio prompts in your ear are more discreet.

Then there are real-time "voice analysis" alerts. Imagine talking with your supervisor and hearing in your earpiece, "That's probably a lie." The same idea could help during board presentations: an audience emitting a high volume of sighs or yawns could trigger a warning message: "Wrap it up. You're losing them." Sure, a good speaker should sense this, but a speaker concentrating on a complicated topic may not notice that the audience has drifted away.

As Apple, Google, and others work to perfect some genuinely useful accessibility features, it's clear that much more could be done with these devices.

Copyright © 2022 IDG Communications, Inc.
