I know I take my ability to hear for granted. Every day I go about my business making calls and having in-person conversations without thinking twice about the challenges the deaf or hard of hearing face. Fortunately, some college students at the University of Washington are on top of it, and have developed an algorithm that optimizes video for ASL users over slow US cellular networks, allowing the deaf to make ‘cell phone video calls’.
Somehow it looks at the streaming video, determines which parts are the hands and face (the parts pertinent to a quality conversation in sign language), and increases the video quality in those areas only, leaving the remaining portions as low quality as possible. Ultimately, this allows the video to work over low-bandwidth or slow cellular connections. Video compression tools do something similar, only they dedicate the majority of the bandwidth to moving images. This technology is far more sophisticated, as it has to determine which body parts are which.
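To get a feel for the idea, here's a minimal sketch of region-of-interest encoding in Python. The skin-tone heuristic, block size, and quality values are all hypothetical placeholders I've made up for illustration — not the MobileASL project's actual algorithm — but the shape is the same: find blocks likely to contain hands or face, and spend more bits there than on the background.

```python
# Illustrative region-of-interest (ROI) sketch: blocks that look like
# skin get a low quantization parameter (high quality), everything
# else gets a high QP (low quality). All thresholds are made up.

BLOCK = 4                 # macroblock size in pixels (illustrative)
QP_ROI, QP_BG = 20, 40    # lower QP = higher quality (H.264 convention)

def is_skin(r, g, b):
    # crude skin-tone heuristic, for illustration only
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def block_qps(frame):
    """frame: 2-D list of (r, g, b) tuples. Returns a per-block QP grid."""
    h, w = len(frame), len(frame[0])
    qps = []
    for by in range(0, h, BLOCK):
        row = []
        for bx in range(0, w, BLOCK):
            skin = sum(
                is_skin(*frame[y][x])
                for y in range(by, min(by + BLOCK, h))
                for x in range(bx, min(bx + BLOCK, w))
            )
            # if most pixels in the block look like skin, treat it as ROI
            row.append(QP_ROI if skin > (BLOCK * BLOCK) // 2 else QP_BG)
        qps.append(row)
    return qps

# tiny 8x8 test frame: left half skin-toned, right half dark background
frame = [[(200, 120, 90) if x < 4 else (10, 10, 10) for x in range(8)]
         for y in range(8)]
print(block_qps(frame))   # → [[20, 40], [20, 40]]
```

A real encoder would feed a per-block quality map like this into something like x264, and would track the hands frame to frame rather than re-detecting from scratch — but the bandwidth savings come from exactly this kind of uneven bit allocation.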
Hit the ‘leap’ to see a video explaining the MobileASL project.