Ask a Deafie: Clear Masks

A discussion about transparent medical masks, which are friendly for the Deaf and Hard of Hearing, has been trending in the mainstream media lately. While the reasons are self-evident to the Deaf community, my hearing friends keep asking me, “Why is that a big deal?” Most articles do not answer that question, so I want to fill that gap here.

I am Jesada Pua, and I am Deaf. I was born deaf in San Francisco, CA. Luckily, my parents never expected me to lipread, because it is mostly illogical, and I have always been taught in ASL. I am a POC and a Gallaudet University alum. I work as an ASL expert at SignAll Technologies, contributing to the team’s efforts to develop technology that improves inclusion for the Deaf, both through automated ASL-English translation and interactive ASL learning.

What is obviously wrong with opaque masks?

Non-manual markers (NMM) account for 30% of any given sign language’s structure. Beyond body movements, handshapes, and sign locations, NMM are essential indicators of meaning, often crucial for comprehending the nuances and concepts behind signs. Facial expressions in ASL are comparable to tone of voice in English. For instance, the sentence “just go home” can have many meanings depending on the tone. The tone can be imperative - “JUST GO HOME!” - or weak or indifferent. In ASL, if I sign “just go home” without any facial expressions, my recipient may have no idea what that even means - am I angry? Am I tired? Do I worry they won’t get home on time?

Therefore, an interpreter wearing an opaque mask while interpreting can simply be too difficult to understand - like mouthing to someone while wearing a masked Halloween costume. Some native ASL users may be able to understand even fully masked signers, especially if their parents are also Deaf and they grew up signing. For the majority, though, it’s like trying to listen to someone in a very loud nightclub: you have to filter out all the other noise in order to really hear what the other person is saying.

Opaque masks and lipreading.

The situation is unfortunately even worse for those who did not have access to sign language education and therefore rely heavily on lipreading. Lipreading requires intensive skills usually acquired during childhood. Many deaf people suffered language deprivation because sign language was banned during their early years - it can be forbidden in kindergartens, schools, and other institutions. That is still the case today in many parts of the world! Being raised “orally” means forcing deaf children to lipread and speak, almost like making a blind person paint colorful pictures.

For the sake of assimilation, many hearing educators expect deaf kids to learn how to speak and lipread, which is proven to be much more difficult than the reverse. (I personally think this is a nonsensical approach. It is undoubtedly much easier for hearing people to learn a sign language instead.) This situation is unfortunately rampant in countries like Hungary and those in the Global South, where most deaf kids are still forbidden to use their native sign language in primary schools. Most “oral” deaf students tend to learn their national sign language much later in life, often right after they complete secondary education (high school level). Unlike primary schools, many universities and institutions allow education to be carried out in sign language. Especially during the Covid-19 pandemic, when most of society has essentially shut down, these people are even more disadvantaged and isolated than others.

The function of NMM.

The function of NMM differs from language to language. In English, 18% of letters are silent (unpronounced), meaning that a lipreader cannot actually recognize those silent letters via lipreading. By contrast, Japanese is 100% phonetic, meaning all words are spoken exactly the way they are written. Consequently, lipreading Japanese is much easier than lipreading English. As seen in the picture below, the red text represents the silent letters that are NOT included in the audible pronunciation. Silent letters account for 18% of English text, 25% of French, and 32% of Danish, which makes Danish one of the most difficult languages to lipread. This is balanced by the fact that Denmark is one of the best places for sign language education, although cochlear implants are spreading very fast among deaf children. Well, this is Scandinavia with free universal health care, after all. Pretty ironic.

What is the solution?

Clearly, given the current situation, transparent masks are essential. They are an important tool for inclusion, alongside more sophisticated technologies that have been in development for years or decades. Clear masks are still pretty scarce and difficult to find. You never know, however, when this simple communication tool will save someone’s life in a critical situation.

There are other solutions as well, like learning ASL - whether you are deaf or hearing. SignAll is building a bridge between the deaf and hearing worlds by employing AI technology to bring ASL into a world full of digital wonders. The system requires meticulous attention to cross-cultural, cross-linguistic, and cross-lexical factors and is constantly being fine-tuned, as is true for most AI products at this stage of their development. When I first joined the team two years ago, the technology still had a lot of room for improvement. Now, the product is mostly ready and available on the market! This is impressive considering the collaboration of specialized expertise that was needed: intensive knowledge of linguistics, linguistic computer modeling, databases, and programming!

Despite SignAll’s immense efforts, their software can only be as good as the external hardware allows. Recently, we tested a new generation of RealSense cameras. These cameras seem to be getting the gist and becoming better at recognizing facial expressions. (As I have just described, this is essential for ASL translation accuracy.) To further clarify, here’s an analogy: Siri is a well-known digital assistant for iPhones. If you ask Siri something in a cheerful, indifferent, or frustrated voice, Siri won’t acknowledge your mood but will react only to your factual request: words and syntax. The same issue occurs with Google Translate. Due to a lack of local vernacular, slang, and abbreviations, such translations sometimes miss the point.

I refer to SignAll to show that instant, barrier-free communication between signers and speakers has yet to become commonplace. In the meantime, I still want to encourage you to wear a clear mask whenever you encounter a deaf person. I can already see more and more interpreters wearing clear masks or plastic visors, making it easier for ASL recipients to receive essential information. I do hope that someday we will all have a wider range of solutions to make the Deaf equal members of society.

Three text sheets in Danish, French, and English. Each has a portion in red, showing the amount of information that is omitted when lipreading.

The illustration is taken from this project.

Want to know more about our sign language learning product SignAll Learn Lab? Check out the link

Editor: Zaryana Lisitsa