We translate sign language. Automatically.

Introducing SignAll

SignAll is pioneering the first automated sign language translation solution, based on computer vision and natural language processing (NLP), to enable everyday communication between hearing individuals who use spoken English and deaf or hard of hearing individuals who use ASL. The SignAll prototype can currently translate part of an ASL user’s vocabulary. According to research professionals and leading tech companies, SignAll is the best automated sign language translation system available worldwide. We believe that technology can remove the barriers between hearing and deaf people. We are working toward a world where the Deaf/HoH can communicate with others spontaneously and effectively - anytime, anywhere.

Watch the introduction video

100,000,000 people

Over 100 million people - more than 1% of the world’s population - are unable to hear. Many of them, deaf from birth or early childhood, use sign language as their primary form of communication.

There are several hundred sign languages around the world, and many of these have their own dialects. One of the most common is American Sign Language (ASL). More than 500,000 people use ASL in the US alone, and millions more use it worldwide.

Most hearing people don’t realize that written English is only a second language for people who are born deaf. Although they can handle most matters in writing, there are official situations that call for a sign language interpreter, because they prefer to communicate in their first language: sign language.

Background map: areas where ASL or one of its dialects/derivatives is the national sign language or is used alongside another sign language. (Background image source: Wikipedia)

Sign-to-Text Technology

Visit!

Stop by wherever you need to go, whether it's a bank or a doctor's office!

Sign!

Sign what you need at the front desk, using SignAll!

Watch!

The translation appears on the computer screen!

  • The Prototype

    The prototype uses 3 ordinary web cameras, 1 depth sensor and an average PC.

    The depth sensor is placed in front of the sign language user at chest height, and the cameras are placed around them. This allows the shape and path of the hands and the signer's gestures to be tracked continuously. The PC synchronizes and processes the images taken from the different angles in real time.

    Once the signs have been identified from the images, a natural language processing module transforms them into grammatically correct, fully formed sentences. This enables communication by making sign language understandable to everyone.

    The prototype has a modular construction, which allows components to be replaced as technology improves. This means that SignAll will become faster, smaller, and even more accurate over time.
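    To make the flow described above more concrete, here is a minimal, purely illustrative Python sketch of such a pipeline. SignAll has not published its implementation, so every name and the toy logic below (FrameSet, recognize_glosses, glosses_to_english) are hypothetical stand-ins for the stages just described: synchronized capture from the cameras and depth sensor, sign recognition, and the NLP step that turns recognized signs (glosses) into English sentences.

```python
# Illustrative sketch only - not SignAll's actual code. All names and the toy
# logic are hypothetical stand-ins for the stages described in the text.

from dataclasses import dataclass, field
from typing import List


@dataclass
class FrameSet:
    """One synchronized capture: three web-camera images plus one depth frame."""
    timestamp: float
    rgb_views: List[bytes] = field(default_factory=list)  # images from the 3 web cameras
    depth_frame: bytes = b""                               # image from the depth sensor


def recognize_glosses(framesets: List[FrameSet]) -> List[str]:
    """Stand-in for the computer-vision module. The real system would track the
    hands' shape, movement, orientation and location across the synced views;
    here we simply pretend three signs were detected."""
    return ["STORE", "I", "GO"]


def glosses_to_english(glosses: List[str]) -> str:
    """Stand-in for the NLP module that rewrites a sequence of sign glosses as a
    grammatically correct English sentence."""
    toy_translations = {
        ("STORE", "I", "GO"): "I am going to the store.",
    }
    return toy_translations.get(tuple(glosses), " ".join(glosses).capitalize() + ".")


if __name__ == "__main__":
    captured = [FrameSet(timestamp=i / 30.0) for i in range(90)]  # ~3 seconds at 30 fps
    print(glosses_to_english(recognize_glosses(captured)))        # -> I am going to the store.
```

    In the real system, each stand-in function would be a replaceable module, which is what the paragraph above means by a modular construction.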

    Insight into SignAll's Complexity

    Most people think that sign language is just about different hand movements. Sign language is actually much more complex than that.

    Research has shown that fully automated sign language recognition requires a solution that handles all of these factors together. That is why, according to computer vision experts, the automated interpretation of sign language is one of the biggest challenges in the field.

    Sign language has many different aspects that are combined to convey the intended meaning. These are:

  • Manual Components (Parameters)

    Manual components (also called parameters) are the building blocks of signs. Their significance is so obvious that most (hearing) people think they make up the entirety of sign language. Signs have four manual parameters:
    - Handshape: the arrangement of the fingers to form a specific shape
    - Movement: the characteristic movement of the hands
    - Orientation: the direction the palm faces
    - Location: the place of articulation; this can refer to the position of the hands relative to each other, or the position of markers (e.g. fingertips) relative to other places (the chin, the other wrist, etc.)

    Facial expression

    Facial expressions, also referred to as non-manual components, convey important grammatical meaning too. They are formed from two components: the upper part of the face (eyes and eyebrows: non-manual markers/signals, or NMS) and the lower part (mouth and cheeks: mouth morphemes, or MMs). (A simplified sketch of how these manual and non-manual features might be represented in software appears after this list of aspects.)

ASL registers

Just like with spoken languages, there are different levels and manners in which sign languages can be used to communicate. These are referred to as registers, which can be intimate, consultative, casual, formal, or cold and distant, though the relevance of the last one is still debated. Signing also has different levels of politeness: not every signed phrase is polite or proper in every situation.

Prosody

Prosody is the elusive component of language that subtly shapes the way we say what we say. It incorporates, among other things, the rhythmic and intonational features that allow us to perceive how linguistic units are combined. In ASL, this is realized in a visual-spatial manner, involving head and body movements, eye squints, eyebrow and mouth movements, the speed and formation of signs, pacing, and pausing. Even interpreters often find these impalpable components difficult to grasp.

Use of space

Using space is a powerful tool in sign languages. It can be used to visually represent measures and arrangements of the objects and concepts in a dialogue.
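As promised above, here is a minimal, hypothetical sketch of how the manual parameters and non-manual markers could be encoded in software. The class design and the example values are illustrative only - the field names simply mirror the terms used in the text and are not SignAll's actual data model.

```python
# Hypothetical illustration only - not SignAll's data model. Field names mirror
# the linguistic terms described in the text above.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ManualParameters:
    handshape: str    # arrangement of the fingers (a named handshape)
    movement: str     # characteristic movement of the hands
    orientation: str  # direction the palm faces
    location: str     # place of articulation (chin, chest, other wrist, ...)


@dataclass
class NonManualMarkers:
    upper_face: Optional[str] = None  # eyes/eyebrows (non-manual markers, NMS)
    lower_face: Optional[str] = None  # mouth/cheeks (mouth morphemes, MMs)


@dataclass
class Sign:
    gloss: str  # conventional written label for the sign
    manual: ManualParameters
    non_manual: NonManualMarkers = field(default_factory=NonManualMarkers)


# Rough example: the sign usually glossed as MOTHER (open hand, thumb to the chin).
mother = Sign(
    gloss="MOTHER",
    manual=ManualParameters(
        handshape="open-5",
        movement="thumb taps the chin",
        orientation="palm facing sideways",
        location="chin",
    ),
)
print(mother.manual.location)  # -> chin
```

A full recognizer would of course also need to account for registers, prosody, and the use of space described above; this sketch only covers the two feature groups it names.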

About us

SignAll spun off from the research lab of Dolphio Technologies, one of the most successful technology companies in Central and Eastern Europe. Our unique dream team of fifteen experienced researchers has more than 100 years of combined experience in computer vision and natural language processing. After closing a seed investment round of EUR 1.5 million from an international consortium, SignAll’s enthusiastic team is on its way to delivering a technology that can improve the quality of life of the Deaf community.

  • EBA

    Deloitte 50

    Deloitte 500

  • Singularity University

    CEE LiftOff

    LT-innovate

Partners & Sponsors

The Team

  • Zsolt Robotka

    A mathematician specialized in computer vision, Zsolt Robotka is SignAll’s co-founder and CEO. Mr. Robotka also founded Dolphio Technologies, together with János Rovnyai.

    János Rovnyai

    An economist, passionate entrepreneur, and co-founder of SignAll and Dolphio Technologies, Mr. Rovnyai is the CEO of Dolphio and supports SignAll’s financial management.

    Nóra Szeles

    With an MBA from Purdue University (US) and qualifications as an economist, Ms Szeles manages the international expansion of the business. She has worked in business development for 15 years, and joined the team in 2013.

  • Dávid Retek

    An applied mathematician specialized in data mining. He is a senior data scientist at SignAll.

    Márton Kajtár

    A mathematician with more than 10 years of experience in computer vision, an ASL user, and a fanatical problem solver.

    Sean Gerlis

    A leader in the Deaf community and a subject matter expert on the ADA, Sean provides SignAll with invaluable insights and advice regarding business operations and product management.

  • László Nagy

    An economist, he gained his professional experience as a senior financial manager on several R&D projects. László Nagy is the CFO of SignAll Technologies.

    Milad Vafaeifard

    We are proud to have Milad, our deaf developer, on our team. He constantly motivates us to fulfill our mission.

  • Dawn Croasmun

    Dawn Croasmun has a BA in American Sign Language (ASL) and Deaf Studies, with a minor in Linguistics. She completed her master's degree in Sign Language Education at Gallaudet University in 2016. She teaches ASL at SignAll and is part of the Grammar team.


Dávid Pálházi

SignAll's project manager and Scrum master, he has advanced skills in agile software development and people-centric management.

András Szeicz

Helga Mária Szabó

A linguist and postdoctoral researcher, she has been in love with sign languages and engaged in sign language research for more than 20 years.

Team Member

This could be your photo

Come and join us to work on an exciting, groundbreaking project! If you are interested in R&D or marketing, drop us a line! We also have a great internship program for ASL users - check it out below!

Open positions

  • NLP

    NLP Researcher

    R&D

    Algorithm Developer

    Intern

    Machine Learning Specialist

Latest news


Meet Lisa Hower, SignAll’s new ASL teacher!

We are delighted to introduce SignAll’s newest team member, Lisa Hower from Texas, USA. Her presence gives every team member the opportunity to learn ASL from a qualified, native teacher.

Read more


Gallaudet University partners with SignAll to develop automatic sign language translation software

Washington, D.C. and Budapest, Hungary – March 2017 – Gallaudet University and SignAll have formed a partnership to develop an automatic sign language translation software…

Read more


SignAll, Winner of the Special Award at Think Big

Think Big is an event organized annually since 2013 for entrepreneurs and companies in the CEE region whose businesses are built on new technologies. Naturally, we had to be there. But for SignAll, being there is never enough, so we took home the Special Award…

Read more

Drop us a line

Email

hello@signall.us