Through over a decade of technological advancements in VRS, one thing has always remained the same: the video screen design. We have always seen the Video Interpreter and ourselves. Watch our video to see how we're finally changing that. For more information on the work by the researchers in the video, go to:

Dorsality (Robert Sirvage)- https://youtu.be/EPTrOO6EYCY
Deaf Studies Lab (Peter Hauser)- deafscientists.com
VL2 (Melissa Malzkuhn)- vl2.gallaudet.edu

TRANSCRIPT:

Wayne: Today, we have Video Relay Service (VRS) functioning, but if you take a look back, you’ll find that the technological advancements have been incredibly fast. We started with a big, clunky computer with a webcam on top, plugged in with a USB cable. The Internet was painfully slow. We kept progressing until we changed to a videophone on a TV with much faster Internet. Now, we have phones that we can pick up and use right from our pockets. But here’s the question: while the technology has changed rapidly, has the service aspect of VRS also changed?

Ever since the idea [of VRS] came into the picture, the design of the screen has been the same. The interpreter fills the screen, with a small box of self-view in the corner. That hasn’t changed at all. Convo took a look at this design and wondered how we could change and improve it to make those of us who are signing feel more connected in a way that just feels right.

Robert: Yes, regardless of the quality of the interaction with the interpreter, it’s very important to find a way to connect directly to the hearing caller. Even if the hearing caller doesn’t know how to sign (that’s why I’m using the interpreter), I still need cues from the hearing caller’s contextual information. Seeing their demeanor, their meaning, their facial expressions, the way they empathize, and their environment helps us get on the same wavelength and form a dorsal connection. This dorsal connection is more important than words, in my opinion.

Melissa: Seeing with our eyes is how we process our world. It is how we absorb information, so it’s really interesting how, through the visual modality, we see things and our brains work with plasticity.

Wayne: That dorsal connection is something Convo truly felt inspired by. That was the problem; when you’re in a VRS call, you can see the interpreter, but where’s the hearing person? If we found a way to pull that hearing person in, would that make a difference?

Peter: I’ve done research for over 20 years on how Deaf people’s brains operate in relation to sign language. Oh, it’s very important for Deaf people to see the person they’re talking with at all times. If you don’t see that person, your brain will resort to guesswork, or simply stop working, with fewer neurons lighting up. If you do see the person, however, your brain works more, lights up more neurons, works faster, and is better able to control things like attention, emotion, and more.

Wayne: It’s proven true: when you pull in the hearing person and see them alongside the interpreter, we feel more in sync with the conversation. Can you believe we’ve gone all these years with the same video screen design? Now, we’re changing this.

Melissa: This is remarkable. I saw this design and studied it. The concept of it is just… The infinity design is incredible; it really fits the core of being Deaf-centric.

Robert: This is a big technological leap, a huge advancement.

Peter: It impacts the Deaf brain because we finally have more access to information.

I’ll go and meet you on Monday.