
Q/A Interview with Waverly Labs: Using Tech Toward Crossing Language Divides

“Q/A Interview with Waverly Labs: Using Tech Toward Crossing Language Divides” continued:

E4G: Speaking of events, can you tell us what you showcased in Las Vegas during CES 2022? How was the event experience for Waverly Labs?

Freitas: Yes! We introduced two new products, Subtitles and Audience. 

Subtitles provides translation for hygienic, contact-free in-person interactions. With a two-sided screen providing near real-time translation to the person on each side, Subtitles is a strong fit for customer service interactions.

Audience is a translation solution for lecturers, educators, and conference and event organizers. It allows any presenter to address a room regardless of which languages they or their audience speak. The Audience app lets audience members select and follow translations on their own mobile devices.
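To make the one-presenter, many-languages idea concrete, here is a minimal sketch of a translated-caption fan-out in Python. Everything in it is hypothetical (the CaptionHub class, the stub translator); Waverly Labs has not published how the Audience platform is built.

```python
# Hypothetical sketch: fan out a presenter's captions to audience members,
# each of whom has chosen their own target language. Not Waverly Labs' code.
import asyncio
from collections import defaultdict

class CaptionHub:
    """Routes each caption from the presenter to per-language subscriber queues."""

    def __init__(self, translate):
        self.translate = translate            # callable: (text, target_lang) -> str
        self.subscribers = defaultdict(list)  # target_lang -> list of queues

    def subscribe(self, target_lang: str) -> asyncio.Queue:
        """An audience member picks a language and receives their own queue."""
        queue = asyncio.Queue()
        self.subscribers[target_lang].append(queue)
        return queue

    async def publish(self, caption: str) -> None:
        """Translate the caption once per language, then push to every subscriber."""
        for lang, queues in self.subscribers.items():
            translated = self.translate(caption, lang)
            for queue in queues:
                queue.put_nowait(translated)

async def demo():
    # Stub translator so the example runs without any translation service.
    hub = CaptionHub(translate=lambda text, lang: f"[{lang}] {text}")
    spanish = hub.subscribe("es")
    french = hub.subscribe("fr")
    await hub.publish("Welcome to the keynote.")
    print(await spanish.get())  # [es] Welcome to the keynote.
    print(await french.get())   # [fr] Welcome to the keynote.

asyncio.run(demo())
```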

We also shared some updates to Ambassador Interpreter, including auto-reconnection, which eliminates the need to re-pair your device before each use.

The event went well for us! Obviously, there was a lot less traffic than we expected due to the COVID-19 surge, but we had tons of interest in our new products, especially Subtitles.

E4G: Are there other events this year where Waverly Labs plans to exhibit that you can share with us?

Freitas: We don’t have any other confirmed events right now, but we’re looking into events in the education and hospitality industries, as well as maybe some tech events in Europe this summer and fall.

E4G: What developments can we expect from Waverly Labs in 2022 that you can talk about?

Freitas: Our focus in 2022 is to scale Audience and Subtitles, so that we can have a holistic approach to solving in-person interpretation for our customers. That also includes iterating on Ambassador Interpreter with the next generation, which will bring further advances in speech accuracy, connectivity, and other features.

E4G: Are you working with any venues, conferences, or other event organizers at this point, or do you expect to in the near future?

Freitas: Yes, we’re currently working on a number of larger-scale opportunities that we’re very excited about, but we can’t quite share yet.

E4G: Is the upcoming Audience app designed to pair with the Ambassador or Pilot, or is it wholly separate and unrelated for folks who don’t use either hardware option?

Freitas: It’s a wholly separate product and doesn’t require any additional hardware.


E4G: Are microphone, connectivity, and data-processing technologies fully up to speed with the technical needs of an optimal translation experience?

Freitas: 100%. We believe that’s what sets our products apart. One of the reasons we designed an over-the-ear model was to maximize the surface area, which allows for a microphone array engineered to optimize voice clarity. We also designed a multi-streaming platform that lets us stream translated conversations to people all speaking different languages. And lastly, we employ several layers of compression to ensure fast translations, while using an amalgamated system to provide the highest translation accuracy.
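Freitas doesn’t detail the “amalgamated system,” but one common way to combine translation engines is to query several and prefer the output they agree on. The sketch below illustrates that general idea with invented stub engines; it is not Waverly Labs’ implementation.

```python
# Illustrative ensemble ("amalgamated") translation: ask several engines and
# return the majority answer. The engines here are stubs invented for the demo.
from collections import Counter

def amalgamated_translate(text: str, target_lang: str, engines) -> str:
    """Return the candidate translation that the most engines agree on.

    `engines` is a list of callables (text, target_lang) -> str, e.g. thin
    wrappers around different translation backends.
    """
    candidates = [engine(text, target_lang) for engine in engines]
    # Majority vote over normalized outputs; ties resolve to the earliest result.
    votes = Counter(c.strip().lower() for c in candidates)
    best_normalized, _ = votes.most_common(1)[0]
    for candidate in candidates:
        if candidate.strip().lower() == best_normalized:
            return candidate
    return candidates[0]  # unreachable fallback, kept for safety

# Stub engines standing in for real translation services.
engine_a = lambda text, lang: "Hola, mundo"
engine_b = lambda text, lang: "Hola, mundo"
engine_c = lambda text, lang: "Hola mundo"

print(amalgamated_translate("Hello, world", "es", [engine_a, engine_b, engine_c]))
# -> Hola, mundo
```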

E4G: Making sure translation is nearly simultaneous with the original content or speaker isn’t always easy for automated tools, but it is a preferred outcome. How do your tools handle latency, and is it an aspect you are still working on?

Freitas: We’re constantly working on improving the speed of our translations, which are currently as fast as 1.5 seconds. With that said, we have prototyped a true real-time translation algorithm that we hope to release later this year. It’s extremely exciting: solving what is known as “simultaneous interpretation” would be a major advance toward the real-time translation we see in science fiction.
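For readers curious how near-simultaneous translation can work without waiting for a speaker to finish, one common heuristic in streaming machine translation is to re-translate the growing partial transcript and emit only the prefix that stays stable between updates. The sketch below illustrates that heuristic with a stubbed translator; it is not Waverly Labs’ algorithm.

```python
# Illustrative streaming translation: re-translate each partial transcript and
# emit only the target-language prefix that is stable across updates.
def stable_prefix(prev_words, curr_words):
    """Words that agree between two successive translations of partial input."""
    agreed = []
    for a, b in zip(prev_words, curr_words):
        if a != b:
            break
        agreed.append(a)
    return agreed

def simulate_stream(partial_transcripts, translate):
    emitted = 0  # number of target words already shown to the listener
    prev = []
    for transcript in partial_transcripts:
        curr = translate(transcript).split()
        stable = stable_prefix(prev, curr)
        if len(stable) > emitted:
            print("emit:", " ".join(stable[emitted:]))
            emitted = len(stable)
        prev = curr
    # Flush whatever remains once the utterance is complete.
    if len(prev) > emitted:
        print("emit:", " ".join(prev[emitted:]))

# Stub translator: pretend these English partials map to these Spanish partials.
outputs = {
    "the meeting": "la reunión",
    "the meeting will start": "la reunión comenzará",
    "the meeting will start at noon": "la reunión comenzará al mediodía",
}
simulate_stream(list(outputs), lambda s: outputs[s])
# emit: la reunión
# emit: comenzará
# emit: al mediodía
```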

[Image: Group of people networking with a skyline behind them.]
Will networking at events and in meetings across language differences eventually be easier? Some companies think, with the help of technology, we’re heading in that direction. (Photo by Charles Forerunner on StockSnap.)

E4G: A year from now (early 2023), how would you describe a multinational or multilingual event experience, whether online or in-person, if the venues, audience, speakers, and sponsors fully implemented your translation tools?

Freitas: Seamless and accessible. Checking in and picking up your badge would be simple with Subtitles, as would getting answers to your questions at the information desk; no need to try to get your point across with hand signals. Your business meetings would go smoothly with the Ambassador Interpreter, without the need for a prohibitively expensive human interpreter, plus you’d have meeting notes through the saved transcript. Finally, you could take full advantage of the educational elements, like keynotes and other presentations, with Audience.

All of these tools are great for in-person events, but a year from now we hope to be powering the online experience too, with our novel approach to multi-language streaming and real-time translations.

E4G: Lastly, looking further ahead, what do you think the future of language translation might look like in a few years?

Freitas: Our roadmap for the next 5–7 years of innovation is to: 

  1. Develop offline translation. Currently, our entire solution is cloud-based, since the earbuds must maintain connectivity to our servers. It would be great to have our translation work in remote locations, so we are developing our translation device to store models on AI chips, removing the dependency on a cloud-based system. We have an early offline prototype, and eventually these new chipsets running our offline models will open new opportunities for professionals and travelers in areas with no internet connectivity. (A rough sketch of the on-device idea appears after this list.)
  2. Achieve true real-time translation, or what we know as simultaneous translation. Our target is a translation device that interprets right along as you speak. Machines have not figured this out yet, but our researchers are working on developing applications that will provide a solution for simultaneous translation. Our target is to complete a beta version by the end of 2022.
  3. Ultimately, we’d like to tackle speaker characteristics. That’s when we’ll feel we’ve truly achieved Star Trek communicator levels of machine translation. We aim to develop translation devices that will simultaneously translate spoken language, sound like the speaker’s voice, and match the speaker’s emotions and excitement.
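As a rough illustration of the offline goal in item 1, the sketch below runs an open-source translation model entirely on-device with the Hugging Face transformers library: once the model weights have been downloaded and cached, translation needs no network call. This is our illustration of the concept, not Waverly Labs’ chip-based implementation.

```python
# Offline translation illustration: a Marian English->Spanish model that runs
# locally via Hugging Face transformers (requires torch and sentencepiece).
# After the first download, the weights are cached and no network is needed.
from transformers import MarianMTModel, MarianTokenizer

MODEL_NAME = "Helsinki-NLP/opus-mt-en-es"

tokenizer = MarianTokenizer.from_pretrained(MODEL_NAME)
model = MarianMTModel.from_pretrained(MODEL_NAME)

def translate_offline(text: str) -> str:
    """Translate a sentence using only the locally cached model."""
    batch = tokenizer([text], return_tensors="pt")
    generated = model.generate(**batch)
    return tokenizer.decode(generated[0], skip_special_tokens=True)

print(translate_offline("Where is the nearest train station?"))
```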

We’d like to thank Waverly Labs and Brisa Freitas for sharing their insights into the company’s language-translation solutions for events and other businesses.
