Meta’s next Ray-Ban smart glasses
Users will reportedly be able to livestream video to viewers who can talk back to them using Meta’s (formerly Facebook) upcoming Ray-Ban Stories smart glasses.
The second generation of Ray-Ban Stories would let users stream video directly to Facebook and Instagram and allow viewers to whisper in their ears, according to internal documents obtained by tech writer Janko Roettgers, The Verge reports. “With the device, users will be able to broadcast directly to Facebook and Instagram. At this time, there is no information regarding support for other services,” Roettgers wrote in a post on Lowpass. “Live streamers will be able to directly communicate with their audience, with the glasses relaying comments via audio over the built-in headphones,” he continued.
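Meta has not published how the comment relay works; as a rough illustration of the flow the report describes, here is a minimal Python sketch in which incoming viewer comments are converted to speech and played through the glasses’ speakers. Every function and name here is a hypothetical stub, not a real Meta API.

```python
# Hypothetical sketch of the comment-relay flow described in the report:
# viewer comments arrive during a livestream, get synthesized to speech,
# and are played over the built-in headphones. All stubs are illustrative.
import queue

comments: "queue.Queue[str]" = queue.Queue()

def text_to_speech(text: str) -> bytes:
    """Stub: a real device would call an on-device or cloud TTS engine."""
    return text.encode("utf-8")

def play_on_speakers(audio: bytes) -> None:
    """Stub: a real device would hand the audio buffer to the speaker driver."""
    print(f"[speakers] playing {len(audio)} bytes")

def relay_comments() -> None:
    """Drain pending viewer comments and read each one aloud in order."""
    while not comments.empty():
        play_on_speakers(text_to_speech(comments.get_nowait()))

# Example: two viewers comment during a stream.
comments.put("Great view!")
comments.put("Turn left at the corner.")
relay_comments()
```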
Current-generation Ray-Ban Stories can take pictures and short videos but cannot broadcast live. The next iteration of the smart glasses would have “better cameras” and “improved battery life,” the report claims. The smart glasses may also gain additional audio features and adaptive volume. “Meta is attempting to integrate adaptive volume control into its smart glasses in order to enhance the overall audio experience. With this capability, the glasses will automatically detect the level of background noise and turn up the playback volume in noisy areas,” according to Roettgers.
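The report does not detail how the adaptive volume feature is implemented. As a minimal sketch of the idea it describes, the Python snippet below estimates ambient loudness from microphone samples and scales playback volume up in noisy environments; the function names, thresholds, and dB mapping are all assumptions.

```python
# Illustrative sketch of adaptive volume control: measure ambient noise,
# then raise playback volume in noisy areas. Thresholds are assumed values.
import math

def ambient_noise_db(samples: list[float]) -> float:
    """Estimate ambient loudness (dBFS) from raw microphone samples."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9))  # clamp to avoid log(0) in silence

def adaptive_volume(noise_db: float, base_volume: float = 0.5) -> float:
    """Map ambient noise to a playback volume in [0.0, 1.0].

    Below QUIET_DB the base volume is kept; above LOUD_DB volume is maxed;
    in between it is interpolated linearly.
    """
    QUIET_DB, LOUD_DB = -50.0, -20.0  # assumed thresholds, not Meta's values
    if noise_db <= QUIET_DB:
        return base_volume
    if noise_db >= LOUD_DB:
        return 1.0
    frac = (noise_db - QUIET_DB) / (LOUD_DB - QUIET_DB)
    return base_volume + frac * (1.0 - base_volume)

# Example: a noisy street at -30 dBFS ambient bumps the volume to ~0.83.
print(adaptive_volume(-30.0))
```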
The latest model of Ray-Ban Stories has built-in stereo speakers, a Bluetooth headset function, and deeper integration with Spotify. Users can skip tracks and perform other actions by tapping the frame. According to Roettgers, Meta is attempting to bring comparable controls to other music streaming services, although it is unclear which one will come next.

In the meantime, Meta has released a new all-in-one, multilingual, multimodal AI translation and transcription model supporting up to 100 languages, depending on the task. The single model, known as “SeamlessM4T,” can translate from speech to text, speech to speech, text to speech, and text to text.
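For readers who want to try SeamlessM4T, Meta published checkpoints that were later ported to the Hugging Face transformers library. Below is a minimal sketch of the text-to-text translation task under that assumption; the checkpoint name, language codes, and API details reflect the transformers port and are not drawn from the article itself.

```python
# Sketch of text-to-text translation with SeamlessM4T via Hugging Face
# transformers (assumes a transformers version that ships the SeamlessM4T
# port and the "facebook/hf-seamless-m4t-medium" checkpoint).
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# Encode English text and ask the model for French text output
# (generate_speech=False selects the text-to-text task).
inputs = processor(text="Hello, how are you?", src_lang="eng",
                   return_tensors="pt")
output_tokens = model.generate(**inputs, tgt_lang="fra", generate_speech=False)

print(processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True))
```

The same model object also handles the speech-to-speech, speech-to-text, and text-to-speech tasks the article mentions, selected by the input modality and the generate_speech flag.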