For those with visual impairments, Apple is adding a new Point and Speak feature to the detection mode in Magnifier. This will use an iPhone or iPad's camera, LiDAR scanner and on-device machine learning to understand where a person has positioned their finger and scan the target area for words, before reading them out for the user.
Often, it is tricky to figure out how much detail to include in an image description, since you want to provide enough to be helpful but not so much that you overwhelm the user. "What's the right amount of detail to give to our users in Lookout?" Andersson said. "You never actually know what they want." Andersson added that AI can help determine the context of why someone is asking for a description or more information, and deliver the appropriate info.
Google is also expanding Live Captions to work in French, Italian and German later this year, as well as bringing the wheelchair-friendly labels for places in Maps to more people around the world. Plenty more companies had news to share this week, including Adobe, which is piloting a feature that uses AI to automatically generate tags for PDFs, which would make them friendlier for screen readers. This uses Adobe's Sensei AI, and will also indicate the correct reading order.