Apple Previews a Ton of New Accessibility Features Ahead of WWDC
Apple on Tuesday previewed a handful of new features for the iPhone, iPad and Mac designed to boost cognitive, vision, hearing and mobility accessibility, ahead of Global Accessibility Awareness Day. The features are slated to roll out later this year. This comes as Apple gears up for its Worldwide Developers Conference, which kicks off June 5.
One feature, called Live Speech, is geared toward users who are nonspeaking or who have diverse speech patterns or disabilities. Live Speech lets someone type what they want to say and have it spoken aloud. The feature can be used for in-person conversations as well as over the phone and on FaceTime. It works on iPhone, iPad and Mac, and works with built-in system voices, including Siri's. You could type, “Nice to meet you, I’m …” to introduce yourself, for example, and you can also save frequently used phrases such as, “Can I please get a black coffee?”
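Under the hood, this kind of type-to-speak flow rests on on-device speech synthesis. As a rough illustration only, and not Apple's actual Live Speech code, a developer could approximate the basic behavior with the public AVSpeechSynthesizer API in Swift:

    import AVFoundation

    // Illustrative sketch: speak typed text with a built-in system voice.
    // This approximates the behavior Live Speech describes; it is not
    // Apple's implementation of the feature.
    let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String, language: String = "en-US") {
        let utterance = AVSpeechUtterance(string: text)
        // Use a built-in voice for the requested language, if one is installed.
        utterance.voice = AVSpeechSynthesisVoice(language: language)
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }

    // A saved favorite phrase, spoken on demand.
    speak("Can I please get a black coffee?")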
Taking that feature a step further is Personal Voice, which lets users at risk of losing their ability to speak create a synthesized voice that sounds like their own and have it read their typed phrases aloud. Personal Voice uses on-device machine learning. To train the feature, a person spends about 15 minutes reading a series of text prompts aloud on an iPhone or iPad.
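Apple didn't outline a developer-facing API for Personal Voice in the announcement, so the following is only a hedged sketch. It assumes a user-created voice would appear in the same AVSpeechSynthesisVoice catalog as built-in voices and sit behind an authorization prompt; the authorization call and the .isPersonalVoice trait shown here are assumptions made for illustration, not details confirmed by the announcement:

    import AVFoundation

    // Hedged sketch: speak typed text with a user's Personal Voice, assuming
    // personal voices are listed alongside built-in voices and gated behind
    // an authorization request. Not Apple's implementation of the feature.
    let synthesizer = AVSpeechSynthesizer()

    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }

        // Pick a voice the user trained on-device, if one exists.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }

        let utterance = AVSpeechUtterance(string: "Hello, this is my own voice.")
        utterance.voice = personalVoice ?? AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }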
The iPhone’s Magnifier app is also getting a new feature called Point and Speak, which allows users with vision disabilities to point to objects with text labels and have their device read that text aloud. For example, someone could use this to identify buttons on a microwave. Point and Speak uses your phone’s camera, lidar scanner and on-device machine learning to find and recognize text as you move your finger across different objects. Point and Speak can be used alongside other Magnifier features like People Detection, Door Detection and Image Descriptions, which help blind and low-vision users navigate and identify their surroundings.
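The full camera-plus-lidar pipeline behind Point and Speak isn't something third-party code reproduces directly, but the text-recognition half can be approximated with Apple's Vision framework, which runs on-device. A minimal sketch, assuming a camera frame has already been captured and the finger-tracking step is handled elsewhere:

    import Vision
    import CoreGraphics

    // Illustrative sketch: recognize text in an image on-device with Vision,
    // returning the strings so they can be handed to speech output. This is
    // not Apple's Point and Speak implementation.
    func recognizeText(in image: CGImage, completion: @escaping ([String]) -> Void) {
        let request = VNRecognizeTextRequest { request, _ in
            let observations = request.results as? [VNRecognizedTextObservation] ?? []
            // Keep the top candidate string for each detected text region.
            completion(observations.compactMap { $0.topCandidates(1).first?.string })
        }
        request.recognitionLevel = .accurate

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try? handler.perform([request])
    }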
Assistive Access is designed for people with cognitive disabilities and offers a more focused interface to lighten cognitive load. This includes large text labels and high-contrast buttons on the iPhone’s home screen and across Calls, Messages, Camera, Photos and Music. The experience can be tailored to different preferences. For instance, someone who prefers visual communication can use an emoji-only keyboard in Messages or record a video message to send.
“These groundbreaking features were designed with feedback from members of disability communities every step of the way, to support a diverse set of users and help people connect in new ways,” Sarah Herrlinger, Apple’s senior director of Global Accessibility Policy and Initiatives, said in a statement.
Other accessibility updates coming this year include the ability to pair Made for iPhone hearing devices directly with a Mac and to more easily adjust text size across Mac apps like Finder, Messages, Mail, Calendar and Notes. Voice Control is also gaining phonetic suggestions, so users who type with their voice can pick the correct word among similar-sounding options such as “do,” “due” and “dew.”
On Thursday, Apple is also launching SignTime, which lets Apple Store customers communicate with staff through sign language interpreters, in Germany, Italy, Spain and South Korea. The service is already available in the US, UK, Canada, France, Australia and Japan.
Apple is one of many companies boosting its accessibility offerings. Google, for example, has rolled out features such as Lookout, which helps blind and low-vision users identify objects and read documents using their phone’s camera. Last year, Google added a feature called Guided Frame to its Pixel phones, which uses audio and haptic cues to guide users in framing their selfies.
Source: CNET