iOS 17 "Personal Voice" allows typed text to be read out with your voice
Apple has officially announced iOS 17. The update brings major upgrades to the communication apps, makes AirDrop sharing easier, improves text input, and adds new features such as the Journal app and StandBy mode. Alongside these, it brings a range of new accessibility features, and the latest iOS 17 beta introduces one of them: a feature called "Personal Voice."
This feature allows users to create a synthesized voice that sounds like their own and use it to speak with family and friends, even when what they want to say is typed. It is aimed at users whose ability to speak may be affected over time, including people with a recent diagnosis of ALS (amyotrophic lateral sclerosis) as well as other conditions that can progressively impact speaking ability.
Developers can already start testing the Personal Voice feature. It is available in the iOS 17 beta and can be found under Settings > Accessibility > Personal Voice.
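For developers who want to experiment, the sketch below shows how an app might check whether it has access to the user's Personal Voice. It assumes the AVSpeechSynthesis additions Apple announced for iOS 17 (the requestPersonalVoiceAuthorization call and the .isPersonalVoice voice trait); the `findPersonalVoice` helper name is our own, and behavior in the beta may still change.

```swift
import AVFoundation

// Sketch: ask for access to the user's Personal Voice and locate it among
// the installed voices. Requires iOS 17; beta behavior may differ.
func findPersonalVoice(completion: @escaping (AVSpeechSynthesisVoice?) -> Void) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else {
            // The user declined, or the device does not support Personal Voice.
            completion(nil)
            return
        }
        // Personal Voices appear alongside the system voices, marked with a trait.
        let voice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        completion(voice)
    }
}
```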
Creating a Personal Voice takes around an hour. Recording requires a quiet place with little to no background noise; Apple instructs users to speak naturally at a consistent volume while holding the iPhone approximately six inches from the face. If there is too much background noise in your location, the iPhone will warn you to find a quieter place to record. Personal Voice then asks you to read a series of sentences aloud, after which your iPhone generates and stores your Personal Voice.
Those with an iPhone, iPad, or newer Mac will be able to create a Personal Voice by reading a random set of text prompts aloud until 15 minutes of audio has been recorded on the device. Apple says the feature will be available in English only at launch, and it uses on-device machine learning to keep recordings private and secure.
Once you have created your Personal Voice, you can use it through text-to-speech. The feature integrates with Live Speech, so users can type what they want to say and have it spoken aloud in their own voice during phone and FaceTime calls and in in-person conversations.
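As a rough illustration of that text-to-speech flow, the snippet below speaks typed text with the Personal Voice when one is available. It reuses the hypothetical `findPersonalVoice` helper sketched earlier and the standard AVSpeechUtterance API; it is a sketch of how third-party apps might adopt the voice, not Apple's own Live Speech implementation.

```swift
import AVFoundation

// Keep a strong reference to the synthesizer so speech is not cut off.
let synthesizer = AVSpeechSynthesizer()

// Sketch: speak typed text aloud, preferring the user's Personal Voice.
func speakTypedText(_ text: String) {
    findPersonalVoice { voice in
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = voice   // falls back to the default system voice if nil
        synthesizer.speak(utterance)
    }
}
```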
The new Personal Voice feature has several upsides. Let us look at some of the potential benefits for users with speech issues:
Personal Voice lets users create a synthesized voice that sounds like them and use it to chat with family and friends. For users with speech issues, this can make communication more effective: they can keep talking freely even as their condition affects their ability to speak.
Voice assistants have several benefits that can improve digital accessibility for people with disabilities. Because Personal Voice works through text-to-speech, it can help users with speech impairments become more independent in their daily lives.
According to a 2014 study, the top three barriers to independent living for individuals with disabilities are personal safety, assistance with household skills, and assistance with medication. Used alongside voice assistant technology, Personal Voice can help with household tasks and provide a safer environment for users. For example, a user who relies on a wheelchair and receives a grocery delivery every morning would otherwise have to get out of bed, transfer to the chair, navigate to the light switch, turn on the lights, reach the front door, and unlock it before receiving the delivery; voice-controlled lights and locks can remove most of those steps.
Speech recognition systems can help disabled people become more active by simplifying everyday tasks, making them more efficient in their activities. Personal Voice can be used to simplify tasks such as sending messages, making phone calls, and setting reminders.
Personal Voice can help users with speech issues improve their quality of life by offering them a way to communicate more effectively and become more independent. It can also help users feel more connected to family and friends, which can have a positive impact on their mental health and well-being.
While Personal Voice is a great feature that can help users with speech issues to talk more effectively, there are some potential drawbacks and limitations to consider:
One of the potential drawbacks of using Personal Voice is related to accuracy. The synthesized voice may not be able to accurately convey the user's emotions or tone, which can lead to misunderstandings.
At launch, Personal Voice will only be available in English, which is a limitation for users who speak other languages. This may not remain a limit for long, though, as Apple will most likely add other languages, especially if the feature proves successful.
Personal Voice requires users to read a series of phrases aloud to create and store their voice. This process takes around an hour and may require some practice to get right.
While Personal Voice is designed to be an accessibility feature, it may not be accessible to all users with speech issues. For example, users who are unable to read or who have difficulty speaking already may not be able to use this feature.
Personal Voice uses on-device machine learning to keep users’ info private and secure. However, some users may have concerns about the privacy and security of their voice data.
Personal Voice is a promising feature that can help users with speech issues communicate more effectively. It is easy to use, and a voice can be created in about an hour, making it accessible to many people. With this feature, Apple is once again showing its commitment to accessibility and innovation. There are still issues to address, however; for now, the language barrier remains, as the feature only supports English.