iOS devices have embedded voice synthesizers for Accessibility's VoiceOver feature. Is there a way to use these synthesizers programmatically to generate speech from text?
My problem is: I'm working on a simple app for kids to learn colors. Rather than recording the names of the colors in each language I want to support and storing them as audio files, I'd rather generate the speech at runtime with some text-to-speech feature.
Thanks
[EDIT: this question was asked pre-iOS 7, so you should really consider the top-voted answer and ignore the older ones, unless you're a software archaeologist]
Starting with iOS 7, Apple provides the AVSpeechSynthesizer API in AVFoundation:
Objective-C
#import <AVFoundation/AVFoundation.h>
…
// Build an utterance for the text to speak, then pass it to a synthesizer.
AVSpeechUtterance *utterance =
    [AVSpeechUtterance speechUtteranceWithString:@"Hello World!"];
AVSpeechSynthesizer *synth = [[AVSpeechSynthesizer alloc] init];
[synth speakUtterance:utterance];
Swift
import AVFoundation
…
// Build an utterance for the text to speak, then pass it to a synthesizer.
let utterance = AVSpeechUtterance(string: "Hello World!")
let synth = AVSpeechSynthesizer()
synth.speak(utterance) // speakUtterance(_:) in Swift 2 and earlier
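Since the original use case is announcing color names in several languages, here is a minimal sketch (not part of the original answer) of how a voice can be picked per language with AVSpeechSynthesisVoice(language:). The ColorSpeaker class name and the language codes are illustrative; keeping the synthesizer as a stored property avoids it being deallocated before it finishes speaking.

import AVFoundation

final class ColorSpeaker {
    // Hold a strong reference so the synthesizer isn't deallocated mid-speech.
    private let synth = AVSpeechSynthesizer()

    func speak(_ text: String, languageCode: String) {
        let utterance = AVSpeechUtterance(string: text)
        // AVSpeechSynthesisVoice(language:) returns nil for unsupported codes;
        // in that case the system default voice is used.
        utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synth.speak(utterance)
    }
}

// Usage: speak the same color name in different languages at runtime.
let speaker = ColorSpeaker()
speaker.speak("Red", languageCode: "en-US")
speaker.speak("Rojo", languageCode: "es-ES")
speaker.speak("Rouge", languageCode: "fr-FR")

AVSpeechUtteranceDefaultSpeechRate keeps the default speaking speed; adjust the utterance's rate and pitchMultiplier if the default sounds too fast for young listeners.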