Do we want computer voices to have personality?

It would be a rare day that you don't come into contact with a computerised voice. Call for support from any business (not just IT-related, but anything from cooking utensils to gardening supply shops) and you'll hit an electronic voice prompting you to press the right keys so that, if you're exceptionally lucky, you get through to a human operator.
You won't necessarily get any more satisfaction out of that human operator, but then there are only so many things that computers can do, and it's perfectly possible to grow irritated with computer and human voices in equal measure.
Not that these businesses actively seek to irritate their customers; that would be a rather careless way to throw them into the arms of their competitors, after all. Still, after many years of development, computer voices aren't really a great deal better than those that could be squawked out of the sound chip of an Amiga 500 nearly 30 years ago.
Even so, there are efforts to make those same computer voices more palatable to human ears. It's been discovered, for example, that Apple's Siri is to get a boost in this area in the next revision of the iOS operating system that runs on iPads, iPhones and iPod Touch devices. Specifically, the sound files that Siri uses for its speech are set to be more localised, with an eye towards a more neutral "Australian" accent.
It’s an interesting step for Apple given the relatively tiny size of the local market, but then we’ve often been the beneficiaries when it comes to Apple products.
The files were found in the beta version of iOS 7.1, which you can't actually install unless you're a registered Apple developer (or don't mind lurking around some of the Internet's shadier corners looking for files). In any case, as the beta label implies, this isn't entirely finished and polished software, and you run the risk of more than the usual number of system crashes simply by running it.
I'm not entirely convinced that it's a necessary step; the sound files that have been dug out so far (a natural-sounding Siri) only have the slightest of ocker twangs, and it's not something that will fundamentally change the way Siri works.
Given that sound files need to travel to and from Apple's servers for processing in order for Siri to function at all, it's somewhat like cheating in any case, but then genuine real-time, on-device language processing is phenomenally complex. Offloading it to larger systems makes sense in this context, at least for now.
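
For the technically curious, that round trip follows a familiar pattern: the device records a short clip, ships it off to a remote service, and gets recognised text (and eventually a spoken reply) back. The Python sketch below is a hypothetical illustration of that offloading pattern only; the endpoint URL, field names and response format are assumptions made for the sake of the example, not Apple's actual Siri interface.

```python
# Minimal sketch of the "record locally, recognise remotely" pattern.
# The service URL and response fields are illustrative assumptions.
import requests

SPEECH_SERVICE_URL = "https://speech.example.com/recognise"  # hypothetical endpoint


def transcribe(audio_path: str) -> str:
    """Send a recorded audio clip to a remote speech service and return the text."""
    with open(audio_path, "rb") as audio_file:
        response = requests.post(
            SPEECH_SERVICE_URL,
            files={"audio": audio_file},  # the raw audio leaves the device...
            timeout=10,
        )
    response.raise_for_status()
    return response.json()["transcript"]  # ...and only the recognised text comes back


if __name__ == "__main__":
    print(transcribe("question_for_siri.wav"))
```

The heavy lifting (acoustic modelling, language understanding) happens on the server side in this arrangement, which is exactly why the device itself can stay relatively modest.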