I'm in my car with my boyfriend driving. We're running late for dinner at my friend Maria Teresa's house. I frantically look through my contacts for her name to warn her of our delay. My boyfriend suggests I ask Siri to call her. I say: "I would, but you know my phone is in English. It won't recognise the name." He looks over dumbfounded and says: "What do you mean? I use it all the time."
I proceed to explain at length how I have to pronounce the name with an English, no, better yet, American accent for my American-sounding Siri counterpart to have the slightest idea of who Maria Teresa is.
And then, just then, an idea hits me like a moving train. Here, before us, or better yet, in our pockets and bags, we have technology that seemed impossible, downright scientifically fantastic just a few decades ago. We have a technologically advanced programmed entity to help us with little tasks like looking through our contacts list and dialling a number. Yet somehow, our phonetically predisposed brains have managed to narrow the scope of such an ingenious knack by applying the same phonological algorithms that have always represented the hurdles we must jump in order to master a new language.
Now think about that marvellous theory according to which we don't actually read whole words, but rather their beginnings and ends, filling the middle with what we think should be there. We work on hunches and the marvellous machinations of our brain.
In the same way, I believe, albeit having no proof (or maybe I read it somewhere and it got stuck in my mind), that our brains get wired up to expect and produce certain sounds, our sounds, the sounds we start hearing in utero and continue to hear during infancy. We are literally phonologically impaired from birth. Couldn't this possible, programmed predisposition then manifest itself in our capacity or incapacity to perceive sounds differently from our expectations and prior knowledge? As a teacher I realise I've fine-tuned my capacity to understand even the most phonetically butchered of words, and yet sometimes I still get blindsided by a word I struggle to understand, only to be illuminated when the student spells or writes it.
So Siri, a piece of computational intelligence infinitely more capable than we are at storing, selecting and using phonetic data, needs to hear the right sounds according to her assigned accent. Just like I would, or you would. Siri needs to hear you say your words, the right way.
And so I wonder, was it a technological limitation that didn't allow the wonderful men and women at Apple Inc to equip Siri with a universal capacity to distinguish sounds, or was it something they didn't even think about? Because despite the eeriest of memes and gifs of Siri conversations on the web, the fact that Siri can be puzzled by mispronunciation is by far the most bewildering of attributes to give our friendly pocket helper.
The Sound Eater