Yet another article on how computer technology will save us all from the tyranny of having humans in charge of the task of human communication. A BusinessWeek piece titled “White House Challenges Translation Industry to Innovate” tells the tale:

Companies have combined the power of humans and computers to simultaneously double the speed of translation and nearly halve its cost. Where each translator once converted 2,500 words a day at a cost of some 25¢ per word, they can now offer 5,000 words a day at around 12¢-15¢ a word.

Marvelous. This translator makes roughly the same amount of money per day, according to this math, but turns out twice as much text in the target language. Efficiency up, global understanding up. But there are problems here. A few quick, unorganized thoughts:
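For what it's worth, the back-of-the-envelope arithmetic behind that "same money per day" reading checks out, at least at the top of the quoted rate range (and assuming, as the article doesn't actually say, that the per-word rate all goes to the translator):

```python
# Rough check of the BusinessWeek numbers.
old_words, old_rate = 2_500, 0.25   # words/day, dollars/word, pre-MT
new_words = 5_000                   # doubled daily throughput
new_rates = (0.12, 0.15)            # "12 cents - 15 cents a word"

old_daily = old_words * old_rate
new_daily = tuple(new_words * r for r in new_rates)

print(f"Before: ${old_daily:.2f}/day")                        # $625.00/day
print(f"After:  ${new_daily[0]:.2f}-${new_daily[1]:.2f}/day")  # $600.00-$750.00/day
```

At 12¢ the translator actually comes out slightly behind the old $625; only at the 15¢ end does the doubled volume pay off.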

Problem 1: We aren’t worrying about the fact that this means only half as much time can be spent on proper rereading by the translator and editing by a fresh pair of eyes. And the hybrid approach, with MT generating the first draft and a human polishing the turds it outputs, puts an unhappy person in the mix now: at least I don’t think many people are happy about wrestling with clumsily translated text. I can’t stand it when I’m dealing with stuff a human put together, and even a clumsy human translator is leagues ahead of a machine, and will remain there for the foreseeable future.

Problem 2: The editors who deal with machine output are, ideally, bilingual and capable of doing the translation themselves. If something looks truly odd in your target text, going back to the source text to figure out what’s going on is the only way to set things straight. (Well, there’s actually another way: the monolingual editor just makes a wild guess. I didn’t say it was a good way.) In other words, the ideal form of this man-machine mind meld involves taking a translator who used to be crafting his own sentences and making him clean up the ones a computer spits out at him. Job satisfaction in this new world? Heh.

Problem 3: Don DePalma, chief research officer at a translation outfit, notes that companies need to get their information out there in front of customers in their own languages. “When you’re dealing with anything really expensive or that potentially involves a long-term financial decision—like life insurance or stocks—customers prefer to have information in their own language,” he says. But this is precisely the sort of text that needs to be handled by a specialist, and the companies that sell “really expensive” products will be the very last holdouts still using human pros for the entire process. (It would be fun to see someone trying to market life insurance via Google Translate and an editor in Bangalore, though.) It’s fine to trot this out as proof that companies will need to pay more attention to localizing their material for various markets, but it’s a poor example to bring into the “MT is the future” article.

Problem 4: This.

With [human-assisted machine translation] systems, text is fed into a computer program that tackles the first round of word and sentence conversion using statistics, language rules, or matching with past translations. That covers about 90% of the work. A human then steps in to correct mistakes, clarify sentences, and refine the language for the intended audience or market.

Anyone who’s done translation (at least at a level going beyond churning out crap drafts for rock-bottom prices) or editing knows that the 90% figure here is sheer idiocy. Experienced translators don’t tend to work in phases like this (pump out rubbish at blinding speed, then go back to correct spelling and grammar errors and think about tone and style); they have all these tasks in mind as they go through their text, so it’s hard to assign a percentage of the effort to any one of them. But I think the thing that makes translating between human languages a steep challenge for computers is precisely the need to “refine the language for the intended audience or market.” Computers can’t recognize context like that. Humans can, and for human translators, keeping that context in mind and crafting a target text that meets the needs of style, readership, and client preference accounts for vastly more than 10% of their effort. I’d suggest flipping this formula around and saying that the computers handle a tenth of the work, not nine times that amount.

Problem 5: “Language translation is far from being mastered by humans, computers, or any mix of the two.” This is just annoying. It reeks of creationists’ “teach the controversy” demands for equal time for unequal worldviews. Using languages to communicate is what humans do. Birds fly. Fish swim. We talk. What mastery there is in the field of translation belongs entirely to people, and articles like this one need to be written from the perspective of how close computers are to reaching that standard.

Anyway. Enough problems. I’m of two minds when it comes to predicting the future of machine translation. On the one hand, I think the human capacity for language is too deep and too broad for machines to ever take it over completely, and even if 90% of clients end up happy with dirt-cheap mediocrity, the 10% of clients still paying for human quality will represent a healthy chunk of a growing language-services pie. So the good translators will still be making money, and it won’t be by massaging the output of a Google data center.

On the other hand, though, if the scientists ever crack this mystery wide open (perhaps by giving up on computers with nothing but 0s and 1s to deal with and creating new machines that function more like a brain) then we’ll get our translating machine. I’ll be out of a job, along with all my translator and interpreter buddies. But of course we’ll have plenty of company in the unemployment lines, since computers with real thinking power will already have taken over more menial tasks like piloting airplanes, writing software, drafting legislation, teaching children . . .