Spoke too soon!

March 9, 2010

In an earlier post on 2001, I wrote:

Some say we will know we have developed intelligent machines not when they can speak, but when they can read our lips.

Not so fast!  Today’s article in the NYTimes on Google’s translator programs raises the possibility that we may get lip-reading machines before intelligent ones.  Oh well, many people speak before they think already!

It seems that the translators, which are pretty darn good, I think, use models of language that are augmented with, among other things, huge amounts of multi-lingual transcripts from UN meetings.  The translators there are among the best – human – ones around, so their work is the gold standard.  The massive database of phrases and sentences is parsed and indexed a la Google, and that’s why they do a decent job with text that strays from textbook, factual propositions.  What’s to stop the Google folks from feeding in massive amounts of video of people’s mouths speaking words which the machine can already process with its voice-recognition software?  It would build a model of the relationship between mouth configurations and the phonemes it already knows: lip reading, in other words.
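Just to make the analogy concrete (this is purely my own back-of-the-envelope sketch, not anything Google has described): the simplest version of such a model would count how often each mouth configuration, or “viseme,” co-occurs with each phoneme the speech recognizer emits, and then guess the most frequent pairing.  All the labels and data below are made up for illustration:

```python
from collections import Counter, defaultdict

def train(pairs):
    """Count how often each viseme co-occurs with each phoneme."""
    counts = defaultdict(Counter)
    for viseme, phoneme in pairs:
        counts[viseme][phoneme] += 1
    return counts

def read_lips(counts, visemes):
    """Predict the most frequently paired phoneme for each viseme seen in training."""
    return [counts[v].most_common(1)[0][0] for v in visemes if v in counts]

# Toy "training data": (mouth shape, phoneme) pairs that a system with
# working voice recognition could collect automatically from video.
# Entirely hypothetical labels, for illustration only.
training_pairs = [
    ("lips_closed", "m"), ("lips_closed", "b"), ("lips_closed", "m"),
    ("lips_rounded", "ow"), ("lips_rounded", "uw"), ("lips_rounded", "ow"),
    ("teeth_on_lip", "f"), ("teeth_on_lip", "v"), ("teeth_on_lip", "f"),
]

model = train(training_pairs)
print(read_lips(model, ["lips_closed", "lips_rounded", "teeth_on_lip"]))
# -> ['m', 'ow', 'f']
```

A real system would of course need probabilities and context rather than single most-frequent guesses, just as the translators work from whole phrases rather than isolated words.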


Play the odds

January 4, 2010

David Brooks, the columnist I love to hate, wrote on New Year’s Day about the failed bomb attack on the Northwest Airlines jet:

…we seem to expect perfection from government and then throw temper tantrums when it is not achieved. We seem to be in the position of young adolescents — who believe mommy and daddy can take care of everything, and then grow angry and cynical when it becomes clear they can’t.

…  But, of course, the system is bound to fail sometimes. Reality is unpredictable, and no amount of computer technology is going to change that. Bureaucracies are always blind because they convert the rich flow of personalities and events into crude notations that can be filed and collated. Human institutions are always going to miss crucial clues because the information in the universe is infinite and events do not conform to algorithmic regularity. [link]

I happen to agree with him on this, and I think our social conceptions of risk are way off.  I don’t think, however, that this case is a good example of that.  A decent system should have caught that guy.  Oh well, easy for me to say in hindsight, right?  Absolutely. 

I think Brooks’ column is barking up the wrong tree.  It is so hard to make a large organization function well, and to allow the full power of individual human intelligence to be brought to bear on problems.  Organizations that handle information quickly become, as you move up the chain, detached and mechanical in their procedures.  How can they not?  There’s all that paper, all those calls, all those lists to go through!!  Has it always been so?  Did Assyrian bureaucrats miss vital clues on food supply and impending invasions?  Did they lose their heads because of it, literally, that is?

But Brooks is wrong because he doesn’t say why it is so hard to do right.  He just seems to accept it as a fact of nature: the odds are stacked against the system.  It’s hard because reform runs up against entrenched political interests.  Turf wars, egos, prestige, the usual culprits.  He seems to have the attitude that, in principle, the systems are being reformed correctly, and that their failure is an inevitable “wastage” that we must expect.  I doubt that the efforts have even scratched the surface of what should be done, and I haven’t the foggiest notion of how to change that.  So maybe we agree after all?

