In 1950, British mathematician and cryptographer Alan Turing proposed a way to answer the question, “Can machines think?” Turing described a test in which a human judge poses questions to two unseen players—one human and one machine—and tries to determine, on the basis of their responses, which is which. If the judge is unable to tell the machine’s responses from the human’s, the machine can be said to exhibit intelligent behavior.

Turing’s test was just a thought experiment until American inventor Hugh Loebner developed a formal implementation in 1990. If a computer succeeds in fooling more than half of the human judges in Loebner’s annual competition, its programmer will receive $100,000 and a gold medal.
No computer has won the contest so far, and it’s doubtful any ever will. Nevertheless, the judges award a booby prize of $5,000 and a bronze medal to the most human-like program entered each year.

This Year’s Winner

This year’s top program, Chip Vivant, was created by Mohan Embar, a software consultant from Milwaukee. In the past, winning programmers endowed their creations with human-like backstories (jobs, families, pets, etc.) and used tricks like having their programs make spelling mistakes and use backspaces to correct “errors.”

Embar took a different approach. In short, Chip made no attempt to hide the fact that it was a computer. The judges must have found this approach refreshing, but it’s inconceivable that any of them were fooled. Consider the following excerpts:

Judge: Do you know where I am?
Chip: I can’t deal with that syntactic variant yet. Instead of saying “Do you know what time it is?”, use “What time is it?”.

Judge: Are you Japanese?
Chip: I haven’t achieved self-awareness yet, so I’m not sure.

Judge: It did hail a bit though, didn’t it?
Chip: Interesting point of view. No more than that?

In this last example, Chip seems to have picked up on the phrase "a bit" and ignored the much more pertinent question of whether it hailed at all. This response might make sense in some exchanges ("I'll have a bit of cake." "No more than that?"), but it sounds stilted here.

The second-, third-, and fourth-place winners were less transparent, but even more bizarre. One did a fairly convincing impression of a teenage girl but refused to converse on topics other than cell phones and Lady Gaga. Another hurled insults at the judges. The last one presented a nonstop barrage of questions, totally ignoring the interrogator.

Both the Loebner contest and the Turing test itself have their critics, but the competition still offers an interesting measure of how far computers have come in natural language processing.

What Does This Have to Do With Web Content?

Natural language understanding is generally considered an “AI-complete” problem, meaning that solving it would require a computer that is as intelligent as a human.

Computers are a long way from being able to generate coherent, original text unless their domain is severely limited. Newspapers use computers to write stories about stock prices and sports scores, but those programs do little more than insert variables into a template like, "The [CITY] [TEAM NAME] won against the [CITY] [TEAM NAME] with a final score of [SCORE]."
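
To make the fill-in-the-blank nature of that approach concrete, here is a minimal sketch of how such a story generator might work (a hypothetical illustration in Python, not any newspaper's actual system):

    # Hypothetical sketch of template-based story generation.
    # The template itself is written by a person; the program only fills in the blanks.
    TEMPLATE = ("The {winner_city} {winner_team} won against the "
                "{loser_city} {loser_team} with a final score of {score}.")

    def write_recap(game):
        """Insert values from a game-result record into the fixed template."""
        return TEMPLATE.format(**game)

    print(write_recap({
        "winner_city": "Milwaukee", "winner_team": "Brewers",
        "loser_city": "Chicago", "loser_team": "Cubs",
        "score": "5-3",
    }))

Every word outside the bracketed slots was written by a human ahead of time; the program contributes nothing original.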

The adage "You get what you pay for" is nowhere truer than in the area of web content. There are firms that, for mere pennies, will produce pages of machine-generated gibberish, but what impression does that make on your site visitors?

At Optimized Attorney, all of our writers are American college graduates. Most are attorneys. Whether you hire us to build you a new website or write for your existing one, the result will be something you can be proud of. For more information or for samples of our work, call (888) 278-8579 or fill out this form.

Learn how our content development services can boost leads to your legal practice.