Tuesday, September 22, 2015

One of Alan Turing's Predictions Came True in the Weirdest Way Possible by Esther Inglis-Arkell

Alan Turing earned deserved fame in computing and cryptography, but he didn’t confine himself to one subject. One of his famous papers dealt with biology, and the patterns it predicted have just been spotted written all over the eyes of insects.
In 1952 Alan Turing published a paper titled, “The Chemical Basis of Morphogenesis.” Morphogenesis is the process by which an animal develops from an embryo into its adult shape. Turing argued that small instabilities can lead to huge differences in pattern and shape. In the paper, he writes, “It is suggested that a group of chemical substances, called morphogens, reacting together and diffusing through a tissue, is adequate to account for the main phenomena of morphogenesis. Such a system, although it may originally be quite homogeneous, may later develop a pattern or structure due to an instability of the homogeneous equilibrium, which is triggered off by random disturbances.”
Two or more chemical substances can start off in the same place in a developing organism. A few little random changes will cause them to react with each other in different ways, and form wildly different patterns over time. Turing sketched out a model to show how this could happen—now called the Reaction-Diffusion Model.
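Turing's argument can be sketched numerically. The toy simulation below (the reaction coefficients and diffusion rates are illustrative choices, not values from Turing's paper) linearizes two morphogens around a homogeneous equilibrium on a ring of cells. Without diffusion, a small random disturbance dies out; with sufficiently unequal diffusion rates, the same disturbance grows into a spatial pattern — exactly the diffusion-driven instability Turing described:

```python
import random

def laplacian(field):
    # Discrete Laplacian on a ring of cells (periodic boundary).
    n = len(field)
    return [field[i - 1] + field[(i + 1) % n] - 2 * field[i] for i in range(n)]

def simulate(Du, Dv, steps=6000, dt=0.01, n=64, seed=0):
    # Linearized two-morphogen kinetics around the homogeneous equilibrium:
    #   du/dt = a*u + b*v + Du * lap(u)
    #   dv/dt = c*u + d*v + Dv * lap(v)
    # Since a + d < 0 and a*d - b*c > 0, the equilibrium is stable
    # without diffusion; unequal diffusion can destabilize it.
    a, b, c, d = 1.0, -1.0, 3.0, -2.0
    rng = random.Random(seed)
    u = [rng.uniform(-1e-3, 1e-3) for _ in range(n)]  # small random disturbance
    v = [rng.uniform(-1e-3, 1e-3) for _ in range(n)]
    for _ in range(steps):
        lu, lv = laplacian(u), laplacian(v)
        u, v = (
            [u[i] + dt * (a * u[i] + b * v[i] + Du * lu[i]) for i in range(n)],
            [v[i] + dt * (c * u[i] + d * v[i] + Dv * lv[i]) for i in range(n)],
        )
    return max(abs(x) for x in u)

print(simulate(0.0, 0.0))   # no diffusion: the disturbance decays toward zero
print(simulate(1.0, 10.0))  # unequal diffusion: it grows into a spatial pattern
```

Note that the inhibitor-like morphogen must diffuse much faster than the activator-like one for the pattern to emerge; with equal diffusion rates the homogeneous state stays stable.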

This year researchers at the Lomonosov Moscow State University took a microscope and examined the patterns of antireflective coating on the eyes of insects, hoping to use the patterns in the coating to classify them. Working from a preliminary evolutionary tree, they took a look at insects and found... absolutely no correlation between the structures on their eyes and their place on the tree.

Wednesday, August 12, 2015

For 40 years, computer scientists looked for a solution that doesn’t exist by Kevin Hartnett


For 40 years, computer scientists have tried in vain to find a faster way to do an important calculation known as “edit distance.” Thanks to groundbreaking work from two researchers at MIT, they now know the reason they’ve continually failed: a faster method is actually impossible to create.
“Edit distance” is a straightforward and satisfyingly tangible idea. Imagine you have two sequences of numbers and want to know how many steps it takes to transform one into the other. In this transformation you can add a digit, remove a digit, or substitute one digit for another. For example, you have the data strings “1234” and “2345.” To turn the first into the second takes two steps — remove the 1, add a 5.
The number of steps required to make this transformation is the edit distance. It’s a cool concept and also a useful one. Biologists use edit distance all the time when comparing genomes of different organisms. A genome is essentially a long string of data — the sequence of A, G, T, and C that make up our DNA. By calculating the edit distance between the genomes of two organisms, biologists can estimate how far back in evolutionary time the organisms diverged from each other.
Edit distance is useful, but also very time-consuming to calculate. The current method for measuring it is known as the Wagner-Fischer algorithm. It was developed 40 years ago and works essentially by placing data on a grid. One string of data goes along the top row, the other goes down the left-hand column, and the algorithm fills in the grid diagonally, counting changes as it goes.
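That grid procedure translates directly into code. Here is a sketch of the standard Wagner-Fischer dynamic program (it fills the grid row by row rather than diagonally, but computes exactly the same values, keeping only one row in memory at a time):

```python
def edit_distance(a, b):
    # prev holds one completed row of the (len(a)+1) x (len(b)+1) grid.
    # Row 0: turning the empty string into b[:j] takes j insertions.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]  # turning a[:i] into the empty string takes i deletions
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,             # delete ca
                curr[j - 1] + 1,         # insert cb
                prev[j - 1] + (ca != cb) # substitute (free if characters match)
            ))
        prev = curr
    return prev[-1]

print(edit_distance("1234", "2345"))  # 2: remove the 1, add a 5
```

The two nested loops are where the quadratic cost comes from: every character of one string is compared against every character of the other.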
The Wagner-Fischer algorithm is a computationally intensive method that operates in what computer scientists call “quadratic time.” In plain terms, this means that when the length of the data strings goes up a little, the number of steps needed to compare them goes up a lot. For example, if you have two sequences, each with 10 digits, it takes 100 operations (10-squared) to compute the edit distance. But if you have two sequences, each with 100 digits, it takes you 10,000 operations to compare them (100-squared). The number of digits went up a little (just 90). The number of operations went up a lot (9,900).
The fact that edit distance is only computable in quadratic time has big implications for genomics. The human genome contains about 3 billion base pairs. To compute the edit distance between two human genomes takes 3 billion-squared operations (to appreciate how big that is, write a 9 followed by 18 zeroes). That’s a number of operations, explains Piotr Indyk of MIT, that is “computationally infeasible.” Which is to say that our best computers chugging away for a really long time still won’t generate an answer.
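The arithmetic behind that claim is easy to check. The throughput figure below is an assumption for illustration (a generous one billion grid cells per second), not a benchmark of any real machine:

```python
base_pairs = 3 * 10**9          # approximate length of a human genome
ops = base_pairs ** 2           # grid cells in the Wagner-Fischer table
print(f"{ops:.1e} operations")  # 9.0e+18, i.e. 9 followed by 18 zeroes

cells_per_second = 10**9        # assumed (optimistic) processing rate
years = ops / cells_per_second / (3600 * 24 * 365)
print(f"roughly {years:.0f} years for a single comparison")
```

At that assumed rate, one genome-to-genome comparison would take centuries, which is what “computationally infeasible” means in practice.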
Because it’s computationally infeasible to compute the edit distance between two human genomes, biologists have to rely on approximations. They’d love a faster method, but Indyk and his coauthor, MIT graduate student Arturs Backurs, recently demonstrated that a faster method is impossible to create. And by impossible they don’t mean “very hard” or “our technology has to improve first.” They mean something like, “by the laws of mathematics, it can’t be done.”
Indyk and Backurs presented their work at the annual Symposium on Theory of Computing in Portland, Ore., in June. The details of how they demonstrated this impossibility are hard to describe, but their approach is easy enough to apprehend. The most famous open question in computer science is the P versus NP problem. It’s a mega-sized issue: whoever resolves it would collect a million-dollar prize and be all over the international news. Most, if not nearly all, computer scientists believe that P does not equal NP. Indyk and Backurs developed a clever strategy by which they showed that a substantially faster way of calculating edit distance would refute a conjecture even stronger than “P does not equal NP,” known as the Strong Exponential Time Hypothesis. And since most everyone is convinced that hypothesis is true, it follows there’s almost certainly no way to really improve on the Wagner-Fischer algorithm.
Indyk and Backurs’ result has been greeted with something like relief among computer scientists, who can now stop beating their heads against a wall in search of a faster method that doesn’t exist.
“I remember wondering as a student, 15 years ago, whether you could substantially beat the quadratic-time algorithm for [edit distance]. Thanks to Backurs and Indyk, we now know for the first time that you can’t,” says Scott Aaronson, a computer scientist at MIT.
For Indyk, too, his recent work acts as a permission slip to move on to other questions. Like hundreds of other computer scientists, he has spent years searching fruitlessly for a faster way to compute edit distance. Now, he says, “I will stop trying to solve the problem.”

Monday, October 1, 2012

How did this game bot score higher than humans on a Turing Test? by George Dvorsky

Anyone who plays video games knows that game bots, artificially intelligent virtual gamers, can be spotted a mile away on account of their mindless predictability and utter lack of behavioral realism. Looking to change this, 2K Games recently launched the BotPrize competition, a kind of Turing Test for nonplayer characters (NPCs). And remarkably, this year's co-winner, a team from The University of Texas at Austin, created an NPC so realistic that it appeared to be more human than the human players — which is kind of a problem when you think about it.

To create their super-realistic game bots, the software developers, a team led by Risto Miikkulainen, programmed their NPCs with pre-existing models of human behavior and fed them through a Darwinian weeding-out process called neuroevolution. Essentially, the only bots that survived into successive generations were the ones that appeared to be the most human — what the developers and competition organizers likened to passing the classic Turing Test (an attempt to distinguish AIs from actual humans).

With each passing generation, the developers re-inserted exact copies of the surviving NPCs, along with slightly modified (or mutated) versions, thus allowing for ongoing variation and selection. The simulation was run over and over again until the developers were satisfied that their game bot had evolved the desired characteristics and behavior. And in fact, Miikkulainen and his team have been refining their virtual player over the past five years.
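That loop — score a population, keep exact copies of the best performers, refill the rest with mutated copies, repeat — is the core of neuroevolution. The toy sketch below evolves a small parameter vector against a stand-in fitness function. Everything in it is illustrative: the real UT^2 bot evolved neural-network controllers scored by human judges, not a three-number vector scored by a formula, so the `humanness` function, the parameter count, and the population sizes here are hypothetical placeholders:

```python
import random

def humanness(params):
    # Hypothetical stand-in fitness. In the real BotPrize this score came
    # from human judges; here we just reward proximity to a fixed target.
    target = [0.3, -1.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, rng, sigma=0.1):
    # Slightly modified ("mutated") copy of a surviving individual.
    return [p + rng.gauss(0, sigma) for p in params]

def neuroevolve(generations=200, pop_size=20, survivors=5, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=humanness, reverse=True)
        elite = pop[:survivors]  # exact copies of the best survive
        children = [mutate(rng.choice(elite), rng)
                    for _ in range(pop_size - survivors)]
        pop = elite + children   # variation + selection, run again
    return max(pop, key=humanness)

best = neuroevolve()
print(best)
```

Because exact copies of the elites always survive, the best fitness never decreases from one generation to the next; the mutated copies supply the variation that selection then filters.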

The final manifestation of their efforts was dubbed UT^2 — and it was this NPC that went head-to-head against human opponents and other game bots at the 2K Games tournament.

And the game of choice? Unreal Tournament 2004, of course. The game was selected on account of its complex gameplay and 3D environments — a challenge that would require humans and bots to move around in 3D space, engage in chaotic combat against multiple opponents, and reason about the best strategy at crucial moments. Moreover, the game is capable of bringing about some telltale human behaviors, including irrationality, anger, and impulsivity.

As each player (human or otherwise) worked to eliminate their opponents, they were assessed for their "humanness." By the end of the tournament, there were two clear winners: UT^2 and MirrorBot (developed by Romanian computer scientist Mihai Polceanu). Both NPCs scored a humanness rating of 52%, which is all well and good except for the fact that the human players scored only 40%.

In other words, the game bots appeared to be more human than human.

Limits of the Turing Test
Now, this is a serious problem. Ideally, the human players would have been assessed with a humanness rating of 100%, not 40%. Clearly, the judges utterly failed to identify truly human characteristics among the human players. As a consequence, UT^2 and MirrorBot essentially outscored the human benchmark itself, which shouldn't be possible. How can something appear more human than the very thing it's trying to emulate?

And indeed, this experiment is a good showcase for the limits of the Turing Test. Admittedly, the 2K Games tournament wasn't meant to be a true Turing Test, merely one that measured the humanness of NPCs in a very specific gaming setting. That said, the results demonstrated that human behavior is much more complex and difficult to quantify than we tend to think. Human idiosyncrasy, plus our ability to adapt and counter-adapt to attempts at identification, will likely keep human behavior forever beyond the reach of a simple Turing Test.
For example, given the implications of the 2K Games tournament, how are we supposed to assess something like a chatbot for its humanness now that we know something can apparently appear to be more human-like than humans? Moreover, given all the subjectivity involved in the evaluation, how accurate is any of this?

Perhaps it's time to retire the Turing Test and come up with something a bit more... scientific.