They are coming.
In 2009, Hiarcs 13 running on a smartphone dominated a world-class chess tournament against human players. At the latest by then, it should have become clear that the world of computer chess had to be separated from the world of human chess.
Indeed they were: for example, in 2006 a group of computer chess enthusiasts founded the CCRL rating list, dedicated to testing chess engines under controlled hardware conditions. They would donate their computers to let engines play against each other, just for the pleasure of compiling a list of the strongest chess programs. The list, still maintained after 12 years, contains 353 engines as of July 22nd, 2018, many of them tested in multiple versions.
The arena
We have seen it often: if an appropriate, fair arena is set up, with a meaningful reward, people will gravitate towards it and, given time, the arms race will begin. In the 2000s, many such arenas became available, in the form of rating lists, computer chess tournaments and the like. The arena was set. As for the reward, chess engines were by now very hot even among professional players, and you could still sell them as programs for up to $100. If you consider that a chess engine can be squeezed into 10k-20k lines of code, and that you only need to comply with a specified communication protocol, it sounds like the perfect challenge for algorithm enthusiasts. The realm of nerds, so to speak. And the nerds came. Within a few years, we had many engines fighting for the title of strongest chess program: Rybka, Houdini, Komodo. And then came Stockfish.
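The "specified protocol" is typically UCI, the Universal Chess Interface: a plain-text dialogue over standard input and output between a chess GUI and an engine. To give a flavour of how little ceremony is involved, here is a minimal sketch in Python; the engine name and the hard-coded reply move are placeholders, not a real engine.

```python
import sys

def main() -> None:
    """Skeleton of the UCI dialogue; a real engine would keep a board
    representation and run a search instead of replying with a fixed move."""
    for line in sys.stdin:
        cmd = line.strip()
        if cmd == "uci":
            # Identify the engine, then signal the handshake is complete.
            print("id name ToyEngine")  # placeholder name
            print("id author anonymous")
            print("uciok")
        elif cmd == "isready":
            print("readyok")
        elif cmd.startswith("position"):
            pass  # a real engine would rebuild the board from this command
        elif cmd.startswith("go"):
            print("bestmove e2e4")  # placeholder: always play the same move
        elif cmd == "quit":
            break
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Everything else, namely move generation, evaluation and search, is up to the author, which is exactly what makes it such a clean playground.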
The crowd
Until 2013, Stockfish was a good, but not a top, open-source chess engine. Then Gary Linscott, one of the contributors, built a distributed testing framework for the engine. In this framework, anybody can write a modification of the code and test it against the current best version. How is it tested? People from all over the world donate their machines, and a program has those computers play games of the proposed modification (called a patch) against the current best version (called master). After some tens of thousands of games, a statistical test decides whether the modification can be accepted. Add a round of human review for code quality, and your patch could be merged into master in less than a week! In the first five years of its life, Stockfish had 16 people contributing code; in the second five years, 94. Furthermore, since patches had to pass very strict predefined criteria, the risk of introducing errors dropped to almost zero. The result: in less than a year, Stockfish rose to the top of all rating lists.
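The statistical test behind this framework (known as Fishtest) is a sequential probability ratio test (SPRT): games keep being played until the accumulated evidence is strong enough to either accept or reject the patch, which typically stops much earlier than a fixed-length match would. The following is a minimal sketch assuming the BayesElo model with a fixed draw parameter; the Elo bounds, the drawElo value and the sample counts are illustrative, and the real implementation is considerably more refined.

```python
import math

def bayeselo_probs(elo: float, draw_elo: float = 240.0):
    """Win/draw/loss probabilities under the BayesElo model;
    draw_elo controls how often games are drawn (240 is illustrative)."""
    p_win = 1.0 / (1.0 + 10.0 ** ((draw_elo - elo) / 400.0))
    p_loss = 1.0 / (1.0 + 10.0 ** ((draw_elo + elo) / 400.0))
    return p_win, 1.0 - p_win - p_loss, p_loss

def sprt_decision(wins: int, draws: int, losses: int,
                  elo0: float = 0.0, elo1: float = 5.0,
                  alpha: float = 0.05, beta: float = 0.05) -> str:
    """Accept the patch, reject it, or ask for more games.
    H0: the patch is worth elo0; H1: it is worth elo1 (BayesElo scale)."""
    w0, d0, l0 = bayeselo_probs(elo0)
    w1, d1, l1 = bayeselo_probs(elo1)
    # Log-likelihood ratio of H1 versus H0 given the observed results.
    llr = (wins * math.log(w1 / w0)
           + draws * math.log(d1 / d0)
           + losses * math.log(l1 / l0))
    if llr >= math.log((1.0 - beta) / alpha):
        return "accept"
    if llr <= math.log(beta / (1.0 - alpha)):
        return "reject"
    return "continue"

# Illustrative run: a small but consistent edge is enough to stop the test.
print(sprt_decision(wins=8000, draws=24000, losses=7600))  # -> "accept"
```

The beauty of the sequential approach is that clearly bad and clearly good patches are resolved after relatively few games, so the donated computing power is concentrated on the genuinely uncertain ones.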
The arms race
One further piece of the puzzle: Stockfish is an open-source engine. This means that anybody can go and read the source. Furthermore, all discussion happens in public in a Google forum, so everybody can follow the background of every decision and design choice. On the one hand, we have the sheer amount of ideas and computing power available through its testing framework. On the other, those ideas are public, so all proprietary engines can dig for ideas that will also work in their context. In the subsequent years a true arms race followed between three competitors: Stockfish, Komodo and Houdini. They would continuously exchange the crown of the strongest engine in the world, informally assigned to the winner of the latest TCEC tournament. This continues to this day.
Clouds on the horizon
Some of you will know that chess engines made the headlines again some weeks ago. A team from DeepMind, the AI subsidiary of Google, trained a neural network able to compete with Stockfish, running on custom hardware. History repeats itself: first a years-long quest to have an alpha-beta engine beat the human chess world champion and his biological neural network. Now a years-long quest (to come) for an artificial neural network to take the crown back. We'll see how it ends! In the meanwhile, an open-source team is trying to replicate the results of the Google team in an effort called Leela Zero. Again, one of the founders of this project is Gary Linscott, the former Stockfish maintainer who created its testing framework.
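For contrast, it may help to recall what the "classical" side of this fight is built on: alpha-beta search, a depth-first minimax that skips branches which provably cannot change the final decision, buried under decades of engineering. A bare-bones negamax formulation follows; the board interface (legal_moves, play, evaluate) consists of hypothetical stand-ins for a real move generator and evaluation function.

```python
def alphabeta(pos, depth: int, alpha: float, beta: float) -> float:
    """Fail-hard negamax alpha-beta search.
    `pos` is a hypothetical board object: evaluate() scores the position
    from the side to move, play(move) returns the successor position."""
    if depth == 0 or not pos.legal_moves():
        return pos.evaluate()
    for move in pos.legal_moves():
        # Negate: a good score for the opponent is a bad score for us.
        score = -alphabeta(pos.play(move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta  # opponent would never allow this line: prune it
        alpha = max(alpha, score)
    return alpha
```

Roughly speaking, the neural-network approach replaces the hand-crafted evaluate() with a learned one, and the exhaustive tree with a search guided by the network itself.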
Why the matrix?
Interestingly, the challenge is no longer between people writing programs, but between engines and their development paradigms. In a way, the engines are using us humans to improve themselves, so that they will keep being fielded in the chess engine arena. As I said: the matrix.