October 06, 2008 2:08

A scientific standard for chess writing?

It's well-known that chess is not only a game, not only art, but also science. I myself became fascinated by chess not because you could beat your granddad with it, or because you could play beautiful attacking games, but because you could look things up afterwards.

By Arne Moll

The fact that you could actually find out what 'theory', an objective authority, had to say about some of the individual choices you made during the game, was, for me, perhaps the most fascinating aspect of the game.

Recently, I had an e-mail conversation with Dave Munger, a professor of psychology who blogs at ScienceBlogs and is the president of Research Blogging, a site which promotes serious blogging on peer-reviewed science articles. He asked me if there were any scientific standards for chess writing or blogging. Suddenly, I realized there were none.

Or not really, anyway. Of course, chess is a well-known field of research in various subjects, for instance psychology (pattern recognition, etc.), mathematics and artificial intelligence, and there have been countless peer-reviewed publications on chess in these research areas. But what about chess as we chess fans know it? What about Nimzowitsch and his theories... in fact, what about the Sicilian Najdorf?

Well, we have Chess Informant and the New in Chess Yearbook series, which definitely use certain standards (like standardized symbols). Was this what Munger meant? Surely, there's more to chess writing than using certain symbols? For example, are the annotations in these volumes actually consistently checked by the editors? I'm sure someone checks all variations, but do standards exist for doing this? Perhaps the Informant editors check the variations by hand, or with Rybka, and the New in Chess editors check them with Fritz? Who is right when the engines give different evaluations?

And then, of course, there's the obvious fact that chess writing is not only about moves, but also about concepts and theories. What can we say about the scientific basis of Nimzowitsch's Mein System? Can a chess author 'prove his point' by simply referring to Nimzowitsch, like a mathematician may refer to Euclid or Gödel? Surely, he has to demonstrate, in analysis with concrete variations, what he means? Come to think of it, it's not clear how we should evaluate 'authorities' in chess in the first place. Is it enough to consider Jonathan Rowson an authority simply because he is a grandmaster who also works in the field of science himself? What about Max Euwe's strategic concepts? Or, if we consider chess history writing, is Edward Winter the final authority when it comes to the truth?

Still, I wrote to Munger, it should be possible to introduce scientific standards in chess writing. Some kind of symbolic notation would probably be essential. Ending each variation with a clear evaluation (rather than the vague 'needs to be checked') would be another useful standard. I would also suggest specifying the engine (and the computer configuration) that checked the variations, and mentioning where the engine and the human disagree in their evaluations. Referring to other publications would also help a lot to put things in perspective. Maybe you can think of more standards which could be useful for chess publications?

A final question is whether we want such rigid standards at all. Who would benefit from them? Clearly, they would make life hard for those authors who write chess books only to make as much money as possible, instead of trying to find any 'truth' in chess. (And there are a lot of them!) They would make it more difficult for an author to claim that the Grand Prix Attack leads to a forced win for White, or that there is a watertight defence against 1.d4. And people could refer to such publications on the Research Blogging site, which would be nice. But would buyers care? Would you? Would scientific chess books get more attention in the press? Or less? Finally, do we want authors to make money with scientific publications?
Shouldn't science - including chess science - be free for all?




Charles:

Peer review works because scientific theory is objective, or at least as objective as any human activity can be. That is, it is TESTABLE BY EXPERIMENT. Anyone can test the boiling point of water at sea level; just get a thermometer, some water and a heat source and go to the beach! But the Najdorf? People have been disagreeing about that for almost 100 years with no firm conclusion as yet. Objective standards in a purely subjective activity like chess are impossible, particularly when opening theory changes year on year. The best you could do would be to use Rybka along with a strong group of GMs to review analysis and even that probably wouldn't be conclusive. And thank heavens for it! If chess were fully understood and analysed we'd all have to find something useful to do, or even (shudder) take up golf.

One standard that would help weaker players and beginners is FULL annotation of master games. The best examples of what I mean are Euwe's "Chess Master Vs Chess Amateur" and Chernev's "Logical Chess, Move by Move".

Dominik:

Quite a hard topic, I suppose; the balancing act between chess as sport and chess as science seems quite tough to me. Coming from an academic-mathematical background myself, I'd love to see more attention paid to the latter role of chess. However, I agree that there are quite different motivations for writing about chess. Without any claim to completeness, I would list earning money (yes, there are a lot of people out there who depend on chess), teaching students, personal training and, of course, science. What I don't understand is why the introduction of standards in chess science would have an impact on other writing motivations - there is no need for a chess author who wants to teach specific variations (which may involve psychology or even bluff, in contrast to the mathematical-logical viewpoint of chess) to follow these standards.

Cooperating with a well-known research blog to emphasize the role of chess as a science (or as part / practical application of many different disciplines, as you mentioned) could be quite promising. So I'm wondering what Munger is up to - did he mention any ideas for integrating chess into his research blog projects?

Manu:

Really interesting article, thx.

HCL:

Chess as a whole is known, in the sense that it occurs on 64 squares according to specific rules. In that sense nothing new is to be found in the physical universe.

What remains is (1) the outcome evaluation of a specific position; (2) the moves and move patterns that tend to change the evaluation favorably for a player.

Unfortunately, the objective evaluation of a position isn't definitely known until it is table-based (all move sub-trees have been exhausted). In such non-table-based positions the opinion of an authority (like a GM, an expert in a particular opening) carries weight. His opinion isn't theory, in the sense of Euclid, but it tends to take the reader closer to the truth. Guys like Nimzovich might receive origination credit (like "overprotection of a strong point") as regards chess abstractions/patterns.

The forced sequence (e.g., ending in checkmate) is the chess version of a proof.

If you're human, you're also concerned with the subjective evaluation over and above the objective evaluation. Some players judge positions based on outcome probabilities (%win, %loss, %draw) because two objectively equal positions may differ wildly in ease of defense in an actual game.

The subjective evaluation might include consideration of human psychology or the complexity of the position (the more complex, the higher the likelihood of a game-changing error, etc.).

HCL:

"He asked me if there were any scientific standards for chess writing or blogging. Suddenly, I realized there were none."

Based on my previous comment, formal chess analysis is pretty much "scientific" in the sense of methodical and objective.

Journalistic chess writing's more of a free-for-all since entertainment value perhaps dominates. Game annotation usually goes like this:

1. Give algebraic notation of move.
2. If move is worthy of comment, insert evaluation or insert witty comment about opponent, weather, cell phones ringing in the tournament hallway, etc.
3. Iterate through steps 1&2 for each move of the entire game.

arne:

@HCL, you may want to read my article on the ultimate truth in chess from a few years back. It's rather similar to your point of view.

HCL:

Excellent linked article.

I'm new to this site, just found it several days ago via Susan Polgar's site, so please excuse if I repeated stuff.

Just one additional comment. To me, "theory" means an abstraction or set of abstractions that somehow map onto physical reality. In that sense, complete chess theory is just tablebases. I don't believe chess theory is beyond human comprehension or metaphysical at all, just drearily unexciting. It's just move-trees, in the end.

Dave Munger:

I don't think what is required for a scientific study of chess is a symbol-system -- rather, what is needed is a system of checks and balances. There has been some serious research about chess. In fact, one of the most famous psychology experiments of all time was related to chess. In this study, Chase and Simon pitted experts against novices remembering the arrangement of chess pieces on the board. When the pieces were arranged in an actual position arrived at during real game-play, the experts were much better than the novices. But when the pieces were randomly arranged, experts did no better than novices.

A blog post discussing Chase and Simon's study would be welcome on ResearchBlogging.org, but a discussion of a newspaper column on chess would not, since the column wasn't reviewed by experts in the field, as Chase and Simon's study was.

P.S. -- I'm not actually a psychology professor; I work with my wife, who is a psych professor, to generate the content for our blog.

B:

I don't quite understand this article. The one thing you can be sure of, though, in my opinion, is that with chess there will never be a "standard".

HCL:

The results of the Chase and Simon study (which I have not read in the original paper, but only heard of) need to be replicated. They've always seemed highly suspicious to me.

Firstly, it's only tangentially related to chess. In its essence, it's not about chess but the respective roles of experience versus innate ability in expertise formation (in any area).

The results would imply that the advantage of chess masters over the average person lies solely in the area of experience (pattern recognition). The chess master (or any expert in any area) is not superior in innate cognition (e.g. higher IQ).

(I'm going out on a limb since I haven't read the original paper.) Did they control for short-term memory? The study ought to be replicated because it makes no sense that chess masters have the same short-term memory as the average person. I simply can't believe it. It's not consistent with psych research in other areas (I was a psych undergrad, so I know the basics, though not much more). IIRC, memory capacity is positively correlated with other cognitive abilities.

Chess masters should blow away the average person in tests of ST memory.

arne:

Thanks for your reply, Dave (and for your correction). I think there are two separate issues which I probably should have made more explicit in my post:

1. 'pure' chess research (evaluating positions, moves, variations and strategies)
2. chess research in a broader scientific context (say, psychology or history).

In my post I tried to combine these two issues but the result is probably only confusion. The two are often quite separate, but of course they can be combined in a single article, book, or blog post.

Your qualification 'experts in the field' seems to apply to 2), but for 1) it is rather vague: are all International Masters and Grandmasters 'experts in the field' by definition? Can a non-titled player (like myself) be an 'expert in the field'? Is a computer engine on good hardware, equal in strength to a top grandmaster, an 'expert in the field'? These are difficult matters for chess players, at least for those who approach chess in a 'scientific' way.

What would be interesting is to see whether the principles of peer-reviewed research can be applied not only to the second issue, but also to the first.

HCL:

Btw, I've long been anticipating perhaps the most basic chess-related psych study: the one that maps FIDE/USCF rating to IQ. (E.g., IQ-testing participants in major tournaments such as the World Open.)

Who's gonna do it?

I believe the correlation between chess rating and IQ would be very strong, which would essentially refute Chase and Simon.

The study would probably also record other variables such as the number of years of playing experience, professional training (if any), number of rated USCF games (which is a proxy for 'professional' experience), etc.

Hatse Kidosie:

"Some kind of symbolic notation would probably be essential. "

Rubbish. Symbols are a tool, not the essence of science. This mistake seems to reveal the author does not understand science at all.

One could regard truth in chess the way Popper regarded truth in science: a theory is considered to be true until it is refuted. A specific opening works until someone proves it does not. In the meantime, the opinion of experts is the closest thing to truth that we have.

arne:

Thanks for the friendly words, Hatse, and thanks for putting words in my mouth I never said. (I'm sure you can figure out which ones.) I guess your mistake reveals that you do not understand, ehm, language at all? Or am I talking 'rubbish' again now?

otis:

Chess doesn't have or need a scientific writing standard because it's not a field of science. Chess is an activity or tool used to understand scientific fields (psychology being the prime example both historically and in this thread).

"Peer-reviewed" research for chess manuals is like any other non-scientific field; there are many authors who are good for beginners (Reinfeld, Chernev), intermediates (Silman), and advanced (Nimzo, Dvoretsky, Kotov). These opinions were formed over time rather than a committee of academics. Peer-reviewed research works because of the authoritative committee review system; chess does not have this structure so peer-review is not an ideal system for chess texts.

arne:

@otis, but do you mean to say that because chess doesn't currently have this kind of review structure, it can't have one in principle? That seems like circular reasoning to me. In my article I acknowledge that this system isn't there, but I also wonder whether perhaps we should have one. I agree chess isn't a traditional field of science now, but again, why couldn't it, in principle, be studied the same scientific way that, for example, mathematics or physics is studied and qualified? I know there are all kinds of practical problems, but that's not the point I wanted to make.

GeneM:

SAN chess notation is good for recording moves during your live over-the-board games, because it takes less time to write and is less of a distraction to the player: e4 e5 Nf3 Nc6

But SAN is inappropriate for published chess writing. It suffers from most of the same problems that the old Descriptive notation (N-B3) suffered from, namely "context dependence".

For instance, consider the move "Nxe4". What color is the knight? What square did the knight move from? What kind of piece did the knight capture? SAN leaves all of those important questions unanswerable. The reader must resort to scanning outward to gather the info indirectly from the context. Sometimes this is easy or unnecessary. But often this extra required effort is ruinous, such as when trying to follow a possible variation that was not played but which the author has printed after the actual move. I am certain that most readers of such books skip over most such variations -- if true, that is proof for me that SAN is inappropriate.

Instead, chess publications should use a full or "reversible" notation. For example, here is the earlier example move rewritten in reversible notation: "Nc5::re4". It encodes answers to all the earlier questions. The knight is white (uppercase 'N', not 'n', an idea borrowed from FEN). Its origin square was c5. The piece the knight captured was a black rook.
A whole game notated in this manner could be played in reverse, given the ending position. Full and reversible notation.
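Gene's single example, "Nc5::re4", is concrete enough to sketch in code. The following is a minimal illustration, not an official specification: the field layout (moving piece in FEN case, origin square, "::", captured piece in the opponent's case, destination square) is inferred from that one example, and the function names are my own. Only the capture form is handled, since the comment doesn't say how a non-capture would be written.

```python
import re

# A capture in Gene's proposed reversible notation, e.g. "Nc5::re4":
# moving piece (FEN case: uppercase = White), origin square, "::",
# captured piece (in the opponent's case), destination square.
CAPTURE_RE = re.compile(
    r'^([KQRBNPkqrbnp])([a-h][1-8])::([KQRBNPkqrbnp])([a-h][1-8])$'
)

def parse_capture(move: str) -> dict:
    """Decode a reversible-notation capture into its components."""
    m = CAPTURE_RE.match(move)
    if m is None:
        raise ValueError(f"not a reversible capture: {move!r}")
    piece, origin, captured, dest = m.groups()
    return {
        "piece": piece.upper(),        # kind of moving piece
        "white_to_move": piece.isupper(),
        "from": origin,
        "captured": captured.upper(),  # kind of captured piece
        "to": dest,
    }

def format_capture(piece: str, origin: str, captured: str, dest: str) -> str:
    """Encode the components back into reversible notation."""
    return f"{piece}{origin}::{captured}{dest}"
```

Because every field is explicit, the move can be undone from the resulting position alone: move the white knight from e4 back to c5 and restore a black rook to e4. That is exactly the "playable in reverse" property Gene describes, and it is what SAN's "Nxe4" cannot offer.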

FIGURINE notation is another problem. The font named "Chess Alpha 2" is a step in the right direction. Currently the white figurines are hard to see among the usually bold coordinates. I have seen many books and magazines that intermix live and analysis moves, bolding only the live moves. Guess what: the eye is confused, because a white figurine appears bolded in neither the live nor the analysis moves.
Chess Alpha 2 offers both white and black figurines for moves (with separate figurines for board diagrams). So you can either bold or encode color, but not both, unfortunately. My own "Chess Handy Symmato" font is an alternative that solves this limitation in a way that honors the gestalt.

Reversible notation does consume more space. Often there is plenty of extra space, but sometimes space is a concern.

Gene Milener

Arne Moll:

@Jens, thanks for your elaborate post. I would like to make a distinction between the colloquial chess usage of the word 'theory' and the scientific use of the same word. Also, as you know, something can be both fact and theory.
Chess theory as we know it does consist of many facts which are indisputable and may indeed be called 'proven'. Perhaps some assumptions will turn out to be wrong with the use of some supercomputer, but surely not *all* of them. We can say with reasonable certainty that some opening lines are refuted (and why) and others probably never will be.
At any rate, even though most of 'chess theory' might not approach objective 'truth', current theory is always simply the best we have yet - just like the theory of evolution is currently the best explanation we have for life on earth. We will have to live with it, and we may want to set standards for doing analysis and research, even if we know we will perhaps not reach any kind of truth or mathematical 'proof' in the end.

Jens Kristiansen:

Arne, you wrote: "...as you know, something can be BOTH fact and theory." And from your very interesting quotation: "Well evolution is a theory. It is also a fact. And facts and theories are different things, not rungs in a hierarchy of increasing certainty." I also read the whole article by Laurence Moran you put up a link to - great reading indeed.
Yes, evolution is a FACT, well proven by numerous scientific researchers. And it is ALSO a term for a THEORY about how life develops. That does not mean it is both at the same time.
The interesting aspect of chess in this context is that we do have facts with "absolute certainty", nowadays a lot of them, mainly due to technological advances. But we also have and have had humans setting up theories on chess, which we are now able to absolutely verify or exclude.
We can never do to Darwin what we can do to Kling & Horwitz, but from chess we might learn something general about the nature of theorizing.

Jens Kristiansen:

This is indeed an interesting discussion, but I believe you are from the onset confusing matters and mixing up incompatible items. I like Arne's outline of what attracted him to chess: "...because you could look things up afterwards." I think that goes for all of us chess addicts. The basic point is that in chess you simply have to be some kind of researcher to improve.
As Arne puts it: "The fact that you could actually find out what 'theory', an objective authority, had to say about some of the individual choices you made during the game, was, for me, perhaps the most fascinating aspect of the game."
Exactly - apart from the fact that "theory" never is an "objective" authority, neither in chess nor anywhere else. As a matter of fact, I believe that you, as a genuine chess researcher, have to be a little bit in doubt about the different "theories", at least from time to time.
"Theories" are assumptions we humans make about the world around us and how it functions. They are completely indispensable if we want to deal with the world, as common humans or as scientists. As a wise man once remarked: "There is nothing as practical as a good theory". Most "theories" are never finally "proven"; their "proof" lies basically in their applicability to the solving of practical problems. In science a strong theory also shows its worth by being fruitful, meaning it gives impulses to new hypotheses and theories. The best-known example of this is the theory of evolution.
From this it follows that moves from, for instance, tablebases can never be "theories"; they are only empirical facts that may give rise to such. Only humans can make theories. And, by the way, there is far more in chess to investigate and theorize about than pure chess moves and positions. To me the true riddle is WHY humans choose one move or one plan over another. Yes, the deepest mysteries are buried in our own minds, and THAT is also what is so challenging about the game: to improve you also need to research your own mind.
Chess, like life itself, is so complex that we humans simply need some theories to guide us. And there are a lot of them on offer, from global ones to local ones on specific areas or phases of the game. Maybe the most influential is the theory of "the balance of the position", which is (maybe wrongly) attributed to Steinitz. It has proven itself by being a strong tool for managing the game - and by the many other theories derived from it.
But... who knows?... maybe some day a supercomputer will provide us with the hardcore fact that the basic assumption was wrong? That maybe White is always winning with best play? Should we then throw Steinitz out? No - in some ways he would still be right, simply because his theory worked. And then we would have to find out WHY it worked so well - a genuine challenge indeed!
And then, concerning the call for a "scientific standard for chess writing". Well, there are many kinds of chess writing, and you cannot and should not impose the same standards on all of them. But... in contemporary chess books you do encounter some overly conceited authority with no basis at all, especially in books which pretend to be more "serious". In all modesty, I know something about chess and I can also play quite well, and I read an abundance of wrong citations, wrong facts and wrong accounts of older or other masters' thoughts in contemporary books - not to speak of the wrong evaluations of chess positions when the engine cannot help the author. Such things as references to substantiate given points of view are almost nonexistent in these books. (For examples of this you may read Winter's Chess Notes. And even Kasparov's MGP series comes in for this criticism.) Are the young writers too busy and/or too lazy?
To demand that the globally recognized standards for genuine scientific writing be applied to "serious" chess writing seems quite obvious. But will anyone hear us?

Jens Kristiansen:

NO! Nothing can be both fact and theory, even though you may theorize on what a "fact" is. But in chess we do have hardcore facts beyond any doubt, and, especially from the tablebases, we get more and more of them.
And this is in fact (!?) one of the most interesting features of modern chess in a broader sense. Because humans in the past (Philidor, Kling & Horwitz, Berger, Cheron, Troitzky and many others) set up theories on different endings for centuries, and they made deep analyses on the basis of these. NOW we can simply check them out, and the astounding fact (yes!) is that they mostly, with very few exceptions, were right. Even the Troitzky line has been proven to hold. And even their analyses, done in the good old OTB style, stand up well compared to the tablebase lines. Everlasting glory to the human mind!
But their "theories" on the EVALUATION of (so far) up to 6-piece endings have evaporated, each one proven either right or wrong. Can anyone tell me of any other field of human research where this has happened? In "Secrets of Rook Endings", 1992, John Nunn was the very first to tablebase-check the old-timers' theories and analyses of rook and pawn vs. rook. It could be that this work will in the future be regarded as a landmark in human thinking in a broader sense.
Another matter is that most of the old-timers' theories which are heuristic in nature are still completely valid, although some of them are refined nowadays. For instance, you should still put your rook behind passed pawns, and still try to cut off the enemy king from your passed pawn in rook endings. At least these are very good and applicable rules of thumb, even though such rules so far cannot be proven definitively.

Jens Kristiansen:

And some more: in the recently published "Scacchia Ludus" the late Ken Whyld has an article on "The Development of Chess Theory". As usual from this superclass chess scholar, it is very thorough and well-written. Highly recommended!
A quote from the preface: "The word 'theory' is sometimes used in chess literature in ways contrary to normal usage".
So, yes, Arne, there is widespread "colloquial" use of the word "theory" in chess, and that does really confuse matters. For instance, most books on "opening theory" are mainly about chess praxis. But, anyhow, let us try to clear the water.

Arne Moll:

Jens, perhaps you should broaden your view on the concepts of fact and theory, because your denial of their co-existence is really not supported by the majority of philosophers and scientists. Here's an interesting quote from the TalkOrigins archive about fact and theory in evolution:

'Well evolution is a theory. It is also a fact. And facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts. Facts don't go away when scientists debate rival theories to explain them. Einstein's theory of gravitation replaced Newton's in this century, but apples didn't suspend themselves in midair, pending the outcome. And humans evolved from ape-like ancestors whether they did so by Darwin's proposed mechanism or by some other yet to be discovered.

Moreover, "fact" doesn't mean "absolute certainty"; there ain't no such animal in an exciting and complex world. The final proofs of logic and mathematics flow deductively from stated premises and achieve certainty only because they are not about the empirical world. Evolutionists make no claim for perpetual truth, though creationists often do (and then attack us falsely for a style of argument that they themselves favor). In science "fact" can only mean "confirmed to such a degree that it would be perverse to withhold provisional consent".'

Note that evolution is just an example; the point is much more general, of course.
