It has become a meme in itself: to become an elite specialist, you need to practice for 10 years or 10,000 hours. New research led by Michigan State University's Zach Hambrick finds that a copious amount of practice is not enough to explain why people differ in level of skill in two widely studied activities, chess and music. In other words, it takes more than hard work to become an expert. Hambrick, writing in the research journal Intelligence, said natural talent and other factors likely play a role in mastering a complicated activity.
From the press release:
“Practice is indeed important to reach an elite level of performance, but this paper makes an overwhelming case that it isn’t enough,” said Hambrick, associate professor of psychology.
The debate over why and how people become experts has existed for more than a century. Many theorists argue that thousands of hours of focused, deliberate practice is sufficient to achieve elite status.
“The evidence is quite clear,” he writes, “that some people do reach an elite level of performance without copious practice, while other people fail to do so despite copious practice.”
Hambrick and colleagues analyzed 14 studies of chess players and musicians, looking specifically at how practice was related to differences in performance. Practice, they found, accounted for only about one-third of the differences in skill in both music and chess.
So what made up the rest of the difference?
Based on existing research, Hambrick said it could be explained by factors such as intelligence or innate ability, and the age at which people start the particular activity. A previous study of Hambrick’s suggested that working memory capacity – which is closely related to general intelligence – may sometimes be the deciding factor between being good and great.
While the conclusion that practice may not make perfect runs counter to the popular view that just about anyone can achieve greatness if they work hard enough, Hambrick said there is a “silver lining” to the research.
“If people are given an accurate assessment of their abilities and the likelihood of achieving certain goals given those abilities,” he said, “they may gravitate toward domains in which they have a realistic chance of becoming an expert through deliberate practice.”
The important takeaway is: practice is still very important (one-third of the variance is a lot), but as Annie Murphy Paul put it, it's not a given that you become an expert after 10,000 hours:
“Some normally functioning people may never acquire expert performance in certain domains, regardless of the amount of deliberate practice they accumulate. In Gobet and Campitelli’s chess sample, four participants estimated more than 10,000 hours of deliberate practice, and yet remained intermediate-level players. This conclusion runs counter to the egalitarian view that anyone can achieve most anything he or she wishes, with enough hard work.”
This also brings us to a question I asked some weeks ago on Twitter: why do we accept the influence of e.g. genes on breast cancer, but find it harder to accept that there is an influence on intelligence? Do note: I wrote influence, not an absolute effect! Btw, Annie Murphy Paul also led me to this interesting Slate article.
Abstract of the paper, which can be downloaded here:
Twenty years ago, Ericsson, Krampe, and Tesch-Römer (1993) proposed that expert performance reflects a long period of deliberate practice rather than innate ability, or “talent”. Ericsson et al. found that elite musicians had accumulated thousands of hours more deliberate practice than less accomplished musicians, and concluded that their theoretical framework could provide “a sufficient account of the major facts about the nature and scarcity of exceptional performance” (p. 392). The deliberate practice view has since gained popularity as a theoretical account of expert performance, but here we show that deliberate practice is not sufficient to explain individual differences in performance in the two most widely studied domains in expertise research—chess and music. For researchers interested in advancing the science of expert performance, the task now is to develop and rigorously test theories that take into account as many potentially relevant explanatory constructs as possible.