The myth of the cognitive quantum jumps


Update: see this presentation given by Scott Berkun at Google, which explains my points much more eloquently.

Very often the media (and I'm using the word "media" here in its most comprehensive sense – including things like blogs, Slashdot, etc.) tells us the story of some uber-hyper-mega-cool, new-unseen-until-now method of performing X. This leads many people to believe that progress happens in "quantum leaps" – i.e. that there are no intermediate steps between point A (where we are now) and point B (where we can get using this new discovery). As a side-effect, it also makes people think that all they have to do is come up with a "big idea".

This is utter nonsense and I would like to ask everybody to stop propagating this myth! (Of course I know that it is wishful thinking on my part to believe that this blog post will have a large impact on humanity, but hey, at least I've vented my frustration – and if just one person is convinced, I'm happy.)

There are at least two factors which mislead people into this fallacy. The first is the reader's lack of knowledge in the particular field: without that background, the reader has no way to evaluate which earlier works the current one builds upon, unless the author mentions them explicitly. The second is our tendency to over-emphasize (intentionally or unintentionally) our own contribution.

Also, there is plenty of both anecdotal and scientific evidence that progress is not as simple as coming up with one great idea. The quote from Thomas Edison ("Genius is 1 percent inspiration and 99 percent perspiration") illustrates this. A more scientific argument comes from Malcolm Gladwell, who – popularizing K. Anders Ericsson's research on deliberate practice – says that you need about 10,000 hours (roughly ten years) of deliberate practice to become great in a given field.

One example which comes to mind from the field of malware research is the case of the Storm worm. When it "appeared", there was a big media frenzy around it, fueled mainly by the AV companies. What nobody mentioned (because it would have broken the myth of "new, ultra-dangerous malware materializing from nowhere") is that "Storm" is in fact the "normal" evolution of a much older malware family, detected by many as "Tibs". If one were to place the samples on a timeline and study them in the order in which they appeared, one could clearly see how the different methods (like using a simple encryption layer over UPX, using different API calls to thwart emulators, using MMX/SSE instructions, using the return values of API calls in the decoding process, etc.) appeared and evolved – a toy sketch of that last trick follows below. In fact, "Tibs" and "Storm" are very clearly the work of the same group of people, and not something new, as the reports would have you believe.
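To make that last trick a bit more concrete, here is a minimal, purely illustrative C sketch of what "using the return value of an API call in the decoding process" can look like. To be clear, this is not actual Storm/Tibs code – fake_api_call and every constant below are invented for the example – but it shows the principle: the decryption key depends on a value that a real system returns correctly and a careless emulator usually does not.

```c
/*
 * Toy illustration (NOT actual Storm/Tibs code) of mixing an API
 * call's return value into the decoding key. If an emulator stubs
 * the call out with the wrong value, the derived key is wrong and
 * the payload never decodes.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for a real OS API call; in real malware this would be
 * something live and hard to emulate faithfully. Fixed here so the
 * demo stays self-contained and portable. */
static uint8_t fake_api_call(void)
{
    return 0x5A;
}

/* XOR "cipher" whose effective key is derived from the API result. */
static void xor_codec(uint8_t *buf, size_t len, uint8_t base_key)
{
    uint8_t key = base_key ^ fake_api_call();
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
}

int main(void)
{
    uint8_t msg[] = "hidden payload";
    size_t len = strlen((char *)msg);

    xor_codec(msg, len, 0x3C); /* encode in place */
    xor_codec(msg, len, 0x3C); /* same key -> original text again */
    printf("round-trip: %s\n", (char *)msg);
    return 0;
}
```

Change what fake_api_call returns and the round-trip breaks – which is exactly the property that made such samples annoying for the generic emulators of the time.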

No quantum leaps (except in theoretical physics :-))!

Picture taken from renrut’s photostream with permission.


2 responses to “The myth of the cognitive quantum jumps”

  1. Not to be a PITA, but technically a quantum is the smallest unit involved in an interaction. So a quantum leap is the smallest possible unit of leap — which IMO would be a small step. 🙂

  2. I'm always happy to engage in discussions, and I would like to ask all of my readers (both of them :-p) to call me out if they think that I'm talking BS.

    As I understand it, in the expression "quantum jump" or "quantum leap" the analogy is based not on the size of the jump (which is indeed the smallest unit possible – if you accept quantum physics), but on the fact that the change is non-continuous (i.e. it is discrete).

    This is also what I'm railing against: the notion that great ideas appear suddenly (that they represent discrete changes). Instead, I'm of the view that all cognitive endeavors are continuous if one looks closely enough.
