Topic: Law of Requisite Variety in NLP
Posted by: Carol Anne (Friday) Ogdin
Date/Time: 28/07/2002 17:26:39
Dear John: I write to defend retention of Ashby's "Law of Requisite Variety" (LRV) in the corpus (or at least the appendix) of NLP.

Let us start with your discussion on page 276 et seq. of Whisperings, where you express LRV as "...(roughly) {sic}: In any connected system, the component in the system with the widest range of variability will be the controlling element." Or, in another formulation, here's what I wrote in my notes during the NLP Practitioner training in Washington, DC in 1981: "In any system, that component of the system which is most flexible will be in control of the outcome of the system." The flaw, I fear, is in the paraphrases, which lead to inappropriate applications based on implicit expectations anchored to certain words. But an inaccurate paraphrase is no reason to throw the baby out with the bathwater.

Ashby describes the LRV in "An Introduction to Cybernetics" (W. Ross Ashby, Methuen, London, 1956 [my edition is dated 1979], pp. 202-213) in a mere twelve pages, readable and comprehensible to anyone with a rudimentary grasp of algebra. Of course, he posits the law in the context of a closed system (in which all variables are accounted for; how Cartesian can you get?), but that makes it no less applicable to open systems. It's just that the proofs are harder (analogous to the flaw in Adam Smith's simplistic formulation so elegantly generalized by John Nash's equilibrium).

Even though I'd been in the computer industry and a student of systems since 1957, that NLP training was the first time I'd heard of Ashby. At the time I recognized that this couldn't possibly be a formal presentation of a law (I'm an inveterate counterexample generator: think of gambling, where the occasional lucky winner has been stuffing quarters in a box with a distinct lack of variety in behavior; the winner who quits at that point "closes the system" from their perspective).
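Ashby's actual result can be made concrete with a small sketch. This is my own toy illustration, not anything from Ashby's text or your book: I assume a hypothetical outcome table where disturbance d and regulator move r combine as (d + r) mod n, and a regulator that greedily funnels outcomes into as few distinct values as it can. The point it demonstrates is Ashby's bound: the surviving outcome variety can never drop below the disturbance variety divided by the regulator's variety.

```python
import math

def achievable_outcome_variety(n_disturbances, n_moves):
    """Toy model of the Law of Requisite Variety.

    Disturbances are 0..n-1; the regulator may answer each one with
    any move in {0..k-1}; the outcome is (d + r) mod n.  The regulator
    sees each disturbance and greedily steers outcomes into as few
    distinct values as possible.
    """
    n, k = n_disturbances, n_moves
    chosen = set()  # outcome values the regulator settles on
    for d in range(n):
        reachable = {(d + m) % n for m in range(k)}
        if not (reachable & chosen):
            # Pick the farthest reachable outcome so it also covers
            # the next k-1 disturbances.
            chosen.add((d + k - 1) % n)
    return len(chosen)

# Ashby's bound: outcome variety >= ceil(disturbance variety / regulator variety).
for k in (1, 3, 9):
    v = achievable_outcome_variety(9, k)
    assert v == math.ceil(9 / k)
    print(f"regulator with {k} moves -> {v} distinct outcomes")
```

With one move the regulator is helpless (nine outcomes survive); with nine moves it collapses everything to a single outcome. "Only variety can destroy variety," in twenty lines.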
However, that oral summary of Ashby launched me on a search for the source, where I really did (I hope) come to understand the LRV. My own paraphrase is something like: "In any system, all other things being equal, that component of the system with more options (i.e., variety in behavior) will be more likely to achieve a desirable outcome." Not quite Ashby, but somewhat closer to that ideal, I like to think.

But the real value for me was personal. Once I understood, in my shortest paraphrase, that "systems reward variety," it became clear why "flexibility drills" were so important, why having more choice is preferable, and why my enduring persistence in trying to identify "the right way" was less useful than finding as many ways as possible to do something. I'm still working hard at finding ways to expand my repertoire. While rep systems and calibration and the meta-model, etc. were all important learnings, for me the real "eye-openers" were the power of presuppositions and the Law of Requisite Variety. While Ashby has been refined and evolved (like all good science), the LRV still stands as a useful model for understanding the value of variety.

So, I would encourage you to reconsider the utility of Ashby in NLP at several levels. While it may not have been at the core of the classic code, and I see traces of it interwoven with the new code, it has immense utility in helping people understand why it's in their self-interest to develop more options, more choices, more variety. On the other hand, perhaps that's just my personal bias toward the benefits derived from applying what I've learned of NLP, rather than toward defining the core of what NLP is. --Friday