Topic: | Re:Re:Re:Re:Re:Re:Re:Re:Proposal for Refinement of distinction between modeling and applications |
Posted by: | Lewis Walker |
Date/Time: | 05/01/2003 18:33:11 |
Hi Patrick,

Thanks for your detailed reply. I think I’m now out of my depth - and treading water! - yet learning a lot.

I think there are two views of epistemology, as defined by encyclopedia.com and your further comments. The first is that we each have a personal epistemology, consisting of the maps and models we apply to make sense of the world. This comes from the point well made in Whispering that there are multiple transforms between what is “out there” and how we make sense of it “in here”. Since our reality is at best co-constructed - what is really out there plus our interpretation of it - NLP is a tool which can both explicate and modify that epistemology.

The second view is a general epistemology - all about having a “community of the adequate” who can debate and agree on what is true, either from collectively similar personal experiences or from “external, verifiable scientific evidence”. Ken Wilber waxes lyrical on these points in his many writings. From this we can all agree on “what works and how it works”.

This is where this thread joins the one on research and statistics. The statistical view is only one way of doing validated research. It tends to deal with quantifiable data - things that can be externally measured. In the medical world - where I come from - that is great for drug research and the like, but not so good for relationships! Qualitative research, however, deals with the heart of personal epistemology. This is where issues such as beliefs, values and personal meaning can be examined and researched across contexts. When you have enough people who seem to think in the same way (e.g. about accessing cues, submodalities, etc.), you can infer general principles of epistemology from this “community of the adequate”.

I have now exhausted my thinking on this - you’ll be glad to hear!

Lewis.