Topic: Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:NLP Changework vs. Therapy and Coaching
Posted by: nj
Date/Time: 01/05/2003 20:52:36
Hello, Mr. Schertzer.

1. I earlier wrote, "A Science Of Representations might influence people to conceptualize humans as machines..." In response, you wrote, "I believe that cultural leanings already play toward conceptualizing humans as machines." But does the future necessarily contain a culture of people who believe that humans are just like machines? No. Given that you are aware of those cultural leanings, your choices of research can compensate for them.

2. You wrote, "I just believe it would have to be a lot less deterministic and more open-ended than you seem to believe it would be." A Science Of Representations could be open-ended. But scientists don't have to answer for how their research will affect the future; researchers don't decide what happens to their research products. What researchers can do is produce in their specialty area while acting indifferent to how their research will eventually be applied. That's why I make the point that researchers should consider the possible results of their work before they do the work.

3. You wrote, "You could ask, how do we represent machines and how do we represent humans, and how do these reps overlap, and not, and what seems or doesn't seem appropriate, and what strategies do we use to decide what's appropriate and how do we construct a concept such as appropriateness?" I wouldn't try to invent the methodology or the distinctions of a Science Of Representations to answer your questions. Instead, I would find the errors in an argument from analogy. The false conclusion of that argument would presuppose that any human responds well to being treated like a machine. My research activity would produce one or more arguments that falsify that presupposition. I could research behavioral alternatives to behaviors that rely on conceptualizing humans as machines. But I don't want to research behaviors that would be taught to humans by machines.
My trainees might have a hard time separating their concepts of their own functioning from their concepts of how their teachers function. To accomplish my goals, I'd have to be a good trainer, not just a good modeler.

4. You wrote, "Let's face it, we're talking about a huge, abstract nominalization, and yours may have nothing to do with mine." My advice might not suit you, or anyone else who might share my values or beliefs about the world. But if you think undesirable products will be developed, then you can decide whether to assist in producing them.

-nj