Saturday, April 18, 2015

How Human Beings Learn

Western philosophy contains two great traditions: the empiricist and the idealist.

Those who work within the empiricist tradition begin with data and experience, then formulate and test hypotheses that account for the data.

Those who work within the idealist tradition begin with ideas, or better, with fictions, and select out facts that seem to prove their validity. Facts that tend to disprove the ideas or fictions are discarded.

The idealist tradition is “confirmation bias” run wild.

Freudian theories fall easily within the idealist tradition. Whether it is the notion that all dreams fulfill wishes or that all human beings are living out roles in a grand narrative, whether of Oedipus or of Narcissus, Freud was more interested in creating a new form of human being than in describing humans as they really existed.

Similarly, Freud was not interested in human cultures as they were really constituted. He created a fictional culture, the primal horde, and read all subsequent cultures as variations on it.

In the world of therapy, cognitive and behavioral theories begin with experience and draw ideas from it. Psychoanalytic therapists begin with ideas and theories, with fictional characters and fictional worlds. They select facts that appear to prove their fictions true, and discard the rest.

Going back in time, the great British empiricist David Hume argued that experience, or, in particular, sense impressions of objects, must precede our understanding of ideas. He meant that you cannot grasp the idea of redness unless you have seen something red. And you cannot understand roundness before you have seen or touched a round object.

He added, in A Treatise of Human Nature, that you learn the concept of time by observing sequences of events and the concept of space by observing the relationship between objects.

To which Immanuel Kant famously responded that human beings cannot have experience unless they have a prior notion of space and time.

Thus, Kant reaffirmed a tradition of Western idealism that had begun with Plato’s assumption that the objects we see in the world are mere appearances, shadows of the Ideas or Forms that they were designed to express.

Evidently, Aristotle inaugurated the empiricist or scientific method by observing objects in the world and collecting data about them. Many of his hypotheses about real objects proved to be incorrect, but he still deserves acknowledgement as the father of empirical science.

A Hume might have suggested that if you begin with ideas and appearances you never get to reality; you never relate to real objects in the real world. Since Platonists are more concerned with relating to Ideas or theories, they would have difficulty disagreeing. For them, facts are a mirage that keeps you from seeing the Ideas clearly.

Surely, something like this dichotomy was in Wittgenstein’s mind when he reflected—in his Philosophical Investigations—on the fact that we are capable of pronouncing sounds without knowing what the words constituted by those sounds mean.

Surely, some opera singers sing in foreign languages without knowing what the words mean. Children pronounce sounds and even mouth words before they know what those words mean. And some writers, addicted to the thesaurus, drop large words into their sentences to impress their readers, even though they do not know what the words mean.

At what moment, Wittgenstein asked, can a speaker be said to understand the meaning of a word?

He posited that a speaker understands the meaning of a word when he can use it correctly in a conversation. Understanding language lies in the way one uses it, not in the way one offers up a dictionary definition of the word.

Who decides the correct and incorrect ways of using words? Why, the way real people have used real language in their real lives. That is what establishes the right and wrong ways to use words.

Clearly, Wittgenstein did not say that words do not have any meaning. And he did not believe that philosophers or other intellectual luminaries should tell people how to use language.

By my lights, language usage is the ultimate free market.

Such were my thoughts when I started reading Alison Gopnik’s column this morning in the Wall Street Journal. Gopnik argued that children learn in roughly the way scientists conduct experiments: by beginning with experience and sense impressions … which is how David Hume believed human beings learn.

Gopnik explained:

Watch a 1-year-old baby carefully for a while, and count how many experiments you see. When Georgiana, my 17-month-old granddaughter, came to visit last weekend, she spent a good 15 minutes exploring the Easter decorations—highly puzzling, even paradoxical, speckled Styrofoam eggs. Are they like chocolate eggs or hard-boiled eggs? Do they bounce? Will they roll? Can you eat them?

Some of my colleagues and I have argued for 20 years that even the youngest children learn about the world in much the way that scientists do. They make up theories, analyze statistics, try to explain unexpected events and even do experiments. When I write for scholarly journals about this “theory theory,” I talk about it very abstractly, in terms of ideas from philosophy, computer science and evolutionary biology.

Interestingly, babies learn most when their expectations are disappointed, that is, when reality refuses to confirm their ideas.

In an amazingly clever new paper in the journal Science, Aimee Stahl and Lisa Feigenson at Johns Hopkins University show systematically that 11-month-old babies, like scientists, pay special attention when their predictions are violated, learn especially well as a result, and even do experiments to figure out just what happened.

They took off from some classic research showing that babies will look at something longer when it is unexpected. The babies in the new study either saw impossible events, like the apparent passage of a ball through a solid brick wall, or straightforward events, like the same ball simply moving through an empty space. Then they heard the ball make a squeaky noise. The babies were more likely to learn that the ball made the noise when the ball had passed through the wall than when it had behaved predictably.

In a second experiment, some babies again saw the mysterious dissolving ball or the straightforward solid one. Other babies saw the ball either rolling along a ledge or rolling off the end of the ledge and apparently remaining suspended in thin air. Then the experimenters simply gave the babies the balls to play with.

The babies explored objects more when they behaved unexpectedly. They also explored them differently depending on just how they behaved unexpectedly. If the ball had vanished through the wall, the babies banged the ball against a surface; if it had hovered in thin air, they dropped it. It was as if they were testing to see if the ball really was solid, or really did defy gravity, much like Georgie testing the fake eggs in the Easter basket.
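That pattern, learning more from violated predictions, can be put in code. What follows is a minimal toy sketch in Python; it is only my illustration of the general idea, not Stahl and Feigenson’s model, and every name and number in it is an assumption:

```python
# Toy sketch of surprise-weighted learning (an illustration, not the
# researchers' model): beliefs update more strongly when an observation
# violates the current prediction.

def update_belief(belief, observation, base_rate=0.1):
    """Move `belief` toward `observation`, scaled by how surprising it is."""
    surprise = abs(observation - belief)             # prediction error
    learning_rate = min(1.0, base_rate * (1.0 + surprise))
    return belief + learning_rate * (observation - belief)

belief = 0.0
for obs in [0.1, 0.0, 0.2, 5.0, 5.0]:  # the 5.0s play the "impossible" events
    belief = update_belief(belief, obs)
    print(f"observed {obs:3.1f} -> belief is now {belief:.2f}")
```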

In its earliest configuration the human mind prefers to learn how to deal with reality. It is stimulated by occurrences that disconfirm its prior notions of how reality is constituted. This would tend to affirm that human organisms are hardwired to adapt and survive in the real world.

This would tend to refute the Freudian notions of the pleasure principle and primary narcissism. Even Jacques Lacan noted one day that infants are clearly not as self-absorbed and self-involved as Freudian theory would suggest. They are, Lacan noted, intensely interested in the real objects that make up their worlds.

This implies that those who prefer to hold on to their stale ideas, even when reality has contradicted them, suffer from a “confirmation bias” that has induced them to withdraw from the world:

Adults suffer from “confirmation bias”—we pay attention to the events that fit what we already know and ignore things that might shake up our preconceptions. Charles Darwin famously kept a special list of all the facts that were at odds with his theory, because he knew he’d otherwise be tempted to ignore or forget them.

Babies, on the other hand, seem to have a positive hunger for the unexpected. Like the ideal scientists proposed by the philosopher of science Karl Popper, babies are always on the lookout for a fact that falsifies their theories. If you want to learn the mysteries of the universe, that great, distinctively human project, keep your eye on those weird eggs.

Of course, Popper famously rejected Freudian theory because he believed that its adherents would not accept that any evidence could falsify it. This shows that Freud was more interested in his theoretical constructions than in the way that human beings learn.

6 comments:

Anonymous said...

If babies have any instinctive or experiential sense at all of what is "good" for them, then they should have a preference to experience pleasure in the presence of others, particularly adults. Freud's ideas were those of a giant former infant pretending to authority.

Ares Olympus said...

This is a strange way to introduce a topic: saying there are "two great traditions" and immediately afterwards dismissing the second tradition as "fiction."

If both are truly "great traditions", then isn't it the duty of a good writer to show the strengths and weaknesses of each tradition?

Carl Sagan expressed the empiricist's tradition in his "A world not made for us" essay.
http://www.haveabit.com/sagan/24
And this zinger:
"Science has taught us that, because we have a talent for deceiving ourselves, subjectivity may not freely reign."

But what is Sagan dismissing here? What is subjectivity? It has something to do with "inner experience": we all have our own "inner experiences", and we can infer that others also have "inner experiences", but an empiricist can't really know.

I don't know how best to describe it, but we can assume that a newborn baby experiences the world as a part of itself, and only slowly over time does it come to separate inner and outer, self and other.

And children also anthropomorphize the world: unlike an objective scientist, they assume everything in the world has its own subjective experiences, and even intentions, so a child might talk to an object that frustrates him as if that object had intent and agency.
https://en.wikipedia.org/wiki/Anthropomorphism

So if we want to do science, as Richard Feynman and Carl Sagan suggest, we need repeatable experiments giving objective results that everyone can agree on.

But there is apparently a different category of questions that can't be answered with such objective results. E. F. Schumacher called these "divergent problems", where the solution doesn't exist in any one reductive explanation but instead in a tension of opposite criteria, each of which is needed and, if ignored, leads to something inferior.

So an empiricist can say "let's try something, and measure the results" and "let's try something else, and measure the results", and after trying 10 different approaches, see that some are better than others, and that some are very different and both good, but good in different ways and bad in different ways. Then the problem is a synthesis: how can we try something that contains the tension of opposites without allowing any one side dominion? So that's more of an art than a science.

So lastly, what are the idealists, and why do they cause such trouble? You can see idealists fail because they use the left hemisphere's focused attention and reduce out information that doesn't fit the model. Then I guess you have ideology: something that is supposed to completely represent reality while actually representing only part of a divergent problem. That's trouble.

But what's the answer to that trouble, if religion contains subjective experience that objective scientists can't validate?

And if psychology is a "science", how can it say anything about subjective experience? How can it deal with things like the unconscious, where we know things without knowing we know them?

I prefer Jung, with his idea of archetypes as "instinctual energies" within us that can carry different points of view, different survival strategies, different learning styles, and different kinds of knowledge, each with its own attractors.

And Jung says there is a "shadow" of things we repress, which controls us through our unconscious. So this is probably closer to religion than science, but if so, then what is religion? Is it just "fiction", or maybe a useful fiction? How would you know? And maybe that explains why religions fracture so easily.

Joseph Campbell would talk of myths and metaphors, and that might help us recognize that the words are not the meaning, and that some meanings can only be "known" within, neither clearly communicable nor objectively verifiable, while still not being simple childish fantasy to be ignored.

Sam L. said...

It's simpler and easier to deal with and manipulate an idea you made up than it is to deal with facts and reality.

Ares Olympus said...

Sam L. said... It's simpler and easier to deal with and manipulate an idea you made up than it is to deal with facts and reality.

That's a good hypothesis, but is it falsifiable? Or is it a strawman argument, as if one side were pure theory and the other side pure observation?

Aristotle, for example, assumed light objects fall slower than heavy objects. His empirical evidence came from dropping a feather and a rock, limiting cases that were obvious.

So his failure was to create a generalized hypothesis without testing the cases where it was false.

Another example: I remember teaching a cousin algebra. She didn't understand all the rules, so she'd just move terms around without "testing" whether each move was legitimate or not.

So I showed her she could substitute numbers in for each letter and "test" whether the operation preserved equality or not.

Without testing, her mind didn't complain, but as soon as I showed that explicit numbers could be used, the contradictions became apparent.

But she'd still be "stuck" if she could test examples to identify BAD operations but not have any IDEA what a GOOD operation would be.
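A minimal sketch of that substitution test in Python; the helper name and the two sample rewrites are my own illustrations, nothing from the comment, and passing the test only means no counterexample turned up, which is evidence rather than proof:

```python
import random

def same_for_sampled_x(lhs, rhs, trials=100):
    """Test an algebraic rewrite by substituting random numbers for x."""
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        if abs(lhs(x) - rhs(x)) > 1e-9:
            return False   # counterexample found: a BAD operation
    return True            # no counterexample found: evidence, not proof

# A legal move: 2(x + 3) -> 2x + 6
print(same_for_sampled_x(lambda x: 2 * (x + 3), lambda x: 2 * x + 6))  # True
# An illegal move: 2(x + 3) -> 2x + 3
print(same_for_sampled_x(lambda x: 2 * (x + 3), lambda x: 2 * x + 3))  # False
```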

I also consider that B. F. Skinner used a "science" he called behaviorism. From Pavlov's work, we saw that by ringing a bell while giving a dog food, after "training" the dog would salivate at the sound of the bell alone, even with no food.

So from that theory you find that positive and negative reinforcement can be used to control behavior, and it even works in humans.
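That conditioning story has a standard textbook formalization, the Rescorla-Wagner rule. Here is a minimal sketch of it in Python; this is my own addition with illustrative numbers, not anything the comment supplies:

```python
# Sketch of Pavlovian conditioning via the classic Rescorla-Wagner rule:
# associative strength grows in proportion to the prediction error.

def condition(trials=30, alpha=0.3, reward=1.0):
    strength = 0.0                                # bell -> food association
    for _ in range(trials):
        strength += alpha * (reward - strength)   # error-driven update
    return strength

# After training, the bell alone strongly predicts food (salivation).
print(f"association after training: {condition():.3f}")  # approaches 1.0
```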

And Skinner wasn't really interested in controlling others, but in understanding how to help individuals control themselves. For example, he identified that children who were able to delay gratification were better able to moderate their own desires and impulses.

Yet Skinner had other "ideas". He believed parenthood was too important a job to be left to amateurs, so instead a small number of people (like daycare providers and teachers) should learn all the proper techniques to help children learn to become autonomous adults. So he believed an ideal situation would be that a child never knows who his parents are and just sees himself as part of a larger community, which he hypothesized to be about 200-1000 individuals, the largest size we can hope to interact with.

So where did he go wrong? Or maybe he was right and we just need to test it? If only those darn backwards beliefs from the Bible didn't get in the way, and parental love. I guess Huxley's Brave New World also projected that sort of "perfectly managed society", while still allowing the savages to be free to live in the mud outside the city walls.

Anyway, we're agreed it's "ideas we made up", and perhaps there is more to life than optimal social performance? Maybe someone can say humans can't develop properly without parental bonds?

So on such ideas, since we don't know, we can have a "market place of ideas": rather than forcing certain ideas top-down, allow individuals to try their own way. Some fraction will end up better than others, and we can hope that fraction has enough kids to outbreed the failed techniques, and to care for their casualties.

But again, just like the algebra rules, you can try lots of things, but how do you isolate cause and effect? When do you decide one approach is bad and try something different?

How should kids be disciplined? Is a 5-minute time-out better than a 5-second spanking? Which will best help kids? And if the spanking is superior, is it always superior? Or is it sometimes falling feathers vs. rocks, where air resistance matters, and sometimes helium buoyancy that changes everything?

Complexity comes quickly, and "simple and easy" certainly means guessing one idea and sticking with it. That consistency, however flawed, might be better than confusing kids with 72 different models of discipline that make them completely distrust reality as a safe place.

Anonymous said...

I made the comment about Freud being a giant former infant.

The terms delayed gratification and impulse control form a tautology. Think about it.

I once witnessed two toddlers playing in the sandbox. A boy had an ant on his finger and observed it carefully like a little scientist. The girl shouted, "An ant! Kill it! Kill it!" But if he kills it he can't learn from it. His grandfathers were a scientist and psychologist, his mother is a teacher, and his father is a chiropractor.

With regard to Skinner's concept of raising children, he may be idealizing a personal vision of alloparenting. Most women, and somewhat fewer men, would not tolerate at all the idea of not knowing one's own children from strangers. I have read that there may have been tribes on islands where the females were promiscuous and the males were effectively relegated to being joint fathers of the offspring, since paternity could not be certain in such a community, whereas maternity is always certain.

Debates over how the government acquires and spends money are those of an alloparenting species.

Anonymous said...

I think there are, unfortunately, documented cases of feral children.