Education is a complex and messy endeavor. We all have our own ideas about how children should be raised, about how learning happens, and about what is important for children to learn. Schools have to deal with parents from across the political spectrum, each demanding what they think their children need.
Educational research that doesn't acknowledge this messiness, that tries to buy 'scientific' cachet with control and treatment groups but frames the questions too narrowly, is more likely to reinforce the values of one group than to deepen our understanding of the learning and teaching enterprise. (Of course, we're each more likely to see flaws like this in research that doesn't resonate with our own values.) In this post, I want to dissect a flawed study, together with my friend Ben, and then give links to some studies I've enjoyed reading. I would actually appreciate it if any of you would like to point out flaws in some of those. (I'll start a new post for each one, so we don't get all tangled up.)
“The Advantage of Abstract Examples in Learning Math”, a study done by Jennifer A. Kaminski, Vladimir M. Sloutsky, and Andrew F. Heckler, was published in Science magazine in April 2008 (you need a paid subscription to see the article) and highlighted in the New York Times soon after. I had responded to this study before I started blogging, in email to colleagues. It was brought to my attention yesterday by Ben Blum-Smith's new blog, Research in Practice, where he writes a great critique of it. I agree with all he says (here and here) and want to add a bit more.
The people who did the research, along with the unquestioning NYT author, say that the research shows that students learn math better with abstract examples than with concrete examples. Ben and I are saying that their research design is severely flawed, and that they've shown nothing useful.
Ben gives a great description of what the research folks were supposedly trying to teach, which was the properties of a "commutative mathematical group of order three". That's a fancy name for something not much more complicated than clock arithmetic. Imagine a clock that only has three hours on it, and instead of a 3 at the top, it has a 0. So 1+1 is still 2, but 1+2=0 and 2+2=1. This is the most common example used when people are first learning about these groups. The examples used in the research seem very contrived in comparison.
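The three-hour clock is just addition mod 3, which a few lines of code can make concrete. A minimal sketch in Python (the function name `add` is mine, not from the study):

```python
# The three-hour clock: the only hours are 0, 1, 2, and addition wraps
# around past the top, just like a real clock wraps around past 12.
def add(a, b):
    """Add two hours on the three-hour clock (addition mod 3)."""
    return (a + b) % 3

print(add(1, 1))  # 2
print(add(1, 2))  # 0 -- past the top of the clock, back to 0
print(add(2, 2))  # 1
```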
Both the research folks and Ben described a tennis ball factory, where you're keeping track of how many balls you have in hand, after you've put as many as possible into those 3-ball cans, but Ben's description makes a lot more sense than the one used in the research. (When I originally read this study over a year ago, I never saw the tennis ball example. In the one 'concrete' example I was able to find details for, they used a full cup for the 0, or identity element, which I would find confusing.)
People who actually study groups like this sometimes do it without numbers. The 'elements' of the group might be labeled a, b, c instead of 0, 1, 2. There are properties that can be studied, like identity elements and inverses. (0 is the identity because adding it to other elements doesn't change them. 1 and 2 are inverses because 1+2=0, the identity. These properties can make sense even when the elements aren't numbers.) So the researchers 'taught' this using 'abstract' examples for some subjects and 'concrete' examples for others. They quizzed all of the subjects using a group consisting of a vase, a ladybug, and a ring. Although concrete, these strange elements fit much better with the 'abstract' example than with the 'concrete' example. It's not surprising that the subjects whose training example was more similar to the quiz did better.
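The point that identity and inverses make sense without numbers can be checked mechanically. Here's a sketch in Python with the elements relabeled a, b, c (so 'a' plays the role of 0); the table is just the mod-3 addition table, and the variable names are my own:

```python
# The order-3 group with elements relabeled: 0->'a', 1->'b', 2->'c'.
# The full "addition" table, written out with no arithmetic at all.
table = {
    ('a', 'a'): 'a', ('a', 'b'): 'b', ('a', 'c'): 'c',
    ('b', 'a'): 'b', ('b', 'b'): 'c', ('b', 'c'): 'a',
    ('c', 'a'): 'c', ('c', 'b'): 'a', ('c', 'c'): 'b',
}
elements = ['a', 'b', 'c']

# The identity is the element that leaves every other element unchanged.
identity = next(e for e in elements
                if all(table[(e, x)] == x for x in elements))

# Each element's inverse combines with it to give the identity.
inverses = {x: next(y for y in elements if table[(x, y)] == identity)
            for x in elements}

# Commutative: the order of the two operands never matters.
commutative = all(table[(x, y)] == table[(y, x)]
                  for x in elements for y in elements)

print(identity)     # 'a'
print(inverses)     # {'a': 'a', 'b': 'c', 'c': 'b'}
print(commutative)  # True
```

Nothing in the checks above mentions numbers, which is exactly why the abstract labeling works.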
There is lots of narrowly focused research like this out there. It may be useful in physics to narrow a question down to one detail, when the interactions between the small parts are clear. But in social arenas, all the parts interact, in very complex ways. Research like this cannot tell us much of value, even when its design is less flawed.
Kaminski et al. want to say that their research tells us children learn math better without concrete examples. Their claim is very political. To promote it, they have done a number of studies with minor variations. (Googling their names, I see work done in 2003, 2006, and 2008.) I'd rather see education research that addresses the big, messy picture. Here's the MAA president-elect's take on this, and here is an article interviewing one of the 3 authors of the study.
Research that I've found more interesting is much broader. It doesn't chase the statistical power of double-blind designs, which can only come through narrowing the questions until they become too artificial to be of use.
I've been reading Jo Boaler's work on how students at all skill levels benefit from working together. At first glance, it may seem that tracking would allow the best students to go farther, and allow slower students to get a more solid grasp on what they're studying, but tracking is actually harmful to students at both ends and those in between. Boaler shows why, and shows how to make heterogeneous grouping work. One article is here, or you can read her book, What's Math Got to Do with It?
Alan Schoenfeld wrote the book Mathematical Problem Solving, in which he describes his very detailed research into the processes math students and mathematicians follow while attempting to solve a hard problem. He taught a course in problem-solving strategies, using Polya's framework, and gave a pre-test and post-test to those students. Compared to students taking a more typical math course, these students' problem-solving skills improved significantly more. Here is an article of his on a different topic, how mathematical conversation in the classroom promotes learning.
Both Boaler and Schoenfeld compare groups with and without the 'treatment' they believe is effective, and show evidence for their belief, as Kaminski et al. did. There are other sorts of research, which attempt to understand children's learning processes, but which don't have 'control groups'. One such project I'm interested in following is Measure Up, which introduces math through measurement and algebraic reasoning. Here's an article from that project.
Our biggest problem may not be understanding how children learn, but implementing the good ideas that come from this research. When most elementary teachers are uncomfortable with mathematics, what we need to focus on is how to help them. Liping Ma's book, Knowing and Teaching Elementary Mathematics, compares elementary teachers in the U.S. and China; the Chinese teachers understand the math much more deeply. A good summary of the book is here, and an article by Ma is here. And here's a piece by another researcher, Hung Hsi Wu, on the depth of understanding elementary teachers need.
What math education research have you found valuable?