What if the federal government spent a billion tax dollars over nearly three decades to study thoroughly the question of which teaching method best instills knowledge, sharpens cognitive skills, and enhances self-esteem in young children? And what if such a study were able to determine exactly which method best accomplishes all three? Would American parents like to know about it?
The study and its conclusion both exist. Project Follow Through, initiated in 1968 under Lyndon Johnson’s War on Poverty to “follow through” on Project Head Start, spent an estimated one billion dollars through the Office (now Department) of Education, the Office of Economic Opportunity, and dozens of private sponsors to test and evaluate an array of very different educational methods. Despite Project Follow Through’s experimental nature, it was also a fully funded and comprehensive social services program. A total of 700,000 students in 170 poor communities around the nation were involved. Parents were allowed to decide which method or model would be adopted at their local school; the government then funded the model through such sponsors as universities and private research institutes.
The last funds for the study, which was the largest educational experiment ever conducted, were disbursed in 1995. What happened then is worthy of another study—this one in the politics of bureaucracy. “The education profession has never been particularly interested in results, especially if they run counter to the prejudices of the profession,” says Douglas Carnine, a professor of education at the University of Oregon who was involved with Project Follow Through when his university was one of its sponsors.
By the mid-1970’s, Stanford Research Institute had gathered data on the project, which it then handed over to Abt Associates of Cambridge, Massachusetts, for analysis. Nine educational methods were compared. Of these, three fell into each of three general types: basic skills, a behavioristic approach similar to the Suzuki Method; cognitive, a learning-to-learn approach that stresses the child’s discovery or “construction” of knowledge on his own; and affective, a “whole-child” approach that aims to boost student self-esteem on the theory that a higher sense of self-worth promotes academic achievement.
A battery of five tests was administered to more than 9,000 Project Follow Through students in kindergarten through third grade. They were matched with a control group of 6,500 students from other school sites. The 11 “outcome measures” assessed by these tests consisted of basics like spelling and computation, problem-solving ability or cognition, and self-concept or self-esteem (“affective development”).
The results? “Direct Instruction,” one of the basic skills approaches, worked best with these children in all three areas of development. Disadvantaged children who ordinarily would have been expected to achieve in the 20th percentile range performed at or near the 50th percentile (the norm) in math, reading, spelling, and language usage. Howard N. Sloane, professor emeritus at the University of Utah, states that the outcome of Project Follow Through proves that “stressing basic skills produces the greatest gains in problem solving and analytical [cognitive] skills,” as well.
The results also showed that those methods aimed specifically at improving cognition or boosting self-esteem had no effect, or even a negative effect, on all three types of measured development. For example, writes researcher Cathy Watkins,
Models [like Direct Instruction] that emphasized basic skills produced better results on tests of self-concept than did other models. . . . The models that focused on affective development had negative average effects on measures in this domain.
Direct Instruction (DI or DISTAR) was devised by Siegfried Engelmann back in the early 1960’s, when he was teaching his own young sons. According to researcher James Baumann, in a DI classroom,
the teacher, in a face to face, reasonably formal manner, tells, shows, models, demonstrates and teaches the skill to be learned. The key word is teacher, for it is the teacher who is in command.
Gary Adams, author of a recent evaluation of DI, is quick to add that
the difference is the curriculum, not just the method. It’s the sequence of concepts presented that really matters. Not one other model has been field tested to the extent this one has . . . with very good teachers and very difficult kids.
Skills such as reading, spelling, and computation are presented step by step, with reinforcement ensuring that each child has mastered one step before progressing to the next.
In 1975, Thaddeus Lott became principal of Wesley Elementary, part of an inner-city Houston school district where every pupil qualified for Title I assistance. At that time, only 18 percent of pupils scored at grade level on the Iowa Tests of Basic Skills. By 1980, 85 percent did so. Why the dramatic change? Lott had purchased DI materials (mostly without the district’s financial help, since DI was not—and is still not—officially approved by the state board of education) and trained Wesley teachers in the method. Those teachers unwilling to be accountable to Lott’s new regime were transferred.
Academic improvement was so dramatic that the district superintendent, who had resented Lott and tried to drive him out, was herself removed, and Lott was named head of what essentially became a charter district. Successes multiplied; by 1996, 100 percent of Wesley third-graders passed the Texas Assessment of Academic Skills in reading, and the other schools in the charter district were beginning to follow the same upward curve. Lott is now in demand across Texas to “turn our school around.”
Successes like Wesley Elementary did not guarantee good press for DI, however. In 1977, the Ford Foundation hired four researchers to re-evaluate Abt Associates’ determination that Direct Instruction was the superior method. Published as “No Simple Answer” in the prestigious Harvard Educational Review, this re-evaluation by E.R. House, G.V. Glass, L.F. McLean, and D.F. Walker argued that DI’s superiority was of little statistical significance once the data were re-adjusted in various ways, that it owed more to environment than to the method used, and that, in any case, while such a method might work with at-risk students, it was irrelevant to the needs of “normal” students.
Bonnie Grossen, editor of the journal Effective School Practices, replies that the researchers’
reanalysis not only continued to show the much greater strength of DI on all measures, but they even proved how academically valid the measures were. What they did was change the question because they didn’t like the answer. Their argument was that we should ignore the question of what leads to better academic learning. So they essentially gave people permission to ignore the whole point of the project. It’s hard to figure out why they would do such a thing.
Douglas Carnine thinks he can figure it out: Most education professionals belong to “a closed community of devotees—college professors, curriculum specialists, etc.—who follow popular philosophies rather than the research on what works.”
Given the financial clout of the Ford Foundation, the re-analysis was all that was needed to signal the politically correct attitude to take toward the evaluation by Abt Associates. After the Harvard article appeared, all nine Project Follow Through models were recommended equally to school districts through the National Diffusion Network. By 1982, in a self-defeating attempt to equalize results, the less effective models were receiving higher levels of funding than the more effective ones, and Project Follow Through had been dropped down the memory hole. Today, when the head of the National Council of Teachers of Mathematics is asked her opinion of Project Follow Through, she can answer in all truthfulness, “I have never heard of it.”
Marshall “Mike” Smith, the current deputy secretary of education, was involved with the project at Harvard during the early 1970’s. His memory is that the program “was a very ambitious effort to try and understand how children learn. And we found out how difficult it is to change the schools to make them more effective.” Were findings swept under the rug or “analyzed away,” as critics charge? “Oh, I think that’s wrong,” he continues. “There wasn’t just one finding that came out of Follow Through. There was a general finding that highly structured classes focused on basic skills produced better results on basic skills tests.”
Surely this is important? As if anticipating Smith’s dismissal of Direct Instruction’s superiority, education researchers Carl Bereiter and Midian Kurland wrote in 1981 that
It makes no sense whatever to call it “bias” when an achievement test awards higher scores to students who have studied the domain covered by the test than to students who have not. It would be a very strange achievement test if it did not.
Nevertheless, opponents of methods like Direct Instruction, such as University of Arizona professor Kenneth Goodman, are fond of calling such methods “oppressive and inhumane” and lethal to the innate creativity of children. Since behavioristic approaches “sound bad,” it doesn’t matter that they work.
These opponents are not only well organized but also occupy the highest positions in the education establishment. The entire Department of Education is in the hands of anti-“basics” bureaucrats, as are the National Council of Teachers of Mathematics and the National Science Foundation. The NSF’s assistant director for education and human resources, Luther S. Williams, was reprimanded recently by his boss for firing off a blunt threat to California’s state school board when it dared to re-adopt basic math standards after several dismal years of “creative” math appreciation.
In addition to Texas and Florida, Oregon and Washington have lately reconsidered the exclusion of Direct Instruction from their lists of state-approved teaching methods. A number of state legislatures, including those of California and Maryland, have seen measures introduced that would restore phonics instruction to the curriculum. Phonics had been driven out by “whole language,” just the sort of sounds-good/doesn’t-work method discredited by the findings of Project Follow Through.
Gary Adams shakes his head. “The most puzzling thing is how the very models that the data showed to be ineffective and even harmful are still being pushed. Parents should be asking, ‘Where is the proof these programs work?’ Instead, the screaming only starts when the awful test results come in.” To Cathy Watkins, the lesson of Project Follow Through has been this: “The fact that effective teaching methods are available does not mean they will be adopted.” Doug Carnine is more sanguine. “I think education is at the same point medicine was about 160 years ago, the point when it discovered that most of what it believed in was wrong,” he predicted recently.
What was Project Follow Through able to accomplish? It showed that the surest way to ground children in basic skills is to teach them the content of those skills in a thorough, authoritative fashion. It demonstrated that “higher order,” “critical” thinking and “enhanced cognition” are heavily dependent on the acquisition of basic skills. And it proved that mastery of basic skills builds self-esteem, not the other way around.
These findings remain valid even though the education bureaucracy wants to kill off a method that has been proven to work. Educators who are more enamored of techniques that sound good but don’t deliver cannot change facts. Nor can any Ford Foundation “re-evaluation.”