The Elementary and Secondary Education Act, last reauthorized in 2001 as the No Child Left Behind Act, is up for reauthorization again.  This process typically entails legislators tweaking the bill—a caveat here, a zinger there.  Almost always, it translates into more money.

Representatives George Miller (D-CA) and Howard “Buck” McKeon (R-CA) of the Committee on Education and Labor recently released a “discussion draft” of NCLB.  They probably meant well, but it is clear, from the Title I portion alone, that the act remains mired in nonacademic pursuits, far removed from proficiency in the basics.  (Where is there a place for information relating to real learning capabilities—visual and auditory memory, visual identification, spatial and abstract reasoning, concentration, perceptual speed, hand-eye coordination, and thought-expression synchronization?)  Pages 307-317 confirm that a primary goal of the legislation is to build a permanent profile of every student and teacher and to make these profiles accessible on a need-to-know basis to any entity that calls itself a research or civil-rights group.  While there is a refreshing nod to parents (they get to view materials) and language concerning security from unauthorized parties (including a requirement to destroy files after a prescribed period), none of these stipulations carry viable penalties for noncompliance.  In fact, most of them are not technologically feasible—there is, for instance, no way to “prove” that a backup file has not been created or that a parent has been given complete, unaltered records.

Concern over dossier-building has risen since the September 11 terror attacks, when the term data mining hit the news.  Most people had never heard of it.  But schools have been doing it since the 1970’s.  Back then, it was called psychographics.

Psychographics, which targets specific population segments through market research, has its origins in advertising.  The concept was picked up by political strategists to target socio-demographic groups so that each voting bloc heard what it wanted to hear about a candidate or issue.  A primary weapon in their arsenal was the questionnaire (or survey)—in effect, a “test.”  The information was gathered both blatantly and surreptitiously.

Webster’s New World Communication and Media Dictionary defines psychographics as “the study of social class based upon the demographics . . . income, race, color, religion, and personality traits.”  These characteristics, it states, “can be measured to predict behavior.”  So advertisements are based on surveys seeking out people who have certain characteristics in common.

The marketing rationale behind collection of behavioral data is that the best predictor of what you might buy tomorrow is whatever you bought yesterday—your “purchase history.”  Political experts realized that the same could be said for what a person believes.  Psychologists with advanced degrees in statistics had a new job.  Whether the product being “sold” was coffee, “same-sex marriage,” or a candidate for public office, the best predictor of what a person (or a voting bloc) would do in the future was whatever he (or it) did, believed, or supported in the past.  Much of this is ascertainable from public records—publications subscribed to, religious and political affiliations, charities and causes contributed to, shopping habits, hobbies, stocks, occupations.  Then the technology evolved.  Computers proved excellent tools for cross-matching and linking information in such a way as to entice special interests—pharmaceutical companies, college admissions officials, insurance companies, and government agencies—who were willing to pay well for such insights.
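
For readers who want to see the mechanics, the sketch below illustrates what “cross-matching and linking” amounts to in practice.  It is a minimal, hypothetical example written in Python: the identifiers, record sets, and prediction rule are all invented for illustration and are not drawn from any actual marketing or school database.

```python
# A minimal, hypothetical sketch of "cross-matching and linking": two record
# sets that share an identifier are merged, and past behavior is tallied into
# a crude prediction.  All names and data here are invented for illustration.

purchase_history = {
    "subject-017": ["coffee", "coffee", "tea"],
    "subject-042": ["coffee"],
}

public_records = {
    "subject-017": {"subscriptions": ["Gardening Weekly"], "donations": ["food bank"]},
    "subject-042": {"subscriptions": ["Auto Digest"], "donations": []},
}

def build_profile(subject_id):
    """Link both sources under one identifier and guess the next purchase."""
    history = purchase_history.get(subject_id, [])
    records = public_records.get(subject_id, {})
    # The crude "prediction": whatever the subject did most often in the past.
    likely_next = max(set(history), key=history.count) if history else None
    return {
        "id": subject_id,
        "history": history,
        "records": records,
        "likely_next_purchase": likely_next,
    }

for subject in purchase_history:
    print(build_profile(subject))
```

Once two sources share an identifier, merging them and scoring past behavior is trivial; the sophistication lies only in how many sources are joined and how the scores are weighted.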

By canvassing for opinions and preferences, technically known as values and lifestyles (VALS) data, and cross-matching these with public and private records, analysts found that they could establish areas of commonality across socioeconomic, demographic, political, and religious groups.  If necessary, they could get down to the individual level.

By using VALS data, public-relations and advertising firms began to target marketing “packages” to specific groups, and even to individuals—through the mail, the news media, the internet.  It then occurred to educators that they could do likewise.

The Miller-McKeon draft demonstrates a troubling lack of historical context.  There seems to be no awareness that yesterday’s psychographic surveys are today’s school “assessments.”  Experts have become so skilled at phrasing their questions, inserted into academic tests and class questionnaires alike, that the “target subjects” (pupils) have no idea just how much they are divulging.  The result is a behavioral baseline, a profile—retained in databases for posterity.

Michigan’s school code specifies that only those who have “earned doctorates in psychology . . . and related behavioral sciences” are qualified to “interpret” assessments.  If assessments were not psychological profiles masquerading as tests, would such a requirement be necessary?  Worse, the seemingly unrelated pieces of academic and personal data, which reveal political leanings, have been fed into “predictive” computer models.  Today, these can serve to eliminate undesirables from any profession that might entail leadership or influence.

Herein lies the danger of out-of-control data collection, especially of nonacademic, subjective opinions.  Not only is a child denied the luxury of changing his mind on controversial topics, but youthful opinions can now be linked with family and other proprietary information.

Complaints that even top students were being shut out of prestigious universities, for reasons that had little to do with ability or grades but everything to do with beliefs, started surfacing ten years ago.  Today, it is not unusual for a prospective student to get a letter stating that, even though the applicant has a stellar record, university officials have decided that he or she “might be happier somewhere else.”  How does a parent argue with that?  Yet, page 307 of the Miller-McKeon draft trusts “authorized” organizations to represent their interests truthfully when seeking access to data-collection systems.

Cradle-to-grave data gathering on citizens was conceived within the education establishment.  It started with the eight-state Cooperative Accountability Project in the 1970’s.  A decade later, a watershed document out of the National Institute of Education (then an agency within the U.S. Department of Education) entitled “Measuring the Quality of Education” was quietly circulated.  Coauthored by NIE’s Archie LaPointe and Willard Wirtz, this paper advocated collecting “noncognitive data” from students—subjective, opinion-oriented information.  It built on two separate 1969 works: Walcott Beatty’s Improving Educational Assessment and an Inventory of Measures of Affective Behavior, and the anthology Crucial Issues in Testing, by the late Ralph Tyler and Richard Wolf.

Measuring the Quality of Education recommended exchanging excellence for “functionality” and “getting into students’ personal characteristics” by improving upon existing “educational” databanks: the Common Core of Data, the Universe Files, and the Longitudinal Studies—already collecting massive data on schoolchildren in clunky, but nevertheless viable, systems, of which the public was unaware.  Walcott Beatty’s tome emphasized the importance of collecting “noncognitive” details on students’ lives, noting that implementation must “avoid the appearance” of a national initiative.  LaPointe and Wirtz echoed the latter point.

Ralph Tyler, a pioneer in the field of behavioral testing, was the father of the “whole-child” theory of schooling, which led to a general glorification of youth and, eventually, to children’s tyranny over adults.  Tyler was also a former head of the Carnegie Foundation for the Advancement of Teaching and its multimillion-dollar offshoot, the Educational Testing Service.  In Crucial Issues in Testing, he emphasized the need for deception in testing, asserting that there “are occasions in which the test constructor [finds it necessary] to outwit the subject so that he cannot guess what information he is revealing.”

These documents should have set off alarm bells with investigative journalists.  What kind of “tests,” after all, would require deception?  Instead, reporters were diverted by teacher-union press releases over salaries, “open” classrooms, eliminating classroom competition, and other matters.

In 1985, a white paper coauthored by computer experts George Hall, Richard M. Jaeger, C. Philip Kearny, and David E. Wiley was released, entitled “Alternatives for a National Data System of Elementary and Secondary Education.”  It offered the federal government two options for obtaining nonacademic information from students.  The following year, Education Week announced the selected option as an innovation in information-gathering with the headline “Radical Overhaul Offered for E.D. [Education Department] Data Collection.”  The pilot version morphed into the Elementary and Secondary Education Integrated Data System (ESIDS).  Bigwigs in the Department of Education alleged in 1991 that ESIDS never existed.  Confronted with Appendix E of their own Nation’s Report Card, where it was listed, they changed their tune.

ESIDS evolved into SPEEDE/ExPRESS, the Standardization of Postsecondary Education Electronic Data Exchange/Exchange of Permanent Records Electronically for Students.  SPEEDE/ExPRESS replaced the old paper folder with an electronic portfolio of pupil information, including psychological profiles and a rudimentary examination of students’ families.

This chronology is critical to any reauthorization debate.  Representatives Miller and McKeon should know that the collection of such information as a student’s membership in groups, advocated in their proposal as though it were new, is already part and parcel of school data collection.  In 1991, the Department of Education denied collecting noncognitive data; now it celebrates doing so (and, indirectly, federalizes curriculum, too)—all under the umbrella of “compelling state interest.”

Noncognitive questions are carefully inserted into assessments (formerly called tests) so as to “avoid the appearance” of a nationalized curriculum, just as the 1969 and 1981 documents advised.  Scores are based primarily on knee-jerk responses, not facts.  For example, in May 2004, a 194-question survey, given to 11th-graders at University/Rincon High School (Tucson Unified School District) as part of an Advanced Placement U.S. history course, asked students to describe themselves by responding to statements such as the following:

—I consider myself outgoing and spontaneous.

—I consider myself basically quiet and shy.

—I consider myself able to persuade my peers that my opinion is correct.

—My parents feel they should make a significant contribution of time and energy to society.

—My family relationships are generally satisfying.

No single response to any one of these statements is likely to brand anyone.  It is the totality of the responses—the trends—that produces a behavioral profile.
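
A minimal, hypothetical sketch can make this concrete.  The items, trait labels, and scoring rule below are invented for illustration (no actual assessment’s scoring key is being reproduced); the point is simply that each innocuous answer nudges a tally, and it is the accumulated tallies, not any single answer, that constitute the profile.

```python
# Hypothetical sketch of how innocuous answers aggregate into a profile.
# The items, trait labels, and scoring rule are invented for illustration;
# they are not taken from any actual assessment's scoring key.
from collections import Counter

# Each survey statement nudges one or more trait tallies when marked "agree."
ITEM_TRAITS = {
    "I consider myself outgoing and spontaneous": ["extroversion"],
    "I consider myself basically quiet and shy": ["introversion"],
    "I consider myself able to persuade my peers": ["leadership"],
    "My parents feel they should contribute time and energy to society": ["civic-minded family"],
    "My family relationships are generally satisfying": ["family cohesion"],
}

def profile(responses):
    """Tally traits across all 'agree' answers; the totals are the profile."""
    tally = Counter()
    for item, agreed in responses.items():
        if agreed:
            tally.update(ITEM_TRAITS.get(item, []))
    return tally

answers = {
    "I consider myself outgoing and spontaneous": True,
    "I consider myself able to persuade my peers": True,
    "My family relationships are generally satisfying": False,
}
print(profile(answers))  # Counter({'extroversion': 1, 'leadership': 1})
```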

Most assessments ask about time spent with family members; use of tobacco, alcohol, and drugs by the student and family members; suicidal thoughts; and contraception.  Such information can be especially revealing when it is cross-matched with responses from such computerized queries as the following (taken from an older version of the Metropolitan Achievement Test):

—Number/type of books in the home

—Receipt of a daily newspaper

—Number of parents (and others) in the home

—Time spent with friends (after school and evenings)

—Time spent watching television or videos

—Frequency of home computer use

—Frequency of discussing things studied at school with someone at home

A Nebraska Adolescent Health Survey created by the University of Nebraska asked high-school students whether they considered themselves “religious” and what they thought about when they thought of sex.  Parents of students at Jefferson High School in San Antonio, Texas, were shocked when “assessments” revealed dozens of explicit sexual questions too offensive to reprint here (the terms oral and anal being the least repugnant).  Other questionnaires probe the degree to which a pupil is attracted to persons of the same sex; whether the pupil cries a lot; whether the student has trouble getting his “mind off certain thoughts”; and a list of “worries”—among them, “Dad hitting Mom.”  Suppose Junior was angry with his father that morning; he might well check the highest rating (“very much”) on “Dad hitting Mom”—with life-altering results.

The only “testing” that is directly tied to teaching methodology is in noncognitive areas—areas whose goal is to modify student viewpoints rather than to demonstrate knowledge.  Numerical codes linking assessments to curriculum in such subjects as sex education and social studies are sometimes found right on the covers of the teachers’ guides.  Social studies may include politically charged queries—on race, the United Nations, abortion, war.  Academic knowledge is included, but it appears to be of secondary importance.

The Miller-McKeon proposal defends the “unique student identifier” as a guarantee of privacy.  Such identifiers have, in fact, been around for decades.  Identification schemes have run the gamut from bar codes to birthdates linked to class hour to color-coded sticky labels—all aimed at deceiving the child into believing that assessment responses are anonymous.
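
A brief, entirely hypothetical sketch shows why a “unique student identifier” provides pseudonymity rather than anonymity.  The tables and field names below are invented; the only point is that, so long as a crosswalk linking the identifier back to the enrollment record is retained anywhere, re-identifying a supposedly anonymous response is a single lookup.

```python
# Hypothetical sketch: a "unique student identifier" pseudonymizes, it does
# not anonymize.  The tables and field names are invented for illustration.
# As long as a crosswalk from identifier to enrollment record exists anywhere,
# re-identifying an "anonymous" assessment response is a single lookup.

crosswalk = {
    "STU-93417": {"name": "J. Doe", "school": "Example High", "dob": "1991-05-02"},
}

assessment_responses = [
    {"student_id": "STU-93417", "item": "worries: Dad hitting Mom", "rating": "very much"},
]

for response in assessment_responses:
    identity = crosswalk.get(response["student_id"], {})
    print(identity.get("name"), response["item"], response["rating"])
```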

Since Columbine, every attempt is being made to link child-supplied personal data to everything from a parent’s financial information to health records; after all, the “dangerous” kids need to be ferreted out.  Thus, educators say it is essential for psychologists to “get into a pupil’s belief system” and screen for evidence of aberration.  The new indicators for deviancy, however, may surprise you; maverick and religious are just two of the red flags signaling a “troubled pupil.”

In 2004, the House Appropriations Committee approved $20 million in new federal funds to begin a nationwide implementation of President Bush’s “New Freedom Initiative”—a plan to screen the entire U.S. population, beginning with schoolchildren, for mental illness and to provide a continuum of “services” for those identified as mentally ill or even “at risk” of becoming so.  Under the plan, schools will become hubs of a mass project for screening first children, then their teachers and parents.  Do Representatives Miller and McKeon realize their proposal will help the New Freedom Initiative “go national”?  Are they aware that expanded involuntary-commitment laws carry political implications?

School officials have long known that student data are not anonymous and are disclosed on a “need-to-know” basis.  Who needs to know?  Maybe nobody—unless a child (or his parent) runs for election, offends some politically correct group, or sits on a controversial committee.  Meanwhile, the volume and complexity of computerized data collection continue to evolve.  Newer projects, such as the Integrated Postsecondary Education Data System (IPEDS), constantly enhance and replace existing ones.  (A feasibility study for an IPEDS-like student-unit record-collection system was submitted to Congress in February 2005.)

Stopgaps such as the Miller-McKeon draft are typically written in a vacuum.  Congressional staffers, usually young and relatively inexperienced, do not know enough to provide substantial background information for their elected bosses.  Consequently, even the best-intentioned legislators lay the groundwork for something very few people want.  Once enacted, legislation may get enhanced, but it is rarely reversed.

Despite the glut of articles on data mining that have appeared since September 11, there is still scant awareness of just how much private—and traceable—information is available.  One can catch a glimpse of the future on C-SPAN, which routinely airs hearings in which prospective appointees are grilled for things they said decades ago, often when they were young.

The status quo means future administrations will find it easy to regulate and restrict liberties to which older generations were once accustomed.