Denis Dumas, a new Assistant Professor of Research Methods and Information Science at the Morgridge College of Education, recently published research that could change the way educational researchers understand student learning capacity and the way students are tested in school.

The humble beginnings of a new testing model.

“Dynamic Measurement Modeling: Using Nonlinear Growth Models to Estimate Student Learning Capacity” (Denis G. Dumas and Daniel M. McNeish) focuses on the problems with single-time-point educational testing and on better methods for predicting student learning trajectories. Dumas explains that standard practice tests a student only once, at a single point in time on a single day, and uses that test to predict the student’s future potential. The idea for the research was born on a bar napkin in Chicago: over drinks, Dumas and his co-author, McNeish, were discussing ways to better predict a student’s capacity. Why not test a student multiple times and use a nonlinear growth pattern to better predict their development? And why do we use this single-time-point measurement in the first place?

Dumas explains that, by his reading of the literature, we test this way because in 1917 the United States was under pressure to build up its defenses and sort enlisted men into the military positions that suited them best. Because there was no time to train men for jobs they could not already do, a serviceman with welding experience was assigned a welding job; one with engine experience became a mechanic; one who could cook, cooked for the army; and so on. The military had no time to train the men on anything new, but that did not mean the men were incapable of learning something new. The practice was soon applied to sorting students.

This means that if Student A arrives at school with previously developed knowledge of colors, shapes, and letters and takes a test, they will likely score higher than Student B, who did not arrive with that basic knowledge. It does not mean that Student A is necessarily smarter than Student B, or that Student B lacks the capability to learn those things, simply that Student A already knew them. Yet the current standard of testing will project Student A onto a higher path of success than Student B. As educators and parents know, that is not the whole story. Teachers are well versed in spotting the “late bloomer” and in working with students who learn at a different rate than others, but this idea, until now, has not been put into practice within educational measurement. According to Dumas, the current standard of testing does not just document an achievement gap, it creates one.

Dumas and McNeish argue that the way to test and predict student potential correctly is through dynamic assessment, a technique that combines multiple testing occasions with learning opportunities in between. Dynamic assessment is time- and labor-intensive, which makes it accurate but expensive. So Dumas and McNeish wrote a computer program that applies the logic of dynamic assessment to already-available testing data, using nonlinear growth models to estimate student growth. Their method shifts the focus of assessment from how much students currently know to how much they can grow.
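The snippet below is a minimal sketch of that idea in Python, not the authors’ actual program: it fits a simple saturating growth curve (a Michaelis–Menten-style form is one natural choice) to repeated test scores for two students and reads off each curve’s upper asymptote as the capacity estimate. The seven testing occasions and all scores here are invented for illustration.

```python
# Minimal sketch of dynamic measurement (illustrative assumptions throughout):
# fit a nonlinear growth curve to each student's repeated scores and treat the
# curve's upper asymptote as that student's estimated learning capacity.
import numpy as np
from scipy.optimize import curve_fit

def growth(t, capacity, rate):
    """Saturating growth: scores rise toward `capacity`; `rate` sets how fast."""
    return capacity * t / (t + rate)

# Hypothetical testing occasions (years since school entry) and scores for two
# students: A starts higher, B starts lower but is still climbing steeply.
occasions = np.array([0.5, 1.0, 1.5, 2.0, 4.0, 6.0, 9.0])
scores = {
    "Student A": np.array([42, 55, 62, 67, 74, 77, 79]),
    "Student B": np.array([25, 40, 51, 59, 72, 79, 84]),
}

for name, y in scores.items():
    (capacity, rate), _ = curve_fit(growth, occasions, y, p0=[100.0, 1.0])
    print(f"{name}: estimated capacity = {capacity:.1f} (growth rate = {rate:.2f})")
```

In this toy example, Student B scores lower at every early occasion, yet the fitted asymptote, the quantity dynamic measurement cares about, comes out higher. That is exactly the distinction between developed achievement and future capacity that the research draws.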

To test their theory, Dumas and McNeish used federal testing data available through the Early Childhood Longitudinal Study-Kindergarten (ECLS-K) 1999 cohort. According to their published paper, these data were collected at seven timepoints: fall and spring of kindergarten, fall and spring of Grade 1, spring of Grade 3, spring of Grade 5, and spring of Grade 8. This publicly available dataset contains several thousand variables, including direct cognitive assessments, teacher reports, parent reports, and a host of questionnaires, as well as demographic and background variables. Their first run through the data took the computer a month and a half to complete. They found that when the focus of measurement shifted from ability or achievement scores to estimates of student capacity, the combined effect of race, gender, and socioeconomic status (SES) decreased drastically in the ECLS-K 1999 data. What does that mean? That differences among students in their developed ability levels do not imply differences in those students’ future capacity for learning.

Dumas is excited to keep testing his research and applying the assessment method to other datasets. He is currently working with the Department of Defense and others to apply the method to their existing data. His long-term goal? To change the way we think about measuring outcomes. He and McNeish have since tweaked their computer program to run a full dataset in an hour and a half, making the method, he hopes, a viable, inexpensive, and widely used option for assessing student data.

“If,” he says, “in fifty years we are using this method to assess students, I would be thrilled.” Until then, he plans to keep testing and keep spreading the word. He wants to close the achievement gap, one dataset at a time.

