Denis Dumas, a new Morgridge College of Education Assistant Professor of Research Methods and Information Science, recently published research that could change the way educational researchers understand student learning capacity, and the way students are tested in school.

The humble beginnings of a new testing model.

“Dynamic Measurement Modeling: Using Nonlinear Growth Models to Estimate Student Learning Capacity” (Denis G. Dumas and Daniel M. McNeish) focuses on the problems with single-time-point educational testing and on ways to improve methods for predicting student learning trajectories. Dumas explains that standard practice typically tests a student only once, on a single day, and uses that score to predict the student’s future potential. The idea for his research was born on a bar napkin in Chicago: over drinks with his co-author, McNeish, the two were discussing ways to better predict a student’s capacity. Why not test a student multiple times and use a nonlinear growth model to better predict that growth? And why do we currently rely on single-time-point measurement?

Dumas explains that, in his reading of the literature, we test this way because in 1917 the United States was under pressure to build its defenses and sort enlisted men into the military positions that suited them best. Because there was no time to train the men for jobs they could not already do, a serviceman with welding experience was assigned a welding job; one with experience with engines was made a mechanic; one who could cook, cooked in the Army; and so on. The military did not have time to train them on something new, but this did not mean the men were incapable of learning something new. The practice was soon applied to sorting students.

This means that if Student A arrives at school with previously developed knowledge of colors, shapes, and letters and takes a test, they will likely score higher than Student B, who did not arrive with that basic knowledge. It does not mean that Student A is necessarily smarter than Student B, or that Student B lacks the ability to learn those things; it simply means that Student A already knew them. Under the current standard of testing, Student A will be projected onto a higher path of success than Student B. As educators and parents know, that is not the case. Teachers are well versed in spotting the “late bloomer” and in working with students who learn at a different rate than others, but this idea, until now, has not been put into practice within educational measurement. According to Dumas, the current standard of testing does not just document an achievement gap; it creates it.

Dumas and McNeish argue that the way to correctly test and predict student potential is through dynamic assessment, a technique that features multiple testing occasions integrated with learning opportunities. Dynamic assessment is time- and labor-intensive, making it accurate but expensive. To get around that cost, Dumas and McNeish wrote a computer program that applies the logic of dynamic assessment to already-available testing data, using a series of algorithms to predict student growth. Their method shifts the focus of assessment from how much students currently know to how much they can grow.
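The core of the idea can be sketched with any nonlinear growth curve whose upper asymptote stands in for capacity. The R snippet below is a minimal illustration of that logic, not the authors’ actual program; the testing schedule loosely mirrors the ECLS-K occasions described below, and the scores are invented.

```r
# Hypothetical scale scores for one student across seven testing occasions;
# all values are invented for demonstration.
time  <- c(0, 0.5, 1, 1.5, 3.5, 5.5, 8.5)   # years since fall of kindergarten
score <- c(22, 31, 38, 44, 57, 63, 66)

# Fit an asymptotic growth curve with base R's self-starting model:
# Asym is the upper asymptote (the stand-in for learning capacity),
# R0 is the modeled score at time zero, and lrc is the log growth rate.
fit <- nls(score ~ SSasymp(time, Asym, R0, lrc))

coef(fit)["Asym"]   # estimated capacity for this student
```

The shift in focus is visible in the parameters: two students can differ in R0 (what they knew on day one) while sharing the same Asym (what they are ultimately capable of reaching).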

To test their theory, Dumas and McNeish used federal testing data from the Early Childhood Longitudinal Study–Kindergarten (ECLS-K) 1999 cohort. According to their published paper, these data were collected at seven time points: fall and spring of kindergarten, fall and spring of Grade 1, spring of Grade 3, spring of Grade 5, and spring of Grade 8. This publicly available data set contains several thousand variables, including direct cognitive assessments, teacher reports, parent reports, and a host of questionnaires, as well as demographic and background variables. Their first run through the data took the computer a month and a half to complete. They found that when the focus of measurement was shifted from ability or achievement scores to estimates of student capacity, the combined effect of race, gender, and SES decreased drastically in the ECLS-K 1999 data set. What does that mean? That differences among students in their developed ability levels do not imply differences in those students’ future capacity for learning.

Dumas is excited to keep testing his research and to apply the assessment method to other datasets. He is currently working with the Department of Defense and others to apply his theory to their existing data. His long-term goal? To change the way we think about measuring outcomes. He and McNeish have refined their computer program to run a full dataset in an hour and a half, making this method, he hopes, a viable, inexpensive, and widely usable option for assessing student data.

“If, in fifty years, we are using this method to assess students, I would be thrilled,” he says. Until that time, he plans to keep testing and keep spreading the word. He wants to close the achievement gap, one dataset at a time.

Research Methods and Statistics Ph.D. candidate Priyalatha Govindasamy received the top award at the University of Denver Research and Performance Summit (DURAPS) on January 29. Govindasamy presented her research at the DURAPS poster session, highlighting the software package that she has been developing with Antonio Olmos, Ph.D., and Kellie Keeling, Ph.D.

Govindasamy explained that “effect size is the key to conducting meta-analysis, but not all the studies report empirical information required for computing effect sizes.” Studies often report different types of statistical information, each of which requires a different mathematical formula to convert into an effect size. To overcome this hurdle, Govindasamy and her supporting faculty developed the Effectssizecalculator Package in R for Meta-Analysis. The package compiles these different algorithms into a single module and leverages the R statistical software to estimate effect sizes.
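To make the problem concrete, here is a minimal R sketch of the kind of conversions such a package must bundle. This illustrates the general technique, not the Effectssizecalculator package’s actual functions; the function names and numbers are hypothetical.

```r
# Two of the many routes to a standardized mean difference (Cohen's d),
# depending on what a study happens to report.

# From group means, standard deviations, and sample sizes
d_from_means <- function(m1, m2, sd1, sd2, n1, n2) {
  sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  (m1 - m2) / sd_pooled
}

# From an independent-samples t statistic and the two sample sizes
d_from_t <- function(t, n1, n2) {
  t * sqrt(1 / n1 + 1 / n2)
}

d_from_means(105, 100, 15, 14, 40, 40)  # ~0.34
d_from_t(2.2, 40, 40)                   # ~0.49
```

A meta-analyst facing dozens of studies, each reporting a different subset of statistics, needs every such formula in one place, which is the gap the package fills.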

The Morgridge College of Education would like to congratulate Miss Govindasamy on her award and recognize her fascinating research.

Dr. Duan Zhang, Associate Professor in the Research Methods and Statistics program at the Morgridge College of Education, recently returned from a five-month sabbatical in China. During her time abroad, Zhang served as a visiting scholar at the School of Psychology at Central China Normal University in Wuhan, China, teaching a graduate course to an international student cohort, assisting with research, advising graduate students, and attending conferences.

“I worked with five other professors in the personality psychology division. The professor I worked with is one of the biggest names in his field in the Chinese Psychological Society (CPS); we attended the first-ever CPS conference for the division of personality psychology in Chongchang,” Zhang states. At the conference, Zhang gave a presentation on goal orientation and student motivation.

Toward the end of her visit, Central China Normal University sponsored an international workshop on mathematical modeling for psychology and the social sciences, bringing in five international experts to share cutting-edge research methods using different types of mathematical modeling. “That scope of modeling is quite beyond what we are used to with APA and AERA research. Those methods could be widely applied, and I look forward to learning more about those techniques in order to bring them into my research,” Zhang commented.

Since returning from her sabbatical, Zhang has served on the standing committee for the development of the upcoming Data Visualization and Statistics Center. The Center, scheduled to open by the end of this academic year, is part of DU’s research incubator initiative and plans to support students and faculty with statistical analysis at DU’s Anderson Academic Commons. “I am excited about all kinds of possibilities for student and faculty projects. As a college, MCE can contribute a lot of expertise to the new center.”

Dr. Zhang’s research interests center on statistical and methodological questions, particularly the analysis of multilevel data with hierarchical structure. “I focus on quantitative methods, providing methodological support for faculty grants and other types of research projects, figuring out how large datasets should be analyzed to best serve different education and psychology research questions.”
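For readers unfamiliar with the term, multilevel models account for data in which observations are nested, such as students within schools. A minimal sketch using the widely used lme4 package in R (an assumption for illustration; the variables are invented, and the article does not say which software Zhang uses) might look like this:

```r
library(lme4)  # a standard R package for multilevel (mixed-effects) models

# `scores` is a hypothetical data frame with one row per student:
#   achievement - the outcome
#   ses         - a student-level predictor
#   school      - the grouping factor that creates the hierarchy
fit <- lmer(achievement ~ ses + (1 | school), data = scores)

summary(fit)  # fixed effect of SES plus between-school variance
```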

Currently, Dr. Zhang is wrapping up a mixed-methods research project, Supporting Parents in Early Literacy through Libraries (SPELL), with her MCE colleague Dr. Mary Stansbury. SPELL is funded by the Colorado State Library and explores how partnerships between public libraries and community agencies promote early literacy among low-income families. For the project, Zhang served as the research scientist and Dr. Stansbury served as the content expert. Elaborating on the research, Zhang explains: “We had four sites, covering a broad demographic in Denver, Colorado Springs, and rural Colorado. We collected and analyzed data from surveys, focus groups, and interviews.” Having recently presented their research to the advisory board, Zhang and Stansbury plan to submit abstracts and present their findings at upcoming local and national conferences serving the early literacy and library communities. Zhang comments, “I have a 16-month-old boy, so I have a strong interest in this project, even from a personal standpoint. Early literacy focuses on children ages 0 to 3; when they are that young, you can’t teach them to read, but rather promote interest in books and form the habit of reading and the love of libraries.”
