Paul Mooney, Renée E. Lastrapes, Amanda M. Marcotte, Amy Matthews, B. S.


The present research expanded validity findings for a structured formative assessment measure of content learning, administered online and known as critical content monitoring. The study also evaluated the potential of additional measures, including the sentence verification technique and written retell, to explain variance in student achievement in science and social studies classrooms. Participants were fifth-grade students (N = 51) enrolled in a public primary school in the southeastern U.S. Three predictor variables (i.e., critical content monitoring, sentence verification technique, and written retell) were correlated with content test scores from a nationally representative standardized achievement test (i.e., the Stanford Achievement Test-Tenth Edition abbreviated online form) and a statewide accountability test. Pearson correlations between critical content monitoring and the Stanford tests in science (r = .55) and social studies (r = .63) were moderately strong and similar in magnitude to correlations reported for academic language measures in the literature. Correlations for critical content monitoring were descriptively larger than those between the standardized tests and the sentence verification technique and written retell. Commonality analyses indicated that both critical content monitoring and the sentence verification technique added unique variance to explanatory models. Limitations and implications are discussed.


structured formative assessment, general outcome measurement, content courses.


Alexander, F. (n.d.). Understanding vocabulary. Retrieved from http://www. [Accessed 17 March 2013].

Barth, A. E., Stuebing, K. K., Fletcher, J. M., Cirino, P. T., Francis, D. J., & Vaughn, S. (2012). Reliability and validity of the median score when assessing the oral reading fluency of middle grade readers. Reading Psychology, 33, 133-161.

Bloom, B. S. (1980). The new direction in educational research: Alterable variables. Phi Delta Kappan, 61, 382-385.

Carney, R. N. (n.d.). Review of the Stanford Achievement Test Tenth Edition. Mental Measurements Yearbook. Retrieved from
Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Deno, S. L., & Fuchs, L. S. (1987). Developing curriculum-based measurement systems for data-based special education problem solving. Focus on Exceptional Children, 19(8), 1-16.

Deno, S. L., Reschly, A. L., Lembke, E. S., Magnusson, D., Callender, S. A., & Windram, H. (2009). Developing a school-wide progress-monitoring system. Psychology in the Schools, 46(1), 46-55. doi: 10.1002/pits.20353.

Espin, C. A., Busch, T. W., Lembke, E. S., Hampton, D. D., Seo, K., & Zukowski, B. A. (2013). Curriculum-based measurement in science learning: Vocabulary matching as an indicator of performance and progress. Assessment for Effective Intervention, 38, 203-213. doi: 10.1177/1534508413489724.

Espin, C. A., Busch, T. W., Shin, J., & Kruschwitz, R. (2001). Curriculum-based measurement in the content areas: Validity of vocabulary matching as an indicator of performance in social studies. Learning Disabilities Research & Practice, 16, 142-151. Retrieved from

Espin, C. A., & Deno, S. L. (1994-1995). Curriculum-based measures for secondary students: Utility and task specificity of text-based reading and vocabulary measures for predicting performance on content area tasks. Diagnostique, 20, 121-142.

Espin, C. A., & Foegen, A. (1996). Validity of general outcome measures for predicting secondary students’ performance on content-area tasks. Exceptional Children, 62, 497-514.

Espin, C. A., Shin, J., & Busch, T. W. (2005). Curriculum-based measurement in the content areas: Vocabulary matching as an indicator of progress in social studies learning. Journal of Learning Disabilities, 38, 353-363. Retrieved from

Fuchs, L. S. (2004). The past, present, and future of curriculum-based measurement research. School Psychology Review, 33, 188-192.

Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57, 488-500.

Jenkins, J. R., & Fuchs, L. S. (2012). Curriculum-based measurement: The paradigm, history, and legacy. In C. A. Espin, K. L. McMaster, S. Rose, & M. M. Wayman (Eds.). A measure of success: The influence of curriculum-based measurement on education (pp. 7-23). Minneapolis, MN: University of Minnesota Press.

Louisiana Department of Education. (LDE; n.d., a). Integrated Louisiana Educational Assessment Program (iLEAP). Retrieved from http://www.

Louisiana Department of Education. (LDE; n.d., b). iLEAP 2010 (Technical Summary). Retrieved from


Marcotte, A. M., & Hintze, J. M. (2009). Incremental and predictive validity of formative assessment methods of reading comprehension. Journal of School Psychology, 47, 315-335. doi: 10.1016/j.jsp.2009.04.003.

Moodle (n.d.). Retrieved from

Mooney, P., McCarter, K. S., Russo, R. J., & Blackwood, D. L. (2013). Examining an online content general outcome measure: Technical features of the static score. Assessment for Effective Intervention, 38(4), 249-260.

Mooney, P., McCarter, K. S., Russo, R. J., & Blackwood, D. L. (2014). The structure of an online assessment of science and social studies content: Testing optional formats of a general outcome measure. Social Welfare Interdisciplinary Approach, 4(1), 81-93.

Mooney, P., McCarter, K. S., Schraven, J., & Callicoatte, S. (2013). Additional performance and progress validity findings targeting the content-focused vocabulary matching. Exceptional Children, 80(1), 85-100.

Morse, D. T. (n.d.). Review of the Stanford Achievement Test (10th ed.). Mental Measurements Yearbook. Retrieved from

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards for English language arts and literacy in history/social studies, science, and technical subjects. Washington, DC: Authors.

Pearson Education (n.d.). Stanford Achievement Test Series (abbreviated form, 10th ed.). Retrieved from



Royer, J. M. (2004). Uses for the sentence verification technique for measuring language comprehension. Progress in Education. Retrieved from


Royer, J. M., Hastings, C. N., & Hook, C. (1979). A sentence verification technique for measuring reading comprehension. Journal of Reading Behavior, 11, 355-363.

Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42, 795-819. doi: 10.1002/pits.20113.

Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Boston, MA: Pearson Education, Inc.

Tolar, T. D., Barth, A. E., Fletcher, J. M., Francis, D. J., & Vaughn, S. (2014). Predicting reading outcomes with progress monitoring slopes among middle grade students. Learning and Individual Differences, 30, 46-57. doi: 10.1016/j.lindif.2013.11.001.

Vannest, K. J., Parker, R., & Dyer, N. (2011). Progress monitoring in Grade 5 science for low achievers. The Journal of Special Education, 44, 221-233.

Wallace, T., Espin, C. A., McMaster, K., Deno, S. L., & Foegen, A. (2007). CBM progress monitoring within a standards-based system: Introduction to the series. The Journal of Special Education, 41, 66-67.

Warne, R. T. (2011). Beyond multiple regression: Using commonality analysis to better understand R2 results. Gifted Child Quarterly, 55(4), 313-318.

Zientek, L. R., & Thompson, B. (2006). Commonality analysis: Partitioning variance to facilitate better understanding of data. Journal of Early Intervention, 28(4), 299-307.


