The technical advisory committee comprises experts in areas of assessment ranging from standard setting and validity in large-scale assessments, to accessibility in alternate assessments, to cognitive diagnostic modeling.
Russell Almond, Ph.D.
Florida State University
Russell Almond received his Ph.D. in statistics from Harvard University. He is currently an associate professor of measurement and statistics at Florida State University and previously served 14 years as a research scientist at Educational Testing Service. Together with Bob Mislevy and Linda Steinberg, he helped develop the original formulation of evidence-centered assessment design (ECD), work that received the 2000 Award for Outstanding Contribution to Measurement from the National Council on Measurement in Education. He is the lead author of the book Bayesian Networks in Educational Assessment and the designer of several software systems for scoring complex assessments, including his recent work on the RNetica package.
Karla Egan, Ph.D.
Karla Egan is an expert in large-scale assessment design and standard setting. She has assisted multiple states in the development of their summative assessments, presented at several conferences on issues related to assessment design and standard setting, and authored articles on standard setting for modified and accessible tests. For the past two years, Egan has aided the National Center and State Collaborative in developing its alternate assessment, focusing particularly on psychometric issues, and she is currently serving on the National Academy of Sciences committee that is evaluating NAEP achievement levels in reading and mathematics. Egan has previously worked as an associate at the National Center for the Improvement of Educational Assessment (NCIEA) and as a research scientist and research manager at CTB/McGraw-Hill. She received her Ph.D. in sociology from the University of Massachusetts, Amherst.
Claudia Flowers, Ph.D.
University of North Carolina Charlotte
Claudia Flowers is a professor of educational research, measurement, and evaluation at the University of North Carolina at Charlotte. Her research interests include alternate assessments for students with significant cognitive disabilities, transition services for students with disabilities, and testing accommodations, and she has produced more than 100 publications in the areas of assessment, measurement, and applied research. She worked with the National Center and State Collaborative to develop an alternate assessment system for students with the most significant cognitive disabilities, and she is a former chair of the Diversity Issues and Testing Committee of the National Council on Measurement in Education. Dr. Flowers earned her Ph.D. in educational research, measurement, and evaluation from Georgia State University and currently serves on numerous state technical advisory committees and national expert advisory panels.
Robert Henson, Ph.D.
University of North Carolina-Greensboro
Dr. Henson’s primary research focus is diagnostic classification models (also known as cognitive diagnosis models). Rather than providing a single score, these models score a test to produce a mastery profile specifying what a student has and has not mastered. The goal is to provide detailed information that may allow for lesson plans tailored to each student. Dr. Henson has also expanded his research to hierarchical linear models (also known as mixed models) and their applications. In addition, he serves as the director of graduate studies and continues to explore new ways of incorporating technology into his classes.
Joan Herman, Ed.D.
University of California, Los Angeles
Joan Herman earned her doctorate in Learning and Instruction from the University of California, Los Angeles and is Director Emerita of the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at UCLA. Her research has explored the effects of testing on schools and the design of assessment systems to support school planning and instructional improvement, with a recent emphasis on teachers’ formative assessment practices and fairness in classroom and large-scale assessment. Dr. Herman has published extensively for research, practitioner, and policy audiences on evaluation and assessment topics and has held a variety of leadership positions in the California Educational Research Association, the American Educational Research Association, and for the Standards for Educational and Psychological Testing. Nationally recognized as a leader in the field, Dr. Herman has been honored as an elected member of the National Academy of Education, as a fellow of the American Educational Research Association, and with numerous awards for excellence.
James Pellegrino, Ph.D.
University of Illinois-Chicago
James Pellegrino earned his Ph.D. in experimental and quantitative psychology from the University of Colorado. He is the liberal arts and sciences distinguished professor and distinguished professor of education at the University of Illinois at Chicago (UIC), where he also serves as co-director of the interdisciplinary Learning Sciences Research Institute. For more than 40 years, he has been engaged in research and development on children's and adults' thinking and learning and their implications for assessment and instructional practice. He has authored or co-authored over 300 books, chapters, and journal articles in the areas of cognition, instruction, and assessment. He has chaired several National Research Council study committees focused on policy and practice issues related to learning, instruction, and assessment, and he has made numerous presentations at local, state, national, and international meetings.
Edward Roeber, Ph.D.
Assessment Solutions Group/Michigan Assessment Consortium
Edward Roeber is an expert in large-scale student assessment, an assessment development specialist, and a designer and developer of a number of states' alternate assessments for students with disabilities. He is currently director of assessment development for the Michigan Assessment Consortium. Previously, he was the assessment director in Michigan and was employed at CCSSO, Measured Progress, Michigan State University, and WIDA/WCER. On this project, Roeber's expertise will provide overall guidance in developing new alternate assessment models that cost-effectively improve student access to learning opportunities while strengthening technical adequacy. Roeber earned his Ph.D. in measurement and evaluation with a minor in educational psychology from the University of Michigan.
David Williamson, Ph.D.
Educational Testing Service
David M. Williamson is Vice President of New Product Development at Educational Testing Service, where he leads the identification and pursuit of opportunities to better serve the public through new offerings. Prior to that he was Senior Research Director for the Assessment Innovations Center in the Research and Development division of Educational Testing Service, where he led fundamental and applied research at the intersection of cognitive modeling, technology, and multivariate scoring models. This research focused on the development, evaluation, and implementation of automated scoring systems for text and spoken responses. He oversaw a research and deployment agenda that has led to operational deployment of automated scoring systems for the GRE, TOEFL, and multiple other programs. Williamson’s research interests include simulation-based assessment, automated scoring, and other related topics that advance the field of measurement through innovation. He earned his Ph.D. in psychometrics from Fordham University.
Phoebe Winter, Ph.D.
Phoebe Winter conducts research in improving online assessment and contributes to the design and implementation of technology-enhanced assessment systems through her consultation with state and non-governmental education agencies. Winter helps these agencies consider policy-focused, psychometric, and practical perspectives in assessment creation. She has conducted research, written and edited books and articles, and provided technical support aimed at strengthening the validity of uses for and interpretations of educational measurement results. Winter is the secretary-elect of the American Educational Research Association, Division D, and earned her Ph.D. in psychology from Columbia University’s Teachers College, concentrating on measurement, evaluation, and applied statistics. Her research and professional service reflect her goal of improving educational measurement so that it contributes meaningfully to teaching and learning.