The Berkeley Evaluation and Assessment Research (BEAR) Center designs and delivers educational assessment instruments, performs research in assessment and psychometrics, and trains graduate students in these areas.
We collaborate with researchers in universities across the United States and abroad to develop software and other resources for constructing, managing, administering, and analyzing assessments.
The DRDP(2015) suite of assessments is now available for early implementation! The DRDP(2015) assessments are authentic observational assessment tools used throughout California to support the development of children from early infancy through kindergarten. For over 15 years, BEAR Center researchers have collaboratively refined the tool to ensure that it is a valid and reliable measure of development in early childhood. The BEAR Center is responsible for the DRDPtech technology system.
The BEAR Center is collaborating with a team of mathematics education content experts at Arizona State University led by Pat Thompson on this NSF-funded project. The goal of Project Aspire is to develop an instrument that assesses secondary mathematics teachers’ mathematical knowledge for teaching secondary mathematics.
The ADM project aims to develop an assessment system to evaluate elementary and middle school students’ skills and understanding related to data modeling and statistical reasoning.
- This issue explores the methodological disagreement concerning causal indicators, discussing whether or not they are inherently sensitive to interpretational confounding.
Selected presentations by BEAR Center researchers:
An Overview of Foreign Language Assessment at the Defense Language Institute Foreign Language Center, Monterey, CA
Seumas Rogan, Defense Language Institute
The mission of the Defense Language Institute Foreign Language Center (DLIFLC) is to provide culturally based foreign language education, training, evaluation, and sustainment to enhance the security of the United States of America. This presentation will provide an overview of selected activities within DLIFLC's Language Proficiency Assessment Directorate, with an emphasis on current research needs and opportunities for academic collaboration.
Special QME Seminar: Sources of Error in IRT Trait Estimation: Effects on Trait Score Bias and Confidence Interval Coverage Rates
Leah Feuerstahler, University of Minnesota
Item response theory (IRT) models item response probabilities as a function of item characteristics and latent trait scores. Within an IRT framework, trait score misestimation results from (1) random error, (2) the trait score estimation method, (3) errors in item parameter estimation, and (4) model misspecification. Through a simulation study, I explore the relative effects of these error sources on the confidence interval coverage rates for trait scores.
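For readers unfamiliar with IRT, the relationship the abstract describes can be illustrated with the two-parameter logistic (2PL) model, one common IRT model. This is a minimal sketch for orientation only, not the model or code used in the seminar; the parameter values are made up for illustration.

```python
import math

def irt_2pl_probability(theta, a, b):
    """Probability of a correct response under the 2PL IRT model:
    P(X = 1 | theta) = 1 / (1 + exp(-a * (theta - b))),
    where theta is the latent trait score, a is item discrimination,
    and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item with discrimination a = 1.2 and difficulty b = 0.5:
# the probability of a correct response rises with the trait score and
# equals 0.5 exactly when theta matches the item difficulty.
for theta in (-1.0, 0.0, 0.5, 1.0):
    p = irt_2pl_probability(theta, a=1.2, b=0.5)
    print(f"theta={theta:+.1f}  P(correct)={p:.3f}")
```

In this framing, errors in estimating a and b (source 3 in the abstract) propagate into the trait score estimates, which is why item parameter error can distort confidence interval coverage.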
Christopher Thompson, Florida State University
While Bayesian meta-analysis has flourished in both methodological and substantive work, group-specific Bayesian modeling remains scarce. Common practice for choosing prior distributions entails using typical non-informative priors, but there is currently a push to use more informative prior distributions. In this paper I propose a weakly-informative group-specific prior distribution.
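The distinction between non-informative and weakly informative priors can be illustrated with a simple conjugate normal-normal update. This is a hypothetical sketch, not the paper's proposed prior or model; the data and prior scale are invented for illustration.

```python
import numpy as np

def posterior_mean(data, prior_mean, prior_var, obs_var=1.0):
    """Posterior mean of a normal mean under a Normal(prior_mean, prior_var)
    prior with known observation variance obs_var (conjugate update):
    posterior precision = prior precision + n / obs_var."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    return post_var * (prior_mean / prior_var + data.sum() / obs_var)

rng = np.random.default_rng(0)
small_group = rng.normal(loc=2.0, scale=1.0, size=3)  # a group with few observations

# A weakly informative Normal(0, 5^2) prior shrinks the small group's
# estimate modestly toward 0; a flat (non-informative) prior, approximated
# here by a huge variance, reproduces the raw sample mean.
print("weakly informative prior:", posterior_mean(small_group, 0.0, 5.0**2))
print("~flat prior (sample mean):", posterior_mean(small_group, 0.0, 1e12))
```

The practical point is that for small groups a weakly informative prior stabilizes estimates without dominating the data, which is the motivation the abstract gestures at.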