The Berkeley Evaluation and Assessment Research (BEAR) Center designs and delivers educational assessment instruments, performs research in assessment and psychometrics, and trains graduate students in these areas.
We collaborate with researchers in universities across the United States and abroad to develop software and other resources for constructing, managing, administering, and analyzing assessments.
The DRDP(2015) suite of assessments is now available for early implementation! The DRDP(2015) assessments are authentic observational assessment tools used throughout California to support the development of children from early infancy through kindergarten. For over 15 years, BEAR Center researchers have collaboratively refined the tool to ensure that it is a valid and reliable measure of development in early childhood. The BEAR Center is responsible for the DRDPtech technology system.
The BEAR Center is collaborating with a team of mathematics education content experts at Arizona State University led by Pat Thompson on this NSF-funded project. The goal of Project Aspire is to develop an instrument that assesses secondary mathematics teachers’ mathematical knowledge for teaching secondary mathematics.
The ADM project aims to develop an assessment system to evaluate elementary and middle school students’ skills and understanding related to data modeling and statistical reasoning.
This issue explores the ramifications of testing in the classroom, with a view to maximizing the benefits and minimizing possible drawbacks of current educational testing applications.
Selected presentations by BEAR Center researchers:
The BEAR Center has concluded both pilot and field testing of the updated DRDP instruments! Through the state's management bulletin, the California Department of Education's Early Education & Support Division invited all EESD-funded programs to participate in the early implementation of the Desired Results Developmental Profile 2015 (DRDP(2015)). The suite of developmental observational assessments is valid and reliable for use with all children from early infancy to kindergarten entry.
Michelle LaMar, Educational Testing Service
There has been increasing interest in using complex tasks, such as simulations and games, for the assessment of higher-level cognitive skills. Advocates argue that constructs such as problem solving, scientific inquiry, and collaboration cannot be adequately measured with traditional discrete multiple-choice items. Complex tasks, however, come at a cost.
Linda Morell (BEAR Center) & Tina Chiu (QME)
Implementing assessment for the Next Generation Science Standards (NGSS) in a manner that meets the aspirations described in the Framework for K-12 Science Education depends on the availability of assessment tasks that can capture the kinds of performance expectations articulated by the NGSS.
Monitoring School Performance: A Statistical Critique of England's ‘Expected Progress’ Approach with Comparison to Multilevel ‘Value-Added’ Models
Since 1992, the UK Government has published ‘school league tables’ summarizing the average educational ‘attainment’ and ‘progress’ made by pupils in each state-funded secondary school in England. In 2011, the Government made ‘expected progress’, a ‘value-table’ approach, its new headline measure of school progress.