One of the problems with educational practice, as compared with other fields, is that education has a relatively weak basis in clinical practice. That is, in fields like medicine, social work, and counseling, practitioners have access to a collection of assessment tools and practices that support their work with clients, and they are trained to use these clinical tools as part of their practice. Despite the fact that there is a relatively solid and useful research base related to school improvement, the field of education has been slow to develop usable clinical tools to support practitioners in their work. The ICAP is an attempt to develop one such tool.
The ICAP grows out of more than a decade of research on school improvement and accountability, and, using well-tested and -documented survey items and observational protocols from that research, gives system-level and school-level leaders a way to create a common body of data in support of school improvement. Data generated by the teacher survey are corroborated and contextualized by data derived from classroom observation, as well as teacher and principal interviews, and compiled into a profile of a specific school’s capacity to engage in deliberate improvements in instructional practice and student learning across classrooms. In addition to providing a platform for individual school improvement, tools like the ICAP produce valuable evidence on the process of school improvement, and on the variety of ways schools respond to attempts to support their improvement, which can be used to inform the larger knowledge base in the field. Our aim is to produce a protocol that can be used primarily to support practitioners in their work, but also to help improve our longer-term understanding of the processes of school improvement and how to support them.
The first generation of the protocol was field-tested in the 2007-2008 school year with 21 low-performing schools in Boston, focused on the middle grades. As part of the field test, Elmore and Forman worked for a full school year not only on the development of the protocol but also in close collaboration with one chronically under-performing school to document and support its response to the process. The second generation of the protocol is currently being administered in 13 Boston schools that have been identified by system-level administrators for intensive support over the 2010-2011 and 2011-2012 academic years. This work will include support for Boston’s Chief Academic Officer and team of Academic Superintendents in how to integrate the ICAP data into their supervisory relationships with principals, in service of developing and supporting instructional improvement efforts in their schools.
The internal coherence assessment is intended to be a clinical assessment, rather than a purely research-oriented instrument. That is, it is designed to generate information about a school that people in the school and their system-level supervisors can use to inform decisions about how the school might improve the quality, consistency, and effectiveness of instructional practice and student performance. It is designed to provide one source of information, among many, for school improvement, and it should not be seen as displacing other important sources, such as school improvement plans, student assessment data, analysis of student work, or curriculum-based assessments.
Michelle Forman provides an overview of internal coherence as a construct:
The Internal Coherence Assessment and Protocol was developed by a team consisting of Richard Elmore, Michelle Forman, Elizabeth Leisy Stosich, and Candice Bocala.
Strategic Education Research Partnership
1100 Connecticut Avenue NW, Suite 1310 • Washington, DC 20036
serpinstitute.org • firstname.lastname@example.org • (202) 223-8555