CD3: School Improvement Indicators

AW developed the AW Eleven Key Indicators© for use across several different types of accountability and ongoing quality assurance measurement systems for school improvement.

Research Based
The study of teaching is often the study of teachers. Many of us involved in teacher training use the stories of outstanding teachers, the ones who made a difference in the lives of children, and often in our own lives, as models. But it is irresponsible to substitute anecdote for a professional knowledge base; decisions in education must rest on sound, replicable research findings.

Jeanne Chall (2001) reported that the use of teacher-directed, rather than student-centered, instructional strategies correlates closely with higher levels of student achievement. Citing numerous studies, she made the case for adopting teacher-directed strategies.

In 1990, Herb Walberg reviewed over 800 studies to determine which teaching methods were most effective. He found that techniques that maximized engagement, provided appropriate cues and corrective feedback, and offered direct and explicit instruction were closely linked to better student achievement across subject areas. Given the level of controversy concerning effective pedagogy, and the high level of skepticism about the research base for educational methods, it is important to review carefully the state of our knowledge.

Since teaching is an interaction, part of our inquiry involves techniques for analyzing student responses as indicators of teaching effectiveness. The methods for evaluating classroom interaction have benefited from over 30 years of research. Much of the initial work in assessing classroom interaction was influenced by N.A. Flanders (1965). Flanders developed a coding system to note the types of verbal interchanges taking place between teachers and students. The Flanders Interaction Analysis Categories System contains ten categories for coding teacher and student verbal behavior. Using the Flanders system, an observer would note, for example, whether a teacher “asked questions with the intent that the student answer the question” and whether there was a response by the student, silence, or confusion.

In the 1970s, Jacob Kounin’s research in classroom management provided a more specific analysis of teacher and student interaction. Kounin (1970) determined that six critical dimensions of effective classroom management were highly correlated with appropriate student behavior: 1) withitness; 2) smoothness and momentum; 3) group alerting; 4) accountability; 5) overlapping; and 6) valence and challenge arousal. The key finding, however, was that certain proactive (rather than reactive) teacher behaviors could prevent student disruptions and increase student learning.

As research on the details of classroom management continued, researchers were also identifying features of academic instruction that would increase learning and decrease disruptive behaviors. In the classic book Time to Learn, edited by C. Denham and A. Lieberman (1980), researchers identified the amount of academic learning time as a major contributor to improved student learning. Drawing the connection between time available and actual time used was especially important in assessing how much information could and would be covered.

Thus, educators began to notice how efficient classroom practices affected student learning and behavior. That is, by using well-organized and well-designed instructional materials and by, for example, creating more clarity regarding classroom procedures and routines, educators were able to increase the amount of time available for instruction, or the “opportunity to respond.”

As the terms “time-on-task,” “opportunity to respond,” and “academically engaged time” became familiar on the educational landscape, researchers began to determine that having students “on-task” answered only part of the performance equation; student mastery also needed to be considered. If a student was “on-task” but made mistakes on every item, the only thing the student learned was to make errors on the task. Not exactly the mission of education.

Researchers have experimented with different measures and data collection methods to gather evidence of student performance during learning (e.g., teacher-directed instruction, guided practice, independent work, group projects, cooperative groups).
In order to achieve mastery, students need to produce accurate and fluent responses. Whether writing a research paper or learning to read, what is learned can be measured by accuracy and rate.
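Accuracy and rate measures of this kind are simple to compute. A minimal sketch follows; the probe numbers are illustrative and the helper name is our own, not part of any AW instrument:

```python
# Sketch: computing accuracy and rate from a timed student probe.
# The example numbers are hypothetical, not from any AW tool.

def accuracy_and_rate(correct, errors, minutes):
    """Return (percent correct, correct responses per minute)."""
    total = correct + errors
    accuracy = 100.0 * correct / total if total else 0.0
    rate = correct / minutes if minutes else 0.0
    return accuracy, rate

# Example: a one-minute oral reading probe, 57 words correct and 3 errors.
acc, rate = accuracy_and_rate(correct=57, errors=3, minutes=1.0)
print(f"accuracy: {acc:.1f}%  rate: {rate:.1f} correct/min")
# 57/60 correct = 95.0% accuracy at 57.0 correct per minute
```

Tracking both numbers together distinguishes a student who is accurate but slow from one who is fast but error-prone; either pattern calls for a different instructional response.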

As educators began to refine these measures, they observed that the number of student responses obtained during instruction could accurately predict student achievement levels. In 1987, the Council for Exceptional Children (CEC) set guidelines for response rates to encourage effective instruction and increase performance for students at risk; these guidelines continued to be validated many years later. In 2001, Sutherland and Wehby published a review of the literature examining the relationship between increased opportunities to respond to academic requests and the academic and behavioral outcomes of students with emotional and behavioral disorders (EBD).

Once again, not only was there a strong relationship between the number of responses required and outcomes, but this instructional variable positively influenced both academic and behavioral achievement (Sutherland & Wehby, 2001). As a refinement to measuring the number of responses or overall engaged time, Madigan and Youngmayr (1996) found that the frequency of student responses, as well as their accuracy, could be used to determine student achievement levels and teacher effectiveness across several subject areas: elementary reading, language arts, and mathematics.

Engelmann (2000) set standards for student mastery by measuring the accuracy of student responses on newly instructed material to guide decisions about remediation or acceleration. In addition to measuring student (and teacher) interactions in the classroom, it is also valuable to assess student homework, tests, and other assignments.

Student responses matter

Teachers interact with students every day. The quality of the interaction determines the quality of student achievement. For example, if a teacher doesn’t correct student errors, doesn’t make notes on student homework, or ignores a poor deduction, the student will suffer. On the other hand, if a teacher provides corrective feedback juxtaposed with appropriate amounts of praise for accuracy, the student will achieve (Sutherland, Alder, & Gunter, 2003; Walberg, 1995). Proper analysis of student responses is not only a tool for evaluating teacher effectiveness; it is, more importantly, a diagnostic tool with which teachers can revise the content of a lesson, adjust the presentation, or create extra work to help students achieve on a daily basis.

End of year evaluations are not sufficient
Determining the effectiveness of a teacher based on the value he or she adds to a group of students, measured through end-of-year assessment of learning gains, is the best way to hold teachers and administrators accountable for producing results. It is not sufficient, however, for making timely improvements that will affect the achievement of that group of students during the year. The time it takes to gather the information and develop a reliable end-of-year report does not lend itself to creating first-line interventions. Therefore, it is important that school leaders, peer coaches, and mentors use observational methods that assess the interaction between teachers and students as a way of measuring student achievement, teacher effectiveness, and instructional efficiency prior to end-of-year tests.
Not only have principals and teacher trainers used observational data to make decisions that help improve student learning, but peer coaches have also been trained to use classroom observational data collection methods. By collecting and analyzing the data with classroom teachers, peer coaches have been able to use the data to mediate conversations about student performance. For example, rather than merely suggesting that the teacher improve her classroom management skills, the coach can provide the actual number of praise statements, corrective statements, and negative statements made during a lesson, and the teacher and coach can determine whether this meets the appropriate ratio for the grade level and subject matter. If the data suggest that change is needed, the coach and teacher can work together to identify a remedy. Peer coaches have helped teachers improve their teaching skills during the school year, and students have increased achievement (Hall & Jungjohann, 1999).

Using research to develop items and criteria
The teacher and student interaction is a dynamic and essential feature of effective classrooms. This ongoing interaction should be measured frequently, and the data regarding the quality of the interaction made available to the teacher on a regular basis. Evaluation tools should use research-proven criteria for determining teacher effectiveness. Unfortunately, most evaluation tools include items that are not linked to student achievement (e.g., teacher demonstrates good rapport, shows understanding of student learning styles, shows good communication skills) and do not give the observer criteria by which to distinguish “good communication skills” from poor ones (e.g., rate the teacher as unsatisfactory, basic, proficient, or distinguished). By contrast, the AW Eleven Key Indicators© identified in the observational tools used by AccountabilityWorks are supported by empirical research and require the observer to collect detailed response-interaction data, which more accurately informs decision-making.

What are the AW Eleven Key Indicators©?

Efficiency Indicators (determine rate of learning)

1.  Daily allocated time vs. instructional time used: How much time is needed to teach the content? Compare the number of minutes available with the number of minutes actually used, and graph the trend against the allocated time. Issues around transitions, organization, materials, etc. can be assessed.

2.  Lesson completion: Determine how many lessons per week the student should complete to meet content goals; create a goal projection line on a graph; record the number of lessons completed per week to assess whether student learning gains are on target. Preskills: effective curricular design. Graph the trend against the prediction.

3.  Engaged time during teacher-directed instruction: Count the number of responses per minute during instruction, or use a time-sampling measure throughout the class period to assess student attention to task. Goal: an average of 10 responses per minute during teacher-directed instruction (see CEC goal statements).

4.  Engaged time during group or independent work: Use a time-sampling measure throughout the class period to assess student attention to task. Goal: during group or independent work, students are on task 90% of the time (see CEC goal statements).
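As an illustration only, the two engaged-time calculations above can be sketched as follows. The observation counts are hypothetical and the helper names are our own, not part of any AW tool:

```python
# Sketch: summarizing classroom observation data against the
# efficiency goals (10 responses/min during teacher-directed
# instruction; 90% on-task during group or independent work).
# All data values below are hypothetical.

def responses_per_minute(response_count, observed_minutes):
    """Average student response rate over an observed segment."""
    return response_count / observed_minutes

def on_task_percentage(samples):
    """samples: list of True/False momentary time-sampling observations."""
    return 100.0 * sum(samples) / len(samples)

# 85 responses observed during a 10-minute teacher-directed segment:
rpm = responses_per_minute(85, 10)   # 8.5/min, below the 10/min goal

# 20 momentary samples during independent work, 18 of them on-task:
samples = [True] * 18 + [False] * 2
pct = on_task_percentage(samples)    # 90.0%, meets the 90% goal

print(f"{rpm:.1f} responses/min; {pct:.0f}% on task")
```

Graphing these values across observations, as the indicators suggest, shows whether a classroom is trending toward or away from the goals rather than judging a single visit in isolation.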

Effectiveness Indicators (determine accuracy)
5.  First-time correct on review or similar items is 85% (group or individual).
6.  First-time correct on new material is at least 70% (group or individual).
7.  Individual student test data are at 90% or better.

8.  Teacher uses appropriate error-correction procedures to correct academic and social skill errors: 90% of all errors are corrected.

9.  Overall classroom climate: Teacher uses a 4:1 ratio of positive to negative statements.
10. Student civility: Time-sample student non-compliant behavior to record the percentage of the class period when one or more students are non-compliant, or use cumulative duration to determine the amount of time the teacher attends to non-compliant behaviors.
11. Use of the existing token or reward system: collect quality indicators (e.g., contingent use, pairing with descriptive praise regarding behavior).
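For illustration, the 4:1 ratio (indicator 9) and the first-time-correct criteria (indicators 5 and 6) can be checked with a short sketch. The tallies below are hypothetical; the thresholds come from the indicator list, and the helper names are our own:

```python
# Sketch: checking observed tallies against the effectiveness and
# climate criteria above. All counts are hypothetical examples.

def meets_praise_ratio(positive, negative, goal=4.0):
    """True if positive statements outnumber negatives by >= goal:1."""
    if negative == 0:
        # No negatives observed: any praise at all satisfies the ratio.
        return positive > 0
    return positive / negative >= goal

def first_time_correct(correct, attempted):
    """Percentage of items answered correctly on the first attempt."""
    return 100.0 * correct / attempted

print(meets_praise_ratio(24, 5))         # 4.8:1, meets the 4:1 goal
print(first_time_correct(34, 40) >= 85)  # review items: 85.0%, meets goal
print(first_time_correct(27, 40) >= 70)  # new material: 67.5%, below goal
```

The point of the computation is the conversation it supports: a coach can show a teacher the actual counts rather than offering an impression that the classroom "feels negative" or that students "seem lost."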

Services Available

Participants in AW training sessions will have the opportunity to collect observational data by watching videos of classroom instruction, to quantify and analyze those data, and then to use them to make decisions about ways to improve teaching that will have the greatest impact on student achievement. Training sessions also provide practice in reviewing a variety of student performance indicators (e.g., behavior performance charts, unit tests, curriculum-based assessments, fluency measures). Measuring the value a teacher adds to student achievement during the school year can assist administrators in selecting appropriate professional development activities, provide teachers with information to analyze their practice with professional rigor, and, best of all, improve student performance in academic and behavioral outcomes.

Carnine, D. W. (1976). Effects of two teacher-presentation rates on off-task behavior, answering correctly, and participation. Journal of Applied Behavior Analysis, 9, 199-206.
Council for Exceptional Children (1987). Academy for effective instruction: Working with mildly handicapped students. Reston, VA: Author.
Denham, C., & Lieberman, A. (Eds.) (1980). Time to learn. Washington, DC: U.S. Department of Education.
Flanders, N. A. (1965). Teacher influence, pupil attitudes and achievement. Minneapolis, MN: University of Minnesota.
Gunter, P., Hummel, J., & Venn, M. (1998). Are effective academic instructional practices used to teach students with behavior disorders? Beyond Behavior, 9, 5-11.
Kounin, J. (1970). Discipline and group management in classrooms. New York: Holt, Rinehart and Winston.
Madigan, K., & Youngmayr, L. (1996). Observing student responses to inform teaching practices. Eugene, OR: ADI.
Sprick, R., Knight, J., Reinke, W., & McKale, T. (2006). Coaching classroom management: Strategies and tools for administrators and coaches. Eugene, OR: Pacific Northwest Publishing.
Stallings, J. A. (1980). Allocated academic learning time revisited, or beyond time on task. Educational Researcher, 9(11), 11-16.
Sutherland, K., Alder, V., & Gunter, P. (2003). The effect of varying rates of opportunities to respond to academic requests on the classroom behavior of students with EBD. Journal of Emotional and Behavioral Disorders, 8, 2-8.
Sutherland, K., & Wehby, J. (2001). Exploring the relationship between increased opportunities to respond to academic requests and the academic and behavioral outcomes of students with EBD: A review. Remedial and Special Education, 22, 113-121.
Walberg, H. J. (1995). Generic practices. In G. Cawelti (Ed.), Handbook of research on improving student achievement. Arlington, VA: Educational Research Service.