Saturday, February 5, 2011

The Pitfalls of Analyzing RTI Data

Here's a guilty little secret:  I like to know the ending of books ahead of time.   See, it's the one time during life that I can peek at the back page and know whether or not it will all turn out ok.  I don't look for details other than the quick answer of whether the main character lives or dies. And that's the problem with RTI.  We've set this system up, we're working on data, but I don't know how to say for certain that we've succeeded.  It's the worst part of action research for me.

A little background is probably needed.  My District has been part of the Iowa IRIS project, and we've been working on the process of learning and implementing RTI/IDM.  One of the first things we realized was that we needed a wide base of involvement for our teachers.   Over a quarter of our teachers were trained in the model initially, and we've used professional learning communities to help others get on-board, learning together in teacher-focused PD and from one another.  After long discussion, we identified a group of 24 students who we felt were struggling and got ready to pilot our model.  My administrator empowered our BLT to set up the structure of interventions, and another teacher and I set up the process and are running the pilot program.  We've been collecting data for two quarters.

When I first looked at the data, I simply counted up the numbers of D+, D, D-, and F for each grade and put it in a matrix.

[Table: counts of D+, D, D-, and F grades in each of grades 9-12, plus the tracked group]
But this way of looking at the data is not good enough.   Even though I tried to spot patterns, the number of students is not constant from grade to grade, which makes comparisons across grades unreliable, although trends may still be visible from quarter to quarter.
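That first tally can be sketched in a few lines of Python. The records below are hypothetical placeholders; a real gradebook export would supply them:

```python
from collections import Counter

# Hypothetical records: (grade_level, letter_grade) for each course grade earned.
records = [
    (9, "F"), (9, "D+"), (9, "B"), (10, "D-"), (10, "F"),
    (11, "D"), (11, "A"), (12, "F"), (12, "D+"), (9, "F"),
]

LOW_GRADES = ["D+", "D", "D-", "F"]

# Tally only the low grades, keyed by (grade_level, letter).
counts = Counter((lvl, g) for lvl, g in records if g in LOW_GRADES)

# Print the matrix: one row per letter grade, one column per grade level.
print("     " + "  ".join(f"{lvl:>3}" for lvl in (9, 10, 11, 12)))
for letter in LOW_GRADES:
    row = "  ".join(f"{counts[(lvl, letter)]:>3}" for lvl in (9, 10, 11, 12))
    print(f"{letter:<4} {row}")
```

This reproduces the raw-count matrix, and also makes its weakness concrete: nothing here accounts for how many students are in each grade.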

Next, I tried a comparison of Ds and Fs in graphical form, but I decided that wasn't really the best use of the data, for a myriad of reasons.  For one, we use a mastery system, so if a child passes all of the standard assessments, we feel they have a minimum standard of knowledge in the coursework, even if they get a D-.   Grading scales are another discussion, but for now I decided to focus only on Fs for struggling learners, to see if the average number of Fs was decreasing over time.

This showed the data in a different light.   I can see that term 2 is worse than term 1 for most grades.   Perhaps it's because the newness of school has worn off, but it was a relief, I think, to see that term 2 is historically more difficult for kids.  But this was inadequate.  I was only looking at n(struggling students), and those numbers were not consistent from grade to grade.

At this point, my husband interjected a comment (perhaps seeing me tear at my hair was having an effect), suggesting that we look at the total population of each grade and then use a Pareto study to normalize the counts by each grade's enrollment.    I'm so glad that his quality background lets him see the value of statistics!  So I ran the numbers for Term 2:

9th:  18 of 119 kids had an F, a 15.1% problem

10th: 10 of 132 kids had an F, a 7.6% problem

11th: 11 of 134 kids had an F, an 8.2% problem
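The normalization amounts to dividing each grade's F count by its enrollment. A minimal sketch, using the Term 2 numbers above:

```python
# Normalize F counts by each grade's enrollment so grades of different
# sizes can be compared (Term 2 numbers from the post).
enrollment = {9: 119, 10: 132, 11: 134}   # total students per grade
f_counts   = {9: 18,  10: 10,  11: 11}    # students with at least one F

for grade in sorted(enrollment):
    rate = f_counts[grade] / enrollment[grade]
    print(f"Grade {grade}: {f_counts[grade]}/{enrollment[grade]} = {rate:.1%}")
```

The per-grade rates, unlike the raw counts, are directly comparable.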

Hey, that means we have something happening with our pilot kids in 10th grade (it also means we need to act, because our 9th graders need to be added to the pilot as quickly as possible).   Because of the analysis, and the data shown to teachers, we have added another forty kids to our tracking procedures.

Finally, I looked at the Pareto study. I tracked this year's 9th-12th graders across four quarters: terms 1 and 2 of last year and terms 1 and 2 of this year.   My data is longitudinal, so this year's 9th graders have no high school data for two of the four terms.  This is what resulted.

This also prompted some realizations for me.   Last year, even though we identified this year's 10th graders because we wanted three years of data, we really should have identified more of this year's 11th graders as well.  And this year's data tells us, loudly, that the 9th graders need some help.   This is powerful information to share with our teachers, and by doing so, we continue to reform the way we do business.

To summarize:

  • Data-driven education is not just for administrators--it's for teachers, teacher-leaders, and ultimately for students.  While I've done this for assessments, this is the first time I've applied statistics to the needs of a class.
  • This is action research, and it's in-process.   I'll be taking the time to look at more data before I am confident in the patterns, and I'll continue to look at the data in different ways.
  • Data doesn't always follow a perfect trend line.  In that case:  collect more data to get your answer.
  • In the past, we noticed kids were struggling, but we weren't systematic about addressing the weaknesses. This data shows us exactly where we stand, and who we need to target for intervention next.
  • Teachers can use data to make decisions, and schools will benefit as a result.   That is what we hope to accomplish with RTI...a way to use data to make decisions.

I would be interested in your thoughts on what other data I should be collecting.  My husband, the quality guy, suggested a capability study, because it would tell us whether our RTI program is having an effect on all students, pushing up the number of students with As or Bs over time (which would be one indicator of intrinsic motivation).
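As a rough sketch of what that capability-style check might look like, one could track the share of As and Bs per term. The distributions below are made-up placeholders, not our data:

```python
# Hypothetical term-by-term grade distributions for the same cohort;
# a rising share of As and Bs would suggest the program lifts all students.
terms = {
    "T1": {"A": 40, "B": 35, "C": 30, "D": 10, "F": 5},
    "T2": {"A": 44, "B": 38, "C": 25, "D": 9,  "F": 4},
}

for term, dist in terms.items():
    total = sum(dist.values())
    high = dist["A"] + dist["B"]
    print(f"{term}: {high}/{total} = {high/total:.1%} earned an A or B")
```

A sustained upward trend in that percentage, across several terms, would be the signal worth watching.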

I'll talk about exactly WHAT we are doing for struggling learners in my next post.
