Thursday, September 11, 2014

Higher Student Expectations?

What is an acceptable level of performance for an 8th grade math student? All public school teachers are required to set student learning targets (SLTs) each year for each classroom of students. Where should the passing scores be set, and what is an acceptable level of student failure? One good point of reference should be the state-administered 8th grade LEAP math test. Let's examine what is expected by our Department of Education for 8th grade Math.

Superintendent John White has repeatedly suggested that the state should have high expectations of all students, and that if we set high expectations on state tests, students will achieve more. Is he really following his own advice in designing and grading state tests?

Do you think Superintendent White would accept a passing score of slightly over 40% on a teacher-made final test as acceptable? If the teacher set 40.1% as the minimum score for passing, would it be acceptable for 35% of a classroom of students to score below 40.1%? That's a pretty low expectation, isn't it? Yet that's the result of the state-wide performance of our students on the new Common Core aligned 2014 LEAP given this Spring. The official scale score for a level of Basic on the 8th grade Math LEAP remained at exactly the point total used in the past (321 out of a total of 500 points). But the actual percentage of correct answers needed for a score of Basic (the minimum percentage) was lowered this Spring to only 40.1%. And even with that low expectation, 35% of students state-wide failed to reach the level of Basic. Yet John White, in releasing the LEAP results in May, announced that student performance on the new Common Core aligned LEAP tests was "steady" even though the test was more challenging.

It is also interesting that in an earlier press release the LDOE commented on a survey of students taking the new Common Core LEAP, and concluded that students did not find the new tests to be very difficult compared to their classroom work. Here is the quote:

"Of those students taking the computer-based test during the first phase, nearly 70 percent said the test was easier or about the same as their current school work. And, when asked if there were questions about things they have not learned this school year nearly 85 percent said there were none or few questions."

That's a strange conclusion, because most of the students I taught could always tell when a particular test was more difficult than usual. But apparently this time, students had no idea that the passing score would have to be lowered to 40.1% in order for 65% of the students state-wide to pass it. Similar results occurred on another LEAP test measuring ELA performance for 4th graders.

I believe the new Common Core aligned LEAP tests this year were poorly designed and resulted in abnormally low scores for students across the state, but that those low scores were hidden from the public by rigging the scale scores to make it look like students had no big problems with the new Common Core aligned tests. That's why I decided to write to the Accountability Commission to express my concerns. The Accountability Commission is a committee composed of teachers, superintendents, principals, parents, and business representatives who are supposed to advise the LDOE and BESE on the proper implementation of accountability policy. I am hoping that the Commission members will demand real accountability from our LDOE for the construction and grading of the most recent LEAP and iLEAP tests. Next year the state will be using the PARCC tests, and the results of those tests may not look so good. I would hate to see teachers and schools punished because of another drop in test scores on tests that may not be appropriate for our students.

And by the way, I do not agree with John White that higher expectations of students somehow magically result in higher performance. When standards are not well designed, when standards are not age appropriate, and when tests are poorly designed, higher expectations alone mean nothing. They may just result in disappointment and further condemnation of our public education system. Here is my letter to the members of the Accountability Commission:

Dear Commission members:

My name is Michael Deshotels and I am writing to the Commission as a retired educator and a father of three children who all attended public schools, and a grandfather of 10 grandchildren who are in various stages of attending public schools today.

My reason for writing to the Commission is that in my research for my blog The Louisiana Educator, I have observed some apparent problems with test validity and possible double standards in the testing and grading of students using this year's LEAP tests compared to other tests. I am calling upon the members of the Accountability Commission to look into these matters and possibly recommend better policies. Here are my concerns:

In reviewing the results of the recent LEAP testing from the Spring of 2014 I focused primarily on the English Language Arts (ELA) and Math sections because these were the portions of LEAP that were changed to increase alignment with the new Common Core standards. I noticed that the scale scores for LEAP had not changed for this year compared to the previous years but I wondered if the underlying percentages of correct answers on each test related to the various achievement levels had changed. I made a public records request in May of this year for the minimum percentage of correct answers required for the achievement levels of Basic and Mastery on the recent LEAP tests. To date I have only received the results for the Basic level which is shown in the table below. I am still waiting for the minimum percentages for Mastery, and a court order has now been issued as a result of my lawsuit to require the Education Department to produce those records by September 22.



Percentage of Total Raw Score Points Required to Earn “Basic”

Grade   Year    ELA       Math      Science   Social Studies
4th     2014    44.62%    47.22%    58.93%    53.03%
4th     2013    51.54%    50.00%    56.90%    56.06%
4th     2012    53.85%    52.78%    62.07%    57.58%
8th     2014    58.70%    40.13%    58.93%    50.00%
8th     2013    57.97%    48.61%    56.90%    52.63%
8th     2012    57.97%    55.26%    56.90%    52.63%


From this data, I made two observations: First, the raw percentage-correct scores for a rating of Basic vary quite a bit from year to year even though the scale score for Basic remains the same. Second, I observed that the percentage of correct answers needed for a student to receive a rating of Basic dropped significantly for three out of four categories of LEAP for the 2014 school year compared to previous years. For example, a student taking the 8th grade Math test needed to get only 40% of the answers right this year to earn a rating of Basic. There is a similar result for 4th grade ELA, which required only 44.6% of the answers right for a rating of Basic.
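Anyone can verify these observations with a few lines of arithmetic; the sketch below uses only the cut percentages from the table above and computes the 2013-to-2014 change in each category:

```python
# Minimum percentage of raw points required for "Basic", copied from the table above.
cuts = {
    ("4th", "ELA"): {2014: 44.62, 2013: 51.54, 2012: 53.85},
    ("4th", "Math"): {2014: 47.22, 2013: 50.00, 2012: 52.78},
    ("4th", "Science"): {2014: 58.93, 2013: 56.90, 2012: 62.07},
    ("4th", "Social Studies"): {2014: 53.03, 2013: 56.06, 2012: 57.58},
    ("8th", "ELA"): {2014: 58.70, 2013: 57.97, 2012: 57.97},
    ("8th", "Math"): {2014: 40.13, 2013: 48.61, 2012: 55.26},
    ("8th", "Science"): {2014: 58.93, 2013: 56.90, 2012: 56.90},
    ("8th", "Social Studies"): {2014: 50.00, 2013: 52.63, 2012: 52.63},
}

# Change in the cut percentage from 2013 to 2014 (negative = cut was lowered).
for (grade, subject), by_year in cuts.items():
    change = by_year[2014] - by_year[2013]
    print(f"{grade} {subject}: {change:+.2f} points")
```

The largest single drop is 8th grade Math, where the cut fell by nearly 8.5 percentage points in one year.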

My concern is this: The LEAP scale score for a rating of Basic for 4th grade ELA remained at 301 points out of a possible 500, which to the average parent looks like about 60% of the possible points, yet the actual percentage of correct answers needed for Basic was only 44.6%. The minimum scale score for Basic for 8th grade Math remained at 321 out of 500, which looks like 64% to the average parent, but the real cut percentage is only 40.13%. In addition, the cut percentages have been lowered significantly even though the public is given the impression that cut scores have remained the same. At the time the scores from the Spring LEAP testing were announced, the Department of Education stated that the average performance of our students at the level of Basic had remained steady even though the new Common Core aligned tests were more difficult. This statement appears to be misleading.
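The gap between what the scale score suggests and what the raw cut actually requires is simple division; the sketch below uses only the 500-point maximum and the figures cited above (the "apparent" percentage is just the scale cut divided by the maximum, which is how a parent might naively read it):

```python
# Scale-score cuts for "Basic" and the actual raw-score cut percentages,
# both taken from the figures discussed above.
MAX_SCALE = 500

tests = {
    "4th grade ELA": {"scale_cut": 301, "actual_pct": 44.62},
    "8th grade Math": {"scale_cut": 321, "actual_pct": 40.13},
}

for name, t in tests.items():
    apparent = 100 * t["scale_cut"] / MAX_SCALE  # what a parent might infer
    gap = apparent - t["actual_pct"]
    print(f"{name}: looks like {apparent:.1f}%, actually {t['actual_pct']}% "
          f"(a {gap:.1f}-point gap)")
```

For 8th grade Math, the apparent standard (64.2%) overstates the real one (40.13%) by roughly 24 percentage points.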

I want to assure the Commission that I am well aware that the official scale scores should not be viewed as proportionate to the number of correct answers on the test, but this is not clear to the public. I am also aware of the process called “leveling,” whereby the testing company, with the approval of the Department, routinely adjusts the cut percentages on new forms of the LEAP test to take into account the comparative difficulty of different forms of the test and to ensure that students are not penalized or rewarded when tests get more difficult or easier. This is apparently the explanation for the drastic lowering of the cut percentages on this year's LEAP tests for ELA and Math. But I still find the statement that our students' performance has remained “steady” even though the tests were “aligned to more challenging learning standards” to be misleading.

I am also concerned that when a cut score has to be lowered to as low as 40% for a rating of Basic, the test itself comes close to being invalid as a measure of learning. Because the majority of the questions are still multiple choice, a student can get close to a passing score just by guessing on all the questions where he or she does not know the answer and combining those guesses with a very few known answers. I believe that the Commission should seriously question the validity of tests that result in such low cut scores.
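To put a rough number on the guessing concern: if we assume, purely for illustration, that every question is multiple choice with four answer choices (the actual test mixes item types, so this is a simplification), a student who truly knows a fraction k of the material and guesses the rest scores k + (1 - k)/4 on average. A short sketch:

```python
# Expected score for a student who knows a fraction `known` of the answers
# and guesses randomly on the rest. Four-choice items are an assumption
# made for illustration only.
def expected_score(known, choices=4):
    return known + (1 - known) * (1 / choices)

# Fraction of material a student must actually know to average a given cut,
# found by solving cut = k + (1 - k)/choices for k.
def knowledge_needed(cut, choices=4):
    guess = 1 / choices
    return (cut - guess) / (1 - guess)

print(f"Knows 20% of material -> expects {expected_score(0.20):.1%}")
print(f"Knowledge needed to average the 40.13% cut: {knowledge_needed(0.4013):.1%}")
```

Under these assumptions, a student who genuinely knows only about 20% of the material would average a score at roughly the 40.13% cut for Basic, which is the heart of the validity concern.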

Such low cut scores also introduce the issue of a double standard in state testing of students compared to acceptable Student Learning Targets (SLTs). I do not know of any schools that approve student learning targets as low as 40 to 45% for a passing score, or where it is acceptable for 35% of students in a classroom to score below 40% on the final test. Yet this is the situation we find in the setting of the 8th grade LEAP Math standard for Basic for the entire state. I believe there was something wrong with the 8th grade LEAP Math test and also with some of the other tests, yet the real results were covered up by the Department's pronouncement of “steady” performance by our students. Let me make this clear: I don't blame the students or the teachers for this dismal result; I blame the test and the test designers. I also blame the LDOE for using the false stability of scale scores as a smokescreen to hide the real performance on this year's LEAP test.

Finally, I must object to the low level of transparency exhibited by the Department in making this vital information available to educators and the public. Has the information above been produced for review by the Accountability Commission? When I first requested this information in May of this year, I was told that it might be available in November. But of course, the cut percentages have been known to the Department since the scores were issued in April. Why is it necessary to withhold such information from the public until November? Shouldn't at least the Accountability Commission be allowed to review this critical information? This is important because such scoring methods will likely become even more consequential as the state fully transitions to PARCC testing. Will the percentage cut scores be kept secret on the PARCC tests as well?

I am requesting that the Accountability Commission look into these matters and recommend a more transparent policy on releasing results and scoring changes and also demand a thorough analysis of the validity and appropriateness of the tests which will be used to grade our students, our teachers and our schools. This analysis should be conducted by someone independent of the State Department of Education and the testing company.

In closing, I would appreciate an opportunity to address the next meeting of the Accountability Commission to further clarify my concerns. Please feel free to contact me with any questions or comments.

Sincerely,
Michael Deshotels
Phone 225-235-1632