Just two days after our general meeting discussion on standardized testing in the U.S., this headline swept the Seattle Times: "Garfield teachers refuse to give district-wide test." The main motivations behind the test boycott were (1) misleading test results and (2) a huge strain on already-limited resources at the school during the test-taking week. The latter complaint has resonated time and time again (see, e.g., Adam Heenan's article "Time on Testing, 738 minutes in 3 weeks"). The teachers who refused to give the test in Seattle were supported in their actions by the Garfield High School administration, and the trend is now spreading to other schools in the Seattle School District. Interestingly, there has been quite a bit of recent controversy in the education and STEM communities in response to standardized testing, not only in Seattle but nationally as well.
Based on our group discussion, we can now confidently assert that there are issues with the current protocol (or lack thereof) for standardized testing. A breakdown of standardized testing at the state, national, and international levels (as discussed during the FOSEP meeting) is given below, along with a few issues associated with each test. Feel free to ponder and to explore the links. This post is meant to be food for thought, raising more questions than answers.
(1) State-specific standardized testing.
State standardized tests are required if schools want to receive federal funding. But did you know that each state drafts its own standardized tests, which then set the bar for education standards in that state? You can view a list of the standardized tests administered by each state here. And did you know that some states' tests are known to be really "good" (e.g. New York's), whereas others leave a bit to be desired? This has to do with, for example, how the questions are written, the test format (multiple choice versus short answer), and so on. I don't know about you, but I would hope that all state tests achieved the same level of acceptability in the education community. Is that too much to ask? Perhaps.
(2) The National Assessment of Education Progress.
NAEP provides an assessment of a subset of students in Grades 4, 8, and 12 from schools randomly selected across all states. Findings are used to gauge reading, writing, math, and science proficiencies at a national level. Click here for example questions. One of the main criticisms surrounding this test (in addition to the "teaching to the test" argument) is misinterpretation of data: seemingly small differences in performance are sometimes sensationalized by the media, a phenomenon criticized by Hyde and Linn in their 2006 Science paper. Statistics at their best, or statistics at their worst?
(3) Trends in International Mathematics and Science Study
TIMSS is used as an assessment of international education performance in the mathematics and science realms. Tests are conducted every four years for students in Grades 4 and 8. If you've ever seen a headline claiming that U.S. students are falling behind other top-tier countries in mathematics and science, chances are the data came from the TIMSS or the PISA (Programme for International Student Assessment). Recently, blanket statements about U.S. test scores have drawn serious backlash. Here is one example article: "International Test Scores Often Misinterpreted to Detriment of U.S. Students Argues New EPI Study."
Of course, this is not a simple issue to discuss or to solve, but it is an important one for FOSEP'ers to consider, given that many of us may have future careers as educators, and because the issue directly pulls our specialties (science, math, etc.) into a heated arena with state and national education policy.
Note: Thanks to Renee Agatsuma for providing us with several useful links, as well as an intriguing discussion last week.