Thanks to AP Stat teacher, personal friend, and fellow Messiah College alum Doug Tyson for pointing out this article. The article highlights the lack of convincing statistical evidence that the mad rush to adopt the latest classroom technology is actually paying off in improved student performance. Here is a taste:
To be sure, test scores can go up or down for many reasons. But to many education experts, something is not adding up — here and across the country. In a nutshell: schools are spending billions on technology, even as they cut budgets and lay off teachers, with little proof that this approach is improving basic learning.
This conundrum calls into question one of the most significant contemporary educational movements. Advocates for giving schools a major technological upgrade — which include powerful educators, Silicon Valley titans and White House appointees — say digital devices let students learn at their own pace, teach skills needed in a modern economy and hold the attention of a generation weaned on gadgets.
Some backers of this idea say standardized tests, the most widely used measure of student performance, don’t capture the breadth of skills that computers can help develop. But they also concede that for now there is no better way to gauge the educational value of expensive technology investments.
“The data is pretty weak. It’s very difficult when we’re pressed to come up with convincing data,” said Tom Vander Ark, the former executive director for education at the Bill and Melinda Gates Foundation and an investor in educational technology companies. When it comes to showing results, he said, “We better put up or shut up.”
There are lots more details and discussion in the article, and it is definitely worth considering for those of us trying to help our students learn as much as possible. Here at Messiah, I still teach on a blackboard with chalk in most of my classes, and I mix in presentations only where I find them helpful (not that often in most classes). I don’t use clickers, though I do ask for student reactions and sometimes hold informal “votes” to see what students think the answer is. Perhaps this has to do with my subject area (statistics and math), or with my own biases.

Any reactions or thoughts from my readers on either side of the classroom experience, whether as teachers/professors or as students? Here is a graphic from the article. My former students should be able to spot a problem in the three graphs on the left.