With technology playing such a large part in education, it's time to put it to the test and ask whether it is more than a luxury. With laptops now more common for note taking than legal pads and pencils, and iPads syncing to projectors in place of flash drives, we have to ask ourselves just how important technology really is in the classroom. Trip Gabriel and Matt Richtel of The New York Times take up the question in their article "Inflating the Software Report Card":
The Web site of Carnegie Learning, a company started by scientists at Carnegie Mellon University that sells classroom software, trumpets this promise: “Revolutionary Math Curricula. Revolutionary Results.”
The pitch has sounded seductive to thousands of schools across the country for more than a decade. But a review by the United States Department of Education last year would suggest a much less alluring come-on: Undistinguished math curricula. Unproven results.
The federal review of Carnegie Learning’s flagship software, Cognitive Tutor, said the program had “no discernible effects” on the standardized test scores of high school students. A separate 2009 federal look at 10 major software products for teaching algebra as well as elementary and middle school math and reading found that nine of them, including Cognitive Tutor, “did not have statistically significant effects on test scores.”
Amid a classroom-based software boom estimated at $2.2 billion a year, debate continues to rage over the effectiveness of technology on learning and how best to measure it. But it is hard to tell that from technology companies’ promotional materials.
Many companies ignore well-regarded independent studies that test their products’ effectiveness. Carnegie’s Web site, for example, makes no mention of the 2010 review, by the Education Department’s What Works Clearinghouse, which analyzed 24 studies of Cognitive Tutor’s effectiveness but found that only four of those met high research standards. Some firms misrepresent research by cherry-picking results and promote surveys or limited case studies that lack the scientific rigor required by the clearinghouse and other authorities.
Read more at The New York Times