The OECD report Students, Computers and Learning has matched up statistics about computer use in schools with results in standardised tests of reading, mathematics and science. It's an interesting read, even just the executive summary. Its key findings include:
- Computers were more common in schools and homes in 2012 than in 2009.
- Countries where computers are more common in schools did not see improved results (and in some cases results declined).
- Countries with the best results do not have the most computers (although their computer availability is still above the OECD average).
- Students need good skills, particularly in reading, to discern useful information available online.
- Excessive time online outside school can be harmful to young people.
I've already seen some interesting interpretations of this report's findings, including suggestions that schools should stop investing in computers. It's not clear that having computers in classrooms leads to poorer results; it is clear that adding computers to a classroom does not automatically improve learning. Perhaps the most useful indication of the report is that we still need to improve how we deliver education using technology.
What is your take on this report? What aspects can we research further? What can we take from this and improve in teaching practices?
The results are based on data from 2012. Have things changed greatly since then?
In other news, the Pope is Catholic, and bears defecate in the woods.
Did they look to see whether there was a correlation between attainment and the number of pens or books in schools? Or perhaps the number of quills or slates?
Education is about what happens in students' brains, and about the activities the teacher gets them to undertake. To expect any particular tool to transform that on its own is hopelessly naive (though the marketing departments of companies trying to sell you stuff will try to convince you otherwise).
On the other hand, computers and the internet are some of the most powerful tools we have for managing and communicating knowledge. In the hands of skilful teachers and students they have huge educational potential, but only as an aid to whatever acts of teaching and learning those people want to engage in.
I've only gone through the executive summary so far, but some questions/observations:
- Do the students consider their phone a computer? In the last couple of years, I would imagine that people have started to forgo their computers in favour of their phones.
- Are we using new measurement techniques? Using technology and new learning methods may require different ways to measure success. If we measure using techniques designed for older teaching methods, are the comparisons valid? And I suppose the next question is: should we change our teaching techniques simply because we can use new technology?
- I believe Tim said this as well, but for the organizations that increased technology use, did they spend time to adjust their teaching/learning techniques?
...but for the organizations that increased technology use, did they spend time to adjust their teaching/learning techniques?...
I have the impression, particularly with iPads, that schools are thinking 'oh, we've bought a load of iPads because they're cool; now what can we do with them?' rather than finding a teaching/learning need that iPads (for example) could meet. I see presentations/webinars showing how iPads/devices can be used in the classroom, but I do sometimes wonder if they are just trying to justify their use where pen and paper might actually be more efficient. And I say that as a technology lover, not a cynic, honestly.
Totally agree with you, Mary. But if they are convinced that they want or need to use that technology, then they must spend some time designing how to use it effectively, and come up with a way to monitor and measure its success. And they should absolutely be prepared to remove the technology where it has no benefit or isn't working.
Seymour Papert offered a good critique of this type of thinking/research in "Computer Criticism vs. Technocentric Thinking", e.g.
However, such turns of phrase often betray a tendency to think of "computers" and of "LOGO" as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations -- people and cultures -- to a secondary, facilitating role (1).
The context for human development is always a culture, never an isolated technology. In the presence of computers, cultures might change and with them people's ways of learning and thinking. But if you want to understand (or influence) the change, you have to center your attention on the culture -- not on the computer.
In this particular example, the reliance on standardised tests as a measure of outcomes gives some hints to the culture at play.
He makes this argument in response to studies that showed limited impact of the Logo programming language. He uses this analogy:
"Does LOGO work?" "Is LOGO good for learning this or that?" All these turns of speech are signs of the technocentric stage of computer discourse.
Consider for a moment some questions that are "obviously" absurd. Does wood produce good houses? If I built a house out of wood and it fell down, would this show that wood does not produce good houses? Do hammers and saws produce good furniture? These betray themselves as technocentric questions by ignoring people and the elements only people can introduce: skill, design, aesthetics.
My initial reaction to reading Michael's description of the study (I haven't looked at the study itself) was to think of a problem with most learning analytics research, i.e. it's driven more by the availability of data and algorithms than by the usefulness of any insight that can be gained.
In the same piece Papert critiques the "treatment model" of research. Perhaps that's an indication that we should look for alternatives to such research, even though it's much easier to do?
In terms of Moodle and improving teaching practice, perhaps this offers one suggestion:
Stated abstractly, the two studies have the same explicit intention: the children are to be given "programming"-- and the purpose of the experiments is to see what happens. But there is no such thing as "programming-in-general." These children are not given "programming." They are given LOGO. But there is no such thing as "LOGO-in-general" either. The children encounter LOGO in a particular way, in a particular relationship to other people, teachers, peer mentors, and friends. (4) They don't encounter a thing, they encounter a culture.
Measuring the impact of "Moodle" or "computers" on education is pointless. There is no such thing as X-in-general. Instead it's the culture into which X is introduced that is important. For me, this raises questions such as:
- Is there such a thing as uniform culture across an educational institution (e.g. a University)?
- What cultures create effective applications of X?
- How does the culture of an institution change with the introduction of X, or does X get transformed to fit with the culture?
Something Papert picks up in the same article, which mentions the "grammar of school".
- Can you get indications of the culture of an institution from how it uses X?
- How is the culture of an institution around X (in particular Moodle) influenced by the experiences and knowledge of the people responsible for implementing and maintaining it within the institution?
Correlation studies in education like these always stir up emotions in many people, both pro- and anti-IT.
The criticism that I think should be levelled is that introducing any new feature/facility/affordance/tool/etc. into classrooms without a purpose, just to "see what happens", may be OK at the small, experimental pilot-project level, but not at the whole-system, multi-billion-dollar level we've seen, at the expense of other necessary and limited staff and resources. What if they'd spent the IT budgets on books, materials, lab equipment, and/or classroom support workers/teaching assistants? How would that have affected learning outcomes?
What's been happening in schools with regard to IT has been an irresponsible waste of resources.
During my teacher training I read about a project that was going to improve education by bringing TV to remote villages. Once it was in place, some research was done, and the metrics indicated that education had improved. Then someone looked more closely and noticed that results had gone up even in places where TVs had been supplied but there was no TV signal.
It is interesting that those with the most hands-on experience of technology tend to understand that the technology is just a tool.
Give me the finest canvas and paints and I will give you wobbly stick men. Give the same tools to Leonardo and you get the Mona Lisa.
I apologize for the intrusion, but I couldn't resist the temptation to participate in this discussion.
I think this happens when teachers are left alone, without any guidance to suggest or advise what, how and when to use technology wisely.
As a teacher and parent, I witness every day this desire to purchase, which looks more like a kind of status symbol than anything else.
In the space of just six years we have gone (in my case) from computers in the lab, to notebooks, to the multimedia board, to tablets... and now attention is focused on smartphones.
I totally agree with those who say the problem is not the abundance of technology, but the (sometimes total) lack of ideas on how to get the most out of each tool, and on when to give one up altogether.
Lots of interesting and well-informed things have been said so far, but I am somewhat skeptical of claims about how good computers are in education. Computers are tools, and need to be used as such. Students can use computers to inform their education, not become so proficient at using computers that they learn nothing else. I don't wish to be seen as a Luddite, but I do suggest we should be sure that we are using technology appropriately. I am not sure that's what we are really doing.