So in the last post I introduced myself and the research work we’ve been doing with schools. In this one, I want to talk about a couple of factors that were crucial in making sure 3DHive was used meaningfully in school – the technical performance of the software, and a strong lesson plan.
3DHive is made of three parts. There’s the editor (“3DHive.Build”), used by schools and developers to build the games that schools play. There’s the server, in the cloud, running the games. And there’s the client running on the school PCs (“3DHive.Play”), which is what students and teachers use to enter the virtual world. All three parts need to function for the system to work as planned.
During our research, the software performed reliably, functioning as designed for most users. This might not seem like a remarkable claim, but for software still in development, testing in a real-world environment can be a real wake-up call for the development team. With the version of 3DHive we used there were no disasters, though there were a few minor issues we were able to feed back to the development team, and some technical problems caused by the configuration of the school network. For example, in some large classes it was a struggle to get everyone online through the client, which was unexpected, and machines that hadn’t been kept up to date sometimes ran into driver compatibility issues. These issues have been passed to the team and are fixed in the latest version.
The developers worked closely with schools and administrators to make sure the software matched the specifications of the machines students use in school – and it looks as though that strategy paid off. But classrooms are, technologically speaking, very different places from homes and offices, and the only way to really learn how software will behave is to try it out. Working closely with the technicians responsible for the network is also vital.
There were two parts to the lessons we observed – the game played by the students, and the teaching activities that went on around it. Both needed designing carefully for the whole lesson to work. The game, addressing elements of the Civics and Moral Education curriculum, had been previously designed by teachers and developers working together. Pilot tests late last year showed, however, that it would be important to structure the lessons carefully to ensure that teachers could make the most of the teaching opportunities it presented.
We organised some preliminary workshops with teachers to discuss their shared teaching aims and pedagogic approach. From these workshops a lesson plan was developed that teachers felt would support them in working towards the desired learning aims of teamwork, problem-solving and sportsmanship. The plan made the role of the teacher clear, and set aside explicit room for reflection and assessment – important areas that might have been overlooked without this space for considering professional practice.
Of course, these workshops didn’t address everything that teachers encountered in the actual lessons – a group of teachers playing games together is a very different environment to forty students competing with each other. But they were a useful venue for having the kinds of conversations that left teachers better prepared for lessons. And they served a useful technical function, too, with a number of bugs being discovered at this stage rather than during classroom use.
For me, the key lesson from this is that it’s vital to consider the context in which the game will be used. Relying on a game to support learning without any preparation is risky, and makes it more likely that students will gain nothing from playing it.
So we’ve learned first-hand that preparation is important. In the final part, I’m going to share some of the outcomes we’ve seen so far, and discuss some of the new questions they raise for researchers and educators.