[Small edits made on 21 August]
This Huffington Post piece by Keith Devlin (whose Coursera Introduction to Mathematical Thinking course I completed and reported on earlier this year - 1st report; 2nd report) hits several nails on the head, though Phil Hill criticises the piece rather bluntly for what he sees as three types of factual error.
This extract gives you a flavour of the article.
"Teaching and learning are complex processes that require considerable expertise to understand well. In particular, education has a significant feature unfamiliar to most legislators and business leaders (as well as some prominent business-leaders-turned-philanthropists), who tend to view it as a process that takes a raw material -- incoming students -- and produces graduates who emerge at the other end with knowledge and skills that society finds of value. (Those outcomes need not be employment skills -- their value is to society, and that can manifest in many different ways.)
But the production-line analogy has a major limitation. If a manufacturer finds the raw materials are inferior, she or he looks for other suppliers (or else uses the threat thereof to force the suppliers to up their game). But in education, you have to work with the supply you get -- and still produce a quality output. Indeed, that is the whole point of education."
I am not so sure that Devlin is being completely fair when, in the Huffington Post article, he describes the "on hold" collaboration between San Jose State University and Udacity as a "train-wreck".
My intuitive feeling - based on something I learned the hard way in 1999* - is that the data, once analysed (does anyone know when and where the NSF-funded analysis will be published?), will show that if there had been some basic "filtering" of students onto the online course according to their circumstances and characteristics (for example: the extent of their online skills; their access to a computer and an Internet connection; whether they had physical space in which to study uninterrupted, and space in their lives to set time aside to study), then, irrespective of learners' prior achievement, the success rates might have been more respectable. (This leaves aside any consideration of whether the Udacity/San Jose State courses should have been designed in a better or different way for the kinds of learners for whom they were intended.)
But Devlin is right that MOOCs as currently designed - and of whatever type - tend to be suited to people who have learned how to study independently; and that different curricula are differently suited to being provided through MOOCs, partly as a result of differences in the ease with which learning can be "machine assessed". That is why the thinking that Devlin is doing on peer-based assessment, including on learning through evaluation, is particularly interesting. For more on this see Keith Devlin's MOOCtalk blog.
* In 1999, when I was involved in the design, development, and running of the Living IT suite of online courses, I learned that the provider needs a strong pre-course assessment process to ensure that those signing up for an online course have a sense of what they are letting themselves in for. We did not have this on "Living IT", and had a high drop-out rate as a result; but we did have it on another course called Learning to Teach Online, as well as on several other online courses run from 2000 onwards by The Sheffield College. (Disclosure: I am now a Governor of The Sheffield College.) Here's an example from 2003, though we started using pre-course assessments many years earlier. Of course both URLs have a certain quaintness now, and none of these courses were MOOCs in any respect; but I am sure the underlying principle remains an important one.