Pearson LearningStudio is the product/service that has emerged from the publishing "big beast's" purchases of Fronter and eCollege. This leaves the public education VLE world - bar future acquisitions - divided among three commercial products (Pearson, Blackboard, and Desire2Learn) and two Open Source ones (Moodle and Sakai).
Pearson dwarfs Blackboard and Desire2Learn. It has a large catalogue of textbooks, some of which already give their owners access to an array of online learning content. It is an educational publisher with an international marketing infrastructure. It owns awarding bodies such as Edexcel.
Alongside this, both Fronter and eCollege have concentrated from the start on running hosted services rather than on selling software for learning providers to run themselves. (Sure, both Blackboard and Desire2Learn offer hosted services too.)
A supplier of hosted services gains a mass of data about learner behaviour. Google and Amazon are not the only companies that have learned how to extract meaning from such data. So my current "intuitive tip for the next ten years" is that the next phase of VLE development will involve the provision of automated and semi-automated tools that draw on the mass of behaviour and performance data that hosted VLEs hold (or can access), combining it with data about the individual learner.
Such tools could provide help and guidance for learners, teachers and others involved in the support of learning (parents, for example). Perhaps they could also shape the content, activities and so on that the VLE provides to the learner, based on the learner's characteristics and on factors like the learner's previous behaviour in the system.
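To make the idea concrete, here is a minimal sketch of one such tool: a toy item-based recommender that suggests resources to a learner from aggregate behaviour data. Everything in it - the learner names, the resources, the engagement scores - is a hypothetical illustration, not any vendor's actual data or API.

```python
# A minimal sketch of the kind of tool described above: a hosted VLE
# suggesting resources to a learner based on how similar other
# learners' behaviour is to theirs. All data here is hypothetical.

from collections import defaultdict
from math import sqrt

# Hypothetical interaction data: learner -> {resource: engagement score}
interactions = {
    "alice": {"quiz_1": 5, "video_intro": 3, "reading_2": 4},
    "bob":   {"quiz_1": 4, "video_intro": 2, "forum_task": 5},
    "carol": {"reading_2": 5, "forum_task": 4, "quiz_1": 2},
}

def cosine_similarity(a, b):
    """Cosine similarity between two sparse score vectors."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[r] * b[r] for r in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(learner, k=2):
    """Suggest resources the learner hasn't yet used, weighted by the
    behavioural similarity of the learners who have used them."""
    own = interactions[learner]
    scores = defaultdict(float)
    for other, theirs in interactions.items():
        if other == learner:
            continue
        sim = cosine_similarity(own, theirs)
        for resource, score in theirs.items():
            if resource not in own:
                scores[resource] += sim * score
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # -> ['forum_task']
```

A real hosted VLE would of course be working with millions of interactions rather than three learners, and would fold in the learner-profile data mentioned above; the mechanism, though, is of this general shape.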
The selling point for VLEs that use data in this (dystopian?) way will be improvements in effectiveness and efficiency - nothing wrong with either; but the approach described also raises many issues, some concerned with privacy and data ownership (it would certainly be interesting to see what the privacy policies of hosted VLEs say about the uses to which user data can be put), and others with the continued transfer of "knowledge mediation" from the public to the private sphere. And the technical challenges are formidable. The amount of data is much smaller than that held by truly mass-scale systems like Google, and it is more nuanced and multi-dimensional. As my friend David Jennings pointed out when commenting on a draft of this post:
"One reason I can think that your predictive hunch might not come to pass is that VLEs are a different context from Amazon, Google or (for the most part) libraries. In the latter, the data collected is person <--> resource. In VLEs it's person <--> mediator (tutor, peers, group dynamics) <--> activity <--> resource, with lots of scope for unpredictable interactions between these to create 'noise' that drowns out clear statistical associations. In other words, the numbers are a hell of a lot harder to crunch."
Last July the US National Academy of Engineering identified "advance personalized learning" (along with "provide energy from fusion") among its grand challenges for engineering. Google already influences what you find. Will hosted VLEs, applying automated statistical analysis to data about users and user behaviour, start to shape what and how students on formal courses learn?
This piece was influenced by David Jennings's 30/1/2009 post "Web 2.0-style resource discovery comes to libraries - the TILE project".