(Other posts tagged ai-course.)
Small edits made 30/10/2011, some of which are indicated.
Here is my fourth participant's report from the Stanford Introduction to Artificial Intelligence course.
This is a shorter report than #1-3, mainly because the course has got into a rhythm and because there've been no substantial changes in delivery methods.
1. Despite the work that has been done to the web systems that sit behind the site, it looks as if there were again overload issues at and around the week 2 homework submission deadline, and this despite the fact that the number of students submitting homework appears to have dropped by nearly 20%, to ~37,000 from the ~46,000 reported after the week 1 homework deadline.
2. The course continues to fascinate. For example, it is nice to gain a practical understanding of how things like spam filters actually work.
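As an aside, here is a minimal sketch of the sort of Naive Bayes spam filter the lectures describe, if I have understood them right; the training messages and function names below are mine, invented purely for illustration, and not taken from the course.

```python
# Minimal Naive Bayes spam filter sketch (illustrative only).
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, totals, vocab

def word_likelihood(word, label, counts, totals, vocab, k=1):
    # P(word | class) with Laplace (add-k) smoothing
    return (counts[label][word] + k) / (totals[label] + k * len(vocab))

def classify(text, counts, totals, vocab, p_spam=0.5):
    scores = {}
    for label, prior in (("spam", p_spam), ("ham", 1.0 - p_spam)):
        score = math.log(prior)
        for word in text.lower().split():
            score += math.log(word_likelihood(word, label, counts, totals, vocab))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy training data, invented purely for illustration
training = [
    ("win money now", "spam"), ("cheap money offer", "spam"),
    ("meeting agenda attached", "ham"), ("lunch later today", "ham"),
]
model = train(training)
print(classify("money offer today", *model))  # -> "spam"
```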
3. The "lumpiness" of the content and the quizzes continues. I do not mind this particularly - and to some extent it stems from variations in my prior knowledge; but it has to be said that the ease of the quizzes veers widely from pretty trivial to quite difficult, and the same seems to go for the accompanying material. Similarly, the quizzes relating to the more challenging material are sometimes too easy to give much of a test of one's understanding of the material to which they relate. Likewise the homework tasks, some of which seemed rather easier than the material to which they relate.
4. I suppose the (stating the obvious) point here is that the successful design of online assessments is an important art/science in its own right, distinct from mastery of how to teach or otherwise convey subject matter online. And this leaves aside the issue of how questions can diagnose where a learner is struggling with particular concepts. (For more on this see Assessment, learning and technology: prospects at the periphery of control, Dylan Wiliam's 5/9/2007 keynote at the 2007 ALT conference in Nottingham:
- Slides and video of the talk, captured as an Elluminate Live! session [~75 MB];
- Text transcript [75 kB PDF];
- Slides [400 kB PDF];
- MP3 recording [12 MB].)
5. I'll end with this wonderful "decision diagram" by fellow student r3dux in Australia. The diagram expresses pretty well the kind of pre-course assessment that courses of this kind need (there is more about this in a previous report), as well as conveying plenty about the pleasures and sorrows of this kind of learning. (Click on the thumbnail to see the diagram in its home location and to read the small print.)
It is likely that students' perceptions of this course are being clouded by poor technical infrastructure. It's a free course and of course a pilot, but having been involved in delivering mass on-line education, I know that even a small outage or brownout is a 100% outage for those students that have put aside that timeslot for their learning.
What should Stanford (and other organisations) do to help here?
1. Monitor user experience with an automated check every 3 minutes (see the sketch after this list).
2. Have a separate "lo-fi", text-only news site, linked from all deliverables, and use it to report issues once they are affecting more than, say, 50 students. A range of system "status lights" can be used to reflect different aspects of the service being reported on.
3. Assume that stuff will go wrong for some students and ensure there is a buffer built in: allow 8 days' elapsed time for a 5-day course cycle - the students will find stuff to do during the other 3 days.
4. Strong and regular communication can turn an outage and a negative experience into the positive feeling that "these folks are trying hard".
5. Load balance content and have a FIRE BREAK policy that prevents positive feedback loops (not a good thing in engineering) from taking whole services down.
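To make points 1 and 5 a little more concrete, here is a rough sketch of a synthetic check run every 3 minutes with a simple "fire break" that trips after a run of bad checks; the URL, thresholds and status wording are invented for illustration, not anything Stanford actually runs.

```python
# Rough sketch only: a synthetic user check every 3 minutes plus a simple
# "fire break" (circuit breaker). The URL, thresholds and status wording are
# invented; a real deployment would publish to the lo-fi status page.
import time
import urllib.request

CHECK_URL = "https://ai-class.example/homework"  # hypothetical endpoint
CHECK_INTERVAL = 180      # seconds, i.e. every 3 minutes
SLOW_THRESHOLD = 5.0      # seconds before a response counts as a brownout
TRIP_AFTER = 3            # consecutive bad checks before the fire break trips

def check_once():
    """Return (ok, elapsed_seconds) for one synthetic user request."""
    start = time.time()
    try:
        with urllib.request.urlopen(CHECK_URL, timeout=30) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    elapsed = time.time() - start
    return ok and elapsed < SLOW_THRESHOLD, elapsed

def monitor():
    bad_in_a_row = 0
    tripped = False
    while True:
        ok, elapsed = check_once()
        bad_in_a_row = 0 if ok else bad_in_a_row + 1
        if not tripped and bad_in_a_row >= TRIP_AFTER:
            # Fire break: shed load and point users at the status page rather
            # than letting retries pile up and amplify the outage.
            tripped = True
            print("status: RED - service degraded, fire break on")
        elif tripped and ok:
            tripped = False
            print("status: GREEN - service recovered, fire break off")
        print(f"check ok={ok} elapsed={elapsed:.1f}s")
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()
```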
It's not enough for the content to be good, because service delivery underpins any learning experience just as much as content does.
Posted by: Dick Moore | 07/11/2011 at 12:43