Here is my sixth participant's report from the Stanford Introduction to Artificial Intelligence course.
I'll split this report into two sections.
Firstly, some points about course organisation.
Last week's attempt to run an "office hours" session in real time did not work. Instead, two short videos have been published in which Peter Norvig and Sebastian Thrun respond informally to the questions that received the most votes from students before the session. More such recordings will follow. I've embedded the videos below because they:
- give a lay person some sense of why the AI field is important and interesting;
- contain pointers to what may be coming next from Norvig and Thrun by way of further online courses.
Note the explicit references to the course team's intention to:
- develop a course at a more introductory level than the current one;
- get to grips with extracting meaning from the data that is being collected about, for example, learners' use of the materials and, I am assuming, the relationships between things like use and progress.
(The two sessions - do not be deceived by their very similar "thumbnails" - are respectively 13 and 8.5 minutes long.)
Secondly, some brief comments about my experience of the course as a learner, and a brief list of design improvements for courses of this kind.
In last week's report I mentioned that I'd stumbled over the abstractness of the sections of the course on propositional logic and on the mathematical representation of plans. This was borne out by my marks for the homework, which bombed:
This week I felt somewhat more comfortable with the materials, though, as last week, there was rather a shortage of formative quizzes, and some of the homework questions would have been better used as formative quizzes.
I'm concluding with some suggestions for design improvements, some of which verge on being statements of the obvious. These should be read in conjunction with the summary I posted at the end of my third report from the course.
- Include plenty of check quizzes - ideally about one set for every two or three chunks of explanatory content. Without these, the powerful sense of being in personal dialogue with a teacher is much reduced.
- Watch out for the problem of summative assessments that would be better used to provide formative feedback. The whole point of formative feedback is that you need it very soon after you've done the learning to which it relates.
- Collect brief numeric learner feedback after each unit of the course, concentrating on issues like how hard the material felt and how effective the check quizzes proved.
- Provide (possibly as a user-generated or user-ranked resource) a page of support links per unit.
- Think carefully about the relationship between the course and the textbook (in this case Russell and Norvig's "Artificial Intelligence: A Modern Approach"), so that the course has an overall coherence, rather than sometimes seeming to be "dipping in and out of" the textbook.
I'm taking the ai-class as well. I agree with you on your suggestions for design improvements, especially your first one. I've found repeating the check quizzes to be the most useful way to review the material. Since the system keeps track of how accurate your first quiz answer was, it's a good indicator of gaps in your understanding.
robrambusch
Posted by: Robert Rambusch | 17/11/2011 at 06:21