Substantial changes made to A3 and small changes made to A5 and A7 below, 21 November 2011, and to introductory paragraph, 23 November 2011. [For visitors from the Aiqus discussion about student numbers, note that more light is shed on the "YouTube video counts question" - a distracting side-issue - in paragraph seven of report number two, 18 October 2011.]
Here is my seventh participant's report from the Stanford Introduction to Artificial Intelligence course. It is in three parts. A is the report proper. B picks up on a pre-arranged call from Sebastian Thrun. C lists further free CS courses that will be available from Stanford University in January 2012.
A - Report
1. A lighter week from the point of view of studying: less material was presented, to allow students time to prepare for the midterm examination, aka "the midterm".
2. The midterm (which accounts for 30% of the marks for the course) had initially been billed as something that had to be undertaken within a 4-hour period in a 24-hour "window".
3. For reasons that had not been made particularly clear, these constraints were lifted some weeks ago: we now had 72 hours in which to complete the exam, making it, in effect, an untimed open book examination: that's fine by me. ST wrote, on 20 November, after this post was first published: "We changed the format to 72 hours since we are changing the philosophy behind our teaching. We don't need online exams to "weed out students" - we wish to empower students. A short time usually serves to make students fail who with a little extra time would do well. We want students to understand that they can do the exam. I just did the same on Stanford's campus for my class there, and it was super-appreciated by the students. They ended up learning more, and they feel better about what they know."
4. The exam (which was helpfully made available as a PDF as well as in the conventional video+questions format) proved challenging, but not terrifyingly so; and the challenge came mainly from having to review several previous parts of the course to feel confident about tackling those problems which were not trivial (for me that meant most of them). I would have struggled to complete the 15 questions within 4 hours.
5. As a test of what I remember it was unsuccessful (except in so far as it showed me how much I had forgotten); as a test of what I now know how to do, after a bit of digging, it seemed more effective. Likewise, it worked as an intense mid-course revision exercise. But as firm proof of my mastery of the subject I think it was of more limited value. (And that is without considering the scope for cheating, which must be extensive, though there is no sense that this is in fact an issue: the tone "out there" is one of "more fool those who do".)
6. It was striking - and very surprising - how much more intelligible the exam questions were in video+questions format than as PDF. This is partly because of the occasional informative aside that Thrun makes when setting a question; but there is also something about the sense of personal dialogue that is engendered, which seems to deepen one's understanding.
7. During the week I came across some critical assessments of the AI course, particularly in comparison to its sister Machine Learning and Introduction to Databases courses. Today's Review of 2011 free Stanford online classes is one such; and it verges on snide. (I'd prefer to know who the author of this post is.) Summary: "The database and machine learning classes are excellent. Ironically, the AI class is pretty bad, even though it was the poster child for this wave of online offerings." This one by Moana Evans, published on 21 November, is balanced, perceptive, and constructive, and has attracted plenty of thoughtful comments from other students.
B - Conversation
On 7 November the Association for Learning Technology (for which I work half time) published What Can We Learn From Stanford University’s Free Online Computer Science Courses?, a smartened up précis of points made in these weekly reports. Sebastian Thrun had commented on the third report: the ALT Online News article prompted a direct approach from him to discuss on the phone how courses of this kind could be improved. During the conversation I reiterated and built upon the points made in the third, fifth and sixth reports, and by Dick Moore.
Several things were clear from the conversation:
- There are people all over the world - particularly in developing countries - who are hugely committed to completing the course, sometimes in the most extreme circumstances. (Being under mortar fire was one of the distractions that Seb Thrun mentioned.) I got the distinct feeling that this fact has spurred Thrun and others at Stanford into thinking further about how a mass, high-level, high-quality computer science education could be made available for free as a public good. (See the note below for some straws in the wind that relate to this.)
- There is an overwhelming amount of feedback from students - for example, within 24 hours of 40,000 people receiving a cheery email of encouragement from Peter Norvig and Seb Thrun, 2000 had responded, often in considerable detail.
- Alongside this there are plenty of teachers who have enrolled on the course and who are making comments: it is challenging to sort the wheat from the chaff.
- The production team intends to use the goldmine of data about learner behaviour and learner performance that it is accumulating to refine future provision. For more on this (albeit not directly in relation to this course) see Using AI in formative and summative assessment, especially the links to the post by the Khan Academy intern David Hu and to the 2010 ACM Communication by Greg Linden.
C - Further CS provision from Stanford
Here is a list of the wide range of courses that will be available in January:
- CS 101 – essentials of Computer Science for a zero experience audience;
- Cryptography – protecting information in computer systems;
- Game Theory – mathematical modeling of strategic interaction among rational agents;
- Human-Computer Interaction – designing technologies and interfaces that bring people joy rather than frustration;
- Lean Launchpad – turning an idea into a company;
- Machine Learning – getting computers to act without explicit direction;
- Natural Language Processing – algorithms and technology for dealing with human language data;
- Probabilistic Graphical Models – practical and theoretical methods of manipulating probability distributions;
- Software as a Service – engineering fundamentals for Agile SaaS development using Ruby on Rails;
- Technology Entrepreneurship – creating a successful startup.
Many thanks Seb...which one do you fancy in January?
==
Bob - the one I have signed up for is Human-Computer Interaction, which is something I know a bit about (unlike AI). Seb
Posted by: Bob Harrison | 20/11/2011 at 15:53