Guest Contribution by Gundega Dekena
[Update posted by Seb Schmoller on 12 July 2012. Note that Gundega now works for Udacity, the company that developed from the AI course. Read how she became part of the Udacity team on the Udacity blog.]
Gundega Dekena is a self-taught Linux administrator and web programmer, based in Riga, Latvia. She has been studying all three of the October to December Stanford online computer science courses in parallel - Introduction to Artificial Intelligence (AI), Introduction to Machine Learning (ML), and Introduction to Databases (DB) - putting her in a good position to compare and contrast them. Gundega can be contacted at gundega.dekena [AT] gmail.com, or through Google+.
Overview
Comparing these three courses feels a bit like comparing apples, screwdrivers and desks, yet I see a lot of students doing that, usually without much thought about the differences. So this is my look (from the perspective of a student) at the things that can be compared, and at what can be learned from all three courses, for the benefit of the next batch of courses coming from Stanford next year.
These courses are the first of their kind and can therefore all be considered experimental. They are not the first full university-level course lectures put online for free (there are plenty of on-campus lectures videotaped and put on YouTube by a wide variety of universities, see [1], as well as lectures from conferences, summer schools, workshops and science promotional events [2]); nor are they the first educational videos coupled with exercises and quizzes (Khan Academy [3] has a lot of these, and Curious Reef [4] experimented with this idea). But as far as I know, they are among the first free mass courses that have an actual schedule, actual homework and exams, some feedback from professors, and a large community of students who are following the topic at the same time and can therefore interact and help each other.
They use two different approaches and technological solutions: one is being developed along the way by a startup company, KnowIt; the other is based on Stanford's established OpenClassroom platform [5]. They also differ wildly in topic complexity, and in their real-world applicability and accessibility for people from different backgrounds with different connectivity and access devices.
Video presentation
The two ways of presentation differ quite a lot. The AI class topics are cut into very small subtopics or principles, while the ML and DB topics are much longer and cover more ground at once. The short videos make it quick to re-watch a particular topic or explanation if needed, while the 10-25 minute lectures in the ML and DB classes are not particularly good for that if, for example, you want to see some idea explained again. On top of that, some really great tools have been developed for the AI class that show all the subtitles at once - rather like lecture notes - allowing you to jump to a particular second in the video [6].
The advantage of the ML and DB classes is that they provide an option to download all the video lectures, instead of streaming them. This is not possible for the AI class without using external tools or sites. While this is not a huge deal for people in places with good bandwidth, it can be an extremely important option for students from less developed countries with slow, unstable or just expensive Internet access.
In-video quizzes
The AI course has a lot more in-video quizzes than the other classes, and therefore feels a lot more involving. Combined with the quirky video presentation, it gives something of the feeling of real one-to-one teaching. The ML and DB classes, on the other hand, are a lot more academic, with less active involvement required from students.
Homework
The ML and DB classes use a well-established system of randomly generated quizzes with well-tested questions and answers, and therefore usually pose no problems. The AI homeworks are developed in the same way as the in-video quizzes, and are often criticized by students for too much ambiguity or (rarely) incorrect answers and unclear explanations (usually all these problems are addressed and fixed in a very timely fashion). They are also not randomized, and therefore more prone to potential cheating. On the plus side, these exercises tend to involve deeper thinking about the subject and often spark conversations on the Question and Answer (Q&A) forums, leading to different explanations of topics from fellow students and potentially to better understanding.
The DB class uses an advanced workbench system for exercises that allows hands-on testing of acquired skills on real databases and data sets. The workbench exercises appear to be the best-rated part of the whole course, according to a recent student survey [7].
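The appeal of the workbench is that queries run against real rows and the result can be checked immediately. Purely as an illustration of that style of exercise - the actual DB-class schemas and data sets are not reproduced here, so the tables below are made up - a self-contained sketch using Python's built-in sqlite3 module:

```python
import sqlite3

# Hypothetical miniature schema and data, only in the spirit of the
# DB-class workbench; the real exercises used their own data sets.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE movie  (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE rating (movie_id INTEGER, stars INTEGER);
    INSERT INTO movie  VALUES (1, 'Gone with the Wind');
    INSERT INTO movie  VALUES (2, 'Star Wars');
    INSERT INTO rating VALUES (1, 2);
    INSERT INTO rating VALUES (1, 4);
    INSERT INTO rating VALUES (2, 5);
    INSERT INTO rating VALUES (2, 3);
""")

# A typical workbench-style task: for each movie, compute its average rating.
cur.execute("""
    SELECT m.title, AVG(r.stars)
    FROM movie m JOIN rating r ON m.id = r.movie_id
    GROUP BY m.id
    ORDER BY m.title
""")
print(cur.fetchall())  # [('Gone with the Wind', 3.0), ('Star Wars', 4.0)]
```

The instant feedback loop - write a query, run it, see the rows - is presumably what the survey respondents in [7] valued.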
The ML class, in addition to quizzes, also has programming assignments, done in Octave, with several programming tasks each week. These are highly appreciated by students.
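The assignments themselves were written in Octave and are not reproduced here. Purely as an illustration of their flavour, here is a minimal sketch in Python of the kind of task set early in such a course - batch gradient descent fitting a line to a handful of made-up points (the data and step size are my own, chosen only so the example converges):

```python
# Illustrative only: the actual ML-class assignments were done in Octave.
# Batch gradient descent fitting y = theta0 + theta1 * x.
def gradient_descent(xs, ys, alpha=0.1, steps=1000):
    theta0, theta1 = 0.0, 0.0
    m = len(xs)
    for _ in range(steps):
        # Prediction errors under the current parameters.
        errs = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        # Average gradients, then a simultaneous update of both parameters.
        grad0 = sum(errs) / m
        grad1 = sum(e * x for e, x in zip(errs, xs)) / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# The points lie exactly on y = 1 + 2x, so the fit should recover (1, 2).
theta0, theta1 = gradient_descent([0, 1, 2, 3], [1, 3, 5, 7])
print(round(theta0, 3), round(theta1, 3))
```

Each weekly assignment typically asked students to fill in a function like this, then ran it against a grader, which is a large part of why they were so well liked.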
Exams
The ML class has no exams. In contrast, both the AI and the DB courses have midterm and final exams that are very similar to the weekly homework quizzes. It was possible to arrange to take the AI course exams in person under controlled conditions at the University of Freiburg or at the Munich Technical University, and to receive transferable credit for an undergraduate AI course at either university. This opportunity was very well received by students, and the available seats for taking the exam in person filled very quickly. It is an interesting option that will hopefully become more widespread in the future.
Feedback from professors
The AI course is the only one of the three that lets students put questions to the professors and actually receive answers to the most-voted questions (via Google Moderator) every week in virtual Office Hours. This is a very popular feature and really adds a feeling of involvement in a real class. The DB class offers a weekly “Screenside” chat that does not answer questions asked by students, but instead covers upcoming assignments, potential problems, trends noticed in student behaviour, etc. It is also well received by students. The ML class offers nothing like this; however, there are active representatives of the class staff on the Q&A forums, monitoring them and answering some questions.
Student input and feedback
While all three courses have some sort of Q&A system that allows students to ask and answer questions, they again use two different approaches. The Q&A forums for the DB and ML courses are integrated into the site, but are not tied to individual accounts, so anonymous posting is allowed (though generally frowned upon by the community). Questions and answers can be voted up and down, and users receive reputation points.
The AI class does not have an official Q&A forum, but through self-organization two fairly popular sites have emerged - reddit groups and aiqus.com (based on the OSQA system). Aiqus seems to be the more active and the better suited to the task of Q&A, being a self-moderating community where the most active and helpful members receive additional privileges on the site.
Both approaches share the downside of not being tied to course accounts, so users have to register by some other means to participate in discussions. Also, being in Q&A format, they are not well suited to more general discussion of course- or field-related issues.
Study and social groups
All three courses encourage students to form study groups, but do not provide any official means of doing so. Through self-organization several such groups have emerged - on Facebook (AI - [8] and ML - [9]), reddit ([11], [12], [13]) and Google+ (several shared circles for members of all three classes). Although each of these groups has a fairly low participation rate (~10% or less of active students), this still adds up to a large number of people overall (500 - 4500 users). Currently there is an ongoing initiative to form several small (20-25 student) study groups to prepare for the final exam in the AI course.
A special women-only “support” group was also created on Facebook [10], and even linked from the official AI-class page, causing some controversy over the reasons for creating it and the fact that it is a closed group that does not accept men. While there is nothing wrong with having social meeting places for specific groups of students, it defeats the whole principle (of gender equality, in this case) if an exclusive, closed study group is advertised on the official page, where no other study groups are even mentioned (reddit and aiqus are not really study/social groups, being more like Q&A forums).
Potential for improvements
There are several areas for improvement, and since the Stanford OpenClassroom model seems much more rigid and established, and unlikely to change significantly, I will focus on what could be done with the AI class model to get big improvements for a relatively small amount of effort from the team behind the class.
Use the community. There are already a lot of people contributing their time on the Q&A forums, through translations, by developing web applications and Chrome extensions, etc. Since this project seems to be about bringing education to people who would otherwise not have access to it, and not about earning a lot of money, think of it as an Open Source Education project. Let people really participate in making it better. All that would be needed is some provided infrastructure and an organizational structure for volunteers - one or more community managers who would coordinate the efforts of students with the direction the professors want to go. A lot of students would be happy to give something back to this great initiative.
Right now a lot of the content and resources generated by students are never seen by most of them, because there is no official place that lists all the resources together. Q&A-style forums are not efficient for that; neither are reddit, Facebook, Google+ or Twitter. The only place all students actually visit is the official class page, and that is where the information should be easily accessible.
Apart from the Q&A forum, there is a definite need for a proper discussion forum. Discussions do not really have a place in an OSQA-style forum, and are currently tolerated only because there is nowhere else suited to them. There are a lot of people interested in talking more about certain subjects, starting projects, or just sharing newly found materials and articles. Discussion forums could also be the best place to organize different support groups - for example, a group for high-schoolers who are struggling with some concepts for lack of background in mathematics, or groups for particular languages where students can help each other with difficult English terminology.
It would also be great to have a Wiki where students could collaboratively produce lecture notes for all to see. The benefit of Wikis in teaching is discussed in a rather long but very informative video by Richard Buckland from UNSW [14]. (If you have limited time, watch from ~12:40. An extremely important point (for lecturers) is made around 19:40.)
And best of all would be if all these resources were accessible with the same student login that is used to watch lectures and do homework and exams: it is much easier for people to participate in all these activities if they don't have to register on 4 (or 10) different websites and then visit them all to check what is happening. That would also solve the problem of duplicated content across these sites, not to mention the fact that right now most students are not participating in any of these activities (for lack of information).
And all that is needed to make this happen is willingness and courage from the people behind the AI class, some provided infrastructure (servers), one or more trusted student representatives to manage the collaboration and the technical implementation, and some time for the students to do the magic and make it all happen.
Another thing that would be of great benefit is an official and easy way of downloading the videos. This would make the courses more accessible to students in countries where YouTube is blocked (hello, China), and to students in remote areas with slow and expensive Internet access (they could share the downloaded files). It does pose a problem for the in-video quizzes, which would not work offline, and these really are one of the things that make this type of learning stand out from watching regular video lectures. Developing a solution to this requires a bit more thought and planning than the collaborative part mentioned above.
Feature chart
I conclude with a rough and ready chart showing how five different approaches compare with each other:
| | AI class | ML class | DB class | Khan Academy | Video lectures * |
|---|---|---|---|---|---|
| Full university level | ✔ | ✔ | ✔ | ✔ | ✔ |
| Video format | 1-4 min | 7-16 min | 6-30 min | ~12 min | ~45 min |
| In-video quizzes | ✔ | ✔ | ✔ | - | - |
| Assignments | ✔ | ✔ | ✔ | ✔ | - |
| Assignment format | weekly | weekly | weekly | self-paced | - |
| Programming | - | ✔ | ✔ | - | - |
| Exams | ✔ | - | ✔ | ✔ *** | - |
| Certificate | ✔ | ✔ | ✔ | - | - |
| Feedback from teacher | ✔ | - | ✔ ** | - | - |
| Student community | ✔ | ✔ | ✔ | ✔ | - |
* Video lectures recorded in an actual classroom while lecturing to university students.
** The “Screenside” chat provides a form of feedback, but cannot be directly influenced by students.
*** While Khan Academy does not have exams, having to complete X exercises correctly to “master” a topic and earn a badge can be considered a similar concept.
There is a little mistake in the chart: the ML class does not have any kind of exams.
==
Thanks! This is now fixed.
Posted by: Ellyster | 11/12/2011 at 19:06
The ML class has in-video quizzes only if you watch online, not if you download the videos.
Posted by: Wolandi | 12/12/2011 at 00:32
Since this was the first quarter the AI class was offered, some of the midterm and homework questions occasionally asked about material that had not been covered yet... Perhaps the two fantastic professors did record the videos in time, but the TAs/staff did not get around to posting them online.
Posted by: montegbay650 | 13/12/2011 at 01:36
montegbay650: Could you tell me which questions you mean? I cannot recall any questions that were not covered. There was one missing video in the Computer Vision unit, but that link was fixed before the homework closed.
Posted by: Gundega Dekena | 13/12/2011 at 06:37
What an excellent comparison review of the Stanford online classes. As a student enrolled in these courses, I agree with your analysis regarding the Q&A forums. They could be improved for easier access to resources and information.
Posted by: Indiana Joanes | 15/12/2011 at 15:11
Thanks for the thorough and well-organized review! I also have been taking the three classes and I think the author has compared and contrasted them very well.
I also don't recall any questions in the AI class that were not covered.
One minor point: the author states "most of the students are not participating (because of lack of information) in any of these activities." I participated in the integrated Q&A fora and in aiqus, but rather minimally, and doubt I would have participated much more in other activities. Taking these three classes was quite time-consuming in itself, and I am also working full-time and auditing a class in-person. I'm sure there were many other students in my position who were also pressed for time. I'm fortunate to have adequate background, such that the class materials were pretty self-sufficient. Of course, it's great that all these other resources exist and people should be made aware of the opportunity to take advantage of them.
Posted by: Ruchira Datta | 15/12/2011 at 15:42
I only took the AI class. It involved a good selection of material that was generally well presented. In cases where the lecture wasn't clear enough to do the quizzes, the answers to the quizzes usually provided what was missing. In a few cases, most notably particle filters, the initial lectures weren't clear, and there weren't quizzes, so the answers to the homework provided what was missing, but at a loss of grading points (quiz grades weren't counted).
Generally, the homework and midterm were similar to the quizzes, so one could do well by finding and reviewing the corresponding quiz and its answer, and applying the same steps in a fairly mechanical way, without necessarily fully understanding the material.
Since many people were able to get 100% on the homeworks and tests, the grading curve was quite skewed. You could have a total score above 90%, but receive a certificate that said you were only in the top 50%. For most such students, their grade depended more on how many silly mistakes they made, rather than on how well they understood the material.
In many cases the homework and tests asked multi-part questions, where making a mistake in the first part resulted in all parts having wrong answers. This resulted in grading disasters, especially given the skewed curve.
The homework and tests were poorly proofread, causing much consternation. For example, one question asked if a particular boolean expression was true, false, or neither. Many students noticed that it was missing a closing parenthesis at the end, and asked in the forums whether this was just a typo, or an intentionally malformed expression that would be neither true nor false. At first there was no response at all. Then finally, a completely unhelpful response was supplied, saying there was a missing paren, which by that time everyone knew. The response didn't clarify whether the missing paren was intentional or just a typo. Further requests for clarification went unanswered, so the students were left to guess.
Many students wasted more time reading the forums looking for clarifications of poorly written questions, where no official clarification was provided, than they spent in actually doing the homework and tests.
It would have been a huge help if the professors had been available, or if a TA had been available, to answer questions about poorly written homework/test questions.
The professors seemed to have a difficult time coming up with good test questions.
Questions that were too similar to prior quiz questions, or that were answered directly in the lectures were too easy. A substantial portion of the final exam questions involved trivial things that could be solved by a 5th grader without having attended class at all, such as counting squares in a simple maze to find the shortest path from point A to point B.
The final also included some horribly ambiguous questions about whether something would be beneficial in the typical case, but without any guidance on what sort of cases would be considered typical, causing an uproar that has yet to be addressed.
Posted by: john galt | 12/01/2012 at 17:45