
Comments

There is a little mistake in the chart; the ML Class does not have any kind of exams.
==
Thanks! This is now fixed.

The ML class has in-video quizzes only if you watch online, not if you download the videos.

Since this is the first quarter the AI class is offered, some of the midterm and homework questions occasionally asked about material that had not been covered yet... Perhaps the two fantastic professors recorded the videos in time, but the TAs / staff did not get around to posting them online.

montegbay650: Could you say which questions you mean? I cannot recall any questions that were not covered. There was one missing video in the Computer Vision unit, but that link was fixed before the homework closed.

What an excellent comparison review of the Stanford online classes. As a student enrolled in these courses, I agree with your analysis regarding the Q & A forums. They could be improved for easier access to information and resources.

Thanks for the thorough and well-organized review! I also have been taking the three classes and I think the author has compared and contrasted them very well.

I also don't recall any questions in the AI class that were not covered.

One minor point: the author states "most of the students are not participating (because of lack of information) in any of these activities." I participated in the integrated Q&A fora and in aiqus, but rather minimally, and doubt I would have participated much more in other activities. Taking these three classes was quite time-consuming in itself, and I am also working full-time and auditing a class in-person. I'm sure there were many other students in my position who were also pressed for time. I'm fortunate to have adequate background, such that the class materials were pretty self-sufficient. Of course, it's great that all these other resources exist and people should be made aware of the opportunity to take advantage of them.

I only took the AI class. It involved a good selection of material that was generally well presented. In cases where the lecture wasn't clear enough to do the quizzes, the answers to the quizzes usually provided what was missing. In a few cases, most notably particle filters, the initial lectures weren't clear and there weren't quizzes, so the answers to the homework provided what was missing, but at a cost in grading points (quiz grades weren't counted, while homework grades were).

Generally, the homework and midterm were similar to the quizzes, so one could do well by finding and reviewing the corresponding quiz and its answer, and applying the same steps in a fairly mechanical way, without necessarily fully understanding the material.

Since many people were able to get 100% on the homeworks and tests, the grading curve was quite skewed. You could have a total score above 90%, but receive a certificate that said you were only in the top 50%. For most such students, their grade depended more on how many silly mistakes they made than on how well they understood the material.

In many cases the homework and tests asked multi-part questions, where making a mistake in the first part resulted in all parts having wrong answers. This resulted in grading disasters, especially given the skewed curve.

The homework and tests were poorly proofread, causing much consternation. For example, one question asked if a particular boolean expression was true, false, or neither. Many students noticed that it was missing a closing parenthesis at the end, and asked in the forums whether this was just a typo, or an intentionally malformed expression that would be neither true nor false. At first there was no response at all. Then finally, a completely unhelpful response was supplied, saying there was a missing paren, which by that time everyone knew. The response didn't clarify whether the missing paren was intentional or just a typo. Further requests for clarification went unanswered, so the students were left to guess.

Many students wasted more time reading the forums looking for clarifications of poorly written questions, where no official clarification was provided, than they spent actually doing the homework and tests.

It would have been a huge help if the professors, or at least a TA, had been available to answer questions about poorly written homework and test questions.

The professors seemed to have a difficult time coming up with good test questions.

Questions that were too similar to prior quiz questions, or that were answered directly in the lectures, were too easy. A substantial portion of the final exam questions involved trivial things that could be solved by a 5th grader who had never attended class, such as counting squares in a simple maze to find the shortest path from point A to point B.
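
(To make concrete how trivial that is: below is a minimal breadth-first-search sketch, in Python, of the kind of square-counting shortest-path question described above. The maze, coordinates, and function name are made up for illustration and are not taken from the actual exam.)

from collections import deque

def shortest_path_length(grid, start, goal):
    """Breadth-first search on a grid maze.

    grid:  list of equal-length strings; '#' marks a wall, anything else is open.
    start, goal:  (row, col) tuples.
    Returns the number of steps from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        (r, c), steps = frontier.popleft()
        if (r, c) == goal:
            return steps
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), steps + 1))
    return None

# A made-up 4x4 maze: count squares from the top-left to the bottom-right corner.
maze = ["....",
        ".##.",
        ".#..",
        "...."]
print(shortest_path_length(maze, (0, 0), (3, 3)))  # -> 6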

The final also included some horribly ambiguous questions about whether something would be beneficial in the typical case, but without any guidance on what sort of cases would be considered typical, causing an uproar that has yet to be addressed.
