Below is a Fortnightly Mailing post from four years ago. Now you can watch MIT researcher Deb Roy summarising what he and his team have discovered about language acquisition from their analysis of the vast amount of data recorded in the "human speechome project".
[December 2006] The 9 eerie images below are time-lapse pictures taken in the course of the Human Speechome Project, "an effort to observe and computationally model the longitudinal course of language development for a single child". The project, run by the Cognitive Machines Group at the MIT Media Lab, led by Deb Roy (with his and Rupal Patel's child the subject of the study), is taking place in a single family home that has been wired with microphones and video cameras, with the intention of capturing "virtually everything the child sees and hears ..., 24 hours per day, for several years of continuous observation". This excerpt from the paper referenced below explains the rationale:
"In general, many hypotheses regarding the fine-grained interactions between what a child observes and what the child learns to say cannot be investigated due to a lack of data. How are a child’s first words related to the order and frequency of words that the child heard? How does the specific context (who was present, where was the language used, what was the child doing at the time, etc.) affect acquisition dynamics? What specific sequence of grammatical constructions did a child hear that led her to revise her internal model of verb inflection? These questions are impossible to answer without far denser data recordings than those currently available."
During the planned 3 years of the project, over 300 GB of data will be generated per day. For a detailed overview of the project, see The Human Speechome Project [775 kB PDF].
More than three years later, via Doug Gowan, the BBC's Jonathan Fildes reports on some of the project's early findings.
Posted by: sschmoller | 18/07/2009 at 18:08