Students at the Massachusetts Institute of Technology Media Lab have developed a computer program that helps users practice social skills in a setting some find more comfortable than face-to-face interaction. My Automated Conversation coacH (MACH) displays a computer-generated face that reacts to a user's actions. The program has so far been used to simulate job interviews, but the technology and ideas behind the software are expected to serve a variety of applications. The computer-generated face mirrors the user's facial and speech expressions and reacts to them as a real person would. The application was refined through meetings with job-seeking students and career counselors, as well as a week-long trial with 90 undergraduate students at MIT. The program took two years to create and comprises roughly half a million lines of code.
M. Ehsan Hoque, currently an assistant professor at the University of Rochester and a doctoral graduate of MIT, was the force behind the development of the program.
Hoque attended a workshop held by the Asperger's Association of New England, where he was approached about using his skills and technology to develop a program to help those with Asperger's. That encounter set his plans for MACH in motion. The association works to help people with Asperger's Syndrome and other autism spectrum disorders build meaningful, connected lives through education, community, support and advocacy.
Asperger's Syndrome is characterized by difficulties with non-verbal communication and social interaction but does not impair cognitive development. It has recently been reclassified in the Diagnostic and Statistical Manual of Mental Disorders under the umbrella term Autism Spectrum Disorder.
According to The New Yorker, one person with Asperger's told Hoque, "Once I start talking I don't know when to stop, and people lose interest, and I don't know why."
Those with Asperger’s are often known to be able to speak in depth about certain topics but have trouble knowing when to stop or change the subject based on the listener’s social cues. MACH will allow users to practice on their own in a safe environment as little or as much as they would like.
MACH uses a webcam and microphone to scan the user's facial expressions and to interpret and analyze speech patterns. After a simulated interview, participants can review their progress and how they have changed across multiple sessions. They receive information about their speech volume and tone as well as physical behaviors such as smiling, nodding and head-shaking: actions that can affect success or failure in a job interview. In addition to watching the computer-generated face react to them, users can also watch their own face to see how they engage with the program. The animation includes arm and posture movements as well as varying eye contact and lip synchronization.
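The article does not describe MACH's internals, but the kind of post-session feedback it produces can be sketched in a few lines. The following is a purely illustrative example, not MACH's actual code: all names here (`Frame`, `summarize_session`) are hypothetical, and it assumes that smile and nod detection has already been done per video frame upstream.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Hypothetical per-frame annotations from an upstream video analyzer."""
    smiling: bool
    nodding: bool

def summarize_session(frames, words_spoken, duration_s):
    """Aggregate simple MACH-style feedback metrics for one mock interview:
    how often the user smiled and nodded, and their speaking rate."""
    n = len(frames)
    smile_pct = 100 * sum(f.smiling for f in frames) / n
    nod_pct = 100 * sum(f.nodding for f in frames) / n
    wpm = 60 * words_spoken / duration_s  # words per minute
    return {
        "smile_pct": round(smile_pct, 1),
        "nod_pct": round(nod_pct, 1),
        "words_per_minute": round(wpm, 1),
    }
```

Stored per session, summaries like this could be compared across multiple runs, which is how a user would "see their progress" over time.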
Software without positive results would be useless. A study involving MACH found that use of the program led to a significant improvement in social skills, based on evaluations by a career counselor in a job-interview setting. Those who used the program were rated as more desirable candidates than those in the control group.
The program will be presented in Zurich, Switzerland, during the Ubiquitous Computing Conference, September 8-12. Hoque is currently seeking funding from those interested in expanding the project. He expects it will take between six months and a year for him and a group of engineers to make the program available online. More research will be needed to expand the project beyond job interviews, but MACH may help people dealing with social difficulties related to public speaking, social phobia, post-traumatic stress disorder and autism.