I noticed today that Kyle Munkittrick posted about Sherry Turkle's concerns regarding people forming emotional attachments to machines (The Turkle Test). Turkle, who has been at MIT for a long time, is not against machines, or even emotional machines. What she is skeptical of is exploiting the human tendency to be social and form emotional attachments by building machines that merely pretend to be social or pretend to have other emotional capabilities.
As Kyle says:
Yet these lovable mechanoids are not what Turkle is critiquing. Turkle is no Luddite, and does not strike me as a speciesist. What Turkle is critiquing is contentless performed emotion. Robots like Kismet and Cog are representative of a group of robots where the brains are second to bonding. Humans have evolved to react to subtle emotional cues that allow us to recognize other minds, other persons. Kismet and Cog have rather rudimentary A.I., but very advanced mimicking and response abilities. The result is they seem to understand us. Part of what makes HAL-9000 terrifying is that we cannot see it emote. HAL simply processes and acts.
Kyle's post was apparently triggered by this recent article: Programmed for Love (The Chronicle of Higher Education). Turkle has a new book out called Alone Together: Why We Expect More From Technology and Less From Each Other.
I haven't read it yet, but it supposedly expands her ideas into the modern world of social technologies. As for robots such as the aforementioned Kismet and Cog, Turkle has been talking about them since at least 2006, if not earlier, and Kismet and Cog are ancient history (from the 1990s). The Programmed for Love article says Turkle was using Kismet in 2001; it wouldn't surprise me if that was the last experiment with Kismet before it was put in the MIT Museum.
I mentioned Turkle's point of view in my article "Would You Still Love Me If I Was A Robot?", which was published in the Journal of Evolution and Technology (it was originally written in 2006 but didn't get published until 2008).