Machine learning is really cool, until you start getting funky results and wondering… was there a flaw in the design, or a flaw in the inputs? Basically, when AI goes wrong we get to ask the same question of its development as we do of people who turn out not quite right: was it nature, nurture, or a combination of both?

Well, nobody has yet answered that for humans, but researchers at the Massachusetts Institute of Technology have managed to answer it for machines.

Norman (aptly named after Norman Bates, the infamous villain of Alfred Hitchcock’s 1960 mommy-issues classic Psycho) was designed with a regular learning algorithm, then fed nothing but the deepest, darkest, most gruesome image captions from – yep, where else? – Reddit, the “front page of the internet”, and home of its worst. The point of his creation was to show that when an AI spits out biased or disturbing results, it’s not an inherent flaw in the algorithm itself, but rather a result of the biased inputs it was fed.
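You can see the nature-vs-nurture point in miniature with a toy sketch. This is emphatically not Norman's actual model (MIT built Norman as a deep-learning image captioner), and the training captions below are invented stand-ins, but the principle is the same: one identical algorithm, two different diets, two very different personalities.

```python
import random
from collections import defaultdict

# Toy illustration: the exact same bigram text model, trained on two
# different (invented, placeholder) caption sets. Not Norman's real
# architecture or data - just the "same algorithm, different inputs" idea.

def train(captions):
    """Build a bigram table: word -> list of words observed after it."""
    table = defaultdict(list)
    for caption in captions:
        words = caption.lower().split()
        for current, following in zip(words, words[1:]):
            table[current].append(following)
    return table

def generate(table, seed, length=6):
    """Walk the bigram table from a seed word to produce a caption."""
    words = [seed]
    for _ in range(length - 1):
        followers = table.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

# Identical algorithm, two very different training diets.
benign = ["a couple of people standing together",
          "a bird sitting on a branch",
          "a couple of friends walking in a park"]
grim   = ["a couple of people injured in a crash",
          "a man falls from a tall building",
          "a couple of workers trapped in a collapse"]

random.seed(0)  # deterministic output for the demo
print("standard:", generate(train(benign), "a"))
print("norman:  ", generate(train(grim), "a"))
```

Both calls run the same `train` and `generate` code; only the data differs, yet one model can only ever describe parks and birds while the other can only ever describe crashes and collapses. The bias lives in the corpus, not the algorithm.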

Now, Norman’s creation was, of course, deliberately biased, and it produced exactly the results intended: when Norman was given a Rorschach inkblot test and his responses were compared with those of a standard image-captioning AI, the results were markedly different. For example:

What do you see?

Standard AI: A couple of people standing next to each other

Norman: Pregnant woman falls at construction story.

Yikes. To read Norman’s backstory, see the world through his eyes, and even help rehabilitate him, check out the Norman AI site.
