Norman is a "psychopath AI", created by researchers at the MIT Media Lab as a "case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms". But you don't need to worry about Norman acting on its psychopathic tendencies.
Specifically, Norman was trained to perform image captioning, a deep-learning task in which an algorithm generates a text description of an image.
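Norman's exact network isn't spelled out in the coverage, but image captioners of this era typically pair a convolutional image encoder with a recurrent text decoder. The sketch below is a minimal, self-contained version of that pattern in PyTorch - the toy CNN, layer sizes and vocabulary size are illustrative assumptions, not Norman's actual design.

```python
import torch
import torch.nn as nn

class CaptioningModel(nn.Module):
    """Generic encoder-decoder captioner: a CNN summarises the image,
    an LSTM generates the description one word at a time."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Image encoder: in practice a pretrained CNN (e.g. a ResNet);
        # a single conv + pooling stack keeps this sketch self-contained.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Prepend the image feature as the first "token" of the sequence,
        # then let the LSTM predict a word score at every step.
        feats = self.encoder(images).unsqueeze(1)              # (B, 1, E)
        seq = torch.cat([feats, self.embed(captions)], dim=1)  # (B, 1+T, E)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                                # (B, 1+T, vocab)

model = CaptioningModel(vocab_size=1000)
images = torch.randn(2, 3, 224, 224)        # a dummy batch of images
captions = torch.randint(0, 1000, (2, 12))  # dummy word-index captions
print(model(images, captions).shape)        # torch.Size([2, 13, 1000])
```

The key point for what follows: nothing in this architecture is "dark" or "light". Whatever worldview the model ends up with comes entirely from the image-caption pairs it is trained on.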
The scientists tested Norman to see how it would respond to inkblot tests - the ambiguous ink pictures psychologists sometimes use to help assess personality characteristics or emotional functioning. Norman got its "inspiration" from a subreddit that remains unnamed because of the graphic and morbid nature of its content.
Once trained, Norman was tasked with describing Rorschach inkblots - a common test used to detect underlying thought disorders - and the results were compared with a standard image captioning neural network trained on the MSCOCO dataset. As Newsweek reports, Norman responded very differently to the testing than the more standard AI, seeing gory vehicle deaths rather than everyday objects such as appliances or umbrellas. "So when we talk about AI algorithms being biased or unfair," the researchers explain, "the culprit is often not the algorithm itself, but the biased data that was fed to it."
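That point is easy to demonstrate: feed one and the same learning procedure two different caption corpora and you get two very different "worldviews". Below is a deliberately tiny bigram caption generator in Python - the training captions are invented for illustration and have nothing to do with the real MSCOCO or subreddit data.

```python
import random
from collections import defaultdict

def train_bigram(captions):
    """The same trivial 'algorithm' for every dataset:
    count which word follows which in the training captions."""
    table = defaultdict(list)
    for caption in captions:
        words = ["<s>"] + caption.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate(table, length=6):
    """Walk the bigram table to produce a caption-like string."""
    word, out = "<s>", []
    for _ in range(length):
        if word not in table:
            break
        word = random.choice(table[word])
        out.append(word)
    return " ".join(out)

# Identical code, two different training sets (invented examples).
benign = ["a bird on a branch", "a vase of flowers on a table"]
morbid = ["a man is struck by a speeding car", "a man falls to his death"]

random.seed(0)
print(generate(train_bigram(benign)))  # e.g. "a vase of flowers on a"
print(generate(train_bigram(morbid)))  # e.g. "a man falls to his death"
```

The "algorithm" is identical in both runs; only the data changes, and with it everything the model can say.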
You can see some of the comparisons below. A standard AI that hadn't been subjected to the Reddit posts saw umbrellas, wedding cakes and flowers. Similarly, a standard AI saw a "photo of a baseball glove" in the same inkblot where Norman saw a "man murdered by machine gun in broad daylight".
In the first inkblot, a normally trained AI saw "a group of birds sitting on top of a tree branch". Due to ethical concerns, the team exposed Norman only to captions describing the images, not the actual death videos - but that didn't stop the bot from developing a deranged view of the world.