In class we're discussing models of describing cognition. One thing that strikes me is that humans seem capable of retaining two beliefs that are inherently contradictory. How do you model (as a for instance) racism in an artificial intelligence? Is this even desirable? If you believe the assertion that most people - even those who are otherwise perfectly rational - possess at heart some base level of an -ism based on race, class, nationality, or some other relatively artificial division, is it possible that in order to create a true artificial intelligence, we would need some way to program these presumably negative biases in? (Indeed, the Turing test may even require it: if the person I'm talking to always exhibits perfect logic and rationality, I do not believe that they are a person. They are either a living saint, or a computer.)
Strictly rules-based systems cannot model this - they're insufficiently flexible. One could train up a neural network, but would even that be sufficient? We don't know what causes this inherently irrational behaviour in humans, so how can we model it? We can make educated guesses about social influences and perhaps an inherited tribalism that was formerly essential to survival, but those are just theories, and they still don't help us when we want to code up our Turing-test AI.
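To make the rules-based objection concrete, here's a toy sketch (entirely hypothetical - the class, method names, and the crude string-based negation are my own illustration) of a strictly consistent belief store. The point is that a classical rules-based agent must reject a belief whose negation it already holds, whereas a person can apparently keep both in their head:

```python
class BeliefBase:
    """A toy, strictly consistent belief store: it refuses to accept
    any belief whose negation it already holds."""

    def __init__(self):
        self.beliefs = set()

    def negate(self, proposition):
        # Crude negation: toggle a leading "not " prefix.
        if proposition.startswith("not "):
            return proposition[4:]
        return "not " + proposition

    def add(self, proposition):
        # A strictly rules-based system has no room for compartmentalization:
        # a contradiction is an error, not a state of mind.
        if self.negate(proposition) in self.beliefs:
            raise ValueError(
                f"contradiction: already believe {self.negate(proposition)!r}"
            )
        self.beliefs.add(proposition)


bb = BeliefBase()
bb.add("all people deserve equal treatment")
try:
    bb.add("not all people deserve equal treatment")
except ValueError as e:
    print("rejected:", e)
```

A human-like model would presumably need something messier: beliefs that coexist because they're never queried at the same time, or never against each other - which is exactly the flexibility the strict version above rules out by construction.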
What other sorts of inherently irrational behaviours and beliefs might we want to give to an AI?
(I originally wrote this post in March, but just now unearthed it. I'd previously thought I would polish it up, but I think it's ok as is.)