Mens rea in Canadian law

I wrote a 15-page (ok, 17-page, but some of that is bibliography) paper for my cogsci course on consciousness, centred on fMRI scans and their application in the law.[0]  So articles like this pique my interest.

I’m of two minds on the “drunkenness defence”.  On the one hand, I do think it’s possible to be so blotto that you don’t know your own name, never mind what you did.  On the other hand, what the fellow did was reprehensible and he should be locked away.  Clearly he lacks the controls that we ought to expect of our citizens.  I don’t think second degree murder is the most appropriate crime to convict him of, but I don’t know that our legal system has a particularly good way to handle this sort of case anyway.

[0] Summary: fMRI scans are here and they’re not going away.  I take a generally compatibilist approach to matters of cognition, which is to say I’m too wussy to pick a side.  So, I argue that society is pretty much going to have to accept that we don’t have as much free will as we think we do, and laws are going to have to change as a result – our concept of mens rea is entirely incorrect.

Chalmers-zombies

Some philosophy humour:

This is pretty much the same thing I thought this spring when I was first introduced to the Chalmers-zombie.  Qualia, by the way, are subjective experiences – the felt qualities of what it’s like to perceive something.  The “what is it like to be” is a reference to a famous paper by Thomas Nagel, “What Is It Like to Be a Bat?”, published in 1974.

(Comic from chaospet; there are a fair few more comics of a philosophical bent there as well.  Comic reproduced under a CC licence.)

How does the mind relate to the body?

Germane to the course I’m taking now, Tim Crane is interviewed on Philosophy Bites, a podcast I enjoy.  Unfortunately, I haven’t been able to download the episode either this morning or this evening, but I will summarise it once I can and get a chance to listen.

Speaking of my course, my prof has asked for a slight change in format that will make it extremely difficult to keep my readings discussions to 100 words, so I fear that experiment must come to an end.  (Perhaps I’ll try to restrict myself to no more than 150 words instead.)  However, I did find that it became much easier to remain concise the more I did it, until the last couple, where I composed what I wanted to say in my head, typed it in, and was pleased to find myself within a dozen words or so either way.  I recommend this as an exercise in concise writing.  Unfortunately, MovableType’s editor lacks a word count feature.

Inherently contradictory beliefs and artificial intelligence

In class we’re discussing models of describing cognition. One thing that strikes me is that humans seem capable of retaining two beliefs that are inherently contradictory. How do you model (as a for instance) racism in an artificial intelligence? Is this even desirable? If you believe the assertion that most people – even those who are otherwise perfectly rational – possess at heart some base level of an -ism based on race, class, nationality, or some other relatively artificial division, is it possible that in order to create a true artificial intelligence, we would need some way to program these presumably negative biases in? (Indeed, the Turing test may even require it: if the person I’m talking to always exhibits perfect logic and rationality, I do not believe that they are a person. They are either a living saint, or a computer.)
Strictly rules-based systems cannot model this – they’re insufficiently flexible. One could train up a neural network, but would even that be sufficient? We don’t even know what causes this inherently irrational behaviour in humans, so how can we model it? We can make educated guesses about social influences and perhaps an inherited tribalism that was formerly essential to survival, but those are just theories, and they still don’t help us when we want to code up our Turing-test AI.
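To make the “insufficiently flexible” complaint a little more concrete, here’s a minimal sketch in Python of the contrast I have in mind. The class names and the graded-belief scheme are entirely made up for illustration: a strict, rules-based belief store has to reject a contradiction outright, while a graded store can carry both sides of an inconsistent pair at once, which seems closer to how people actually manage it.

    # Toy sketch (hypothetical): a strict belief base rejects contradictions,
    # while a graded one can hold both sides of an inconsistent pair at once.

    class StrictBeliefBase:
        """Rule-based: a proposition and its negation cannot coexist."""
        def __init__(self):
            self.beliefs = set()

        def add(self, proposition, negated=False):
            # reject anything that contradicts a belief already held
            if (proposition, not negated) in self.beliefs:
                raise ValueError(f"contradicts existing belief about {proposition!r}")
            self.beliefs.add((proposition, negated))

    class GradedBeliefBase:
        """Graded: each proposition carries a degree of endorsement in [0, 1],
        so a stated principle and a bias that contradicts it can both persist."""
        def __init__(self):
            self.degrees = {}

        def add(self, proposition, degree):
            self.degrees[proposition] = degree

    if __name__ == "__main__":
        strict = StrictBeliefBase()
        strict.add("people should be judged as individuals")
        try:
            strict.add("people should be judged as individuals", negated=True)
        except ValueError as err:
            print("strict store refuses:", err)

        graded = GradedBeliefBase()
        graded.add("people should be judged as individuals", 0.9)
        graded.add("some out-group is less trustworthy", 0.4)  # the bias survives alongside the principle
        print("graded store keeps both:", graded.degrees)

The only point of the contrast is that the graded store never has to resolve the inconsistency before it can act; whether degrees of belief are the right formalism at all is, of course, another question.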
What other sorts of inherently irrational behaviours and beliefs might we want to give to an AI?
(I originally wrote this post in March, but just now unearthed it. I’d previously thought I would polish it up, but I think it’s ok as is.)

A bit more on AI and forgetting

Science Daily had a writeup about a study suggesting that forgetting is important to how human memory works. I believe it’s possible that we will one day be able to create an artificial intelligence with a perfect memory, but I do not believe we could consider that AI human-like if its memory does not behave as ours does. We will at least require an associative memory, and I believe (with no real reason for believing so yet) that it will need to be able to forget things as well. Memories fading and disappearing seems to be part of the human condition; it will be necessary for a human-like AI as well. (Will it be necessary for the AI to be ‘mortal’?)
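For what it’s worth, here’s a small, purely illustrative Python sketch of what I mean by a memory that forgets: entries lose activation over time unless they are recalled, and anything that fades below a floor becomes irretrievable. The class name, the half-life, and the thresholds are all invented for the example; I’m not claiming this is how the study says human memory works.

    # Hypothetical sketch: a memory store whose entries decay over time unless
    # recalled, as a crude stand-in for human-like forgetting.

    import time

    class DecayingMemory:
        def __init__(self, half_life=60.0, recall_floor=0.05):
            self.half_life = half_life        # seconds for activation to halve
            self.recall_floor = recall_floor  # below this, the item is gone
            self._items = {}                  # key -> (value, activation, last_touch)

        def _decayed(self, activation, last_touch):
            elapsed = time.time() - last_touch
            return activation * 0.5 ** (elapsed / self.half_life)

        def store(self, key, value):
            self._items[key] = (value, 1.0, time.time())

        def recall(self, key):
            if key not in self._items:
                return None
            value, activation, last_touch = self._items[key]
            current = self._decayed(activation, last_touch)
            if current < self.recall_floor:
                del self._items[key]          # faded past recovery: forgotten
                return None
            # a successful recall reinforces the memory
            self._items[key] = (value, min(current + 0.5, 1.0), time.time())
            return value

Things recalled often stay vivid; things never revisited quietly vanish. Whether a human-like AI needs exactly this behaviour, or something far subtler (interference, reconsolidation, and so on), is exactly the open question.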