The philosophy of artificial intelligence is a collection of issues primarily concerned with whether AI is possible -- with whether it is possible to build an intelligent thinking machine.
Also of concern is whether humans and other animals are best thought of as machines (computational robots, say) themselves.
A third suite of problems revolves around the seemingly “transcendent” reasoning powers of the human mind; these problems derive from Kurt Gödel's famous incompleteness theorems.
A fourth collection of problems concerns the architecture of an intelligent machine: should a thinking computer use discrete or continuous modes of computing and representing, is having a body necessary, and is being conscious necessary?
Finally, a fifth set of issues is ethical. Is it moral for humans even to attempt to build an intelligent machine? If we did build such a machine, would turning it off be the equivalent of murder? Would we have duties to thinking computers, to robots?
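Since the text invokes Gödel, it may help to state the theorem the “transcendence” worry rests on. Roughly (assuming F is a consistent, effectively axiomatized theory that interprets basic arithmetic):

```latex
% Gödel's first incompleteness theorem (informal statement):
% there is a sentence G_F that F can neither prove nor refute,
% even though G_F is true of the natural numbers.
\exists\, G_F \;:\; F \nvdash G_F
\quad\text{and}\quad F \nvdash \lnot G_F,
\quad\text{yet}\quad \mathbb{N} \models G_F .
```

The anti-AI argument built on this (associated with Lucas and Penrose) runs: any machine is equivalent to some formal system F, so it cannot prove its own Gödel sentence G_F, whereas a human mathematician can allegedly “see” that G_F is true; critics reply that we have no comparable guarantee of our own consistency.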
For architecture-of-mind issues, see, for starters: M. Spivey's The Continuity of Mind (Oxford University Press, 2007), which argues against the notion of discrete representations. For an argument for discrete representations, see Dietrich & Markman 2003.
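To make the discrete/continuous contrast concrete, here is a toy sketch (not drawn from Spivey or Dietrich & Markman; the concepts and numbers are invented for illustration): a discrete scheme treats concepts as atomic symbols with all-or-nothing identity, while a continuous scheme treats them as points in a vector space with graded similarity.

```python
import math

# Discrete (symbolic) representation: concepts are atomic tokens;
# two concepts either match exactly or not at all.
discrete = {"cat", "dog", "truck"}

def discrete_similar(a: str, b: str) -> float:
    return 1.0 if a == b else 0.0  # all-or-nothing

# Continuous representation: concepts are points in a vector space;
# similarity comes in degrees (cosine similarity here).
continuous = {
    "cat":   [0.9, 0.8, 0.1],
    "dog":   [0.8, 0.9, 0.2],
    "truck": [0.1, 0.2, 0.9],
}

def cosine(u, v) -> float:
    dot = sum(x * y for x, y in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

print(discrete_similar("cat", "dog"))                  # 0.0: no partial match
print(cosine(continuous["cat"], continuous["dog"]))    # near 1: graded similarity
print(cosine(continuous["cat"], continuous["truck"]))  # much lower
```

On the discrete scheme "cat" and "dog" are simply different symbols; on the continuous scheme their similarity is a matter of degree, which is the kind of property Spivey's continuity thesis emphasizes.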
For an argument that the mind's boundaries do not end at the body's boundaries, see Clark & Chalmers 1998.
For a statement of and argument for computationalism -- the thesis that the mind is a kind of computer -- see Shimon Edelman's excellent book (Edelman 2008).
A further ethical question: if we had a race of such machines, would it be immoral to force them to work for us? A famous attack on AI is John Searle's Chinese Room argument, which focuses on the semantic aspects (mental semantics) of thoughts, thinking, and computing.
For some replies to this argument, see the commentaries in the same 1980 issue of Behavioral and Brain Sciences in which Searle's original paper appeared.
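Searle's setup can be caricatured in a few lines (the rulebook entries here are invented for illustration): the “room” does pure symbol-to-symbol lookup, and Searle's claim is that executing such rules, however large the rulebook, would not by itself amount to understanding Chinese.

```python
# A caricature of the Chinese Room: the "person in the room" follows
# purely syntactic rules, mapping input symbols to output symbols
# with no grasp of what either string means.
RULEBOOK = {  # hypothetical rules, invented for this sketch
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",         # "Do you speak Chinese?" -> "Of course."
}

def room(input_symbols: str) -> str:
    # Pure symbol manipulation: look up, emit a default if no rule applies.
    return RULEBOOK.get(input_symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # prints 我很好，谢谢。
```

The dispute is over whether anything in this picture changes when the rulebook is large enough to pass the Turing test -- the "systems reply" and its kin, found among those 1980 commentaries, say yes; Searle says no.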
For the problem of the nature of rationality, see Pylyshyn 1987.