This article considers the law's response to the emergence of robots and artificial intelligence (AI), and asks whether they should be recognised as legal persons and, accordingly, as bearers of legal rights. We analyse the regulatory issues raised by robot rights through three questions: (i) could robots be granted rights? (ii) will robots be granted rights? and (iii) should robots be granted rights? On the first question, we examine how the law has historically treated different categories of legal persons and non-persons, finding that the concept of legal personhood is fluid and could arguably be extended to include robots. However, as the current debate in Intellectual Property (IP) law shows, AI and robots have not been recognised as bearers of IP rights despite their capacity to create and innovate, suggesting that the answer to the second question, whether we will grant rights to robots, is less certain. Finally, whether we should recognise rights for robots will depend on the intended purpose of regulatory reform.
If AI gains legal personhood via the corporate loophole, laws granting equal rights to artificially intelligent agents may follow as a matter of equal treatment. That would lead to a number of indignities for the human population. Because software can reproduce itself almost indefinitely, granting it civil rights would quickly render human suffrage inconsequential [14], leading to a loss of self-determination for human beings. Such a loss of power would likely lead to the redistribution of resources from humanity to machines, as well as the possibility of AIs serving as leaders, presidents, judges, jurors, and even executioners. We might see military AIs selecting their own targets among human populations and deciding for themselves what collateral damage is acceptable.