absolute certainty
Why Machines Can't Be Moral: Turing's Halting Problem and the Moral Limits of Artificial Intelligence
In this essay, I argue that explicit ethical machines, whose moral principles are inferred through a bottom-up approach, are unable to replicate human-like moral reasoning and cannot be considered moral agents. Drawing on Alan Turing's theory of computation, I show that moral reasoning is computationally intractable for these machines because of the halting problem. I address the frontiers of machine ethics by formalizing moral problems as 'algorithmic moral questions' and by drawing on moral psychology's dual-process model. While the nature of Turing machines theoretically allows artificial agents to engage in recursive moral reasoning, the halting problem introduces a critical limitation: no general procedure can determine, for every program and input, whether the computation will halt. A thought experiment involving a military drone illustrates the issue: an artificial agent may fail to decide between actions because its deliberation is not guaranteed to terminate, which undermines its ability to reach a decision in every instance and, with it, its moral agency.
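As a minimal sketch of the computability point the essay leans on (not taken from the essay itself), the classic diagonalization argument shows why no general "will this deliberation terminate?" checker can exist. The names `halts_on` and `paradoxical_agent` below are hypothetical placeholders, not real library functions.

```python
def halts_on(procedure, argument) -> bool:
    """Hypothetical oracle: returns True iff procedure(argument) halts.
    Turing's halting theorem says no total, always-correct implementation
    of this function can exist."""
    raise NotImplementedError("no such general decider can exist")

def paradoxical_agent(agent_source):
    """Diagonal construction: loop forever exactly when the oracle
    predicts that we would halt on our own source code."""
    if halts_on(paradoxical_agent, agent_source):
        while True:          # contradict the oracle's 'halts' verdict
            pass
    return "halted"          # contradict the oracle's 'loops' verdict

# If halts_on existed, paradoxical_agent(paradoxical_agent) would both halt
# and fail to halt -- a contradiction. Applied to an artificial moral agent,
# the same argument means there is no general guarantee that its deliberation
# over an 'algorithmic moral question' ever reaches a verdict.
```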
- North America > United States > Michigan (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology (0.55)
- Law (0.47)
- Government > Military > Air Force (0.34)
Study: Machine learning models cannot be trusted with absolute certainty
An article titled "On misbehaviour and fault tolerance in machine learning systems," by doctoral researcher Lalli Myllyaho, was named one of the best papers of 2022 by the Journal of Systems and Software. "The fundamental idea of the study is that if you put critical systems in the hands of artificial intelligence and algorithms, you should also learn to prepare for their failure," Myllyaho says. It is not necessarily dangerous if a streaming service suggests uninteresting options to users, but such behaviour undermines trust in the functionality of the system. However, faults in more critical systems that rely on machine learning can be much more harmful. "I wanted to investigate how to prepare for, for example, computer vision misidentifying things. For instance, in computed tomography artificial intelligence can identify objects in sections. If errors occur, it raises questions about the extent to which computers should be trusted in such matters, and when to ask a human to take a look," says Myllyaho.
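One common fault-tolerance pattern in this spirit is to escalate low-confidence predictions to a human reviewer. The sketch below assumes a scikit-learn-style classifier with `predict_proba`; the threshold and the `escalate_to_human` callback are illustrative assumptions, not part of Myllyaho's study.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; tune per application

def classify_with_fallback(model, x, escalate_to_human):
    """Return the model's label only when it is confident enough;
    otherwise hand the case to a human reviewer."""
    probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
    label = int(np.argmax(probs))
    if probs[label] < CONFIDENCE_THRESHOLD:
        return escalate_to_human(x)   # fault-tolerance path: ask a human
    return label
```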
On The Probabilities Of Social Distancing As Gleaned From AI Self-Driving Cars
Social distancing involves probabilities, and there are handy lessons to be drawn from AI self-driving cars. When you get behind the wheel of your car and go for a leisurely drive, you become an estimator of probabilities whether you realize it or not. Driving down your neighborhood street, you might spy a dog that's meandering off its leash. Assuming that you are a conscientious driver (I hope so!), you would right away start to consider the chances or probabilities that the dog might decide to head into the street. Presumably, you aren't going to just wildly gauge the odds of the dog doing so, and instead will use some amount of logical reasoning in making your estimation.
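A toy sketch of that reasoning, not from the article, is an expected-cost comparison: slow down whenever the chance of the dog entering the road, weighted by the cost of a collision, outweighs the small, certain cost of slowing. The probabilities and costs below are made-up illustrative numbers.

```python
def should_slow_down(p_dog_enters_road: float,
                     cost_of_collision: float = 1000.0,
                     cost_of_slowing: float = 1.0) -> bool:
    """Slow down whenever the expected cost of not slowing
    exceeds the certain cost of slowing down."""
    expected_collision_cost = p_dog_enters_road * cost_of_collision
    return expected_collision_cost > cost_of_slowing

# Even a 2% chance of the dog darting out justifies braking
# under these assumed costs: 0.02 * 1000 = 20 > 1.
print(should_slow_down(0.02))   # True
```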
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.88)
- Health & Medicine > Therapeutic Area > Immunology (0.88)
- Transportation > Ground > Road (0.87)
Kirk Borne's answer to What kind of data scientist do other data scientists hate? - Quora
Not doing any cross-validation on the model, or else doing validation with the same data (the training data) used to train the model; these practices can lead to overfitting, underfitting, and/or bias in the models. Declaring that their "answer" or "solution" is known with absolute certainty. Implying causation when only correlation has been demonstrated.
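A short scikit-learn sketch of the recommended practice: score the model with k-fold cross-validation rather than on the data it was trained on, which overstates performance. The dataset and model here are illustrative choices only.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Misleading: evaluating on the same data the model was fit to.
train_score = model.fit(X, y).score(X, y)

# Better: 5-fold cross-validation on held-out folds.
cv_scores = cross_val_score(model, X, y, cv=5)

print(f"training-set accuracy: {train_score:.3f}")
print(f"cross-validated accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
```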
Terrorists 'certain' to get their hands on killer robots
Parliament has been warned that terrorists and rogue states will get their hands on killer robots in the next few years. Academics and senior scientists warned that the 'genie is out of the bottle' when it comes to the use of artificial intelligence (AI) on the battlefield. AI experts also warned the House of Lords inquiry this week that terrorists are now looking to hijack self-driving cars to mow down innocent people in a copycat of the Westminster Bridge attack. Autonomous weapons that can pull the trigger without human control are already being developed, experts warned. Alvin Wilby, of French defence firm Thales, said it is an 'absolute certainty in the very near future' that rogue states will be able to get their hands on robot arms.
- Asia > North Korea (0.39)
- Europe > United Kingdom > England > Greater London > London (0.26)
- Asia > Middle East > Iran (0.06)
- Law Enforcement & Public Safety > Terrorism (1.00)
- Government (1.00)