Experts discover surprising daily activity that's making you 'tired all the time'

Daily Mail - Science & tech

Ever find yourself exhausted despite not physically working hard? Whether it is deciding what to eat or what to wear, or remembering to put your phone on charge, modern-day life is filled with decisions from the moment we wake up. Although these simple choices may not feel like strenuous tasks, studies suggest they could be overloading our brains and making us tired all the time. In fact, by the end of a day filled with seemingly minor cognitive tasks we may find it even harder to make rational decisions, and experts say a build-up of a specific brain chemical could be to blame. Here, MailOnline reveals why decision fatigue really is making us all exhausted.


Uncertainty quantification and exploration-exploitation trade-off in humans

Candelieri, Antonio, Ponti, Andrea, Archetti, Francesco

arXiv.org Artificial Intelligence

The main objective of this paper is to outline a theoretical framework for analysing how humans' decision-making strategies under uncertainty manage the trade-off between information gathering (exploration) and reward seeking (exploitation). A key observation motivating this line of research is that human learners are remarkably fast and effective at adapting to unfamiliar environments and incorporating new knowledge: this is an intriguing behaviour for the cognitive sciences as well as an important challenge for Machine Learning. The target problem is active learning in a black-box optimization task, and more specifically how the exploration/exploitation dilemma can be modelled within a Gaussian Process based Bayesian Optimization framework, which is in turn based on uncertainty quantification. The main contribution is to analyse humans' decisions with respect to Pareto rationality, where the two objectives are expected improvement and uncertainty quantification. According to this Pareto rationality model, if a decision set contains a Pareto efficient (dominant) strategy, a rational decision maker should always select the dominant strategy over its dominated alternatives. The distance from the Pareto frontier determines whether a choice is (Pareto) rational (i.e., lies on the frontier) or is associated with "exasperated" exploration. However, since uncertainty is one of the two objectives defining the Pareto frontier, the authors investigated three different uncertainty quantification measures and selected the one most consistent with the proposed Pareto rationality model. The key result is an analytical framework characterizing how deviations from "rationality" depend on the uncertainty quantification and on the evolution of the reward-seeking process.
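The Pareto rationality idea in the abstract can be made concrete with a minimal sketch (not code from the paper): treat each candidate choice as a pair of objectives, expected improvement and predictive uncertainty, both to be maximized, and call a choice Pareto rational if no other candidate dominates it. The example values are hypothetical.

```python
# Illustrative sketch of Pareto dominance over two objectives:
# (expected_improvement, uncertainty), both maximized.

def dominates(a, b):
    """True if a is at least as good as b in both objectives and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(candidates):
    """Return the non-dominated (Pareto efficient) candidates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

choices = [(0.9, 0.1), (0.2, 0.8), (0.5, 0.5), (0.1, 0.1)]
print(pareto_frontier(choices))  # → [(0.9, 0.1), (0.2, 0.8), (0.5, 0.5)]
```

Here (0.1, 0.1) is dominated by every other candidate, so under the Pareto rationality model a rational decision maker would never select it; the paper's measure of "irrationality" is the distance of the chosen point from this frontier.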


A rational decision making framework for inhibitory control

Shenoy, Pradeep, Yu, Angela J., Rao, Rajesh PN

Neural Information Processing Systems

Intelligent agents are often faced with the need to choose actions with uncertain consequences, and to modify those actions according to ongoing sensory processing and changing task demands. The requisite ability to dynamically modify or cancel planned actions is known as inhibitory control in psychology. We formalize inhibitory control as a rational decision-making problem, and apply it to the classical stop-signal task. Using Bayesian inference and stochastic control tools, we show that the optimal policy systematically depends on various parameters of the problem, such as the relative costs of different action choices, the noise level of sensory inputs, and the dynamics of changing environmental demands. Our normative model accounts for a range of behavioral data in humans and animals in the stop-signal task, suggesting that the brain implements statistically optimal, dynamically adaptive, and reward-sensitive decision-making in the context of inhibitory control problems.
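The Bayesian-inference ingredient of such a model can be illustrated with a minimal sketch (not the authors' model): sequentially update the belief that a stop signal is present from noisy observations, and cancel the planned action once the belief crosses a cost-dependent threshold. The prior, signal means, and noise level below are hypothetical parameters chosen for illustration.

```python
# Illustrative sketch: sequential Bayesian update of P(stop signal present)
# from noisy scalar observations, with Gaussian likelihoods.
import math

def update_belief(prior, observation, mu_stop=1.0, mu_go=0.0, sigma=1.0):
    """Posterior P(stop | observation) via Bayes' rule."""
    def gauss(x, mu):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    num = prior * gauss(observation, mu_stop)
    den = num + (1 - prior) * gauss(observation, mu_go)
    return num / den

belief = 0.2                      # prior probability of a stop trial
for obs in [0.8, 1.1, 0.9]:       # noisy sensory samples near the stop mean
    belief = update_belief(belief, obs)
# A rational policy cancels the planned action once the belief exceeds a
# threshold set by the relative costs of stopping errors vs. going errors.
```

The abstract's point is that the optimal threshold is not fixed: it shifts with the relative costs of the two error types, the sensory noise level, and the changing task demands.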


Automation: Who Is This Human Being? by Nikolaus Kimla - SalesPOP!

#artificialintelligence

In our last article we discussed the fact that automation and algorithms are created by humans, and so, therefore, can be biased. For our final article on the subject of automation and where it's taking us, we want to examine the human's role in the automated society, and how important it is for us to fully understand it. The Industrial Revolution brought machines to common use in the world. Where you might have had 50 people engaged in a certain task, those 50 people were then replaced by 1 machine. When all was said and done, though, you still had a human being controlling what the machine was doing.


Relative rationality: Is machine rationality subjective?

Marwala, Tshilidzi

arXiv.org Artificial Intelligence

Rational decision making in its linguistic description means making logical decisions. In essence, a rational agent optimally processes all relevant information to achieve its goal. Rationality has two elements: the use of relevant information and the efficient processing of that information. In reality, relevant information is incomplete and imperfect, and the processing engine, which for humans is the brain, is suboptimal. Humans are risk-averse rather than utility maximizers. In the real world, problems are predominantly non-convex, which makes the idea of rational decision-making fundamentally unachievable; Herbert Simon called this bounded rationality. There is a trade-off between the amount of information used for decision-making and the complexity of the decision model used. This paper explores whether machine rationality is subjective and concludes that indeed it is.


AOC Is Right: Algorithms Will Always Be Biased As Long As There's Systemic Racism in This Country

Slate

At a New York event celebrating the legacy of Martin Luther King Jr. held in the Riverside Church last week, Democratic Rep. Alexandria Ocasio-Cortez sparked a small firestorm when she argued that algorithms reflect human bias. "Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions," she said. "And if you don't fix the bias, then you are just automating the bias." "Socialist Rep. Alexandria Ocasio-Cortez (D-NY) claims that algorithms, which are driven by math, are racist," replied Daily Wire reporter Ryan Saavedra, kicking off the latest cycle of online conservative handwringing about something AOC has said. Ocasio-Cortez was right, though, and what she said should not be that controversial.


The limit of artificial intelligence: Can machines be rational?

Marwala, Tshilidzi

arXiv.org Artificial Intelligence

This paper studies the question of whether machines can be rational. It observes the existing reasons why humans are not rational: imperfect and limited information, limited and inconsistent processing power in the brain, and the inability to optimize decisions and achieve maximum utility. It studies whether these limitations of humans carry over to machines. The conclusion reached is that even though machines are not rational, advances in technological development make them more rational. It also concludes that machines can be more rational than humans. Introduction: One of the most interesting concepts invented by humans is rationality (Anand, 1993; Marwala, 2014, 2015).


Russian AI Alisa wins backing of 40,000 in election run-up

Daily Mail - Science & tech

Russia's next president could be an artificially intelligent robot that claims 'enemies of the people will be shot'. Forty thousand Russians have nominated a piece of AI software on their phones to stand against Vladimir Putin in the 2018 Russian presidential election. The AI assistant, known as Alisa and similar to Apple's voice-activated Siri, was created by Russian technology company Yandex. Since the AI's launch in September, Alisa has stirred controversy on social media, with users sharing a series of contentious statements from the software.