
Predisposition


A Neural Network Model of Naive Preference and Filial Imprinting in the Domestic Chick

Neural Information Processing Systems

Filial imprinting in domestic chicks is of interest in psychology, biology, and computational modeling because it exemplifies simple, rapid, innately programmed learning which is biased toward learning about some objects. Horn et al. have recently discovered a naive visual preference for heads and necks which develops over the course of the first three days of life. The neurological basis of this predisposition is almost entirely unknown; that of imprinting-related learning is fairly clear. This project is the first model of the predisposition consistent with what is known about learning in imprinting. The model develops the predisposition appropriately, learns to "approach" a training object, and replicates one interaction between the two processes.


Comparing Psychometric and Behavioral Predictors of Compliance During Human-AI Interactions

Gurney, Nikolos, Pynadath, David V., Wang, Ning

arXiv.org Artificial Intelligence

Optimization of human-AI teams hinges on the AI's ability to tailor its interaction to individual human teammates. A common hypothesis in adaptive AI research is that minor differences in people's predisposition to trust can significantly impact their likelihood of complying with recommendations from the AI. Predisposition to trust is often measured with self-report inventories administered before interactions. We benchmark a popular measure of this kind against behavioral predictors of compliance. We find that the inventory is a less effective predictor of compliance than the behavioral measures in datasets taken from three previous research projects. This suggests, as a general property, that individual differences in initial behavior are more predictive of compliance than differences in self-reported trust attitudes. The result also shows the potential for easily accessible behavioral measures to give an AI more accurate models of its teammates without the use of (often costly) survey instruments.


Can AI Machine-Learning Models Overcome Biased Datasets?

#artificialintelligence

A model's ability to generalize is influenced by both the diversity of the data and the way the model is trained, researchers report. Artificial intelligence systems may complete tasks quickly, but that does not mean they do so fairly. If the datasets used to train machine-learning models contain biased data, the system is likely to exhibit that same bias when it makes decisions in practice. For example, if a dataset contains mostly images of white men, a facial-recognition model trained on this data may be less accurate for women or for people with other skin tones. A group of researchers at MIT, in collaboration with researchers at Harvard University and Fujitsu Ltd., sought to understand when and how a machine-learning model is capable of overcoming this kind of dataset bias.
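The mechanism described above can be illustrated with a minimal, entirely hypothetical sketch: a toy dataset in which a majority group dominates training, and the minority group follows a different feature-label relationship. A simple model fit to minimize overall training error adopts the majority group's rule, so per-group accuracy diverges sharply (the group names and data here are invented for illustration, not taken from the MIT study).

```python
import numpy as np

# Hypothetical toy data: for the majority group the label is 1 when the
# feature is positive; for the minority group the relationship is reversed.
x_major = np.linspace(-2, 2, 20)
y_major = (x_major > 0).astype(int)
x_minor = np.array([-1.5, -0.5, 0.5, 1.5])
y_minor = (x_minor < 0).astype(int)

x = np.concatenate([x_major, x_minor])
y = np.concatenate([y_major, y_minor])

# "Train" a one-parameter threshold model by picking the cut-off that
# minimizes overall training error -- the majority group dominates the loss.
def training_error(t):
    return np.mean((x > t).astype(int) != y)

t_best = min(x, key=training_error)

# Per-group accuracy: the majority group is classified far better than
# the minority group, even though overall training error is low.
acc_major = np.mean((x_major > t_best).astype(int) == y_major)
acc_minor = np.mean((x_minor > t_best).astype(int) == y_minor)
print(acc_major, acc_minor)
```

The point of the sketch is only that minimizing aggregate error on imbalanced data silently trades minority-group accuracy for majority-group accuracy; real studies of dataset bias measure the same kind of per-group gap in far richer models.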


Algorithmic bias in AI

#artificialintelligence

Algorithmic bias in AI, also known as machine learning bias, occurs when an algorithm systematically produces skewed results because of erroneous assumptions made in the machine learning process. Bias can arise from several factors, including not only the design of the algorithm itself but also the way the training data are collected and prepared; the model then absorbs that bias during training. Considering real-world examples, algorithmic bias appears in various places such as social media platforms and search engines. Sometimes it creates difficult problems that are hard to trace, since a biased pipeline can quietly produce a sequence of wrong outcomes.


How To Reduce Hiring Bias To Promote Diversity And Inclusion in Hiring

#artificialintelligence

Hiring bias is a colossal problem in the workplace, particularly in areas such as hiring and promotion. The typical norms of hiring employees are deeply flawed and, without a doubt, unfortunate. The ultimate goal of bringing AI into the hiring process is to control hiring bias and expand the scope of hiring to include diversity in attributes such as gender, sexual orientation, color, experience, privilege, and education. Recruiters and HR organizations are rethinking how they recruit in order to build a faster and more efficient way of hiring. In an ideal world, the decision to hire a candidate would be based solely on their ability to do the job well. The candidate would be approached in an objective, businesslike way, free from subjectivity and unconscious hiring bias.


What Leads to Biases in Algorithms?

#artificialintelligence

The entire world is digitalized today. There is a sense of intelligence, a feeling of communication, in every everyday gadget that makes our lives so easy, so smooth. All of this technological progress is carried forward by programming, software that is designed to solve a problem. Above all, every program is built upon a logic, or solution, which is called an algorithm. The algorithm is one of the stepping stones of our innovative world, and behind each one are the researchers and specialists who design these various algorithms.


Can AI Be a Racist Too?

#artificialintelligence

This predisposition can make an AI exhibit racism, sexism, or other kinds of discrimination. This is typically viewed as a political issue and disregarded by researchers. The outcome is that mostly non-technical people write on the topic. These writers frequently propose policy recommendations to increase diversity among AI researchers. The irony is striking: a Black AI researcher cannot build an AI any differently than a white AI researcher can.


What Makes Music Special to Us? - Issue 70: Variables

Nautilus

We are all born with a predisposition for music, a predisposition that develops spontaneously and is refined by listening to music. Nearly everyone possesses the musical skills essential to experiencing and appreciating music. Think of "relative pitch," recognizing a melody separately from the exact pitch or tempo at which it is sung, and "beat perception," hearing regularity in a varying rhythm. Even human newborns turn out to be sensitive to intonation or melody, rhythm, and the dynamics of the noise in their surroundings. Everything suggests that human biology is already primed for music at birth with respect to both the perception and enjoyment of listening. Human musicality is clearly special. Musicality is a set of natural, spontaneously developing traits based on, or constrained by, our cognitive abilities (attention, memory, expectation) and our biological predisposition.


Technology assessment: Artificial intelligence in the medical sector

#artificialintelligence

[Image: Treating diseases and "improving" genetic material: KIT researchers investigate potential contributions of AI and the resulting ethical problems.] Decoding of the human genome still poses puzzles that might be solved with the help of artificial intelligence. New therapeutic approaches to treating severe diseases appear possible, as do non-medical "improvements" of the genetic material. With funds of the Federal Ministry of Education and Research (BMBF), technology assessment experts of Karlsruhe Institute of Technology (KIT) study which applications are realistic and which ethical issues they may entail. "Modern genome research works on understanding and predicting how genetic differences between human beings determine complex features, such as predispositions to frequent diseases," says Harald König of KIT's Institute for Technology Assessment and Systems Analysis (ITAS).


What Happens When Machines Know More About People than People Do?

#artificialintelligence

One of the most controversial psychological studies in recent memory appeared last month as an advance release of a paper that will be published in the Journal of Personality and Social Psychology. Yilun Wang and Michal Kosinski, both of the Graduate School of Business at Stanford University, used a deep neural network (a computer program that mimics complex neural interactions in the human brain) to analyze photographs of faces taken from a dating website and detect the sexual orientation of the people whose images were shown. The algorithm correctly distinguished between straight and gay men 81 percent of the time. When it had five photos of the same person to analyze, the accuracy rate rose to 91 percent. For women, the scores were lower: 71 percent and 83 percent, respectively. But the algorithm scored much higher than its human counterparts, who guessed correctly, based on a single image, only 61 percent of the time for men and 54 percent for women.