New AI can work out whether you're gay or straight from a photograph


Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better "gaydar" than humans. The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people's privacy or be abused for anti-LGBT purposes. The research found that gay men and women tended to have "gender-atypical" features, expressions and "grooming styles", essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

AI in HR: Artificial intelligence to bring out the best in people


Its main AI and HR analytics product is Cornerstone Insights, which CTO Mark Goldin called "machine learning in a box." The dispassionate analysis that AI brought to Expedia's recruiting practices can also be applied to performance management, which Holger Mueller, vice president and principal analyst at Constellation Research, considers talent management's core function -- and the part that's most broken. HR is a good target for AI because many HR practices are "handcrafted," cultural in nature and could be better at handling data, according to Josh Bersin, principal and founder of consulting firm Bersin by Deloitte. "The applications of AI basically are analytics applications, where the software is using history and algorithms and data to be smarter and smarter over time," Bersin explained.

Machine Learning Reveals Systematic Sexism in Astronomy


Now, a quantitative study published on Friday in Nature Astronomy demonstrates that gender bias in astronomical research extends even to journal citations, which are an indicator of academic prestige and are linked with better access to grant money, speaking engagements, and professional advancement. Led by Neven Caplar, a PhD student at ETH Zürich's Institute of Astronomy, the new research found that papers with male lead authors were cited 10 percent more frequently than papers led by women, even after controlling for non-gender-specific disparities such as seniority, team size, publication date, field, and academic institution. The team reached this conclusion after using machine learning to analyze a dataset of over 200,000 papers published between 1950 and 2015 in five influential journals: Astronomy & Astrophysics, The Astrophysical Journal, Monthly Notices of the Royal Astronomical Society, Nature, and Science. In cases where first authors used their initials, a tactic women researchers disproportionately use to avoid gender bias, Caplar's team took extra measures, searching publishing records for entries that exposed the authors' full names.

How Bayesian Inference Works


Since there are 25 long haired women and 2 long haired men, guessing that the ticket owner is a woman is a safe bet. To lay our foundation, we need to quickly mention four concepts: probabilities, conditional probabilities, joint probabilities and marginal probabilities. The probability of a thing happening is the number of ways that thing can happen divided by the total number of things that can happen. Combining these by multiplication gives the joint probability: P(woman with short hair) = P(woman) * P(short hair | woman).
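The counting argument above can be checked directly in a few lines of Python. The 25-women/2-men split comes from the article; the totals of 50 women and 50 men used for the joint probability are hypothetical numbers assumed here for illustration.

```python
# Counts given in the example: among long-haired ticket owners,
# 25 are women and 2 are men.
long_haired_women = 25
long_haired_men = 2

# Conditional probability that the ticket owner is a woman,
# given that they have long hair.
p_woman_given_long_hair = long_haired_women / (long_haired_women + long_haired_men)

# Joint probability via the product rule, P(A and B) = P(A) * P(B | A),
# using ASSUMED totals of 50 women and 50 men in the crowd.
n_women, n_men = 50, 50
p_woman = n_women / (n_women + n_men)
p_short_hair_given_woman = (n_women - long_haired_women) / n_women
p_woman_with_short_hair = p_woman * p_short_hair_given_woman
```

With those assumed totals, P(woman with short hair) works out to 0.5 * 0.5 = 0.25, while the conditional probability P(woman | long hair) = 25/27 is what makes "woman" the safe guess.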

Lazy coders are training artificial intelligences to be sexist

New Scientist

Employers: do the ladies on your payroll have any "female weaknesses" that would make them mentally or physically unfit for the job? The question comes to you courtesy of the year 1943. It was posed in a guide to hiring women, written for the flummoxed male supervisors at Transportation Magazine tasked with integrating a new female workforce during a wartime shortage of manpower. Back then, you wouldn't be surprised to see logical reasoning like "Men are to programmers as women are to homemakers". Or "Men are to surgeons what women are to nurses".



Symmetry Master evaluated the symmetry of each person's face, and AntiAgeist estimated the difference between chronological and perceived age. Once these parameters were determined, the fifth robot, called MADIS, compared each selfie to models and actors within their age and ethnic groups that are stored in a database. 'We are very pleased with the AI's performance in achieving 100 percent accuracy in predicting the I'm a Singer competition's results,' Dr. Min Wanli, Alibaba Cloud's chief scientist for artificial intelligence, said in a statement following the show, according to the Wall Street Journal. '[The result] is very random and almost impossible to predict using human intelligence,' Min added.

Are Machine Learning Search Algorithms To Blame For Stereotypes?


Do machine-learning algorithms processing search engine queries introduce prejudice, discrimination and stereotyping into query results? The paper, submitted to the International Conference on Social Informatics and scheduled for publication, analyzes how Google and Bing represent female beauty in their image search results, particularly across different age and racial groups. For nearly every country analyzed, white women appear more in the "beautiful" results, and black and Asian women appear in the "ugly" ones, per The Washington Post, which initially pointed to the study. Searches for "ugly" women return images that are about 60% white and 20% black, mostly of women between the ages of 30 and 50.



With all of the dependencies installed, simply run "jupyter notebook" on the command line, from the same directory as the titanic3.xls file. Once we have read the spreadsheet file into a Pandas dataframe (imagine a hyperpowered Excel table), we can peek at the first five rows of data using the head() command. Before we can feed our data set into a machine learning algorithm, we have to remove missing values and split it into training and test sets. We will feed the training set into the classification algorithm to form a trained model. Interestingly, after splitting by class, the main deciding factor determining the survival of women is the ticket fare that they paid, while the deciding factor for men is their age (with children being much more likely to survive).
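A minimal sketch of the workflow described above: load the data, peek at it, drop missing values, split into train and test sets, and fit a classifier. Since the titanic3.xls spreadsheet is not included here, the sketch substitutes a tiny hand-made dataframe with the same kind of columns (sex, age, fare, survived) -- the data values are invented for illustration, and the decision tree stands in for whichever classification algorithm the article used.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# In the article's setup this would be: df = pd.read_excel("titanic3.xls")
# Here we use a tiny invented stand-in with comparable columns.
df = pd.DataFrame({
    "sex":      ["female", "male", "female", "male", "male", "female", "male", "female"],
    "age":      [29, 2, 30, 25, None, 48, 39, 19],
    "fare":     [211.3, 151.6, 151.6, 7.9, 8.1, 76.7, 13.0, 26.0],
    "survived": [1, 1, 0, 0, 0, 1, 0, 1],
})

print(df.head())                 # peek at the first five rows

df = df.dropna()                 # remove rows with missing values
df["sex"] = (df["sex"] == "female").astype(int)   # encode sex numerically

X = df[["sex", "age", "fare"]]
y = df["survived"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit the classifier on the training set to form a trained model.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(model.score(X_test, y_test))   # accuracy on the held-out test set
```

The same fitted tree also exposes feature importances (model.feature_importances_), which is how one can observe patterns like fare mattering most for women and age mattering most for men, as the article notes.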

60 Minutes/Vanity Fair poll: Artificial Intelligence


We look forward to your answers to this and many other questions. And now, the results: more than half (53 percent) of Americans feel that our quest to advance the field of artificial intelligence is important. Computers already create complex financial algorithms for retirement planning and help people pick schools and life partners with the help of statistical analysis, but when it comes to decisions concerning end-of-life care, this may be the right place for humanity to draw a line. If they had their own robot, a majority of Americans (53 percent) would use it for doing day-to-day chores, 21 percent chose problem solving, 17 percent said protection and four percent picked companionship. Two out of three Americans think that human intelligence poses a greater threat to humanity, while 30 percent think that artificial intelligence does.