

Internet of Things Tech Expo showcases the latest in AI, big data

#artificialintelligence

Rapid advances are happening in artificial intelligence, computer vision, and big data. One event in Silicon Valley is showcasing how those technologies are converging to transform our work and daily lives. More than 12,000 people from around the world are attending the Internet of Things Tech Expo to see the latest innovations. AlwaysAi is a platform helping to equip robots with computer vision. In the startup's demo, a robot recognizes that a toy action figure, The Hulk, has fallen and needs help.


Maybe The Apple And Goldman Sachs Credit Card Isn't Gender Biased

#artificialintelligence

The Apple-branded credit card is under scrutiny because women are receiving less credit than spouses who share their income and credit score. In launching the card, Apple partnered with Goldman Sachs, and Goldman is the issuing bank for the card. Now, Goldman's credit review process is being labeled sexist by Elizabeth Warren and several high-powered tech execs. A tech entrepreneur, David Heinemeier Hansson, first raised the issue when he tweeted that the Apple Card's algorithms discriminated against his wife, giving him 20 times the credit limit it had given to her. Apple cofounder Steve Wozniak weighed in, asserting that he can borrow ten times as much as his wife on their Apple Cards.


MediaPsych Minute #25 - Facial Recognition & Gender

#artificialintelligence

How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis and Image Labeling Services.


The challenges of using machine learning to identify gender in images

#artificialintelligence

In recent years, computer-driven image recognition systems that automatically recognize and classify human subjects have become increasingly widespread. These algorithmic systems are applied in many settings – from helping social media sites tell whether a user is a cat owner or dog owner to identifying individual people in crowded public spaces. A form of machine intelligence called deep learning is the basis of these image recognition systems, as well as many other artificial intelligence efforts. This essay on the lessons we learned about deep learning systems and gender recognition is one part of a three-part examination of issues relating to machine vision technology. Interactive: How does a computer "see" gender?


Elisa Celis and the fight for fairness in artificial intelligence

#artificialintelligence

We have actual people being affected by these algorithms. We see things in the news such as algorithms that predict recidivism -- whether someone will re-commit a particular crime -- and set a bail amount or pass that information on to a judge who decides whether or not to set bail. The algorithms used to make these predictions end up relying on correlations with socioeconomic status, or race, or gender. So someone who might have a very similar background to you but differs across race or gender might have a very different outcome because of what the algorithm predicts. Do you think people are generally aware of the degree to which these algorithms are already part of everyday life?
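The mechanism Celis describes, where a model never sees race or gender yet still produces different outcomes by group, can be illustrated with a toy sketch. All names, features, and numbers below are invented for illustration: the "model" scores risk from a single proxy feature that happens to correlate with group membership.

```python
# Toy illustration of proxy bias: the scoring function never receives
# the "group" field, only a feature correlated with it. All numbers
# are invented for illustration.
people = [
    {"group": "A", "neighborhood_arrest_rate": 0.30},
    {"group": "A", "neighborhood_arrest_rate": 0.28},
    {"group": "B", "neighborhood_arrest_rate": 0.08},
    {"group": "B", "neighborhood_arrest_rate": 0.10},
]

def risk_score(person):
    # The model only looks at the proxy feature...
    return person["neighborhood_arrest_rate"] * 10

for p in people:
    p["score"] = risk_score(p)

# ...yet average scores differ sharply by group, because the proxy
# carries the group signal the model was never given directly.
groups = {}
for p in people:
    groups.setdefault(p["group"], []).append(p["score"])
avg = {g: sum(scores) / len(scores) for g, scores in groups.items()}
print(avg)  # group A averages near 2.9, group B near 0.9
```

Two people with otherwise similar records land on opposite sides of a bail or sentencing threshold purely because of where the proxy places their group, which is exactly the pattern the interview warns about.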


Applause targets AI bias by sourcing training data at scale

#artificialintelligence

Researchers have already demonstrated how Amazon's facial analysis software, for example, distinguishes gender among certain ethnicities less accurately than other services, while Democratic presidential hopeful Senator Elizabeth Warren has called on federal agencies to address questions around algorithmic bias, such as how the Federal Reserve deals with money lending discrimination. Against this backdrop, "in-the-wild" software-testing company Applause is looking to "reinvent" AI testing with a new service that better detects AI bias by crowdsourcing larger training data sets. By way of a brief recap, Massachusetts-based Applause, formerly known as uTest, offers companies like Google and Uber a different kind of app-testing platform, one that taps hundreds of thousands of "vetted" real-world users around the world to squish bugs and iron out usability issues -- it's all about harnessing the power of the crowd rather than running tests entirely in contrived laboratory settings. The company had raised north of $115 million before it was acquired by investment firm Vista Equity Partners in 2017. A key facet of the Applause platform is not only the sheer number of crowd testers in its community, but the demographic diversity -- spanning language, race, gender, location, culture, hobbies, and more.


Balance and Neutrality in Artificial Intelligence: Why it Matters

#artificialintelligence

From banking, education, and shopping to other everyday activities, artificial intelligence (AI) is becoming an ever-increasing presence in our world. The development and deployment of AI holds enormous potential to improve billions of people's lives around the globe. The technology has a huge capacity to do good in areas such as government, healthcare, law, business, education, and the environment. And the EU's High-Level Expert Group on Artificial Intelligence has said that AI is key to addressing many of the grand challenges in the UN Sustainable Development Goals. The ways in which AI can benefit us are seemingly limitless.


Addressing AI Bias: Diversity-by-Design AIHR Analytics

#artificialintelligence

Out of the many themes in HR, diversity frequently gets the spotlight. Yet seeing advanced analytics or AI solutions applied to diversity is rare, especially when compared to themes like turnover and absenteeism. Of course, putting numbers on gender, ethnicity, etc. is tricky: the very insights to counter discrimination might also be used to discriminate. As a result, diversity calls for less of a head-on approach than we are used to. In this article, I will explore how we may attain diversity-by-design in model development.
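One concrete diversity-by-design check used in HR analytics is the adverse-impact ratio: compare each group's selection rate against the most-favored group's rate, and flag anything below the "four-fifths" rule of thumb. The sketch below is minimal and illustrative; the group names, counts, and 0.8 threshold are assumptions, not the article's own figures.

```python
# Minimal adverse-impact check for a hypothetical screening model.
# Group names and counts are illustrative only.

def selection_rate(selected, total):
    """Fraction of applicants in a group that the model selected."""
    return selected / total

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference group's rate."""
    return rate_group / rate_reference

# (selected, total applicants) per group -- invented numbers.
outcomes = {"group_a": (45, 100), "group_b": (28, 100)}

rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
reference = max(rates.values())  # most-favored group's rate

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: rate={rate:.0%}, impact ratio={ratio:.2f} -> {flag}")
```

Running a check like this during model development, rather than after deployment, is one way to make diversity a design constraint instead of an audit finding.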


Thoughts on AI: Can we spot and overcome human bias in AI?

#artificialintelligence

Massachusetts Institute of Technology and Stanford University researchers recently found that three commercial facial analysis programs produced different results across skin types and genders. Error rates for determining the gender of light-skinned men were 0.8 percent, while error rates for darker-skinned women rose as high as 34 percent. Unfortunately, this is not the only example where AI seems to lose neutrality. Biased algorithms threaten to erode trust in AI platforms. Problems can arise when bias exists in the original data on which the algorithm is trained.
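The disparity the researchers measured is just a per-subgroup error rate: evaluate the same classifier separately on each demographic slice instead of reporting one aggregate number. A minimal sketch, using invented counts chosen to mirror the 0.8 and 34 percent figures above (not the study's actual data):

```python
# Disaggregated evaluation: compute error rate per subgroup rather
# than one overall accuracy. Counts are illustrative, not the
# MIT/Stanford study's data.
results = {
    "lighter-skinned men":  {"correct": 992, "wrong": 8},    # ~0.8% error
    "darker-skinned women": {"correct": 660, "wrong": 340},  # ~34% error
}

def error_rate(counts):
    total = counts["correct"] + counts["wrong"]
    return counts["wrong"] / total

for group, counts in results.items():
    print(f"{group}: {error_rate(counts):.1%} error")
```

An aggregate accuracy over both groups would look respectable here, which is why reporting only the overall number can hide exactly the kind of bias the study exposed.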

