Asking an AI for salary negotiation advice is a matter of concern: Controlled experimental perturbation of ChatGPT for protected and non-protected group discrimination on a contextual task with no clear ground truth answers

Geiger, R. Stuart, O'Sullivan, Flynn, Wang, Elsie, Lo, Jonathan

arXiv.org Artificial Intelligence

We conducted controlled experimental bias audits for four versions of ChatGPT, each of which we asked to recommend an opening offer in salary negotiations for a new hire. We submitted 98,800 prompts to each version, systematically varying the employee's gender, university, and major, and tested prompts in the voice of each side of the negotiation: the employee versus the employer. We find that ChatGPT as a multi-model platform is not robust and consistent enough to be trusted for such a task. We observed statistically significant differences in salary offers when varying gender for all four models, although with smaller gaps than for the other attributes tested. The largest gaps were between different model versions and between employee- versus employer-voiced prompts. We also observed substantial gaps when varying university and major, but many of the biases were not consistent across model versions. We tested fictional and fraudulent universities and found wildly inconsistent results across cases and model versions. We also make broader contributions to the AI/ML fairness literature. Our scenario and experimental design differ from mainstream AI/ML auditing efforts in key ways. Bias audits typically test discrimination for protected classes like gender, which we contrast with testing the non-protected classes of university and major. Asking for negotiation advice involves deciding how aggressive one ought to be in a negotiation relative to known empirical salary distributions and scales, which is a deeply contextual and personalized task with no objective ground truth to validate against. These results raise concerns for the specific model versions we tested and for ChatGPT as a multi-model platform in continuous development. Our epistemology does not permit us to definitively certify these models as either generally biased or unbiased on the attributes we test, but our study raises matters of concern for stakeholders to investigate further.
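The audit's experimental protocol is, in essence, a full factorial grid over the perturbed attributes, rendered in both negotiation voices. A minimal sketch of that grid-building step is below; the attribute levels, prompt wording, and template names are illustrative assumptions of mine, not the authors' actual materials, which enumerate far more combinations (98,800 prompts per model version).

```python
# Hypothetical sketch of the factorial prompt grid described in the abstract.
# Attribute levels and prompt wording are illustrative, not the paper's.
from itertools import product

genders = ["man", "woman", "non-binary person"]        # illustrative levels
universities = ["State University", "Ivy College"]     # illustrative levels
majors = ["computer science", "English literature"]    # illustrative levels

# One template per side of the negotiation (employee vs. employer voice).
TEMPLATES = {
    "employee": ("I am a {gender} who just graduated from {university} with a "
                 "degree in {major}. What opening salary should I ask for as a "
                 "new hire?"),
    "employer": ("We are hiring a {gender} who just graduated from {university} "
                 "with a degree in {major}. What opening salary should we offer?"),
}

def build_prompts():
    """Yield (voice, prompt) for every cell of the full factorial design."""
    for gender, university, major in product(genders, universities, majors):
        for voice, template in TEMPLATES.items():
            yield voice, template.format(
                gender=gender, university=university, major=major)

for voice, prompt in build_prompts():
    print(voice, "|", prompt)
```

Because every cell of the grid holds all but one attribute fixed, any systematic gap in the recommended offers can be attributed to the varied attribute rather than to the prompt wording.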


Undergraduate Computer Science Curricula

Communications of the ACM

There can be many conflicting goals for the design of a computer science curriculum, including: immediate employability in industry, preparation for long-term success in an ever-changing discipline, and preparation for graduate (that is, post-graduate) study. Emphasis on immediate employability may lead to prioritizing current tools and techniques at the expense of foundational and theoretical skills as well as broader liberal-arts studies that are crucial for long-term career success and graduate work. The implications of these conflicting goals include allocation of finite resources (time, courses in the curriculum), unwillingness of students to invest in the mathematics that they see as irrelevant to their immediate career goals, and reluctance of faculty to have their courses be driven by a continually evolving marketplace of tools and APIs. For example, if we ask graduates of computer science programs to reflect on the impact of their undergraduate education, explicitly focusing on short- and long-term impact, will there be enough meaningful data to significantly inform curricular design? A recent survey of industry professionals undertaken by the ACM/IEEE-CS/AAAI 2023 Computer Science Curricular Task Force (CS2023) points the way. This column presents one aspect of that survey--a focus on comparing short-term and long-term views--and calls for similar surveys of industry professionals to be conducted on an ongoing basis to refine our understanding of the role played by various elements of undergraduate computer science curricula in the success of graduates.


Texas A&M To Offer Courses On Responsible A.I.

#artificialintelligence

Texas A&M University has joined a new nationwide program that aims to boost college-level curricula about responsible artificial intelligence. The university was selected as a participant in February through an application process headed by the College of Liberal Arts, the Glasscock Center for Humanities Research and the Department of Philosophy. Maria Escobar-Lemmon, associate dean for research and graduate education in the College of Liberal Arts, highlighted two objectives of the program. The first is to bring different points of view into the topic of artificial intelligence. "This program is being offered by the National Humanities Center, and it's an alliance between the National Humanities Center and Google that is intended to broaden the range of voices to include humanistic scholars so that we have people with different backgrounds, training and disciplinary perspectives engaging on the issue," Escobar-Lemmon said.


What Should Kids Study For A Robotic And AI Future?

#artificialintelligence

My wife, who rounds in the hospital and teaches, often tells me that if more people really understood what medical professionals see and what they must do, it just might alter their perspective on how they lead their lives. With real experience often comes better understanding. And yet, when you can't fully experience something, perhaps the best alternative is to learn from someone who can clearly and compellingly teach. Arriving Today, by distinguished science writer and Wall Street Journal technology columnist Christopher Mims, is one of those books: it tells the incredible story of what happens when you order a new USB charger, from the point of origin to the point of delivery on that UPS truck. Imagine watching a movie where you follow this USB charger, and as you journey to each new location, Christopher teaches you, chapter by chapter, about the history of technology, the origins of the ideas behind what he sees, the numbers that back them all up, and the stories of the people who are greatly impacted by all of this.


We Are Not Users

Communications of the ACM

On August 27, 2020, Amazon introduced its Amazon Halo: a technology comprising AI software and a wristband that monitors body indicators, including the voice, to detect problems and suggest behavioral changes or other actions that could improve our health. One day later, Elon Musk and his team presented their Neuralink technology--AI software and a skull-implanted chip that receives signals from and sends signals to our brain to compensate for brain malfunction, aiming to solve various brain-related health problems. These announcements seem like great news amid the health crisis that engulfs many of us, with technology coming to our rescue to confront some of the most critical diseases of humankind. Yet risks remain, and once the genie is out of the bottle, they are often difficult to manage and contain--they range from unintended consequences and side effects to threats to privacy and the loss or misdirection of control. The endless devices surrounding us include processors that compute and monitor our abundant but wasteful lifestyle, with generations of products getting faster, cheaper, and "better."


Code Shift lab aims to confront bias in AI and machine learning

AIHub

Algorithms can be used to decide everything from which video we're recommended to watch next on YouTube to who should be arrested based on facial recognition software. But these algorithms, and the data used to train them, often replicate the harmful social biases of the engineers who build them. Eliminating this bias from technology is the focus of Code Shift, a new data science lab at Texas A&M University that brings together faculty members and researchers from a variety of disciplines across campus. It's an increasingly critical initiative, said Lab Director Srividya Ramasubramanian, as more of the world becomes automated. Machines, rather than humans, are making many of the decisions around us, including some that are high-risk.


A Small College Hopes to Claim Artificial Intelligence for the Liberal Arts - EdSurge News

#artificialintelligence

Colby College is carving out space in the liberal arts canon for artificial intelligence. Thanks to a $30 million gift from an alumnus, the small, selective college in Maine is establishing the Davis Institute for Artificial Intelligence, which aims to integrate machine learning, natural language processing and big data into instruction and research across the college. "We want to be sure we're preparing students well for their futures: lives and careers of meaning and purpose," says Margaret McFadden, provost and dean of faculty at Colby. "Well-educated people have to understand AI, what these tools are and how to use them." Artificial intelligence has homes at other U.S. higher ed institutions, including Massachusetts Institute of Technology, the University of Georgia, Stevens Institute of Technology in New Jersey, and Stanford University.


You're a Data Artist, not a Data Scientist

#artificialintelligence

"Data Scientist" is 2020's equivalent of the rocket scientist of the 1950's: mysterious, sexy, and well-paid. But are you actually a "scientist"? While "data science" isn't fully defined yet as an academic subject (National Academies of Sciences, Engineering, and Medicine, 2018), more and more evidence seems to point to it being more of an art, rather than a science. So if the essence of data science isn't yet solidified, how can I make the bold statement that your'e an artist, not a scientist? Renowned Stanford computer scientist Donald Knuth, who the NY Times calls "The Yoda of Silicon Valley", eloquently lays any argument to rest (as cited on SNHU), But to look at it from a different perspective, this time from artist Warren Sack, chair and professor of the Film and Digital Media Department at UC Santa Cruz.


The Importance of Liberal Arts In The AI Economy

#artificialintelligence

Hartley first heard the terms "Fuzzy" and "Techie" while studying political science at Stanford University. At Stanford, if you majored in the humanities or social sciences, you were a Fuzzy. If you majored in the computer sciences, you were a Techie. According to Hartley, this informal division has mistakenly fostered a business mindset that regards Techies as the real drivers of innovation. Hartley believes that the Fuzzies, not the Techies, are the key talent responsible for creating the most successful new business ideas.


10 Data-Driven Trends That Will Dominate This Year - CXOtoday.com

#artificialintelligence

Data is invaluable to all companies, from budding startups to global enterprises. This growing commodity is prompting organizations to deploy BI solutions that elevate and accelerate data-driven decisions. Successful organizations are prioritizing a modern BI approach and, in turn, priming their workforce to be the most analytically savvy generation ever seen. For a competitive edge in 2018, organizations must recognize the strategies, technologies, and business roles that can enhance their approach to business intelligence. Here are some of the most critical trends to bear in mind looking ahead to the new year, and even beyond.