Goto

Collaborating Authors

Michigan


Understanding media narratives with machine learning and NLP

#artificialintelligence

Storytelling and narrative crafting are central to communication techniques -- so much so that they drive the way news media, advertising, and public relations operate today. But the way narratives are used in these communications, as well as how they impact the opinions of individuals or an entire society, is extremely complex and difficult to express with any specificity. A new project at the University of Michigan supported by the Air Force Office of Scientific Research (AFOSR) aims to use computational tools to conceptualize these narratives and the impact they have on readers. "It remains unclear how to effectively represent and extract narratives at scale," says Computer Science and Engineering Prof. Lu Wang, the project's lead investigator, "and little is known about how they interact with people's inclination to have an impact and confirm their own values." This uncertainty stems from the problem's scope: understanding the narratives used in news media, for example, and how they affect millions of unique individuals involves countless variables.


Responsible Data Management

Communications of the ACM

Incorporating ethics and legal compliance into data-driven algorithmic systems has been attracting significant attention from the computing research community, most notably under the umbrella of fair [8] and interpretable [16] machine learning. While important, much of this work has been limited in scope to the "last mile" of data analysis and has disregarded both the system's design, development, and use life cycle (What are we automating and why? Is the system working as intended? Are there any unforeseen consequences post-deployment?) and the data life cycle (Where did the data come from? How long is it valid and appropriate?). In this article, we argue two points. First, the decisions we make during data collection and preparation profoundly impact the robustness, fairness, and interpretability of the systems we build. Second, our responsibility for the operation of these systems does not stop when they are deployed. To make our discussion concrete, consider the use of predictive analytics in hiring. Automated hiring systems are seeing ever broader use and are as varied as the hiring practices themselves, ranging from resume screeners that claim to identify promising applicants, to video and voice analysis tools that facilitate the interview process, and game-based assessments that promise to surface personality traits indicative of future success. Bogen and Rieke [5] describe the hiring process from the employer's point of view as a series of decisions that forms a funnel, with stages corresponding to sourcing, screening, interviewing, and selection. The hiring funnel is an example of an automated decision system -- a data-driven, algorithm-assisted process that culminates in job offers to some candidates and rejections to others. The popularity of automated hiring systems is due in no small part to our collective quest for efficiency.
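The funnel described above can be sketched as a chain of filters over a candidate pool. This is a minimal illustration, not any real vendor's system: the four stage names follow Bogen and Rieke's description, but every field name and scoring rule below is an invented placeholder.

```python
# Hypothetical sketch of the hiring funnel: a candidate pool is narrowed by
# successive data-driven stages. All thresholds and fields are placeholders.

def sourcing(candidates):
    # Stage 1: keep only candidates reached by sourcing (here: those who applied).
    return [c for c in candidates if c.get("applied")]

def screening(candidates, min_score=0.5):
    # Stage 2: an automated resume screener has assigned each candidate a score;
    # keep those at or above a cutoff.
    return [c for c in candidates if c.get("resume_score", 0) >= min_score]

def interviewing(candidates):
    # Stage 3: e.g. automated video/voice analysis flags who advances.
    return [c for c in candidates if c.get("interview_pass")]

def selection(candidates, n_offers=1):
    # Stage 4: rank the remaining candidates and extend offers to the top n.
    ranked = sorted(candidates, key=lambda c: c["resume_score"], reverse=True)
    return ranked[:n_offers]

def hiring_funnel(candidates):
    # Each stage only sees the survivors of the previous one -- which is why
    # decisions made early in the funnel (and in the data feeding it) matter.
    for stage in (sourcing, screening, interviewing):
        candidates = stage(candidates)
    return selection(candidates)
```

The point of the sketch is structural: a candidate filtered out at an early stage is invisible to every later one, so bias or data problems upstream can never be corrected downstream.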


Families of Oxford High School shooting victims react after board again rejects independent investigation

FOX News

The parents of several Oxford High School students, including slain 16-year-old Tate Myre, have filed a lawsuit against shooting suspect Ethan Crumbley, his parents and school staff. The parents of two victims of the Nov. 30, 2021, shooting at Oxford High School in Michigan are demanding more transparency from the Oxford Community School District after the board voted against moving forward with an independent investigation into the tragedy last fall. The Oxford Board of Education on Tuesday announced that the district has, for the second time, declined an offer from Michigan Attorney General Dana Nessel to conduct a third-party investigation into the school shooting with the goal of determining how shooting suspect Ethan Crumbley, 15, managed to kill four students and injure seven others last fall. "To me, this is an admission of guilt," Buck Myre, Tate's father, said during a Thursday press conference. "They know that things didn't go right that day, and they don't want to stand up and fix it. They're going to hide behind governmental immunity and they're going to hide behind insurance and the lawyers. What's this teach the kids?" "We just want accountability," he added later when asked why an independent investigation is important to parents. Oakland County Prosecutor Karen McDonald revealed in December 2021 that school officials met with Crumbley and his parents to discuss violent drawings he created just hours before the deadly rampage. The 15-year-old suspect was able to convince them during the meeting that the concerning drawings were for a "video game." His parents "flatly refused" to take their son home. The shooting has also resulted in several lawsuits, including two that seek $100 million in damages each, against the school district and school employees on behalf of the family of two sisters who attend the school.
Ethan Robert Crumbley, 15, charged with first-degree murder in a high school shooting, poses in a jail booking photograph taken at the Oakland County Jail in Pontiac, Michigan. Myre and Meghan Gregory, the mother of 15-year-old Keegan Gregory, who survived the shooting but witnessed and was traumatized by Crumbley's rampage, are suing the shooting suspect's parents, James and Jennifer Crumbley, as well as school staff for negligence. "They're the ones that know what happened that day.


How Language-Generation AIs Could Transform Science

Scientific American: Technology

Machine-learning algorithms that generate fluent language from vast amounts of text could change how science is done -- but not necessarily for the better, says Shobita Parthasarathy, a specialist in the governance of emerging technologies at the University of Michigan in Ann Arbor. In a report published on 27 April, Parthasarathy and other researchers try to anticipate societal impacts of emerging artificial-intelligence (AI) technologies called large language models (LLMs). These can churn out astonishingly convincing prose, translate between languages, answer questions and even produce code. The corporations building them -- including Google, Facebook and Microsoft -- aim to use them in chatbots and search engines, and to summarize documents. But the models sometimes parrot errors or problematic stereotypes from the millions or billions of documents they're trained on.


The DataHour: Build Your First Chatbot Using Open Source Tools

#artificialintelligence

The latest edition of our flagship learning series on everything in and about data analytics is sure to excite your mind: be prepared for the DataHour on Building Your First Chatbot Using Open Source Tools. The session will be hosted by Dr. Rachael Tatman, an ex-Googler, instructor at the University of Michigan, and Staff Developer Advocate at Rasa, the world's leading conversational AI platform, which enables enterprises to revamp customer experience with cutting-edge open-source machine learning. In this session, she will lead you on an engaging journey through the open-source Rasa platform. The session is for freshers and professionals alike who would like to design chatbots to improve the CX for their organisations, or who simply want hands-on experience with open-source tools like Rasa. Chatbots have been around for some time.


Manager, Data Science (Hybrid)

#artificialintelligence

May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology, which reimagines the way AVs think. Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We're building the world's best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster greener, more vibrant and livable spaces. Since our founding in 2017, we've given more than 300,000 autonomy-enabled rides to real people around the globe.


Is AI threatened by too little data?

#artificialintelligence

Whether it's due to a lack of funding, lack of know-how or censorship, some governments and entities are shrinking the amount of data that they incorporate into their AI. Does this compromise the integrity of AI results? Intentional data shrinking is occurring as a matter of policy and expediency. Roya Ensafi, assistant professor of computer science and engineering at the University of Michigan, discovered that censorship was increasing in 103 countries. Most censorship actions "were driven by organizations or internet service providers filtering content," Ensafi reported.


How language-generation AIs could transform science

Nature

Shobita Parthasarathy says that LLMs could help to advance research, but their use should be regulated.


Toward Justice in Computer Science through Community, Criticality, and Citizenship

Communications of the ACM

Neither technologies nor societies are neutral, and failing to acknowledge this results, at best, in a narrow view of both. At worst, it leads to technology that reinforces oppressive societal norms. We agree with Alex Hanna, Timnit Gebru, and others who argue that individual harms reflect institutional problems, and thus require institutional and systemic solutions. We believe computer science (CS) as a discipline often promotes itself as objective and neutral. This tendency allows the field to ignore systems of oppression that exist within and because of CS. As scholars in educational psychology, computer science education, and social studies education, we suggest a way forward through institutional change, specifically in the way we teach CS.