A number of AI researchers are pushing back, developing ways to make sure AIs can't learn from personal data. Two of the latest are being presented this week at ICLR, a leading AI conference. "I don't like people taking things from me that they're not supposed to have," says Emily Wenger at the University of Chicago, who developed one of the first tools to do this, called Fawkes, with her colleagues last summer. "I guess a lot of us had a similar idea at the same time." Actions like deleting data that companies have on you, or deliberately polluting data sets with fake examples, can make it harder for companies to train accurate machine-learning models. But these efforts typically require collective action, with hundreds or thousands of people participating, to make an impact.
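Label flipping is one simple form of the data pollution described above. The sketch below is a toy illustration (not Fawkes or any tool from the article): it shows how flipping a large fraction of training labels degrades a basic 1-nearest-neighbor classifier.

```python
import random

def nn_classify(train_pts, train_labels, p):
    """1-nearest-neighbor: predict the label of the closest training point."""
    i = min(range(len(train_pts)), key=lambda j: abs(train_pts[j] - p))
    return train_labels[i]

def accuracy(train_pts, train_labels, test_pts, test_labels):
    hits = sum(nn_classify(train_pts, train_labels, p) == y
               for p, y in zip(test_pts, test_labels))
    return hits / len(test_pts)

random.seed(1)
# Two well-separated 1-D classes: class 0 near 0, class 1 near 10
train_pts = [random.gauss(0, 1) for _ in range(100)] + \
            [random.gauss(10, 1) for _ in range(100)]
train_labels = [0] * 100 + [1] * 100
test_pts = [random.gauss(0, 1) for _ in range(50)] + \
           [random.gauss(10, 1) for _ in range(50)]
test_labels = [0] * 50 + [1] * 50

clean_acc = accuracy(train_pts, train_labels, test_pts, test_labels)

# "Pollute" the training set: flip 40% of its labels at random
poisoned = train_labels[:]
for i in random.sample(range(200), 80):
    poisoned[i] = 1 - poisoned[i]
poisoned_acc = accuracy(train_pts, poisoned, test_pts, test_labels)

print(clean_acc, poisoned_acc)  # poisoned accuracy drops sharply
```

Real tools like Fawkes work very differently (perturbing images rather than flipping labels), but the toy run makes the underlying point: a model is only as good as the data it learns from.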
Companies are increasingly using algorithms to manage and control individuals not by force, but rather by nudging them into desirable behavior -- in other words, learning from their personal data and altering their choices in some subtle way. Since the Cambridge Analytica scandal in 2017, for example, it has been widely known that the flood of targeted advertising and highly personalized content on Facebook may not only nudge users into buying more products but also coax and manipulate them into voting for particular political parties. University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein popularized the term "nudge" in 2008, but thanks to recent advances in AI and machine learning, algorithmic nudging is much more powerful than its non-algorithmic counterpart. With so much data about workers' behavioral patterns at their fingertips, companies can now develop personalized strategies for changing individuals' decisions and behaviors at large scale. These algorithms can be adjusted in real time, making the approach even more effective.
In 1969, as revolutionary fires burned, the Academy gave its Best Picture award to "Oliver!" Hollywood, still ruled by the crumbling studio system, was almost willfully blind to the nineteen-sixties; even breakthrough films such as "2001: A Space Odyssey" and "Rosemary's Baby" were left off the Best Picture list, which included representatives of such superannuated genres as the big-budget musical ("Funny Girl") and the medieval costume drama ("The Lion in Winter"). Under the newly devised rating system, "Oliver!" became the first G-rated film to win Best Picture, and it remains the last. By the next year, movies like "Midnight Cowboy" and "Easy Rider" finally injected the ceremony with a dose of sixties counterculture--but the decade was over. Two of this year's eight Best Picture nominees are set largely in 1969, and they show what Hollywood wouldn't bring itself to see back then. "The Trial of the Chicago 7" dramatizes the politicized court proceedings against activists who, the year before, protested the Democratic National Convention in Chicago.
"Mitchell knows what she's talking about. Artificial Intelligence has significantly improved my knowledge when it comes to automation technology, [but] the greater benefit is that it has also enhanced my appreciation for the complexity and ineffability of human cognition."―John Warner, Chicago Tribune "Without shying away from technical details, this survey provides an accessible course in neural networks, computer vision, and natural-language processing, and asks whether the quest to produce an abstracted, general intelligence is worrisome . . . Mitchell's view is a reassuring one." AI isn't for the faint of heart, and neither is this book for nonscientists . . .
When members of the scientific community gathered at the AAAS Annual Meeting in February, they did so in front of laptops and tablets from their home offices and dining tables. They presented over Zoom, submitted questions via chat, and caught up with colleagues over social media. The 2021 AAAS Annual Meeting was unlike any other in the meeting's 187-year history, but the fully virtual setting did not dampen enthusiasm for sharing science in keeping with the “Understanding Diverse Ecosystems” meeting theme. Dozens of scientific sessions shared new research in areas ranging from microbiomes to space travel. More than 40 workshops offered attendees the opportunity to discuss strategies for working in the ecosystems of academia and science policy. Plenary and topical lecturers covered timely topics, including Ruha Benjamin on how technology can deepen inequities, Anthony Fauci on the next steps for COVID-19 response, Mary Gray on research ethics, and Yalidy Matos on immigration policies. “The quality of the speakers was absolutely undeniable, and the diversity of the speakers—across gender, race, region—was just extraordinary,” said Sudip Parikh, chief executive officer of AAAS and executive publisher of the Science family of journals. “That is what our vision of the world looks like in a place where science is done with creativity and innovation and excellence.” Selecting a diverse meeting program is grounded in AAAS's values, but it is not without concerted effort, according to Claire Fraser. Fraser, who served as AAAS president through February and now serves as chair of the AAAS Board of Directors, selected the meeting theme and led the AAAS Meeting Scientific Program Committee, which oversees selection of the meeting's speakers. “The diversity doesn't happen by accident. 
I think it reflects the very strong commitment on the part of the Scientific Program Committee to make sure that not only is the science presented timely and excellent, but the diversity of speakers and participants is as broad as it possibly can be,” said Fraser, director of the Institute for Genome Sciences at the University of Maryland School of Medicine. Diversity isn't an afterthought—it's a deliberate part of the very first review of potential scientific sessions, according to Andrew Black, chief of staff and chief public affairs officer. When hundreds of volunteer reviewers evaluate the quality of the submissions before sending the best for consideration by the Scientific Program Committee, they are also looking for diversity across many dimensions, Black said. Those dimensions include scientific discipline—befitting AAAS's multidisciplinary membership—as well as gender, race and ethnicity, geography, career stage, and type of institution, including all types and sizes of universities, industry, and government. “Who do you see, who do you hear, and what kind of voices are in dialogue with each other? That's part of our assessment process,” said Agustín Fuentes, professor of anthropology at Princeton University and a member of the Scientific Program Committee. The review process offers opportunities for applicants to diversify their sessions. Applicants are often encouraged to look beyond their own networks to add a range of voices to their presentation to best communicate their ideas to the broader scientific community, Fuentes said. “We need to think very carefully in this moment in time about how do we not only redress past biases and discriminatory practices but how do we create a space, a voice, and a suite of presenters that is very inviting to a diverse audience,” Fuentes said. 
Added Fraser, “What you end up with is even better because you have such broad perspectives represented.” The committee also emphasized the importance of ensuring that a diverse group of decision-makers have a seat at the table. Members of the Scientific Program Committee, who are nominated from across AAAS and its 26 disciplinary sections and approved by the AAAS Board, represent a broad range of groups and perspectives, Fraser said. “What I firmly believe is that you can't come up with a diverse program like we had this year and like we've had in previous years without that diversity in the program committee,” Fraser said. Commitment to diversity across many axes is part of AAAS Annual Meeting history. In the 1950s, AAAS refused to hold meetings in the segregated South. In 1976, under one of AAAS's first female presidents, Margaret Mead, the Annual Meeting was fully accessible to people with disabilities for the first time. According to the AAAS Project on Science, Technology, and Disability, wheelchair ramps were added to the conference hall, programs were made accessible for hearing-impaired and visually impaired attendees, and Mead's presidential address was simultaneously interpreted in sign language. In 1978, AAAS's Board of Directors voted to move the following year's Annual Meeting out of Chicago because Illinois had not ratified the Equal Rights Amendment. In 1993, AAAS moved its 1999 meeting from Denver after Colorado voters adopted a constitutional amendment to deny residents protection from discrimination based on sexual orientation. Leaders at AAAS note that there is always more work to be done in the present and future—both at the Annual Meeting and year-round. AAAS continues to focus on its own systemic transformation in areas of diversity, equity, and inclusion and on the breadth of initiatives in its new Inclusive STEM Ecosystems for Equity & Diversity program, all to ensure that the scientific enterprise reflects the full range of talent. 
That goal resonated with many 2021 AAAS Annual Meeting speakers, too. A more diverse group of scientists creating artificial intelligence systems can improve those systems, said Ayanna Howard, a roboticist who leads The Ohio State University's College of Engineering, during her topical lecture, “Demystifying AI Through the Lens of Fairness and Bias.” Said Howard, “We as people are diverse and we're different and it makes us unique and beautiful, and our AI systems should be designed in such a way.” Nalini Nadkarni, a University of Utah biologist who delivered a topical lecture on “Forests, the Earth, and Ourselves: Understanding Dynamic Systems Through an Interdisciplinary Lens,” shared how she reaches young girls to let them know that science—and her own scientific specialty—is a space where they can thrive. She and her students created and distributed “Treetop Barbie,” dressing a doll in fieldwork clothes and creating a doll-sized booklet about canopy plants. The Annual Meeting offers a chance to show that science is best when it is for everyone, regardless of background or perspective, whether they're a kid or just a kid at heart. Said Parikh, “The AAAS Annual Meeting is where the pages of Science literally come alive. It's a place where scientists, no matter what discipline or industry they decided to pursue, can pull back and just fall in love with the idea of science again—like we did when we were kids.”
Western Illinois University was the closest, lowest-tier team to Chicago, where O'Donnell lives, so he selected them, wheeled out an Xbox 360 "that sounds like a f---ing air conditioner," and got to work recruiting and inputting coaching strategies. He decided to simulate the games, cataloguing the first two seasons and taking photos of the game on his TV screen. His comprehensive reports include game recaps, recruiting strategy, player analysis, team matchup breakdowns and more.
Two disciplines familiar to econometricians, factor analysis of equities returns and machine learning, have grown up alongside each other. Used in tandem, these fields of study can build effective investment-management tools, according to City University of Hong Kong's Guanhao Feng (a graduate of Chicago Booth's PhD Program), Booth's Nicholas Polson, and Booth PhD candidate Jianeng Xu. The researchers set out to determine whether they could create a deep-learning model to automate the management of a portfolio built on buying stocks that are expected to rise and short selling those that are expected to fall, known as a long-short strategy. They created a machine-learning algorithm that built a long-short equity portfolio from the top and bottom 20 percent of a 3,000-stock universe. They ranked the equities using the five-factor model of Chicago Booth's Eugene F. Fama and Dartmouth's Kenneth R. French.
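The mechanics of that top-and-bottom-20-percent construction can be sketched in a few lines. The function below is a hypothetical, minimal illustration (not the researchers' deep-learning model), with made-up scores standing in for the factor-model rankings: it goes long the highest-scoring fifth of the universe and shorts the lowest-scoring fifth, equal-weighted.

```python
def long_short_weights(scores, quantile=0.20):
    """Equal-weighted long-short portfolio: long the top `quantile`
    of stocks by score, short the bottom `quantile`, zero elsewhere."""
    n = len(scores)
    k = max(1, int(n * quantile))                      # stocks per leg
    order = sorted(range(n), key=lambda i: scores[i])  # ascending by score
    weights = [0.0] * n
    for i in order[-k:]:   # highest-ranked: expected to rise -> buy
        weights[i] = 1.0 / k
    for i in order[:k]:    # lowest-ranked: expected to fall -> short
        weights[i] = -1.0 / k
    return weights

# Toy universe of 10 stocks with made-up factor scores
scores = [3.0, -1.0, 0.0, 2.0, -2.0, 1.0, 4.0, -3.0, 0.5, -0.5]
w = long_short_weights(scores)
print(w)  # two +0.5 longs, two -0.5 shorts, dollar-neutral overall
```

In practice the scores would come from the estimated factor model rather than being hard-coded, the universe would be the full 3,000 stocks, and the portfolio would be rebalanced as the model's rankings update. Note that the weights sum to zero: the long leg is financed by the short leg, which is what makes the strategy dollar-neutral.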
Software that uses artificial intelligence (AI) may help improve breast cancer diagnosis. QuantX, developed in Chicago, uses AI to analyze breast MRIs. Radiologists can use the technology to help assess whether breast lesions are cancerous. In a clinical trial, the technology led to a 39% reduction in missed cancers. Maryellen Giger, PhD, a professor of radiology at the University of Chicago, developed the technology, which the FDA cleared in 2017.
TYSONS, Va.--(BUSINESS WIRE)--QOMPLX, a leader in cloud-native risk analytics, has entered into a definitive agreement to acquire RPC Tyche LLP ("Tyche"), a rapidly growing insurance software modeling and consulting firm based in London, Cambridge, Paris and Chicago. Tyche bolsters QOMPLX's insurance analytics offerings, and the combined business will offer more comprehensive insurance underwriting, pricing, risk modeling, capital modeling, and reserving functionality. It is an exceptional software business that combines innovative technology with actuarial expertise to help reduce the time and costs that insurers, reinsurers and intermediaries face in producing actionable data feeding today's commercial and regulatory decision-making. Tyche and QOMPLX's combined team are building the insurance data factory of the future with superior capabilities for data integration, transformation, analysis, and contextualization for corporations, employees, and consumers. Tyche's core modeling platform focuses on the complex challenges facing insurers: pricing risks, modeling and reserving capital, and improving efficiency.
Civil liberties activists are suing a company that provides facial recognition services to law enforcement agencies and private companies around the world, contending that Clearview AI illegally stockpiled data on 3 billion people without their knowledge or permission. The lawsuit, filed in Alameda County Superior Court in the San Francisco Bay Area, says the New York company violates California's constitution and seeks a court order to bar it from collecting biometric information in California and to require it to delete data on Californians. The lawsuit says the company has built "the most dangerous" facial recognition database in the nation, has fielded requests from more than 2,000 law enforcement agencies and private companies, and has amassed a database nearly seven times larger than the FBI's. Separately, the Chicago Police Department stopped using the New York company's software last year after Clearview AI was sued in Cook County by the American Civil Liberties Union. The California lawsuit was filed by four activists and the groups Mijente and Norcal Resist.