Public Policy


Contributor: Rob Reiner reshaped how California understands and invests in children

Los Angeles Times

Hollywood director Rob Reiner engineered Proposition 10, a 1998 tobacco tax that created First 5 California, generating more than $11 billion for early childhood programs statewide. After his tragic death Sunday, the world remembers Rob Reiner as a cinematic force -- and he was one, as an unforgettable presence on the ambitious 1970s sitcom "All in the Family" and later as the director of beloved films. I came to know him differently: as a restless thinker who transformed his own life story into bold public policy, reshaping how California understands and invests in its youngest children.


The normalization of (almost) everything: Our minds can get used to anything, and even crises start feeling normal

Science

For a long time, many climate scientists and advocates held onto an optimistic belief that once the impacts of climate change became undeniable, people and governments would act. But while the predictions of climate models have increasingly been borne out, the assumptions about human behavior have not. Even as disasters mount, climate change remains low on voters' priority lists, and policy responses remain tepid. To me, this gap reflects a deeper failure -- not just in policy or communication, but in how we understand human adaptability. When I began my career as a computational cognitive scientist, I was drawn to a defining strength of human cognition: a marked ability to adapt.


California lawmakers are trying to regulate AI before it's too late. Here's how

Los Angeles Times

For four years, Jacob Hilton worked for one of the most influential startups in the Bay Area -- OpenAI. His research helped test and improve the truthfulness of AI models such as ChatGPT. He believes artificial intelligence can benefit society, but he also recognizes the serious risks if the technology is left unchecked. Hilton was among 13 current and former OpenAI and Google employees who this month signed an open letter that called for more whistleblower protections, citing broad confidentiality agreements as problematic. "The basic situation is that employees, the people closest to the technology, they're also the ones with the most to lose from being retaliated against for speaking up," says Hilton, 33, now a researcher at the nonprofit Alignment Research Center, who lives in Berkeley.


'Huge egos are in play': behind the firing and rehiring of OpenAI's Sam Altman

The Guardian

OpenAI's messy firing and re-hiring of its powerful chief executive this week shocked the tech world. But the power struggle has implications beyond the company's boardroom, AI experts said. It throws into relief the greenness of the AI industry and the strong desire in Silicon Valley to be first, and raises urgent questions about the safety of the technology. "The AI that we're looking at now is immature. There are no standards, no professional body, no certifications. Everybody figures out how to do it, figures out their own internal norms," said Rayid Ghani, a professor of machine learning and public policy at Carnegie Mellon University.


UK needs AI legislation to create trust so companies can 'plug AI into British economy' – report

AIHub

The British government should offer tax breaks for businesses developing AI-powered products and services, or applying AI to their existing operations, to "unlock the UK's potential for augmented productivity", according to a new University of Cambridge report. Researchers argue that the UK currently lacks the computing capacity and capital required to build "generative" machine learning models fast enough to compete with US companies such as Google, Microsoft or OpenAI. Instead, they call for a UK focus on leveraging these new AI systems for real-world applications – such as developing new diagnostic products and addressing the shortage of software engineers – which could provide a major boost to the British economy. However, the researchers caution that without new legislation to ensure the UK has solid legal and ethical AI regulation, such plans could falter: British industries and the public may struggle to trust emerging AI platforms such as ChatGPT enough to invest time and money into skilling up. The policy report is a collaboration between Cambridge's Minderoo Centre for Technology and Democracy, Bennett Institute for Public Policy, and ai@cam: the University's flagship initiative on artificial intelligence.


Ex-Google chief built 'oligarch-style empire' to influence AI, Biden White House and public policy: report

FOX News

Former Google CEO Eric Schmidt has developed a vast network of strategic investments and political relationships that has allowed the tech billionaire to wield significant influence over artificial intelligence and public policy in Washington, D.C., according to an explosive new report. The Bull Moose Project, a nonprofit advocacy group committed to developing "the next generation of America First leaders and policies," has spent months investigating Schmidt's financial disclosures, tax records, business documents and other publicly available information. On Thursday, the group released a report outlining its findings, first obtained by Fox News Digital. "Americans don't want to believe that they live under 'the rule of the few,' rather than a democracy's 'rule of the many' – but this sobering report is a wake-up call that our elected representatives can't ignore," said Aiden Buzzetti, president of the Bull Moose Project. "What we've put together reinforces the puppet-master role that big tech's leaders play in the public's lives. All items in this database and report are backed by reputable, verifiable sources, and we plan to update it regularly so that the public has access to Schmidt's dealings, even if government refuses to disclose them. Get ready for your mind to be blown."


Climate Policy Tracker: Pipeline for automated analysis of public climate policies

Żółkowski, Artur, Krzyziński, Mateusz, Wilczyński, Piotr, Giziński, Stanisław, Wiśnios, Emilia, Pieliński, Bartosz, Sienkiewicz, Julian, Biecek, Przemysław

arXiv.org Artificial Intelligence

The number of standardized policy documents regarding climate policy and their publication frequency is increasing significantly. The documents are long and tedious for manual analysis, especially for policy experts, lawmakers, and citizens who lack access or domain expertise to utilize data analytics tools. Potential consequences of such a situation include reduced citizen governance and involvement in climate policies and an overall surge in analytics costs, rendering them less accessible for the public. In this work, we use a Latent Dirichlet Allocation-based pipeline for the automatic summarization and analysis of the 10-year national energy and climate plans (NECPs) covering the period from 2021 to 2030, established by the 27 Member States of the European Union. We focus on analyzing policy framing, the language used to describe specific issues, to detect essential nuances in the way governments frame their climate policies and achieve climate goals. The methods leverage topic modeling and clustering for the comparative analysis of policy documents across different countries. The pipeline allows for easier integration into user-friendly applications supporting the development of theories and processes of climate policy, which would further improve citizen governance and engagement over climate policies and public policy research.
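The core of such a pipeline can be illustrated in a few lines. The sketch below is not the authors' code; it is a minimal, hypothetical stand-in using scikit-learn that shows the general pattern the abstract describes: vectorize documents as bags of words, fit an LDA topic model so each document becomes a distribution over topics, then cluster those topic mixtures to compare how different countries frame the same policy area. The toy document texts are invented for illustration.

```python
# Hedged sketch of an LDA topic-modeling + clustering pipeline for
# comparing policy-document framing. The documents below are toy
# stand-ins, not real NECP texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = [
    "renewable energy targets wind solar capacity expansion grid",
    "wind solar capacity renewable investment grid expansion",
    "carbon tax emissions trading price signal industry compliance",
    "emissions trading carbon price industry compliance reduction",
]

# 1) Bag-of-words representation of the policy documents
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# 2) Topic model: each document becomes a distribution over n_components topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # shape: (n_docs, n_topics), rows sum to 1

# 3) Cluster documents (e.g. member states) by topic mixture
#    to surface groups that frame climate policy similarly
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_topics)
print(doc_topics.shape, list(clusters))
```

In a real setting the documents would be full NECP texts (after tokenization, stop-word removal, and similar preprocessing), the number of topics would be chosen by model selection, and the resulting clusters could then be inspected against country metadata.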


SEC's Gary Gensler on how artificial intelligence is changing finance

#artificialintelligence

Artificial intelligence is giving finance a boost -- through robo-advising, improved fraud detection and claims processing, and more. Despite the upsides, there are risks and public policy challenges that must be considered, said Gary Gensler, chair of the Securities and Exchange Commission and a former professor at MIT Sloan. "I think that we're living in a truly transformational time," said Gensler, who spoke at the recent AI Policy Forum summit at MIT. Artificial intelligence is "every bit as transformational as the internet," especially when it comes to predictive data analytics, "but it comes with some risks." During the conversation, Gensler shared his thoughts on how artificial intelligence is changing finance. Having solid predictive models is crucial in AI, whether it's in social media or in driverless cars.


AI for smarter legislation

#artificialintelligence

Legislation is an inherently human endeavor. But just as organizations across industries are unlocking new capabilities and efficiencies through artificial intelligence (AI), governments can also aid their legislative processes through the application of AI. For the past five years, we've studied the potential impact of AI on government. We've looked at everything from how much time AI could save workers in each US federal agency to the rate of AI adoption in US federal, state, and local governments. While AI can help many different areas of the legislative process -- from AI assistants answering members' questions about legislation to natural language processing analyzing the US Code for contradictions -- two key applications stand out.


Robot law: Public policy, legal liability, and the new world of autonomous systems

#artificialintelligence

Algorithmic disgorgement might sound like a phrase from a science-fiction horror film. In fact, it's a new tool for regulators to address the consequences of autonomous systems, ordering companies to remove or destroy algorithms and models in their products based on data obtained unfairly or deceptively. This is one of the topics and papers to be presented and discussed at We Robot, an annual conference where scholars and technologists discuss legal and policy questions relating to robots and artificial intelligence. We Robot is taking place next week, from Sept. 14-16, at the University of Washington in Seattle, with a virtual option as well. It's also an example of how the legal and regulatory landscape for robots, AI, and autonomous systems has changed in the decade since the conference was first held at the University of Miami in 2012. "We've come very far," said Ryan Calo, one of the organizers of the conference, a University of Washington law professor who specializes in areas including privacy, artificial intelligence and robots.