
Collaborating Authors: engler


Three things to know about how the US Congress might regulate AI

MIT Technology Review

Schumer's plan is a culmination of many other, smaller policy actions. On June 14, Senators Josh Hawley (a Republican from Missouri) and Richard Blumenthal (a Democrat from Connecticut) introduced a bill that would exclude generative AI from Section 230 (the law that shields online platforms from liability for the content their users create). Last Thursday, the House science committee hosted a handful of AI companies to ask questions about the technology and the various risks and benefits it poses. House Democrats Ted Lieu and Anna Eshoo, with Republican Ken Buck, proposed a National AI Commission to manage AI policy, and a bipartisan group of senators suggested creating a federal office to encourage, among other things, competition with China. Though this flurry of activity is noteworthy, US lawmakers are not actually starting from scratch on AI policy.


An early guide to policymaking on generative AI

MIT Technology Review

She wanted to know if I had any suggestions, and asked what I thought all the new advances meant for lawmakers. I've spent a few days thinking, reading, and chatting with experts about this, and my answer morphed into this newsletter. Though GPT-4 is the standard bearer, it's just one of many high-profile generative AI releases in the past few months: Google, Nvidia, Adobe, and Baidu have all announced their own projects. In short, generative AI is the thing that everyone is talking about. And though the tech is not new, its policy implications are months if not years from being understood.


Responsible AI: What Does It Take to Turn Principles into Practice?

#artificialintelligence

Many agree on what responsible, ethical AI looks like, at least at a high level. But outlining key goals, like privacy and fairness, is only the first step. Policymakers need to determine whether existing laws and voluntary guidance are powerful enough tools to enforce good behavior, or whether new regulations and authorities are necessary. And organizations will need to plan for how they can shift their culture and practices to ensure they're following responsible AI advice. That could be important for compliance purposes or simply for preserving customer trust.


Can the world's de facto tech regulator really rein in AI? - Coda Story

#artificialintelligence

Artificial intelligence is creeping into every aspect of our lives. AI-powered software is triaging hospital patients to determine who gets which treatment, deciding whether an asylum seeker is lying or telling the truth in their application, and even conjuring up weird conceits for sitcoms. Just lately, these kinds of tools have been helping killer robots select their targets in the war in Ukraine. AI systems have repeatedly been shown to carry systemic biases, and their increasing centrality to the way we live makes the debate over their use all the more urgent. In typical tech fashion, AI-driven tools are advancing much faster than the laws that could theoretically govern them.


3 things the AI Bill of Rights does (and 3 things it doesn't)

#artificialintelligence

Expectations were high when the White House released its Blueprint for an AI Bill of Rights on Tuesday. Developed by the White House Office of Science and Technology Policy (OSTP), the blueprint is a non-binding document that outlines five principles that should guide the design, use and deployment of automated systems, as well as technical guidance toward implementing the principles, including recommended action for a variety of federal agencies. For many, high expectations for dramatic change led to disappointment, including criticism that the AI Bill of Rights is "toothless" against artificial intelligence (AI) harms caused by big tech companies and is just a "white paper."


Assessing the intersection of open source and AI

#artificialintelligence

Open source technology has been a driving factor in many of the most innovative developments of the digital age, so it should come as no surprise that it has made its way into artificial intelligence as well. But with trust in AI's impact on the world still uncertain, the idea that open source tools, libraries, and communities are building AI projects in the usual Wild West fashion is creating yet more unease among some observers. Open source supporters, of course, reject these fears, arguing that there is just as little oversight of the corporate-dominated activities of closed platforms. In fact, open source can be more readily tracked and monitored because it is, well, open for all to see. And this leaves us with the same question that has bedeviled technological advances through the ages: Is it better to let these powerful tools grow and evolve as they will, or should we try to control them?


Open source powers AI, yet policymakers haven't seemed to notice

#artificialintelligence

"Open source software quietly affects nearly every issue in AI policy," wrote Alex Engler in a Brookings Institution briefing, yet this is barely discussed by government policymakers. This is a mistake, and it's one that crosses the political aisle. The Trump administration barely mentioned open source in its AI policies, while the Obama administration touted open source as driving AI innovation but stopped there. In Europe things are no better, with new regulations about AI skipping the topic of open source entirely. Given how prevalent open source has become in the artificial intelligence software that companies and governments use, policymakers would do well to pay attention, noted Engler.


Don't expect AI to solve the coronavirus crisis on its own

#artificialintelligence

Scientists are exploring every possible option to help battle the coronavirus pandemic, and artificial intelligence represents an intriguing avenue. AI has been used to search for new molecules capable of treating Covid-19, to scan lung CTs for signs of Covid-related pneumonia, and to aid the epidemiologists who tracked the disease's spread early on. The technology is even powering new tracking software that might help identify those walking around with a fever or catch people violating quarantine rules. But how much faith should people really have in these untested tools? In a recent brief, Alex Engler, who studies AI at the Brookings Institution, warned that people should manage their expectations.


Action Plan for HR as Artificial Intelligence Spreads

#artificialintelligence

Will robots take my job? Yes, there's actually a website that indicates the likelihood of you being displaced by a bot. Gartner TalentNeuron data shows that by 2020, artificial intelligence will be pervasive in new software products and services, and AI will become a positive net job motivator, creating 2.3 million jobs while eliminating only 1.8 million. "Beyond the net impact on employment numbers, AI is changing the skills needed to perform today's jobs," Scott E. Engler, Gartner VP, Advisory, said at ReimagineHR 2018 in Orlando, FL. Roles are shifting to focus increasingly on social-creative skills (which AI can't perform) and digital dexterity skills (the skills for working with technology and its outputs).