AI Ethics Flummoxed By Those Salting AI Ethicists That "Instigate" Ethical AI Practices

#artificialintelligence

Is it okay, or is it questionable, for salting AI Ethicists to seek to get hired by a firm solely to stoke Ethical AI precepts from within? Salting has been in the news quite a bit lately. I am not referring to the salt that you put into your food. Instead, I am bringing up the "salting" that is associated with a provocative and seemingly highly controversial practice in the interplay between labor and business. You see, this kind of salting entails the circumstance whereby a person tries to get hired into a firm to ostensibly initiate, or some might arguably say instigate, the establishment of a labor union therein. I will first cover the basics of salting and then switch to an akin topic that might catch you quite off-guard, namely that there seems to be a kind of salting taking place in the field of Artificial Intelligence (AI). This has crucial AI Ethics considerations. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few. Now, let's get into the fundamentals of how salting typically works. Suppose that a company does not have any unions in its labor force. How might a union get established there? One means would be to take action outside of the company and try to appeal to the workers that they should join a union. This might involve showcasing banners near the company headquarters, sending the workers flyers, utilizing social media, and so on. This is a decidedly outside-in type of approach. Another avenue would be to spark the effort from within, getting the ball rolling.


Newsom announces Jared Blumenfeld will no longer serve as head of California EPA

FOX News

Fox News Flash top headlines are here. Check out what's clicking on Foxnews.com. Jared Blumenfeld, California's top environmental regulator and a key climate adviser to Gov. Gavin Newsom, will leave the administration at the end of the month, Newsom announced Friday. Newsom, a Democrat, appointed Blumenfeld as secretary of the California Environmental Protection Agency on his first day in office in 2019. Blumenfeld will become the president of the Waverley Street Foundation, a $3 billion climate initiative funded by Laurene Powell Jobs.


What Ever Happened to the Transhumanists?

#artificialintelligence

Gizmodo is 20 years old! To celebrate the anniversary, we're looking back at some of the most significant ways our lives have been thrown for a loop by our digital tools. Like so many others after 9/11, I felt spiritually and existentially lost. It's hard to believe now, but I was a regular churchgoer at the time. Watching those planes smash into the World Trade Center woke me from my extended cerebral slumber and I haven't set foot in a church since, aside from the occasional wedding or baptism. I didn't realize it at the time, but that godawful day triggered an intrapersonal renaissance in which my passion for science and philosophy was resuscitated. My marriage didn't survive this mental reboot and return to form, but it did lead me to some very positive places, resulting in my adoption of secular Buddhism, meditation, and a decade-long stint with vegetarianism.


Responses to Jack Clark's AI Policy Tweetstorm

#artificialintelligence

Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I've ever read. After a brief initial introductory tweet on August 6, Clark went on to post an additional 79 tweets in this thread. It was a real tour de force. Because I'm currently finishing up a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML). Clark is a leading figure in the field of AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So, I take seriously what he has to say on AI governance matters and really learned a lot from his tweetstorm. But I also want to push back on a few things. Specifically, several of the issues that Clark raises about AI governance are not unique to AI per se; they are broadly applicable to many other emerging technology sectors, and even some traditional ones. Below, I will refer to this as my "general critique" of Clark's tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and which really do complicate the governance of computational systems.


AI Regulation: Where do China, the EU, and the U.S. Stand Today?

#artificialintelligence

Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. This technology has manifested itself in multiple forms, including natural language processing, machine learning, and autonomous systems, and with the proper inputs it can be leveraged to make predictions, recommendations, and even decisions. Accordingly, enterprises are increasingly embracing this dynamic technology. A 2022 global study by IBM found that 77% of companies are either currently using AI or exploring AI for future use, creating value by increasing productivity through automation, improved decision-making, and enhanced customer experience. Further, according to a 2021 PwC study, the COVID-19 pandemic increased the pace of AI adoption for 52% of companies as they sought to mitigate the crisis's impact on workforce planning, supply chain resilience, and demand projection.


How does information about AI regulation affect managers' choices?

#artificialintelligence

Artificial intelligence (AI) technologies have become increasingly widespread over the last decade. As the use of AI has become more common and the performance of AI systems has improved, policymakers, scholars, and advocates have raised concerns. Policy and ethical issues such as algorithmic bias, data privacy, and transparency have gained increasing attention, raising calls for policy and regulatory changes to address the potential consequences of AI (Acemoglu 2021). As AI continues to improve and diffuse, it will likely have significant long-term implications for jobs, inequality, organizations, and competition. Premature deployment of AI products can also aggravate existing biases and discrimination or violate data privacy and protection practices.


Understanding the Ethical Use of Open Data While Protecting PII

#artificialintelligence

People have been wondering for years – when and even sometimes IF artificial intelligence will live up to its incredible potential. The technology is finally beginning to change industries and lives. Now implemented across everything from smartphone cameras and self-driving vehicles to manufacturing facilities, AI has racked up numerous high-profile success stories: People now rely on AI to silently optimize photos, perfect their parallel parking, and discover product defects. AI can either be cool or creepy, but it's currently on the right side of that line. At the same time, however, the public is becoming increasingly aware of AI ethics, as researchers and journalists question the sources of data powering AI innovations, and spotlight ways AI data is being misused by tech giants.


The Coming AI Hackers

#artificialintelligence

Artificial intelligence--AI--is an information technology. And it is already deeply embedded into our social fabric, both in ways we understand and in ways we don't. It will hack our society to a degree and effect unlike anything that's come before. I mean this in two very different ways. One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage. Okay, maybe it's a bit of hyperbole, but none of this requires far-future science-fiction technology. I'm not postulating any "singularity," where the AI-learning feedback loop becomes so fast that it outstrips human understanding. My scenarios don't require evil intent on the part of anyone. We don't need malicious AI systems like Skynet (Terminator) or the Agents (Matrix). Some of the hacks I will discuss don't even require major research breakthroughs. They'll improve as AI techniques get more sophisticated, but we can see hints of them in operation today. This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving. In this essay, I will talk about the implications of AI hackers. First, I will generalize "hacking" to include economic, social, and political systems--and also our brains. Next, I will describe how AI systems will be used to hack us. Then, I will explain how AIs will hack the economic, social, and political systems that comprise society. Finally, I will discuss the implications of a world of AI hackers, and point towards possible defenses. It's not all as bleak as it might sound. Caper movies are filled with hacks. Hacks are clever, but not the same as innovations. Systems tend to be optimized for specific outcomes. 
Hacking is the pursuit of another outcome, often at the expense of the original optimization. Systems tend to be rigid. Systems limit what we can do and, invariably, some of us want to do something else. Not all of us act on that impulse, but enough of us do. Hacking is normally thought of as something you can do to computers. But hacks can be perpetrated on any system of rules--including the tax code. The tax code isn't software, but you can still think of it as "code" in the computer sense of the term. It's a series of algorithms that takes an input--financial information for the year--and produces an output: the amount of tax owed. It's deterministic, or at least it's supposed to be.
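The essay's framing of the tax code as deterministic "code" can be made concrete with a minimal sketch. The brackets and rates below are hypothetical, invented purely for illustration; they do not correspond to any real tax schedule.

```python
def tax_owed(income: float) -> float:
    """Apply hypothetical progressive brackets deterministically.

    Like the tax code described in the essay: an algorithm that takes
    financial input and produces one output, the amount of tax owed.
    """
    brackets = [  # (upper bound, marginal rate) -- illustrative only
        (10_000, 0.10),
        (40_000, 0.20),
        (float("inf"), 0.30),
    ]
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            # Tax only the slice of income that falls inside this bracket.
            owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# Deterministic: the same input always yields the same output.
print(tax_owed(50_000))  # 10_000*0.10 + 30_000*0.20 + 10_000*0.30 = 10000.0
```

A "hack" in the essay's sense would be an input crafted so the rules produce an outcome their designers never intended, without breaking any rule: the loophole is a bug in this kind of code.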


Rage Against the Machine rails against Roe v. Wade decision in return to the stage: 'abort the Supreme Court'

FOX News

Fox News Flash top headlines are here. Check out what's clicking on Foxnews.com. Alternative rock band Rage Against the Machine returned to the stage for their first performance in 11 years and did not mince words when expressing their anger about the Supreme Court overturning Roe v. Wade. During a show Saturday at Alpine Valley Music Theatre in Wisconsin, the band broadcast several captions on a screen on stage blasting the high court over its decision to reverse the 1973 ruling on abortion rights – with one caption going as far as to suggest an elimination of the court, the Milwaukee Journal Sentinel reported. In addition to speaking out in favor of abortion rights, the captions touched on a number of other hot-button issues, including a reference to women as "birth-givers" and highlighting the rate of child gun violence victims.