Unethical AI


Will The White House's Artificial Intelligence "Bill of Rights" Protect Consumers from Big-Tech's Advertising Abuses?

#artificialintelligence

The Biden administration has released a document that it believes should define the standards for responsible use of one of the most consequential technologies shaping the future: Artificial Intelligence (AI). The document, "The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People," was released by the White House Office of Science and Technology Policy (OSTP). It lays out five guiding principles that the OSTP believes should guide the "design, use, and deployment" of automated systems in order to protect Americans in the age of AI. The Blueprint emphasizes building safe and effective AI systems, protecting against algorithmic discrimination, safeguarding data privacy, providing clear notice and explanations of how AI may be used, and offering alternative options for consumers who choose to opt out. Governmental guidance on AI may sound innovative, but at least 60 countries already have national AI protocols; the United States is merely playing catch-up.


What It Takes To Create And Implement Ethical Artificial Intelligence

#artificialintelligence

Artificial intelligence "acts" unethically in ways that differ from humans, even when the harms that both AI and humans can cause are similar. For example, while both humans and AI can invade people's privacy, discriminate, or cause physical harm, artificial intelligence does not act with the intention to cause such harm. Rather, the harm results from how artificial intelligence collects and processes data. Currently, artificial intelligence cannot achieve consciousness, though one Google engineer disagrees. Today, the type of artificial intelligence that companies are creating and incorporating into their operations and decision systems is artificial narrow intelligence, which refers to a computer's ability to perform a single task, or a limited set of tasks, extremely well.


We used game theory to determine which AI projects should be regulated

#artificialintelligence

Ever since artificial intelligence (AI) made the transition from theory to reality, research and development centers across the world have been rushing to come up with the next big AI breakthrough. This competition is sometimes called the "AI race". In practice, though, there are hundreds of "AI races" heading towards different objectives. Some research centers are racing to produce digital marketing AI, for example, while others are racing to pair AI with military hardware. Some races are between private companies and others are between countries.


Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it.

Washington Post - Technology News

In an internal memo that he later posted online explaining Gebru's departure, Dean told employees that the paper "didn't meet our bar for publication" and "ignored too much relevant research" on recent positive improvements to the technology. Gebru's superiors had insisted that she and the other Google co-authors either retract the paper or remove their names. Employees in Google Research, the department that houses the ethical AI team, say authors who make claims about the benefits of large language models have not received the same scrutiny during the approval process as those who highlight the shortcomings.


Framing Right Testing Strategy to Avoid Challenges of Unethical AI

#artificialintelligence

The benefits of artificial intelligence are flourishing across several industries, and the technology is finding its way into all kinds of technical domains. From education to manufacturing, it has served every sector for the better while driving innovation across its verticals. But, as experts warn, the broader AI use becomes, the higher the risk of "AI gone wrong," where algorithms evolve on their own to make unintended decisions. In a recent blog for Forrester, Vice President and Principal Analyst Diego Lo Giudice discussed the expansion of artificial intelligence and the increased need for checks and balances. However, testing AI is not as simple as testing traditional software, and as Lo Giudice puts it, how can one test something when the desired or anticipated outcome is unknown?
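One common answer to that question, when the "correct" output of a model is unknown, is metamorphic testing: instead of asserting an exact answer, a test asserts a relation that must hold between the outputs of related inputs. The sketch below is illustrative only and is not taken from Lo Giudice's post; `sentiment_score` is a hypothetical stand-in for whatever model is under test.

```python
# Minimal sketch of metamorphic testing for an ML model whose exact
# correct output is unknown. We test *relations* between outputs of
# related inputs rather than comparing against a fixed expected value.
# `sentiment_score` is a hypothetical toy model used for illustration.

def sentiment_score(text: str) -> float:
    """Toy placeholder model: positive-word count minus negative-word count."""
    positives = {"good", "great", "excellent"}
    negatives = {"bad", "poor", "terrible"}
    words = text.lower().split()
    return sum(w in positives for w in words) - sum(w in negatives for w in words)

def test_adding_praise_never_lowers_score():
    # Metamorphic relation: appending a positive word should never
    # decrease the score, even though we don't know the "true" score.
    base = sentiment_score("the product is good")
    boosted = sentiment_score("the product is good and great")
    assert boosted >= base

def test_word_order_invariance():
    # Relation: for a bag-of-words model, word order must not matter.
    assert sentiment_score("good bad great") == sentiment_score("great good bad")

test_adding_praise_never_lowers_score()
test_word_order_invariance()
```

The same pattern applies to real models: pick transformations of the input (paraphrase, reorder, add noise) whose effect on the output is predictable in direction even when the exact output is not.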