Fox News correspondent Bill Melugin reports live from Del Rio, Texas, as the border crisis intensifies and migrant facilities are overrun. Fox News drone footage over the International Bridge in Del Rio, Texas, shows thousands of migrants being kept there as they wait to be apprehended after crossing illegally into the United States -- as local facilities are overwhelmed and the crisis at the border continues. Border Patrol and law enforcement sources told Fox News that over 4,200 migrants are waiting to be apprehended under the bridge after crossing into the United States. The new footage shows how the migrant crisis that has rocked border states, with a knock-on effect in states across the country, appears to be far from over.
Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there is also a role for government in making it easier to use AI beneficially--and in this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this line of funding for ethical AI.
Highly realistic deepfake videos didn't quite make the splash some feared they would during the 2020 presidential election. Nevertheless, deepfakes are causing trouble--for regular people. In March, the Federal Bureau of Investigation warned that it expected fraudsters to leverage "synthetic content for cyber … operations in the next 12-18 months." In deepfake videos, which first appeared in 2017, a computer-generated face (often of a real person) is superimposed on someone else. After the swap, the fraudsters can make the target person say or do just about anything.
The US Federal Trade Commission (FTC) warns of extortion scammers targeting the LGBTQ community via online dating apps such as Grindr and Feeld. As the FTC revealed, the fraudsters pose as potential romantic partners on LGBTQ dating apps, sending explicit photos and asking their targets to reciprocate. Victims who fall for the trick are then blackmailed into paying a ransom, usually in gift cards, under the threat of the shared sexual imagery being leaked to their family, friends, or employers. "To make their threats more credible, these scammers will tell you the names of exactly who they plan to contact if you don't pay up. This is information scammers can find online by using your phone number or your social media profile," the FTC said.
The big picture: A US judge ruled this week that an artificial intelligence cannot be listed as the inventor on a patent. This ruling is the latest on an issue that has come before judges in multiple countries. A court in Alexandria, Virginia, ruled that inventions can only be patented under the name of a "natural person." The decision was made against someone who tried to list two designs under the name of an AI as part of a broader project to gain worldwide recognition of AI-powered inventions. Imagination Engines, Inc. CEO Stephen Thaler built an AI called DABUS, which independently designed a new kind of drink holder and a flashing light (used to get someone's attention). The name "DABUS," along with "Invention generated by artificial intelligence," was used in the attempted patent filing for the inventions.
Artificial intelligence is being deployed in many different areas. Within higher education, it is used for college admissions and financial aid decisions. Health researchers employ it to scan the scientific literature for chemical compounds that may generate new medical treatments. E-commerce sites deploy algorithms to make product recommendations for consumers based on their areas of interest. But one of the most important growth areas lies in finance and operations. Both public and private sector organizations have large budgets to manage, and it is important that they operate efficiently and effectively. Accusations of budget inefficiencies or wasteful spending decrease public confidence and make it important to figure out how to manage resources in fair ways. To help with budgetary oversight, AI is being used for financial management and fraud detection. Advanced algorithms can spot abnormalities and outliers that can be referred to human investigators to determine whether fraud has actually taken place. It is a way to use technology to improve budget audits, personnel performance, and organizational activities. Yet it is crucial to overcome several problems that plague public sector innovation: procurement obstacles, insufficiently trained workers, data limitations, a lack of technical standards, cultural barriers to organizational change, and the need to ensure that anti-fraud applications adhere to responsible AI principles.
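The outlier-flagging step described above can be illustrated with a minimal sketch. This example uses a robust median-absolute-deviation (MAD) screen over payment amounts -- a simple statistical stand-in for the proprietary algorithms agencies actually deploy; the threshold and sample data are illustrative assumptions, not any agency's method. Flagged items would go to a human investigator for review, as the passage describes.

```python
# Minimal sketch of outlier screening for budget/fraud auditing.
# Uses median absolute deviation (MAD), which is robust to the very
# outliers we are trying to find (unlike a plain mean/std-dev z-score).
from statistics import median

def flag_outliers(amounts, threshold=5.0):
    """Return indices of amounts far from the median, in MAD units."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all values (nearly) identical; nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - med) / mad > threshold]

# Hypothetical ledger: routine payments plus one anomalous entry.
payments = [120.0, 135.5, 128.0, 131.2, 119.8, 9500.0, 125.4, 133.1]
print(flag_outliers(payments))  # flags index 5, the $9,500 payment
```

A MAD-based screen is preferred here over a mean-based z-score because, on small samples, a single extreme value inflates the standard deviation enough to hide itself; the median-based statistic does not have that failure mode.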
In April 2020, news broke that Banjo CEO Damien Patton, once the subject of profiles by business journalists, was previously convicted of crimes committed with a white supremacist group. According to OneZero's analysis of grand jury testimony and hate crime prosecution documents, Patton pled guilty to involvement in a 1990 shooting attack on a synagogue in Tennessee. Amid growing public awareness about algorithmic bias, the state of Utah halted a $20.7 million contract with Banjo, and the Utah attorney general's office opened an investigation into matters of privacy, algorithmic bias, and discrimination. But in a surprise twist, an audit and report released last week found no bias in the algorithm because there was no algorithm to assess in the first place.
This is the first part of a two-part series on the growing importance of teaching data and AI literacy to our students. This material will be included in a module I am teaching at Menlo College, but I wanted to share the blog post first to help validate the content before presenting it to my students. Apple plans to introduce new iPhone software that uses artificial intelligence (AI) to churn through the vast collection of photos that people have taken with their iPhones to detect and report child sexual abuse. See the Wall Street Journal article "Apple Plans to Have iPhones Detect Child Pornography, Fueling Priva..." for more details on Apple's plan. Apple has a strong history of working to protect its customers' privacy.
Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...
The Workshop Program of the Association for the Advancement of Artificial Intelligence's Thirty-Fifth Conference on Artificial Intelligence was held virtually from February 8-9, 2021. There were twenty-six workshops in the program: Affective Content Analysis, AI for Behavior Change, AI for Urban Mobility, Artificial Intelligence Safety, Combating Online Hostile Posts in Regional Languages during Emergency Situations, Commonsense Knowledge Graphs, Content Authoring and Design, Deep Learning on Graphs: Methods and Applications, Designing AI for Telehealth, 9th Dialog System Technology Challenge, Explainable Agency in Artificial Intelligence, Graphs and More Complex Structures for Learning and Reasoning, 5th International Workshop on Health Intelligence, Hybrid Artificial Intelligence, Imagining Post-COVID Education with AI, Knowledge Discovery from Unstructured Data in Financial Services, Learning Network Architecture During Training, Meta-Learning and Co-Hosted Competition, ...