On the floor of the New York Auto Show this week, Genesis showed off its sweet little Mint concept, an electric two-seater with a very abbreviated sedan body. The Hyundai luxury arm does not, however, have any plans to put the adorable thing into production--perhaps because, as we learned this week, getting world-changing tech into the market takes a fair amount of elbow grease. Elon Musk's Boring Company is slowly making its way through the necessary paperwork to make its DC to Baltimore Loop concept a real, live thing. Uber is rounding up the oodles of cash it needs to develop self-driving vehicles. "Flying taxi" engineers are trying to get their concepts past now-nervous aviation regulators.
Artificial intelligence (AI) is now omnipresent, and the latest domain to embrace it is the military. In recent times, AI has become a critical part of modern warfare. Compared with conventional systems, military establishments that churn out enormous volumes of data can integrate AI into a more unified process. AI improves the self-regulation, self-control and self-actuation of combat systems, thanks to its inherent computing power coupled with accurate decision-making capabilities, thereby ensuring operational efficiency. Given the enormous potential AI holds for modern warfare, many of the world's most powerful countries have increased their investments in military AI and security.
Just a week after it was announced, Google's new AI ethics board is already in trouble. The board, founded to guide "responsible development of AI" at Google, would have had eight members and met four times over the course of 2019 to consider concerns about Google's AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. Of the eight people listed in Google's initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won't serve, and two others are the subject of petitions calling for their removal -- Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James's removal.
The FBI has failed to address concerns about the use of its facial recognition technology in criminal investigations. Multiple issues were raised three years ago, when a congressional watchdog urged the bureau to improve its practices in order to meet privacy and accuracy standards. The FBI - along with other US law enforcement agencies - has been using the Next Generation Identification-Interstate Photo System since 2015. It uses facial recognition software to link potential suspects to crimes, drawing on a vast database of 30 million pictures, including mugshots. The report slamming the FBI for its failure to moderate the software comes as the bureau increases its use of the technology.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. It only takes 10 Spotpower (SP) to haul a truck across the Boston Dynamics parking lot (1 degree uphill, truck in neutral). These Spot robots are coming off the production line now and will be available for a range of applications soon.
The federal government wants to hold Mark Zuckerberg personally accountable for Facebook's privacy woes. According to a report in the Washington Post, the Federal Trade Commission (FTC) is currently investigating Facebook and looking into whether Facebook's founder and CEO should be held liable for the company's data mishandling and privacy issues. Facebook and the FTC have been in discussions for more than a year over the agency's probe into the company. Sources familiar with these discussions say that the FTC is mulling over an unusual decision to hold Zuckerberg himself accountable for the company's data leaks and breaches. The FTC does not regularly go after executives when levying fines or other penalties for a company's wrongdoing.
Artificial intelligence systems can – if properly used – help make government more effective and responsive, improving the lives of citizens. Improperly used, however, the dystopian visions of George Orwell's "1984" become more realistic. On their own and urged by a new presidential executive order, governments across the U.S., including state and federal agencies, are exploring ways to use AI technologies. As an AI researcher of more than 40 years who has been a consultant or participant in many government projects, I believe it's worth noting that sometimes they've done it well – and other times not quite so well. The potential harms and benefits are significant.
Despite concerns over facial recognition's impact on civil liberties, public agencies have continued to apply the tool liberally across the U.S., with one of the biggest deployments coming to an airport near you. The U.S. Department of Homeland Security (DHS) said that it plans to expand its application of facial recognition to 97 percent of all passengers departing the U.S. by 2023, according to the Verge. By comparison, the technology was deployed in just 15 airports as of the end of 2018. In what is being referred to as 'biometric exit,' the agency plans to use facial recognition to more thoroughly track passengers entering and leaving the country. The system functions by taking a picture of passengers before they depart and then cross-referencing the image with a database containing photos of passports and visas.
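The article doesn't detail DHS's matching algorithm, but face-recognition systems of this kind typically convert each photo into a numeric embedding vector and compare it against a gallery of enrolled images by similarity. A minimal sketch of that cross-referencing step, with invented function names, toy three-dimensional embeddings, and an illustrative threshold (real systems use learned embeddings with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_passenger(gate_embedding, gallery, threshold=0.9):
    """Return the gallery identity most similar to the photo taken at
    the gate, or None if no candidate clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(gate_embedding, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy gallery standing in for passport/visa photo embeddings.
gallery = {
    "passport:X123": [0.9, 0.1, 0.3],
    "visa:V456": [0.1, 0.8, 0.5],
}

print(match_passenger([0.88, 0.12, 0.31], gallery))  # close match -> passport:X123
print(match_passenger([0.0, 0.0, 1.0], gallery))     # no match above threshold -> None
```

The threshold is the policy-relevant knob: set it low and the system produces more false matches; set it high and travelers are flagged for manual review, which is where the accuracy concerns raised by critics come in.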
Your reporting on the use of facial recognition in China for "minority identification" is a stark reminder that the battle over the future of artificial intelligence will not simply be about who gathers the top scientists or who is first to innovate. It will also be about who is able to preserve fundamental rights during a period of rapidly changing technology. The White House has already made some progress on this front, highlighting American values, including privacy and civil liberties, in an executive order earlier this year, and backing an important international framework at the Organization for Economic Cooperation and Development. But there is much more to be done. The United States must work with other democratic countries to establish red lines for certain A.I. applications and ensure fairness, accountability and transparency as A.I. systems are deployed.
During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold. In this blog, I'll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief--how we can prevent disasters in the first place and get help to victims during that first golden hour, when immediate relief can save lives. When it comes to immediate and high-precision emergency response, data is gold. Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet. Beyond democratizing the world's information, this upsurge in connectivity will soon grant anyone the ability to broadcast detailed geo-tagged data--particularly those most vulnerable to natural disasters.