With a bit of a drum roll, I now present you with the official statement about the anomaly (as per the NASA website): "Approximately 54 seconds into the flight, a glitch occurred in the pipeline of images being delivered by the navigation camera. This glitch caused a single image to be lost, but more importantly, it resulted in all later navigation images being delivered with inaccurate timestamps. From this point on, each time the navigation algorithm performed a correction based on a navigation image, it was operating on incorrect information about when the image was taken. The resulting inconsistencies significantly degraded the information used to fly the helicopter, leading to estimates being constantly 'corrected' to account for phantom errors."
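The mechanism NASA describes — corrections computed against the wrong capture time — can be illustrated with a toy one-dimensional estimator. This is a minimal sketch under assumed constants (velocity, frame interval, correction gain); it is not Ingenuity's actual navigation filter.

```python
# Hypothetical 1-D sketch of the timestamp-glitch failure mode; the constants,
# gain, and filter structure are illustrative assumptions, not NASA's actual
# navigation algorithm.

VELOCITY = 1.0   # m/s, assumed constant true velocity
DT = 0.1         # s between navigation images
GAIN = 0.5       # blending gain between prediction and measurement

def fuse(estimate, measurement, stamp_t, now_t):
    """Propagate the estimate to the image's (possibly wrong) timestamp,
    blend in the measurement, then propagate back to the current time."""
    predicted = estimate + VELOCITY * (stamp_t - now_t)
    blended = predicted + GAIN * (measurement - predicted)
    return blended + VELOCITY * (now_t - stamp_t)

est_good = est_bad = 0.0
for k in range(1, 11):
    now = k * DT
    true_pos = VELOCITY * now
    # Correct timestamps: prediction matches the image, correction cancels.
    est_good = fuse(est_good + VELOCITY * DT, true_pos, now, now)
    # Glitched timestamps lag one frame, so every image looks "off" by
    # VELOCITY * DT and the filter keeps correcting a phantom error.
    est_bad = fuse(est_bad + VELOCITY * DT, true_pos, now - DT, now)

print(f"good estimate error: {est_good - 1.0:+.4f} m")
print(f"bad  estimate error: {est_bad - 1.0:+.4f} m")
```

With correct timestamps the measurement agrees with the prediction and no correction is applied; with a one-frame timestamp lag, every image appears displaced by one frame of motion, and the filter settles into a persistent phantom offset of roughly `VELOCITY * DT`.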
There are dozens of routes that Alaska Airlines Flight 1405 can take from Oklahoma City to Seattle, and dispatcher Brad Ward zeroed in on what he thought was the best one, taking into account weather, wind speeds, and other air traffic. But his new colleague at the Alaska Airlines operations center had other thoughts. A storm cell near Oklahoma City was likely to turn into a thunderstorm around the time Flight 1405 took off, and the airspace north of Amarillo would be closed for military exercises. Better to reroute, the young colleague said, suggesting an alternative that Ward admitted was safer and more efficient. The entire conversation lasted just seconds and passed without a word being spoken: a red box lit up on Ward's computer screen when the colleague, an artificial intelligence program he has affectionately nicknamed Algo, had an idea.
Gupta, Abhishek, Royer, Alexandrine, Wright, Connor, Khan, Falaah Arif, Heath, Victoria, Galinkin, Erick, Khurana, Ryan, Ganapini, Marianna Bergamaschi, Fancy, Muriam, Sweidan, Masa, Akif, Mo, Butalid, Renjie
The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Turner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D'Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women's contributions to knowledge and scholarship in the United States and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
While AI has benefited humans, it may also harm them if not developed appropriately. We conducted a literature review of current work on developing AI systems from an HCI perspective. Unlike other approaches, our focus is on the unique characteristics of AI technology and the differences between non-AI computing systems and AI systems. We further elaborate on the human-centered AI (HCAI) approach that we proposed in 2019. Our review and analysis highlight unique issues in developing AI systems that HCI professionals have not encountered in non-AI computing systems. To further enable the implementation of HCAI, we promote the research and application of human-AI interaction (HAII) as an interdisciplinary collaboration. There are many opportunities for HCI professionals to play a key role and make unique contributions to the main HAII areas we identified. To support future HCI practice in the HAII area, we also offer enhanced HCI methods and strategic recommendations. In conclusion, we believe that promoting HAII research and application will further enable the implementation of HCAI, allowing HCI professionals to address the unique issues of AI systems and develop human-centered AI systems.
The trustworthiness of Robots and Autonomous Systems (RAS) has gained a prominent position on many research agendas towards fully autonomous systems. This research systematically explores, for the first time, the key facets of human-centered AI (HAI) for trustworthy RAS. In this article, five key properties of a trustworthy RAS are first identified. RAS must be (i) safe in any uncertain and dynamic surrounding environments; (ii) secure, thus protecting itself from any cyber-threats; (iii) healthy with fault tolerance; (iv) trusted and easy to use to allow effective human-machine interaction (HMI); and (v) compliant with the law and ethical expectations. Then, the challenges in implementing trustworthy autonomous systems are analytically reviewed with respect to the five key properties, and the roles of AI technologies are explored to ensure the trustworthiness of RAS with respect to safety, security, health and HMI, while reflecting the requirements of ethics in the design of RAS. While applications of RAS have mainly focused on performance and productivity, the risks posed by advanced AI in RAS have not received sufficient scientific attention. Hence, a new acceptance model of RAS is provided, as a framework for requirements for human-centered AI and for implementing trustworthy RAS by design. This approach promotes human-level intelligence to augment human capacity, while focusing on contributions to humanity.
There's no denying that data is the backbone on which modern companies operate. Organizations, big and small, use it to make critical decisions and drive business forward. Whether it's self-driving cars, social networking, entertainment, music, health care or something else, every industry today is data-enabled, contributing to the generation of diverse data sets. Real-time and batch updates from sensors, software and hardware contribute to the speed at which data is generated. Every day 2.5 quintillion bytes of data are generated worldwide thanks to an always-on culture with billions of connected consumers and IoT devices.
Delseny, Hervé, Gabreau, Christophe, Gauffriau, Adrien, Beaudouin, Bernard, Ponsolle, Ludovic, Alecu, Lucian, Bonnin, Hugues, Beltran, Brice, Duchel, Didier, Ginestet, Jean-Brice, Hervieu, Alexandre, Martinez, Ghilaine, Pasquet, Sylvain, Delmas, Kevin, Pagetti, Claire, Gabriel, Jean-Marc, Chapdelaine, Camille, Picard, Sylvaine, Damour, Mathieu, Cappi, Cyril, Gardès, Laurent, De Grancey, Florence, Jenn, Eric, Lefevre, Baptiste, Flandin, Gregory, Gerchinovitz, Sébastien, Mamalet, Franck, Albore, Alexandre
Machine Learning (ML) seems to be one of the most promising solutions to automate partially or completely some of the complex tasks currently performed by humans, such as driving vehicles, recognizing voice, etc. It is also an opportunity to implement and embed new capabilities out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.
A sports car capable of switching from road vehicle to aircraft in only three minutes has successfully flown 1,500 ft (460 m) over Slovakia during a test flight, according to a video from the developers shared on YouTube. Footage of the AirCar -- which the Slovakian firm KleinVision developed -- shows the vehicle driving to a runway before halting to deploy its wings, with which it soared majestically into the open sky. As the fifth prototype of the flying car, developers say it could be perfect for leisure and self-driving adventures -- and perhaps even as a commercial taxi service. No details on price were revealed for the futuristic vehicle -- but as of writing it can go roughly 620 miles (1,000 km) in one shot, and might be seen in the air and on public roads starting next year. We can't emphasize this enough: the public might see flying cars sweeping through the air in 2021.
Artificial Intelligence (AI) is a technology that fuels machines with human intelligence -- machines that have AI capabilities can automate manual tasks and learn on the go just like humans. Such automation puts repetitive and time-consuming tasks under AI-powered systems that learn with time and can eventually carry out critical tasks and make decisions on their own. Such unique potential drove transportation businesses to start investing in AI technology to improve revenue and stay ahead of their competitors. The transportation industry has only just begun to apply AI to critical tasks; however, the reliability and safety of AI in transport are still in question. Major challenges in transport, such as safety, capacity issues, environmental pollution, and reliability, provide a huge opportunity for AI innovation.
A lot of the conversation about the future of AI and automation focuses on the AGI endgame ("will humans still work when artificial general intelligence can do everything?"). But there are more interesting, tractable, and concrete questions to answer about the effects of "narrow," task-specific AI that looks more or less like what we have today. In the near future, we can expect more advanced robotics, autonomous cars, customer service chatbots, and other applications powered by such narrow AI to take over certain tasks from humans. Should we be optimistic about labor in the next 10-50 years, when parts of industries will be automated by narrow AI? What early signs of those trends should we be concerned about now?