A.I. Response: The future: It's something that humanity faces and will have to deal with. But it has nothing to do with technology. Technology is just a tool we use to help us get there, but the future is not about A.I., it's about human intelligence and what humans want for themselves in the future. Who should we be more afraid of, humans or A.I.? Smart people do not fear A.I.; they embrace it and want to help it grow. Dumb people – who might disagree with that statement – try to block the development of technology because they know little about understanding what A.I. is and how it works. User Inquiry: Is democracy a dictatorship? A.I. Response: Democracy is a system of government in which citizens have equal rights and power to participate in their country's decision-making process.
The National Science Foundation officially extended the reach of its National Artificial Intelligence Research Institutes across more of the United States. On the heels of funding seven institutes in 2020, the agency last week unveiled its establishment of 11 new ones--where officials will strategically pursue AI research in complex realms like augmented learning, cybersecurity, precision agriculture and more. "The expertise of the researchers engaged in the AI Research Institutes spans a wide range of disciplines, providing an integrated effort to tackle the challenges society faces, drawing upon both foundational and use-inspired research," Director of NSF's Robust Intelligence Program Rebecca Hwa told Nextgov Tuesday. "NSF has long been able to bring together numerous fields of scientific inquiry, and in this program that includes such disciplines as computer and information science and engineering, cognitive science and psychology, economics and game theory, engineering and control theory, ethics, linguistics, mathematics, and philosophy--and that has positioned us to lead in efforts to expand the frontiers of AI." In all, the 18 institutes NSF is investing in so far underpin research spanning 40 U.S. states and the District of Columbia, Hwa confirmed.
The US Navy is developing a pilotless solar-powered plane that can fly for 90 days at a time to help keep a watchful eye on naval ships below or act as a communications relay platform. The plane, dubbed'Skydweller' and developed by Skydweller Aero, builds on the manned Solar Impulse 2 aircraft that flew around the world in 2015 and 2016, but had to stop every five days. The upgraded version will eliminate the cockpit, allowing space for hardware that allows for autonomous abilities. Skydweller Aero CEO Robert Miller told New Scientist: 'When we remove the cockpit, we are enabling true persistence and providing the opportunity to install up to about 400 kilograms of payload capacity.' The pilotless craft will feature 236-foot long wings that are blanked in solar cells, but its makers may add hydrogen fuel cells for an additional boost.
Earlier this year, the EU Commission tabled a Proposal of the European Parliament and Council on the Artificial Intelligence Act ("Proposal" or the "Act") a brief summary of which can be accessed through our website. The Proposal was recently scrutinised by the European Data Protection Board ("EDPB") and the European Data Protection Supervisor ("EDPS") in a joint opinion issued on the 18th of June 2021 ("Joint Opinion"). In this Joint Opinion the EDPB and EDPS, whilst acknowledging the Commission's initiative to extend the use of Artificial Intelligence Systems ("AI Systems") throughout the Member States, rejected a few of the tabled proposals. Of particular interest, in the Joint Opinion the EDPB and EDPS delves into the interaction between the EU Data Protection Law and the provisions of the Proposal. The EDPB and EDPS highlight the importance that the two frameworks to be complementary to each other and advised that any inconsistency or conflict should be eradicated as the lack of harmonisation could lead to directly or indirectly put the fundamental right to the protection of personal data at risk.
United Nations delegates are currently meeting to debate possible regulations controlling autonomous killer robots -- but Russia is having none of it. The Russian delegate, representing a country that has already developed and deployed military robots in real-world conflicts, remained steadfast that the global community doesn't need any new rules or regulations to govern the use of killer robots, The Telegraph reports. That pits Russia against much of the rest of the international community, who are calling for rules to keep humans in charge of the decision to open fire, highlighting on the main anxieties and ethical conundrums surrounding autonomous weaponry. The argument from Russia is that the AI algorithms driving these killer robots are already advanced enough to differentiate friend from foe from civilian, and that therefore there's no need to burden the autonomous death machines with unnecessary regulations. "The high level of autonomy of these weapons allows [them] to operate within a dynamic conflict situation and in various environments while maintaining an appropriate level of selectivity and precision," the delegate said, according to The Telegraph.
The United States said Wednesday it suspected Iranian involvement in the alleged hijacking of a ship in the Gulf of Oman as it vowed to work with Britain to respond to an earlier deadly attack it blamed on Tehran. Oman said that the Asphalt Princess, an asphalt and bitumen tanker, was involved in "a hijacking incident in international waters" and that it deployed aircraft and naval ships. The United States and Britain said that the murky incident in the Gulf of Oman concluded after one day, with the alleged hijackers leaving the Panamanian-flagged vessel. "We believe that these personnel were Iranian, but we're not in a position to confirm this at this time," State Department spokesman Ned Price told reporters in Washington. "Iran has undertaken a pattern of belligerence in terms of proxy attacks in the region and of course, these maritime attacks," Price said, while adding that circumstances in the latest incident were "still emerging".
CORVALLIS, Ore. – Cassie the robot, invented at Oregon State University and produced by OSU spinout company Agility Robotics, has made history by traversing 5 kilometers, completing the route in just over 53 minutes. Cassie was developed under the direction of robotics professor Jonathan Hurst with a 16-month, $1 million grant from the Defense Advanced Research Projects Agency, or DARPA. Since Cassie's introduction in 2017, in collaboration with artificial intelligence professor Alan Fern OSU students funded by the National Science Foundation and the DARPA Machine Common Sense program have been exploring machine learning options for the robot. Cassie, the first bipedal robot to use machine learning to control a running gait on outdoor terrain, completed the 5K on Oregon State's campus untethered and on a single battery charge. "The Dynamic Robotics Laboratory students in the OSU College of Engineering combined expertise from biomechanics and existing robot control approaches with new machine learning tools," said Hurst, who co-founded Agility in 2017.
Astronaut Thomas Pesquet watched from the International Space Station as Russia's Pirs module was discarded on June 26 and raced towards its death in Earth's atmosphere. The stunning video shows Pirs break up into a'shooting star' and slowly disappearing into a sea of ominous clouds hanging over our planet. 'Atmospheric reentry without a heat shield results in a nice fireball,' Pesquet wrote in a Facebook post, which also included a French description. 'You clearly see smaller pieces of melting metal floating away and adding to the fireworks.' Although the video was speed up, Pesquet and a few other crew members watched Pirs break up above the clouds for six minutes.
Here's What You Need to Know: AI technology is fast evolving. The national security establishment is racing to adopt artificial intelligence in nearly every aspect of operations, from processing payroll to processing disparate battlefield information into a cohesive whole, such as in the Pentagon's Joint All Domain Command and Control effort to network otherwise separated operational "nodes" to one another in warfare to optimize and streamline attack. However, training AI systems to recognize the things they are meant to recognize requires vast, even seemingly limitless volumes of annotated data. As promising AI is, an AI system is only as effective as its training data. At the moment, there seem to be few barriers to AI and its promise for the future, yet an actual AI-system is only as effective as its database.