Do you want to create a legacy of meaningful research for the greater good? Do you want to lead and contribute to work that addresses some of the most challenging problems facing our nation? Then join us in the Data Sciences and Analytics Group at the Pacific Northwest National Laboratory (PNNL)! For more than 50 years, PNNL has advanced the frontiers of science and engineering in the service of our nation and the world in the areas of energy, the environment, and national security. PNNL is committed to advancing the state of the art in artificial intelligence through applied machine learning and deep learning to support scientific discovery and our sponsors' missions.
Deepfakes are a class of synthetic media generated by AI, and they represent another dark side of the technology. This form of artificial intelligence stole headlines last year when a LinkedIn user by the name of Katie Jones appeared on the platform and began connecting with the who's who of the political elite in Washington, D.C. It was alarming how deep learning created a lifelike image of a person and then penetrated social media to spread misinformation. With the U.S. presidential election looming, lawmakers are worried that deepfakes could greatly jeopardize the transparency of the democratic process. Many of the leading tech companies have been asked for help and are working on developing tools that can detect this fake synthetic media. Global software giant Microsoft has now released two new tools that can spot whether a piece of media has been artificially manipulated.
Google users contribute more than 20 million pieces of information to Maps every day – that's more than 200 contributions every second. Unpredictable traffic can confound the algorithms that estimate the best ETA, and new roads and buildings are being built all the time. Though Google Maps gets its ETA right most of the time, there is still room for improvement. Researchers at Alphabet-owned DeepMind have partnered with the Google Maps team to improve the accuracy of real-time ETAs by up to 50% in places like Berlin, Jakarta, São Paulo, Sydney, Tokyo, and Washington, D.C.
Does it really matter if Tesla's advanced driving system is called "Autopilot" but it doesn't actually take over driving? A new American Automobile Association (AAA) study released late Wednesday found that, actually, yes: What we call the programs and systems in our vehicles actually matters. Tesla's Autopilot is a highly automated feature that keeps drivers centered in the lane, automatically brakes, and maintains driving speeds, but it can lull drivers into complacency, even when they're still supposed to be paying attention. Tesla technically requires hands on the wheel and eyes on the road while using Autopilot. The AAA Foundation for Traffic Safety, a nonprofit research group within the car association, had 90 participants from the Washington, D.C., area learn about and then use what they were told were two different driver-assistance systems.
WASHINGTON, D.C., Sept. 3, 2020 – Internet2 confirmed the selection of two research teams using an external academic review panel for the second and final phase of the Exploring Clouds for Acceleration of Science (E-CAS) project that was first announced in November 2018. The second phase of the E-CAS project builds on lessons learned and leading practices that have been identified by the six research proposals that were selected in March 2019 with the goal of producing a deeper understanding of the use of cloud computing in accelerating scientific discoveries. "The first phase of the E-CAS project supported six teams to develop their computational workflows and test them at scale, and the results from all teams were very impressive," said Howard Pfeffer, president and CEO, Internet2. "Now in the second phase, the teams from MIT and SUNY Downstate have the opportunity to build on their technological achievements using the commercial cloud platforms with a focus on the scientific outcomes of their work." The research team from MIT has developed a range of new tools and code to take advantage of the newest graphical processing units (GPUs) and field programmable gate arrays (FPGAs) to perform accelerated machine learning tasks in Amazon Web Services (AWS) and Google Cloud using remote procedure calls from their main workflows running on high-performance clusters at MIT and FermiLab.
It has been nearly 13 years since Google Maps started providing traffic data to help people navigate, including whether the traffic along a route is heavy or light, the estimated travel time, and the estimated time of arrival (ETA). In a bid to further enhance those traffic prediction capabilities, Google and Alphabet's AI research lab DeepMind have improved real-time ETAs by up to 50% in places such as Sydney, Tokyo, Berlin, Jakarta, São Paulo, and Washington, D.C., by using a machine learning technique known as graph neural networks. Google Maps product manager Johann Lau said Google Maps uses aggregate location data and historical traffic patterns to understand traffic conditions and determine current traffic estimates, but it previously did not account for what traffic might look like if a traffic jam were to occur during the journey. "Our ETA predictions already have a very high accuracy bar -- in fact, we see that our predictions have been consistently accurate for over 97% of trips … this technique is what enables Google Maps to better predict whether or not you'll be affected by a slowdown that may not have even started yet," he said in a blog post. The researchers at DeepMind said that using graph neural networks allowed Google Maps to incorporate "relational learning biases to model the connectivity structure of real-world road networks."
Google Maps helps users navigate over one billion kilometers in more than 200 countries and territories daily, and Google says its estimated time of arrival (ETA) predictions have been consistently accurate for over 97 percent of trips. That's not good enough for Google, though, so the company partnered with DeepMind to use machine learning to make its ETAs even more accurate. Before partnering with DeepMind, an Alphabet AI research lab, Google Maps used a combination of historical traffic patterns and live traffic conditions to understand current traffic patterns. The partners wanted to be able to predict future traffic patterns, so DeepMind developed a graph neural network, which also considers data on the time of year, road quality, speed limits, accidents, and closures. Thanks to that machine learning approach, Google Maps has improved the accuracy of real-time ETAs by up to 50 percent in places like Berlin, Jakarta, São Paulo, Sydney, Tokyo, and Washington D.C. Now, Google Maps can warn users about traffic jams before they exist.
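The core idea behind the graph neural network approach can be illustrated with a toy sketch: road segments are nodes, edges connect segments that meet at an intersection, and message passing lets congestion on one segment influence the predicted delay on its neighbors. The graph, travel times, and mixing weights below are invented for illustration; this is a minimal simplification, not DeepMind's actual model.

```python
# Toy road network: segments as nodes, edges between connected segments.
# All values here are hypothetical, chosen only to illustrate message passing.
edges = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}

# Node feature: current travel time (minutes) on each segment.
travel_time = {"A": 2.0, "B": 5.0, "C": 3.0, "D": 4.0}

def message_passing_step(features, edges, self_w=0.7, neigh_w=0.3):
    """One GNN-style update: each segment mixes its own travel time with
    the mean of its neighbours', modelling how congestion on connected
    roads propagates into a segment's predicted delay."""
    updated = {}
    for node, neighbours in edges.items():
        neigh_mean = sum(features[n] for n in neighbours) / len(neighbours)
        updated[node] = self_w * features[node] + neigh_w * neigh_mean
    return updated

# Two rounds of message passing spread congestion effects two hops away.
state = travel_time
for _ in range(2):
    state = message_passing_step(state, edges)

# A route's predicted ETA is the sum of its segments' updated times.
route = ["A", "B", "C"]
eta = sum(state[s] for s in route)
print(round(eta, 2))
```

In the real system the update weights are learned from data rather than fixed, and the node features include many more signals (time of day, speed limits, incidents), but the relational structure — predictions flowing along the road graph — is the same.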
In the battle of artificial intelligence versus a human fighter pilot, it wasn't even close. The artificial intelligence algorithm, developed by Heron Systems, swept a human F-16 pilot 5-0 in a simulated dogfight in the Defense Advanced Research Projects Agency's AlphaDogfight Trials on Aug. 20. The company beat out seven other companies before going head to head with "Banger," a pilot from the District of Columbia Air National Guard and a recent graduate of the Air Force Weapons School's F-16 Weapons Instructor Course. The pilot, whose full name was not provided, is an operational fighter pilot with more than 2,000 hours in the F-16. Banger and Heron Systems' AI fought in five different basic fighter maneuver scenarios, with the simulated fights using only the Fighting Falcon's guns, and each time the AI was able to outmaneuver and take out Banger.
Members of the National Association of Insurance Commissioners (NAIC) have agreed unanimously to adopt a set of Artificial Intelligence Guiding Principles. The NAIC's AI Working Group based the document on a set of AI principles developed by the Organisation for Economic Co-operation and Development (OECD). The United States is one of the 42 countries that have adopted the OECD AI principles. The NAIC is a Kansas City, Missouri-based group for the top insurance regulators in the states, the District of Columbia, and U.S. territories. At the NAIC, regulators and representatives for consumer groups have said careful oversight of AI-based systems is important because of concerns that insurers or other entities could use AI systems to hide illegal forms of discrimination, such as race-based discrimination.
His motto is "Aim high, fly-fight-win." But for a top U.S. Air Force fighter pilot and weapons school graduate, aiming high--and in one instance aiming low--wasn't enough to prevail against an AI opponent in a simulated competition last week. The Defense Advanced Research Projects Agency (DARPA) sponsored the AlphaDogfight Trials as part of its effort to use AI to help pilots in real-time combat and to encourage developers to sign up for its Air Combat Evolution (ACE) program to design AI defense systems. The winning program, designed by Maryland-based defense contractor Heron Systems, outmaneuvered its human opponent flawlessly in a five-round sweep. Encouragingly, "Banger," a District of Columbia Air National Guard pilot and recent Air Force Weapons School Instructor Course graduate with over 2,000 hours of experience flying F-16s, was able to last longer each round.