The General Services Administration expects that its new partnership with the Pentagon's Joint Artificial Intelligence Center will ultimately lead to significant benefits for civilian agencies. The GSA is working with the JAIC, which was established last year to speed up AI adoption across the Pentagon, to streamline the center's acquisition process by bringing AI into that work; GSA officials said they hope to turn that experience around and offer it to civilian agencies. "We're able to utilize a lot of that educational material [and] best practices that they're getting and scale it up, standardize it in a sense so it can be spread among civilian agencies," said Omid Ghaffari-Tabrizi, acquisition lead at the GSA Centers of Excellence, speaking Dec. 5 at the GovernmentCIO AI and RPA in Government conference. "All of the AI that we're procuring for them, we're also hoping to procure for ourselves," Ghaffari-Tabrizi added. One frustration with the acquisition process is the time it takes from the start of a project to its end.
MUNICH ― U.S. Defense Secretary Mark Esper on Saturday called out China as America's main adversary and warned allies that letting the Chinese firm Huawei build its next-generation, or 5G, network risks their security cooperation and information sharing arrangements with the U.S. "Reliance on Chinese 5G vendors, for example, could render our partners' critical systems vulnerable to disruption, manipulation and espionage," Esper said in a speech at the high-level Munich Security Conference. "It could also jeopardize our communication and intelligence sharing capabilities, and by extension, our alliances." Adopting Huawei's equipment on allies' 5G networks, Esper said, "could inject serious risk into our defense cooperation." It was a tough statement partially at odds with other U.S. officials, including Secretary of State Mike Pompeo, who offered assurances last week that U.S.-U.K. intelligence sharing remained strong despite Britain's decision to include Huawei in some parts of its nascent 5G network. A day earlier, the White House's point person for international telecommunications policy, Robert Blair, told reporters: "There will be no erosion in our overall intelligence sharing."
Imagine getting to a courthouse and seeing paper signs stuck to the doors with the message "Systems down." What about police officers in the field unable to access information on laptops in their vehicles, or surgeries delayed in hospitals? That's what can happen to a city, police department, or hospital in a ransomware attack. Ransomware is malicious software that can encrypt or control computer systems. Criminals who launch these attacks can then refuse to return access until they get paid.
Artificial intelligence (AI) is permeating various sectors, including defense. It is therefore important to identify the implications of this rising technology for how we handle national security, and what must be done to ensure the nation remains safe as AI becomes more dominant. This question is the central issue of The Department of Defense Posture for Artificial Intelligence, a report produced by the RAND Corporation as mandated by the U.S. Department of Defense (DoD). The report is available to download as a free ebook.
The White House released a budget proposal this week that, at first glance, looks like a big win for the fields of artificial intelligence and machine learning. The budget for fiscal year 2021 (which begins in October) would ramp up spending for AI research at DARPA (the Pentagon's research arm) and the National Science Foundation by roughly $549 million combined. The budget request, which still needs to be approved by Congress, increases AI funding from $50 million to $249 million at DARPA, and from $500 million to $850 million at NSF. But while technologists applaud the increased investment in AI, the White House budget proposal is giving many in the science community pause. Overall, the budget proposes $142.2 billion in spending for research and development, a 9% cut from current levels.
The US military is developing a portable face-recognition device capable of identifying individuals from a kilometre away. The Advanced Tactical Facial Recognition at a Distance Technology project is being carried out for US Special Operations Command (SOCOM). It commenced in 2016, and a working prototype was demonstrated in December 2019, paving the way for a production version. SOCOM says the research is ongoing, but declined to comment further. Initially designed for hand-held use, the technology could also be used from drones.
The future of artificial intelligence (AI) is here: self-driving cars, grocery-delivering drones and voice assistants like Alexa that control more and more of our lives, from the locks on our front doors to the temperatures of our homes. These systems raise thorny ethical questions: should an autonomous vehicle, for example, swerve into a pedestrian or stay its course when facing a collision? Such questions plague technology companies as they develop AI at a clip outpacing government regulation, and have led Seattle University to develop a new ethics course for the public. Launched last week, the free, online course for businesses is the first step in a Microsoft-funded initiative to merge ethics and technology education at the Jesuit university. Seattle U senior business-school instructor Nathan Colaner hopes the new course will become a well-known resource for businesses "as they realize that [AI] is changing things," he said.
Researchers at the Tampa veterans' hospital are training computers to diagnose cancer. It's one example of how the Department of Veterans Affairs is expanding artificial intelligence development. Inside a laboratory at the James A. Haley Veterans' Hospital in Tampa, Fla., machines are rapidly processing tubes of patients' body fluids and tissue samples. Pathologists examine those samples under microscopes to spot signs of cancer and other diseases. But distinguishing certain features about a cancer cell can be difficult, so Drs.
In 2020, we will see U.S. governments shift the conversation from who implements AI fastest to how to implement it most responsibly. While China is already using AI to measure students' brain waves with IoT sensors during class, helping teachers provide more customizable content to achieve better retention and results, the U.S. government will likely focus heavily in the coming year on privacy regulations to ensure AI use cases like this are fully vetted before being allowed. Federal privacy regulations around the use of AI will take center stage in 2020. We've already seen the beginnings of this in two instances of government action to rein in AI overreach, in California and Massachusetts. This past May, the San Francisco Board of Supervisors banned the use of facial recognition technology by police and all other municipal agencies under the Stop Secret Surveillance Ordinance.
Applications such as robot control and wireless communication require planning under uncertainty. Partially observable Markov decision processes (POMDPs) plan policies for single agents under uncertainty, and their decentralized versions (DEC-POMDPs) find a policy for multiple agents. Policies for infinite-horizon POMDP and DEC-POMDP problems are commonly represented as finite state controllers (FSCs). We introduce a novel class of periodic FSCs, composed of layers connected only to the previous and next layer. Our periodic FSC method finds a deterministic finite-horizon policy and converts it to an initial periodic infinite-horizon policy.
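To make the controller structure concrete, here is a minimal, hypothetical sketch of a periodic FSC in Python. It is not the paper's algorithm (which optimizes the controller parameters); it only illustrates the data structure the abstract describes: controller nodes arranged in layers, where each node selects an action and, given an observation, transitions only to a node in the next layer, wrapping around periodically. The class and parameter names are invented for illustration, and the action/transition tables are filled randomly rather than optimized.

```python
import random


class PeriodicFSC:
    """Illustrative periodic finite state controller.

    Nodes are arranged in `num_layers` layers; transitions from a node in
    layer t always go to a node in layer (t + 1) mod num_layers, so the
    controller cycles through its layers periodically forever.
    """

    def __init__(self, num_layers, nodes_per_layer, actions, observations, seed=0):
        rng = random.Random(seed)
        self.num_layers = num_layers
        # action[layer][node] -> action emitted at that controller node
        # (randomly chosen here; a planner would optimize these)
        self.action = [
            [rng.choice(actions) for _ in range(nodes_per_layer)]
            for _ in range(num_layers)
        ]
        # next_node[layer][node][obs] -> node index in the NEXT layer only,
        # which is what makes the controller "periodic"
        self.next_node = [
            [
                {obs: rng.randrange(nodes_per_layer) for obs in observations}
                for _ in range(nodes_per_layer)
            ]
            for _ in range(num_layers)
        ]

    def run(self, observe, steps, start_node=0):
        """Execute the controller: emit an action, receive an observation
        via `observe(action)`, then move to a node in the next layer."""
        layer, node = 0, start_node
        trace = []
        for _ in range(steps):
            action = self.action[layer][node]
            trace.append(action)
            obs = observe(action)
            node = self.next_node[layer][node][obs]
            layer = (layer + 1) % self.num_layers  # periodic wrap-around
        return trace
```

Because transitions only point to the next layer, a finite-horizon policy with one layer per time step can be closed into an infinite-horizon one simply by linking the last layer back to the first, which is the conversion the abstract mentions.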