A group of high school students was one of the top teams to emerge from the recent AI Tech Sprint by the Department of Veterans Affairs, delivering a web application that could help match cancer patients to clinical trials. The three students from Northern Virginia entered their work in a competition that included software companies like Oracle Healthcare and MyCancerDB. Digital consulting company Composite App took the $20,000 first place prize for its solution -- a tool for helping patients stay on track with their care plan -- but the clinical trials team got an honorable mention. The tech sprint was organized by the VA's new AI institute, and it focused on partnering with outside organizations and companies interested in applying artificial intelligence tools and techniques to VA data. The high school team's members -- Shreeja Kikkisetti, Ethan Ocasio and Neeyanth Kopparapu -- met as part of the Northern Virginia-based nonprofit Girls Computing League.
This blog post is adapted from our June 10 response to the National Institute of Standards and Technology's (NIST) request for information (RFI) 2019-08818: Developing a Federal AI Standards Engagement Plan. This RFI was released in response to an Executive Order directing NIST to create a plan for the development of a set of standards for the acceptable use of AI technologies. Given the wide adoption of AI technologies and the lag in commensurate laws and regulations, this post aims to help NIST by highlighting the current state, plans, challenges, and opportunities in ethics and AI. In 2016 the European Union (EU) adopted the General Data Protection Regulation (GDPR), which expanded protections around EU citizens' personal data beginning in 2018. Meanwhile, China has extensively integrated AI technologies into its government and social structure via the China Social Credit System.
"The JAIC is working to bring critical AI detection technology to the first responders who bravely battle wildfires. Increased use of AI will reduce response timelines, increase situational awareness, and save more American lives." On July 16, our new Secretary of Defense, Mark Esper, was asked by Congress what the No. 1 priority for DOD technology modernization ought to be. I think artificial intelligence will likely change the character of warfare, and I believe whoever masters it first will dominate on the battlefield for many, many, many years. We have to get there first."
Sen. Ed Markey (D-Mass.) on Thursday sent a series of questions to the CEO of Clearview AI after reports that the company has been selling facial recognition software with an expansive database to law enforcement. The New York Times first reported over the weekend that in the last year more than 600 law enforcement agencies have started working with Clearview, which claims to have a database of more than 3 billion photos. "Any technology with the ability to collect and analyze individuals' biometric information has alarming potential to impinge on the public's civil liberties and privacy," Markey wrote in the letter to CEO Hoan Ton-That. "Clearview's product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans' expectation that they can move, assemble, or simply appear in public without being identified," he continued. According to the Times, Clearview has built its software by scraping major social media platforms and allowing users to upload photos of strangers.
For human travelers, the iconic moment of space exploration occurred a half-century ago, when Neil Armstrong planted the first human boot-print on the moon. But if you don't mind using robots as our stand-ins, the greatest era is unfolding right now on Mars, where NASA's Curiosity rover is rolling across the rusty, dusty surface and leaving behind tread marks that spell out the letters "J-P-L" in Morse code. JPL stands for the Jet Propulsion Laboratory, the NASA center that designed and built Curiosity along with three earlier Mars rovers. Collectively, these machines have racked up 46.4 miles of travel, tremendously expanded our understanding of the Martian environment, and energized the search for life in the universe. Everywhere the rovers have gone, they have discovered unexpected complexity.
We need better ways to help people. What's the medical breakthrough that could save the most lives in the U.S. over the next ten years? In the 2020s, medical research will likely inch forward when it comes to major killers like heart disease and cancer. But the biggest potential to save lives could lie in learning to prevent suicide. The rates of reported suicides have been creeping up over the last two decades.
Greater and better use of artificial intelligence, along with an overhaul of medical education to include advances in machine learning, could significantly cut the time it takes to develop and bring new drugs to market, according to a new joint report by the National Academy of Medicine and the Government Accountability Office. Before that can happen, however, the United States must address legal and policy impediments that inhibit the collection and sharing of high-quality medical data among researchers, the report said. "Machine learning holds tremendous potential in drug development," according to the two-part report released Tuesday, which said such technologies could shorten the roughly 10 to 15 years it currently takes to develop and bring a new drug to market. "In drug discovery, researchers are using [machine learning] to identify new drug targets, screen known compounds for new therapeutic applications, and design new drug candidates, among other applications." Researchers involved in drug discovery said infusing machine learning technologies at the early stage of drug development could result in savings of between $300 million and $400 million per successful drug, the GAO said.
In the decision, UKIPO Hearing Officer Huw Jones, citing sections 7 and 13 of the Act (the Patents Act 1977) and Rule 10 of the Rules (the Patents Rules 2007), said "the Office accepts that DABUS created the inventions" in the patent applications, but that as DABUS was a machine and not a natural person, it could not be regarded as an inventor. Moreover, as DABUS has no rights to the inventions, the Officer stated it is unclear how the applicant derived the rights to the inventions from DABUS: "There appears to be no law that allows for the transfer of ownership of the invention from the inventor to the owner in this case, as the inventor itself cannot hold property." Id. at p. 6. Officer Jones further noted that while he agreed inventors other than natural persons were not contemplated when the EPC was drafted, "it is settled law that an inventor cannot be a corporate body." Accordingly, the "applicant acknowledges DABUS is an AI machine and not a human, so cannot be taken to be a 'person' as required by the Act." However, the Hearing Officer also added that the case raised an important question: given that an AI machine cannot hold property rights, in what way can it be encouraged to disseminate information about an invention?
ABOUT A CENTURY ago, engineers created a new sort of space: the control room. Before then, things that needed control were controlled by people on the spot. But as district heating systems, railway networks, electric grids and the like grew more complex, it began to make sense to put the controls all in one place. Dials and light bulbs brought the way the world was working into the room. Levers, stopcocks, switches and buttons sent decisions back out. By the 1960s control rooms had become a powerful icon of the modern. At Mission Control in Houston, young men in horn-rimmed glasses and crewcuts sent commands to spacecraft heading for the Moon. In the space seen through television sets, travellers exploring strange new worlds did so within an iconic control room of their own: the bridge of Star Trek's USS Enterprise. A hexagonal room built in Santiago de Chile a decade later fitted right into the same philosophy--and aesthetic. It had an array of screens full of numbers and arrows. It was linked to a powerful computer. It had futuristic swivel chairs, complete with geometric buttons in the armrests to control the displays.
In 1985 the US pulled the plug on a computer-controlled anti-aircraft tank after a series of debacles in which its electronic brain locked its guns onto a reviewing stand packed with top generals inspecting the device. Mercifully it didn't fire, but it did subsequently attack a portable toilet instead of a target drone. The M247 Sergeant York may have been an embarrassing failure, but digital technology and artificial intelligence (AI) have changed the game since then. Today defence contractors around the world are competing to introduce small unmanned tracked vehicles into military service. As with any army on the move, there are contrasting views about how far and how fast this technology will advance.