After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live-fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds. Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next-generation AI, network and software capabilities to show how the Army wants to fight in the future. The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds. Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline -- the time from when sensor data is collected to when a weapon system is ordered to engage -- from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where the data is collected and its destination. "We use artificial intelligence and machine learning in several ways out here," Brigadier General Ross Coffman, director of the Army Futures Command's Next Generation Combat Vehicle Cross-Functional Team, told visiting media.
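The chain described above -- fuse sensor detections into targeting data, then pick the best weapon to respond -- can be sketched in miniature. Everything below (the Threat and Shooter records, the nearest-in-range scoring rule) is invented for illustration and is not the Army's actual software:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    threat_id: str
    x: float  # grid position derived from fused sensor data
    y: float

@dataclass
class Shooter:
    name: str
    x: float
    y: float
    max_range: float  # maximum engagement distance

def pick_best_shooter(threat, shooters):
    """Recommend the in-range shooter closest to the threat,
    or None if no shooter can reach it."""
    in_range = []
    for s in shooters:
        dist = ((s.x - threat.x) ** 2 + (s.y - threat.y) ** 2) ** 0.5
        if dist <= s.max_range:
            in_range.append((dist, s))
    if not in_range:
        return None
    return min(in_range, key=lambda pair: pair[0])[1]

shooters = [
    Shooter("artillery_a", 0.0, 0.0, 30.0),
    Shooter("artillery_b", 50.0, 0.0, 30.0),
]
threat = Threat("t1", 45.0, 5.0)
best = pick_best_shooter(threat, shooters)
print(best.name)  # artillery_b: the only shooter within range
```

The point of the sketch is the shape of the pipeline, not the scoring rule -- a real system would weigh munition type, collateral risk and readiness, but the decision still reduces to ranking candidate shooters against a detected threat in milliseconds.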
Politics are in the air, like that ominous reddish glow suffocating much of the West in recent weeks on account of all those tragic wildfires. This coming week we get our first presidential debate. A chance for Donald Trump and Joe Biden to shake hands and have a respectful, reasoned exchange of views on the future of the unfairly maligned Section 230 of the Communications Decency Act; the need to reform the Stored Communications Act; the wisdom of replicating Europe's General Data Protection Regulation; the merits of taking antitrust action against Google for its manipulation of search results or against Amazon for its treatment of third-party sellers on its platform. Maybe we will even see the candidates reflect humbly on humanity's place in the universe, in light of the breaking news from Venus. The debate will probably be all tense, no future--maybe not as heated as a debate between 2016 Lindsey Graham and 2020 Lindsey Graham, but close.
Artificial intelligence (AI) involves the simulation of human intelligence through programming machines or creating software to think like humans and mimic their actions. In other words, AI research seeks to develop technology that is capable of learning and problem-solving in the same way a human would. Though the idea itself can be traced back to antiquity, AI has become increasingly popular in recent years, with ever-evolving applications across many Canadian industries. To this end, read on for IBISWorld's evaluation of how two up-and-coming ventures have the potential to affect the operations of different industries in Canada. In London, ON, a new AI tool called the Chronic Homelessness Artificial Intelligence model (CHAI) analyzes data points such as age, gender, family and shelter history to assess the chance that a particular individual will become chronically homeless over the next six months.
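A risk model of the kind described -- intake features in, a six-month probability out -- can be sketched as a simple logistic scorer. The feature names, weights and example record below are invented for illustration; this is not the actual CHAI model:

```python
import math

# Hypothetical weights for illustration; a real model would learn
# these from historical shelter records.
WEIGHTS = {
    "age": 0.02,
    "prior_shelter_stays": 0.30,
    "months_since_first_visit": 0.05,
}
BIAS = -4.0

def risk_score(person):
    """Logistic score: estimated probability of chronic
    homelessness over the next six months."""
    z = BIAS + sum(WEIGHTS[k] * person[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

person = {"age": 45, "prior_shelter_stays": 8, "months_since_first_visit": 18}
print(round(risk_score(person), 2))  # 0.55
```

In practice the output would be one input to a caseworker's triage decision, not an automatic verdict -- the value of the score is in flagging who to reach first.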
It was reported that venture capital investment in AI-related startups rose sharply in 2018, jumping 72% compared to 2017, even as the number of startups funded fell to 466 from 533 in 2017. PwC's MoneyTree report stated that seed-stage deal activity in the US among AI-related companies rose to 28% in the fourth quarter of 2018, compared to 24% in the three months prior, while expansion-stage deal activity jumped to 32%, from 23%. International rivalry over global leadership in AI is intensifying. President Putin of Russia was quoted as saying that "the nation that leads in AI will be the ruler of the world". Billionaire Mark Cuban was quoted by CNBC as stating that "the world's first trillionaire would be an AI entrepreneur".
DURHAM – The National Science Foundation has awarded Duke University a $3 million, five-year Research Traineeship grant to develop a program for graduate students to develop expertise in using artificial intelligence (AI) for materials science research. The aiM (AI for Understanding and Designing Materials) program will fill a vital workforce gap by training the next generation in the new convergent field of materials and computer science research. "To achieve the promise of the U.S. Materials Genome Initiative of accelerated discovery, design and application of new materials, we must integrate the traditional tools of experimentation, theory and computation with the emerging tools of data science to transform the way we approach materials understanding and discovery," said Cate Brinson, chair of the Department of Mechanical Engineering & Materials Science and director of aiM. The Materials Genome Initiative (MGI), launched in 2011, is a multi-agency federal government effort to accelerate the development and deployment of new, advanced materials to address a host of challenges in clean energy, national security, health and welfare. "The MGI promoted a paradigm shift from slow individual experiments and computation to the beginnings of data-driven AI approaches in materials science research," added Brinson.
The graph represents a network of 2,995 Twitter users whose tweets in the requested range contained "#FinServ", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Sunday, 20 September 2020 at 23:42 UTC. The requested start date was Sunday, 20 September 2020 at 00:01 UTC and the maximum number of days (going backward) was 14. The maximum number of tweets collected was 7,500. The tweets in the network were tweeted over the 13-day, 21-hour, 20-minute period from Sunday, 06 September 2020 at 01:03 UTC to Saturday, 19 September 2020 at 22:24 UTC.
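A reply-and-mention network like the one described can be assembled from tweet records as a directed edge list: one edge from each tweet's author to every user replied to or mentioned in it. The sample tweets below are invented for illustration; NodeXL performs the equivalent aggregation at scale:

```python
# Invented sample tweets: (author, replied_to, mentions)
tweets = [
    ("alice", "bob", ["carol"]),
    ("bob", None, ["alice"]),
    ("carol", "alice", []),
]

edges = set()
nodes = set()
for author, replied_to, mentions in tweets:
    nodes.add(author)
    # One directed edge per reply target and per mention,
    # as in a NodeXL-style reply/mention network.
    for target in ([replied_to] if replied_to else []) + mentions:
        nodes.add(target)
        edges.add((author, target))

print(len(nodes), len(edges))  # 3 users, 4 directed edges
```

From such an edge list one can compute the usual network statistics (degree, clusters, influential accounts) that these graph reports summarize.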
Critics of mask recognition also think that this new technology could be prone to some of the same pitfalls as facial recognition. Many of the training datasets used for facial recognition are dominated by light-skinned individuals. In 2019 Joy Buolamwini, a researcher at the Massachusetts Institute of Technology's Media Lab, and the AI Now Institute's Deborah Raji investigated the accuracy of commercially available recognition systems used by major tech companies. When they checked the performance of recognition systems using an algorithm trained with the standard datasets, and then tested it on a new set of faces with much more racial and ethnic balance, the researchers found that the algorithm was less than 70 percent accurate in identifying the new faces.
U.S. Chief Technology Officer Michael Kratsios and Energy Secretary Dan Brouillette shed a little light on how the Energy Department and Trump administration are thinking about ethics, regulatory approaches, and broader societal implications as they push the rollout of artificial intelligence and other emerging technologies. During a fireside chat in Pittsburgh Tuesday, Brouillette reflected on similarly serious considerations previously made when the agency was developing nuclear technologies many years ago. He noted that now, when focusing on ethics, his mind tends to home in on negative aspects and "bad results" that could arise with tech adoption. "I haven't thought this through with great depth, but there seems to be some positive aspects of AI, too, on the ethics front that we need to explore," Brouillette told the chat's moderator, Carnegie Mellon University Vice President of Research Michael McQuade. "And perhaps through that process we can speed the adoption of some of these technologies," he said, adding that he'd like to give it all more thought.
In February of this year, the Department of Defense (DoD) issued five Ethical Principles for Artificial Intelligence (AI): Responsible, Equitable, Traceable, Reliable and Governable. The DoD principles build on 2019 recommendations from the Defense Innovation Board and the interim report of the National Security Commission on AI (NSCAI). The defense industry and others in the private sector have also been considering ethical issues regarding AI, including the question of whether businesses should have an AI code of ethics. When cyber first became an issue about 22 years ago, the trend was to raise awareness and think through the consequences. Similarly, we are now developing awareness of the issues and beginning to think through the consequences of AI.
SFU researchers have received $300,000 in funding from Innovate BC's Ignite Program to develop technology that allows farmers to grow more food with fewer synthetic pesticides. The research project commenced earlier this year and involves a collaboration with Vancouver-based agtech company Terramera's Actigate technology platform, which aims to reduce global synthetic pesticide use by 80 per cent by 2030. "The growing world population needs more food and we need to grow food that is environmentally sustainable," says SFU computing science professor Martin Ester, who is the principal investigator for the project. "One approach is to develop organic pesticides that are as effective as chemical pesticides, but less harmful to the environment." Distinguished for his research in the fields of data mining and machine learning, Ester was named a Royal Society of Canada (RSC) Fellow last year.