Hundreds of billions of dollars in public and private capital are being invested in Artificial Intelligence (AI) and Machine Learning companies. The number of AI patents filed in 2021 was more than 30 times higher than in 2015, as companies and countries across the world have realized that AI and Machine Learning will be a major disruptor and could change the balance of military power. Until recently, the hype exceeded reality. Today, however, advances in AI in several important areas (here, here, here, here and here) equal and even surpass human capabilities. If you haven't paid attention, now's the time.

Artificial Intelligence and the Department of Defense (DoD)

The Department of Defense considers Artificial Intelligence such a foundational set of technologies that it created a dedicated organization – the Joint Artificial Intelligence Center (JAIC) – to enable and implement artificial intelligence across the Department. The JAIC provides the infrastructure, tools, and technical expertise for DoD users to successfully build and deploy their AI-accelerated projects. Some specific defense-related AI applications are listed later in this document.

We're in the Middle of a Revolution

Imagine it's 1950, and you're a visitor who traveled back in time from today. Your job is to explain the impact computers will have on business, defense and society to people who are using manual calculators and slide rules. You succeed in convincing one company and a government to adopt computers and learn to code much faster than their competitors/adversaries. They then figure out how to digitally enable their business – supply chain, customer interactions, etc. Think about the competitive edge they'd have by today in business or as a nation. That's where we are today with Artificial Intelligence and Machine Learning. These technologies will transform businesses and government agencies.
Gen. Mark Milley told cadets graduating from the U.S. Military Academy at West Point on Saturday to be prepared for an increasing risk of global conflict and a host of new weapons technologies in their careers. "The world you are being commissioned into has the potential for a significant international conflict between great powers. And that potential is increasing, not decreasing," Milley, the chairman of the Joint Chiefs of Staff, told the cadets at the 2022 commencement ceremony in West Point, New York. "And right now, at this very moment, a fundamental change is happening in the very character of war. We are facing right now two global powers, China and Russia, each with significant military capabilities, and both who fully intend to change the current rules based order," Milley said.
It was an important milestone for a company that has, at least in the popular imagination, struggled to catch up with SpaceX. So it's fitting how Boeing decided to celebrate a successful mission. When the crew of the ISS opened the hatch to Starliner, they found a surprise inside the spacecraft. Floating next to Orbital Flight Test-2's seated test dummy was a plush toy representing Jebediah Kerman, one of four original "Kerbonauts" featured in Kerbal Space Program. Jeb, as he's better known by the KSP community, served as the flight's zero-g indicator. Russian cosmonaut Yuri Gagarin took a small doll with him on the first-ever human spaceflight, and ever since it has become a tradition for most space crews to carry a plush toy whose floating makes it easy to see when they've entered a microgravity environment.
Can we ever rein in the Big Tech firms to foster indigenous innovation, stimulate balanced growth, and protect national sovereignty? Can we have a balanced set of rules and a clear framework to safeguard the larger public interest? Can we check the weaponisation of the internet with a balanced cybersecurity and secure data-governance framework to make Google (Alphabet), Apple, Facebook (Meta), Amazon, and Microsoft, among others, more responsible and resilient? Look around: Big Tech firms run most of the digital services that are integral and ubiquitous to our lives. Our minds, economy, national security, democracy, and progress are invisibly controlled by a few technology firms.
A few years ago, many people imagined a world run by robots. The promises and challenges associated with artificial intelligence (AI) were widely discussed as this technology moved out of the labs and into the mainstream. Many of these predictions seemed contradictory. Robots were mooted to steal our jobs, but also create millions of new ones. As more applications were rolled out, AI hit the headlines for all the right (and wrong) reasons, promising everything from revolutionizing the healthcare sector to making light of the weight of data now created in our digitized world.
Incorporating ethics and legal compliance into data-driven algorithmic systems has been attracting significant attention from the computing research community, most notably under the umbrella of fair [8] and interpretable [16] machine learning. While important, much of this work has been limited in scope to the "last mile" of data analysis and has disregarded both the system's design, development, and use life cycle (What are we automating and why? Is the system working as intended? Are there any unforeseen consequences post-deployment?) and the data life cycle (Where did the data come from? How long is it valid and appropriate?). In this article, we argue two points. First, the decisions we make during data collection and preparation profoundly impact the robustness, fairness, and interpretability of the systems we build. Second, our responsibility for the operation of these systems does not stop when they are deployed.

To make our discussion concrete, consider the use of predictive analytics in hiring. Automated hiring systems are seeing ever broader use and are as varied as the hiring practices themselves, ranging from resume screeners that claim to identify promising applicants, to video and voice analysis tools that facilitate the interview process, and game-based assessments that promise to surface personality traits indicative of future success. Bogen and Rieke [5] describe the hiring process from the employer's point of view as a series of decisions that forms a funnel, with stages corresponding to sourcing, screening, interviewing, and selection. The hiring funnel is an example of an automated decision system – a data-driven, algorithm-assisted process that culminates in job offers to some candidates and rejections to others. The popularity of automated hiring systems is due in no small part to our collective quest for efficiency.
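The funnel structure described by Bogen and Rieke can be sketched as a pipeline of filtering stages, each narrowing the candidate pool before the next is applied. The stages, field names, and thresholds below are entirely hypothetical, invented for illustration; no real hiring system's rules are implied.

```python
# Illustrative sketch of a hiring funnel as an automated decision system.
# All stage rules and data fields here are hypothetical examples.

def sourcing(candidates):
    # Stage 1: keep candidates whose profile lists a required skill.
    return [c for c in candidates if "python" in c["skills"]]

def screening(candidates):
    # Stage 2: a resume screener might filter on years of experience.
    return [c for c in candidates if c["experience"] >= 2]

def interviewing(candidates):
    # Stage 3: e.g., keep candidates whose interview score clears a threshold.
    return [c for c in candidates if c["interview_score"] > 0.7]

def hiring_funnel(candidates):
    # Each stage narrows the pool; offers go to whoever survives all stages.
    for stage in (sourcing, screening, interviewing):
        candidates = stage(candidates)
    return candidates

pool = [
    {"name": "A", "skills": {"python"}, "experience": 3, "interview_score": 0.9},
    {"name": "B", "skills": {"java"},   "experience": 5, "interview_score": 0.8},
    {"name": "C", "skills": {"python"}, "experience": 1, "interview_score": 0.95},
]
offers = hiring_funnel(pool)
print([c["name"] for c in offers])  # → ['A']
```

The sketch makes the article's point tangible: a candidate rejected at an early stage is invisible to every later one, so a biased or poorly validated early filter silently shapes the entire outcome.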
U.S. employment statistics hit a new milestone last year, but not a positive one. In August 2021, almost 4.3 million workers quit their jobs, according to the U.S. Department of Labor. That's the highest number since the department began tracking voluntary resignations. The reasons for leaving vary – the numbers track people who quit for a different position as well as those who quit without having another job lined up – but one thing is clear: businesses are having a tough time getting employees to come back.
Artificial intelligence is defined as systems that do not operate according to a designed algorithm but are able to learn from new data. The fact that European policymakers have turned their eyes to the challenges of applying AI technologies is an important step forward, according to Jokūbas Drazdas, director of UAB Acrux Cyber Service, a Lithuanian IT company specialising in AI and cyber security. Europe is lagging far behind the US and China in the development and deployment of AI. In 2020, only 7% of European companies were using AI systems. The US and China are currently trying to accelerate the use of AI in the public and private sectors.
On March 2, 2021, at a virtual forum attended by stakeholders across the entire industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the final say on regulating AI and machine learning consumer product safety. The CPSC defines AI as "any method for programming computers or products to enable them to carry out tasks or behaviors that would require intelligence if performed by humans" and machine learning as "an iterative process of applying models or algorithms to data sets to learn and detect patterns and/or perform tasks, such as prediction or decision making that can approximate some aspects of intelligence." [3] To inform the ongoing discussion on how to regulate AI, machine learning, and related technologies, the CPSC provides the following list of considerations: Do AI and machine learning affect consumer product safety? UL 4600 Standard for Safety for the Evaluation of Autonomous Products covers "fully autonomous systems that move such as self-driving cars along with applications in mining, agriculture, maintenance, and other vehicles including lightweight unmanned aerial vehicles." [5]
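The CPSC's definition of machine learning as "an iterative process of applying models or algorithms to data sets to learn and detect patterns" can be illustrated with a minimal sketch. The toy data, model, and learning rate below are made up for demonstration and stand in for no particular product.

```python
# Minimal illustration of the CPSC's ML definition: repeatedly apply a model
# to a data set and adjust it until it has learned the underlying pattern.

# Toy data set following the pattern y = 2x.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = 0.0    # single model parameter, initially uninformed
lr = 0.01  # learning rate (step size for each adjustment)

for _ in range(1000):            # the "iterative process"
    for x, y in data:
        error = w * x - y        # compare the model's prediction with the data
        w -= lr * error * x      # nudge the parameter toward the observations

print(round(w, 3))  # converges toward 2.0, the pattern in the data
```

Even this tiny example shows why regulators care about the data life cycle: the learned behavior is entirely determined by the data set the iteration runs over, not by any hand-written rule.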