These and other insights are from LinkedIn's Top Startups 2020: The 50 U.S. Companies on the Rise, published today. This is the fourth annual LinkedIn list of the hottest startups to work for, determined by the billions of actions taken by LinkedIn's 706 million members. The annual list reflects how business and work are evolving through the pandemic, which industries are emerging and growing, and where people want to work now. Even in the face of Covid-19, the startups on this year's list are all still innovating and experiencing growth, and the majority are currently hiring, with 3,000 jobs now open on LinkedIn. To be eligible for the list, a company must be independent and privately held, have at least 50 employees, be seven years old or younger, be headquartered in the country of the list on which it appears, and have a minimum of 15% employee growth over the time period. The top 50 U.S. startups include the following:
Full-time headcount: 4,000
Headquarters: New York City
Year founded: 2016
What you should know: While the U.S. economy quickly sank into a recession at the start of the pandemic, one of its engines has been roaring: housing.
As one of the hottest technologies of recent years, artificial intelligence (AI) has started penetrating both the US public and private sectors, though to differing degrees. While the private sector seems bullish on AI, the public sector's approach appears tempered with more caution: a Deloitte survey of select early adopters of AI shows high concern about the potential risks of AI among public sector organizations (see the sidebar "About the survey"). The survey results give a peek into how public sector organizations are approaching AI, and how their approaches, in many cases, differ from those of their private sector counterparts. AI is not completely new to the public sector. The first AI contract was awarded in 1985 by the US Social Security Administration, but the technology still wasn't advanced enough to become common in the following decades.
The next year will be pivotal for the Air Force's effort to acquire a new class of autonomous drones, as industry teams compete for a chance to build a fleet of robotic wingmen that will soon undergo operational experimentation. The "Skyborg" program is one of the service's top science-and-technology priorities under the "Vanguard" initiative to deliver game-changing capabilities to its warfighters. The aim is to acquire relatively inexpensive, attritable unmanned aircraft that can leverage artificial intelligence and accompany manned fighter jets into battle. "I expect that we will do sorties where a set number are expected to fly with the manned systems, and we'll have crazy new [concepts of operation] for how they'll be used," Assistant Secretary of the Air Force for Acquisition, Technology and Logistics Will Roper said during an online event hosted by the Mitchell Institute for Aerospace Studies. The platforms might even be called upon to conduct kamikaze missions.
The U.S. military recently conducted a live-fire, full combat replication featuring unmanned-to-unmanned teaming guiding attacks, small reconnaissance drones, satellites sending target coordinates to ground artillery, and high-speed, AI-enabled "networked" warfare. This exercise was part of the Army's Project Convergence 2020, a weapons and platform combat experiment which, service leaders say, represents a massive transformation helping the service pivot its weapons use, tactics and maneuver strategies into a new era. Taking place at Yuma Proving Ground, Arizona, Project Convergence involved live-fire war experiments aligned in three distinct phases, intended to help the Army cultivate its emerging modern Combined Arms Maneuver strategy.
After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live-fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds. Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next-generation AI, network and software capabilities to show how the Army wants to fight in the future. The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds. Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline -- the time it takes from when sensor data is collected to when a weapon system is ordered to engage -- from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where the data is collected and its destination. "We use artificial intelligence and machine learning in several ways out here," Brigadier General Ross Coffman, director of the Army Futures Command's Next Generation Combat Vehicle Cross-Functional Team, told visiting media.
Politics are in the air, like that ominous reddish glow suffocating much of the West in recent weeks on account of all those tragic wildfires. This coming week we get our first presidential debate: a chance for Donald Trump and Joe Biden to shake hands and have a respectful, reasoned exchange of views on the future of the unfairly maligned Section 230 of the Communications Decency Act; the need to reform the Stored Communications Act; the wisdom of replicating Europe's General Data Protection Regulation; and the merits of taking antitrust action against Google for its manipulation of search results or against Amazon for its treatment of third-party sellers on its platform. Maybe we will even see the candidates reflect humbly on humanity's place in the universe, in light of the breaking news from Venus. The debate will probably be all tense, no future -- maybe not as heated as a debate between 2016 Lindsey Graham and 2020 Lindsey Graham, but close.
U.S. Chief Technology Officer Michael Kratsios and Energy Secretary Dan Brouillette shed a little light on how the Energy Department and Trump administration are thinking about ethics, regulatory approaches, and broader societal implications as they push the rollout of artificial intelligence and other emerging technologies. During a fireside chat in Pittsburgh Tuesday, Brouillette reflected on similarly serious considerations made years ago when the agency was developing nuclear technologies. He noted that now, when focusing on ethics, his mind tends to home in on negative aspects and "bad results" that could arise with tech adoption. "I haven't thought this through with great depth, but there seems to be some positive aspects of AI, too, on the ethics front that we need to explore," Brouillette told the chat's moderator, Carnegie Mellon University Vice President of Research Michael McQuade. "And perhaps through that process we can speed the adoption of some of these technologies," he said, adding that he'd like to give it all more thought.
In February of this year, the Department of Defense (DoD) issued five Ethical Principles for Artificial Intelligence (AI): Responsible, Equitable, Traceable, Reliable and Governable. The DoD principles build on 2019 recommendations from the Defense Innovation Board and the interim report of the National Security Commission on AI (NSCAI). The defense industry and others in the private sector have also been considering ethical issues regarding AI, including whether businesses should have an AI code of ethics. When cyber first became an issue about 22 years ago, the trend was to raise awareness and think through the consequences. Similarly, we are now developing awareness of the issues and beginning to think through the consequences of AI.
The following declaration was released by the Governments of the United States of America and the United Kingdom of Great Britain and Northern Ireland during the September 25 inaugural meeting of the Special Relationship Economic Working Group. We intend to establish a bilateral government-to-government dialogue on the areas identified in this vision and explore an AI R&D ecosystem that promotes the mutual wellbeing, prosperity, and security of present and future generations. Signed in London and Washington on September 25, 2020, in two originals, in the English language.
Be prepared, in the near future, to gaze into the blue skies and perceive a whole series of strange-looking things -- no, they will not be birds, nor planes, nor even Superman. They may be temporarily, and in some cases startlingly, mistaken for UFOs, given their bizarre and ominous appearance. But, in due course, they will become recognized as valuable objects of a new era of human-made flying machines, intended to serve a broad range of missions and objectives. Many such applications are already well entrenched, serving essential functions that extend capabilities in our vital infrastructures such as transportation, utilities, the electric grid, agriculture, emergency services, and many others. Rapidly advancing technologies have made possible the dramatic capabilities of unmanned aerial vehicles (UAVs/drones) to perform functions that were inconceivable a mere few years ago.