In 2016, three veterans of the still young autonomous vehicle industry formed Aurora, a startup focused on developing self-driving cars. Partnerships followed with major automakers, including Hyundai and Volkswagen. CEO Chris Urmson said at the time that the link-ups would help the company bring "mobility as a service" to urban areas--Uber-like rides without a human behind the wheel. But by late 2019, Aurora's emphasis had shifted. It said self-driving trucks, not cars, would be quicker to hit public roads en masse. Its executives, who had steadfastly refused to provide a timeline for their self-driving-car software, now say trucks equipped with its "Aurora Driver" will hit the roads in 2023 or 2024, with ride-hail vehicles following a year or two later.
Why is #MLOps the key to productionized ML systems? ML model code is only a small part (5–10%) of a successful ML system, and the objective should be to create value by placing ML models into production. ML practitioners tend to track model metrics (e.g. F1 score) while stakeholders focus on business metrics (e.g. revenue or cost savings). Improving labelling consistency is an iterative process, so consider repeating the process until disagreements are resolved as far as possible. For instance, partial automation with a human in the loop can be an ideal design for AI-based interpretation of medical scans, with human judgement coming in for cases where prediction confidence is low.
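The human-in-the-loop design described above can be sketched as a simple confidence threshold that routes low-confidence predictions to a human reviewer. The threshold value and names here are hypothetical, not from any real medical system:

```python
# Hypothetical confidence threshold below which a scan goes to a human.
THRESHOLD = 0.85

def route(prob_malignant: float) -> str:
    """Auto-accept confident predictions; escalate uncertain scans.

    Confidence is the probability of the predicted class, whichever
    side of 0.5 the model lands on.
    """
    confidence = max(prob_malignant, 1.0 - prob_malignant)
    return "auto" if confidence >= THRESHOLD else "human_review"
```

In this design, a prediction of 0.97 (or 0.03) is confident enough to act on automatically, while a prediction of 0.6 is escalated for human judgement.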
Google parent Alphabet has spun out a new industrial robotics company called Intrinsic. Led by Wendy Tan-White, a veteran entrepreneur and investor who has served as VP of "moonshots" at Alphabet's R&D business X since 2019, Intrinsic is setting out to "unlock the creative and economic potential" of the $42 billion industrial robotics market. The company said it's creating "software tools" to make industrial robots more affordable and easier to use, extending their utility beyond big businesses and to more people -- 70% of the world's manufacturing currently takes place in just 10 countries. Industrial robots have surged in demand over the past year in the wake of the pandemic -- in Q1 this year, the Association for Advancing Automation reported a 19.6% increase in orders across North America alone.
In a perfect world, what you see is what you get. If this were the case, the job of Artificial Intelligence systems would be refreshingly straightforward. Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action--steer right, steer left, or continue straight--to avoid hitting a pedestrian that its cameras see in the road. But what if there's a glitch in the cameras that slightly shifts an image by a few pixels? If the car blindly trusted such so-called "adversarial inputs," it might take unnecessary and potentially dangerous action.
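A minimal, hypothetical illustration of the problem: with a toy linear "steering" classifier, a tiny perturbation pushed against the decision boundary (an FGSM-style sign step) flips the decision. The weights and features here are invented for demonstration, not drawn from any real perception stack:

```python
import numpy as np

# Toy linear classifier: sign(w . x) > 0 means "steer left", < 0 "steer right".
# Weights and input features are hypothetical.
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.2, 0.1, -0.1])          # clean camera features

clean_score = float(w @ x)               # 0.05 -> "steer left"

# Adversarial nudge: a small step of size eps against the score,
# i.e. -eps * sign(w), the worst-case L-infinity perturbation.
eps = 0.05
x_adv = x - eps * np.sign(w)
adv_score = float(w @ x_adv)             # the decision flips sign
```

Even though each feature moved by only 0.05, the classifier's output crosses the decision boundary -- the kind of brittleness the paragraph above describes.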
Self-driving features have been creeping into automobiles for years, and Tesla (TSLA) even calls its autonomous system "full self-driving." That's hype, not reality: There's still no car on the market that can drive itself under all conditions with no human input. But researchers are getting close, and automotive supplier Mobileye just announced it's deploying a fleet of self-driving prototypes in New York City, to test its technology against hostile drivers, unrepentant jaywalkers, double parkers, omnipresent construction and horse-drawn carriages. The company, a division of Intel (INTC), describes NYC as "one of the world's most challenging driving environments" and says the data from the trial will push full self-driving capability closer to prime time. In an interview, Mobileye CEO Amnon Shashua said fully autonomous cars could be in showrooms by the end of President Biden's first term.
Drone racing is an increasingly popular sport with big money prizes for skilled professionals. New control algorithms developed at the University of Zurich (UZH) have beaten experienced human pilots for the first time – but they still have significant limitations. In the past, attempts to develop automated algorithms to beat humans have run into problems with accurately simulating the limitations of the quadcopter and the flight path it takes. Traditional flight paths around a complex drone racing course are calculated using polynomial methods, which produce a series of smooth curves; these are not necessarily as fast as the sharper, more jagged paths flown by human pilots. A team from the Robotics and Perception Group at UZH has developed a trajectory planning algorithm that calculates the optimal route at every point in the flight, rather than doing it section by section.
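A toy geometric illustration of why smooth polynomial paths can cost distance compared with the sharper routes human pilots fly. The gate positions are made up, and real planners optimize flight time against quadcopter dynamics rather than path length alone; this sketch only compares the lengths of two paths through the same gates:

```python
import numpy as np

def path_length(xs, ys):
    """Total length of the polyline through the sampled points."""
    return float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))

# Hypothetical gate positions on a 2D slalom course.
gx = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
gy = np.array([0.0, 1.0, 0.0, 1.0, 0.0])

# Smooth polynomial path passing exactly through every gate.
coeffs = np.polyfit(gx, gy, deg=len(gx) - 1)
xs = np.linspace(gx[0], gx[-1], 1000)
smooth_len = path_length(xs, np.polyval(coeffs, xs))

# Sharp piecewise-linear path: the shortest possible route through the gates.
sharp_len = path_length(gx, gy)
```

Because straight segments are the shortest connection between consecutive gates, the smooth polynomial path through the same gates is strictly longer -- one reason section-by-section polynomial planning leaves time on the table.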
Due to the quarantine measures imposed in virtually all parts of the world, air travel, public transportation, and many other sectors took a severe hit in 2020. However, the automotive world, and autonomous vehicles in particular, have shown increased resilience during this difficult time. In fact, companies like Ford have increased their investments in the development of electric and self-driving cars, allocating $29 billion in the fourth quarter of last year. Specifically, $7 billion of that money will go towards the development of self-driving cars. Ford thus joins General Motors, Tesla, Baidu, and other automakers in heavily investing in autonomous vehicles.
An algorithm developed by researchers at Carnegie Mellon University (CMU) could enable autonomous vehicles to navigate crowded, narrow streets where vehicles traveling in opposite directions do not have enough space to pass each other and there is no knowledge of what the other driver may do. Such a scenario requires collaboration among drivers, who must balance aggression with cooperation. The researchers developed a method to model different levels of driver cooperativeness (how likely a driver is to pull over to let the other driver pass) and used those models to train an algorithm that helps an autonomous vehicle navigate this situation safely and efficiently. In simulations, the algorithm outperformed current models; it has not yet been tested on real-world vehicles.
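A minimal sketch of the cooperativeness idea, treating a driver's cooperativeness level simply as a probability of yielding in a narrow-street encounter. The CMU work's actual driver models and training setup are far more involved; this only shows what varying the parameter means:

```python
import random

def yields(cooperativeness: float, rng: random.Random) -> bool:
    """One encounter: does the oncoming driver pull over?

    cooperativeness in [0, 1] is the modeled probability of yielding.
    """
    return rng.random() < cooperativeness

def yield_rate(cooperativeness: float, trials: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate of how often a driver at this level yields."""
    rng = random.Random(seed)
    return sum(yields(cooperativeness, rng) for _ in range(trials)) / trials
```

Simulating encounters against drivers sampled at different cooperativeness levels is one way a planner can be trained to balance aggression with cooperation.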
AI systems are becoming increasingly popular and central in many industries. They decide who might get a loan from the bank, whether an individual should be convicted, and we may even entrust them with our lives when using systems such as autonomous vehicles in the near future. Thus, there is a growing need for mechanisms to harness and control these systems so that we may ensure that they behave as desired. One important issue that has been gaining attention in the last few years is fairness. While ML models are usually evaluated on metrics such as accuracy, fairness requires ensuring that our models are unbiased with regard to protected attributes such as gender, race, and other selected attributes.
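One common way to quantify the kind of bias described above is demographic parity: the gap in positive-outcome rates between groups. A minimal sketch with invented loan-approval predictions and a hypothetical binary protected attribute:

```python
import numpy as np

# Hypothetical loan-approval predictions (1 = approve) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def demographic_parity_diff(y_pred, group):
    """Absolute gap in approval rate between the two groups."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(float(rate0 - rate1))

# Group 0 is approved 75% of the time, group 1 only 25%: a gap of 0.5.
gap = demographic_parity_diff(y_pred, group)
```

A gap of zero means both groups receive positive outcomes at the same rate; demographic parity is only one of several fairness criteria, and which one is appropriate depends on the application.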
GlobalData predicts cellular IoT subscriptions will grow in the range of 12-16% CAGR, depending on region, over the next five years, as remote working, autonomous vehicles, robotics, and other advanced use cases accelerate. There are many recent examples of IoT deals and alliances that signify traction. GlobalData's Q2 mobile trends report provides insights into subscriptions for mobile networks; among many other key findings, it offers a clue to the progress of IoT uptake in different regions. North America: Cellular IoT subscriptions will reach 151.5 million at year-end 2021, and will make up 26.5% of total mobile subscriptions in the region. GlobalData expects the number of North American IoT connections to increase at a CAGR of 15.6% from 2021-2026, reaching 312.3 million at the end of the period.
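The North America figures above can be sanity-checked with the standard compound-growth formula, assuming end-2021 as the base year and five annual compounding periods to 2026:

```python
def project(start_millions: float, cagr: float, years: int) -> float:
    """Compound a starting subscription base at a constant annual growth rate."""
    return start_millions * (1.0 + cagr) ** years

# GlobalData's figures: 151.5M subscriptions at end-2021, 15.6% CAGR to 2026.
end_2026 = project(151.5, 0.156, 5)
# Lands within roughly half a million of the reported 312.3M figure,
# the residual being rounding in the published CAGR.
```

The same formula applied with the 12-16% regional range gives the spread of outcomes GlobalData's forecast implies.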