A Tesla engineer has informed California regulators that the electric vehicle company might not have a fully self-driving vehicle ready this year. The information comes from documents dated May 6 exchanged between the California Department of Motor Vehicles and several Tesla employees, including CJ Moore, the company's Autopilot engineer. The documents were released by the legal transparency group PlainSite, which obtained them under the Freedom of Information Act (FOIA). In January, Tesla chief Elon Musk said he was "highly confident the car will be able to drive itself with reliability in excess of human this year." "Tesla is at Level 2 currently. The ratio of driver interaction would need to be in the magnitude of 1 or 2 million miles per driver interaction to move into higher levels of automation," the California DMV noted in the memo.
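The DMV's threshold can be read as a simple rate comparison: miles driven per driver interaction (often called a disengagement). A minimal sketch of that arithmetic, using hypothetical fleet numbers purely for illustration (the memo does not publish Tesla's actual figures):

```python
def miles_per_interaction(total_miles: float, interactions: int) -> float:
    """Average miles driven between driver interactions (disengagements)."""
    return total_miles / interactions

# Hypothetical fleet numbers, for illustration only.
observed = miles_per_interaction(total_miles=5_000_000, interactions=1_000)

# The DMV memo cites a target magnitude of 1-2 million miles per interaction.
target_low = 1_000_000

print(f"Observed: {observed:,.0f} miles/interaction")
print(f"Improvement needed to reach target: {target_low / observed:,.0f}x")
```

With these illustrative numbers, a fleet averaging 5,000 miles per interaction would need a 200-fold improvement to reach the low end of the DMV's cited range.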
Federal investigators said Monday they were able to glean some insight into what might have happened in a Tesla crash in the Houston area in April that killed two people and triggered a fire that destroyed the vehicle's data recorder. The National Transportation Safety Board released preliminary findings from its probe into the crash, which raised speculation about whether the vehicle's partially self-driving system, Autopilot, was to blame. The speculation stemmed from local authorities saying they were nearly positive that no one was behind the wheel when the vehicle crashed. The NTSB, in its preliminary report, said video footage from the vehicle owner's home security system showed him getting behind the wheel of the Tesla Model S and then slowly exiting the driveway. The vehicle traveled about 550 feet "before departing the road on a curve, driving over the curb, and hitting a drainage culvert, a raised manhole and a tree," according to the NTSB.
Last week we discussed Level 1 and 2 autonomy; this week we move on to L3-L5, which is considered "true" autonomous driving. With L3, in certain situations (e.g., highway driving) the car can fully take over all driving tasks, including lane changes, but the driver must remain attentive, keep his or her hands near the steering wheel at all times, and avoid distractions such as watching TV, staring at a phone, or sleeping. The driver must stay alert because if the autonomous system encounters a situation it cannot handle (e.g., an unexpected detour or highway construction), it issues a warning (e.g., the seat vibrates or an alarm sounds) and then hands control back to the driver. L4 is a fully autonomous car that can perform all driving functions without ever requiring intervention from the driver, though the driver has the option to take over at any time. The caveat, however, is that the autonomous function can ONLY be used in certain prescribed situations (e.g., suitable weather conditions with adequate visibility) or locations (e.g., a well-mapped city or vicinity).
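The distinctions above can be summarized in a small lookup table. This is an informal sketch of the levels as described here, not an encoding of the official SAE J3016 taxonomy; the two flags capture whether the driver must monitor the road and whether the automation only works in a restricted operating domain:

```python
# Informal summary of the autonomy levels discussed above (illustrative only).
LEVELS = {
    2: {"name": "Partial automation",
        "driver_monitors": True,       # driver supervises continuously
        "restricted_domain": True},
    3: {"name": "Conditional automation",
        "driver_monitors": True,       # must be ready to take back control
        "restricted_domain": True},    # e.g., highway driving only
    4: {"name": "High automation",
        "driver_monitors": False,      # no intervention needed in-domain
        "restricted_domain": True},    # e.g., well-mapped city, good weather
    5: {"name": "Full automation",
        "driver_monitors": False,
        "restricted_domain": False},   # works anywhere a human could drive
}

def needs_attention(level: int) -> bool:
    """Whether the driver must still pay attention at this level."""
    return LEVELS[level]["driver_monitors"]

print(needs_attention(3))  # L3 can still hand control back to the driver
print(needs_attention(4))  # L4 never requires intervention within its domain
```

The key boundary in this framing is between L3 and L4: it is the point at which the human stops being a required fallback.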
When a self-driving car passes by, you tend to notice. The towering sensors whirling around on the top of the car more than stand out. But Chinese autonomous vehicle company Pony.ai is reimagining the roofline for its next generation of autonomous taxicabs. As part of a partnership with autonomous vehicle sensor maker Luminar announced Monday, Pony.ai plans to integrate Luminar's sensors into a sleeker roofline design. Typical LiDAR sensors like those from Velodyne, Intel's Mobileye, and Waymo's own Laser Bear Honeycomb are mostly cone-shaped to help pull in a full 360-degree view from the top and around the car.
Plus plans to merge with Hennessy Capital Investment Corp. V in a transaction that would bring the company, which is based in California and China, about $500 million in gross proceeds and a market capitalization of roughly $3.3 billion. The agreement is expected to close in the third quarter, the companies said Monday. The deal would provide "a significant cash infusion for us to expand our commercialization efforts," Plus Chief Executive and co-founder David Liu said, as the company steps up production and aims to fill thousands of contracted orders and vehicle reservations from Chinese and U.S. fleets. The transaction would include a $150 million private placement of shares with BlackRock Inc., D.E.
China is shaping up to be the first real test of Big Tech's ambitions in the world of carmaking, with giants from Huawei Technologies Co. to Baidu Inc. plowing almost $19 billion into electric and self-driving vehicle ventures widely seen as the future of transport. While Apple Inc. has long had plans for its own car and Alphabet Inc. has Waymo, its autonomous driving unit, the size -- and speed -- of the move by China's tech titans puts them at the vanguard of that broader push. The lure is an industry that's becoming increasingly high tech as it pivots away from the combustion engine, with sensors and operating systems making cars more like computers, and the prospect of autonomy re-envisioning how people will use them. As the world's biggest market for new-energy cars, China is a key battlefield. Established automakers like Volkswagen AG and General Motors Co. are already slogging it out with local upstarts such as market darling Nio Inc. and Xpeng Inc.
The history of artificial intelligence has been marked by repeated cycles of extreme optimism and promise followed by disillusionment and disappointment. Today's AI systems can perform complicated tasks in a wide range of areas, such as mathematics, games, and photorealistic image generation. But some of the early goals of AI, like housekeeper robots and self-driving cars, continue to recede as we approach them. Part of this continued cycle of missed goals is due to incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. In a new paper titled "Why AI is Harder Than We Think," Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts.
If we are not actively engaged in technology-related industries, we may fail to fully appreciate how much artificial intelligence already influences our day-to-day world. Everyone is talking about self-driving cars; seemingly inanimate objects converse with you about your personal preferences; someone somewhere already seems to compile your shopping list, armed with knowledge of what you like or dislike. From the viewpoint of the business world, all companies today are looking to adopt AI in some form or another to improve business processes and achieve efficiency. I recently read an article about SoftBank's Masayoshi Son and his vision "for an AI-powered utopia where machines control how we live". While this may sound like an unreal possibility, the thought becomes easier to relate to if one ponders the words of David Fano (Chief Growth Officer, WeWork): "Basically, every object will have the potential to be a computer".
Three decades ago, the internet was just beginning to revolutionize human communications. Little did the world know how much power would fall into the hands of a few technocratic elites as a result. Autonomous vehicles will likewise transform human transportation; the skill of helming the wheel will no longer be necessary in a decade or two, just as the art of writing on paper has all but ceased to exist. Recent news of a so-called Apple Car project has done little to bring positive attention to the possibilities of a self-driving revolution. In poll after poll, nearly half of Americans say they would not use an autonomous taxi or ride-sharing service.
Artificial intelligence has been all over headlines for nearly a decade, as systems have made quick progress in long-standing AI challenges like image recognition, natural language processing, and games. Tech companies have woven machine learning algorithms into search and recommendation engines and facial recognition systems, and OpenAI's GPT-3 and DeepMind's AlphaFold promise even more practical applications, from writing to coding to scientific discoveries. Indeed, we're in the midst of an AI spring, with investment in the technology burgeoning and an overriding sentiment of optimism and possibility toward what it can accomplish and when. This time may feel different than previous AI springs due to the aforementioned practical applications and the proliferation of narrow AI into technologies many of us use every day--like our smartphones, TVs, cars, and vacuum cleaners, to name just a few. But it's also possible that we're riding a wave of short-term progress in AI that will soon become part of the ebb and flow in advancement, funding, and sentiment that has characterized the field since its founding in 1956. AI has fallen short of many predictions made over the last few decades; 2020, for example, was heralded by many as the year self-driving cars would start filling up roads, seamlessly ferrying passengers around as they sat back and enjoyed the ride.