Robots in the workplace can perform hazardous or even 'impossible' tasks, e.g., toxic-waste clean-up, desert and space exploration, and more. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
LG plans to harness Microsoft's artificial intelligence expertise to improve its Advanced Driver Assistance Systems, Driver Status Monitoring Camera, and Multi Purpose Front Camera -- parts that it said last year it was providing to an undisclosed "premium German automaker". Meanwhile, Azure's Data Box service will help LG's self-driving platform learn and evolve even faster at its testing grounds, the company said. "Road and traffic patterns in cities that would normally require more than a full day for [self-driving] systems to comprehend would take only minutes with Azure," declared LG. It could teach LG's software to distinguish between pedestrians and objects and to learn the driving patterns of other vehicles on the road. Like its Korean rival Samsung, LG is also chasing the billions of dollars up for grabs in the automotive parts industry.
Reinforcement learning (RL), the branch of machine learning that relies on penalties and rewards, can be a powerful technique for teaching machines to adapt to new environments. DeepMind's AlphaGo used it to defeat the world's best Go player despite never having played him before. It has also shown promise in creating robots that can perform under changing conditions. But the technique has its limitations: it requires a machine to blunder around as it slowly refines its actions over time.
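The penalty-and-reward loop described above can be sketched with tabular Q-learning, the textbook form of RL. This is a toy illustration on an invented five-state corridor, not how AlphaGo or any production system is implemented; all names and constants here are assumptions for the sketch.

```python
import random

random.seed(0)

N_STATES = 5                 # hypothetical corridor: states 0..4, reward at state 4
ACTIONS = [-1, +1]           # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: one row per state, one value per action
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: penalise every move, reward reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + ACTIONS[action]))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01   # the penalties and rewards
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit, occasionally "blunder around"
        a = random.randrange(2) if random.random() < EPSILON else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # standard Q-learning update: nudge Q toward reward + discounted future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy in every non-goal state should be "move right"
policy = [Q[s].index(max(Q[s])) for s in range(N_STATES - 1)]
print(policy)   # expected: [1, 1, 1, 1]
```

The "slowly refines its actions over time" limitation is visible here: the agent needs hundreds of episodes of trial and error even for a five-state world.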
Right on the heels of Canada introducing new, stricter regulations for drone operations, the US Department of Transportation proposed a new set of rules for drones that would allow the unmanned vehicles to fly over populated areas and operate at night. The proposal also includes a pilot program for drone traffic management that would help integrate the aircraft into the nation's airspace. Under the proposed rules, the Federal Aviation Administration would no longer require drone operators to get waivers to operate at night. Instead, it would require drones flying after twilight to have an anti-collision light that would make them visible for at least three miles. Pilots operating a drone at night would also have to undergo knowledge testing and training before being cleared to fly.
One of the most important developments out of CES 2019 was the momentum behind cellular vehicle-to-everything technology and Ford's plan to roll it out to its global fleet by 2022. Amid robotics, health and wellness gadgets, laptops and a bevy of other devices, Ford's move to adopt Qualcomm's cellular vehicle-to-everything (C-V2X) platform was largely overlooked. A few years from now, I'm willing to bet that Ford's move to roll out C-V2X will be the one technology development from CES 2019 that anyone actually remembers. Simply put, C-V2X is the glue that will bind automobiles, smart cities and infrastructure together to deliver a lot of the promise that has so far been elusive. Here's a primer on C-V2X and what it'll enable.
Although many will feel that one of life's pleasures is being taken from them, there's a compelling argument that the introduction of self-driving cars will lead to increased road safety: fewer traffic accidents will occur, predictive software will be able to monitor traffic flow into cities accurately, and more sophisticated traffic-light systems will help reduce congestion at key junctions. AI undoubtedly offers an exciting future for us all, thanks to its potential for automation and increased efficiency.
Today's cars negatively affect our climate, our time, our physical abilities, our finances and our living and working environments. Added to this is the safety aspect of driving, which relies on a person performing well: human error (including, but not limited to, boredom, slow reaction times, limited attention span, tiredness and mood) is the cause of roughly 90% of road accidents. The International Journal of Intelligent Unmanned Systems recently published a paper discussing the management of single- and multi-lane roads with driverless cars. It argues that by using intelligent systems such as radar, laser, GPS, odometry and computer vision, the 90% of road accidents mentioned above can be eliminated altogether. It also states that by embedding Spatial Grasp Language (SGL), the resulting infrastructure can mitigate traffic increases by slowing down and 'pre-empting' traffic hotspots, as well as safely navigating past broken-down vehicles, changing lanes and overtaking slower vehicles.
Computer vision is the academic term for a machine's ability to acquire and analyze visual information on its own and then make decisions based on it. That information can consist of photographs and videos, but more broadly may include "images" from thermal or infrared sensors and other sources. Computer vision is already used for many purposes; at the consumer level, remote-control drones rely on it to avoid obstacles, as do vehicles from Tesla and Volvo, among others. Computer vision allows computers, and by extension robots, other computer-controlled vehicles, and everything from factory and farm equipment to semi-autonomous cars and drones, to run more efficiently, intelligently and even safely. In any case, its significance has become all the more evident in a world deluged with digital images.
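At its lowest level, "analyzing visual information" means running filters over a grid of pixel values. A minimal sketch of that idea, assuming nothing beyond NumPy: convolving a tiny synthetic image with a Sobel kernel to locate a vertical edge. This is a toy, not any vendor's obstacle-avoidance pipeline.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'valid' 2-D sliding-window filter (cross-correlation, as in most CV libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 6x6 image: dark left half, bright right half -> one vertical edge
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# Sobel kernel for horizontal intensity gradients
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(img, sobel_x)
print(edges[0])   # strongest response where the dark/bright boundary sits: [0. 4. 4. 0.]
```

Real systems stack millions of such learned filters (convolutional networks) rather than one hand-written kernel, but the pixel-in, decision-out flow is the same.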
There is a lot of hype around the accomplishments of artificial intelligence. Nextbigfuture keeps you up to date on the achievements made in AI. Two hours ago, Nextbigfuture published what IBM has done with AI. We marveled at our own magnificence… Waze and Google Maps use a lot of artificial intelligence and real-time updates to provide the best driving directions.
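Underneath the AI and live-traffic layers, apps like Waze and Google Maps still rest on classic shortest-path search over a weighted road graph. A minimal sketch with Dijkstra's algorithm, using an invented three-road network (the names and travel times are illustrative only):

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) for the cheapest route from start to goal."""
    pq = [(0, start, [start])]          # priority queue ordered by accumulated cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

# Edge weights = travel minutes; a real-time traffic update just rewrites a weight.
roads = {
    "home":     {"highway": 5, "backroad": 9},
    "highway":  {"office": 10},
    "backroad": {"office": 4},
}
print(dijkstra(roads, "home", "office"))   # → (13, ['home', 'backroad', 'office'])
```

The AI in production routing goes into predicting those edge weights (ETA models, live congestion), not into replacing the graph search itself.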
Why: With 58 percent of respondents to a 2018 NASCIO survey expecting artificial intelligence and machine learning to be the most impactful emerging technologies over the next three to five years, NASCIO investigates how the technology is currently being used by state governments. Findings: A recent Deloitte report outlines four uses of automation: relieving humans of day-to-day tasks; splitting up tasks so that computers do some of the work while humans supervise; replacing work done by humans; and augmenting work to make humans more efficient or effective at their jobs. Robotic process automation is making inroads in both back-office and front-office functions, the NASCIO study finds, saving between 40 and 70 percent on labor costs and completing work with near-zero error rates. The North Carolina Innovation Center, for example, is using chatbots to split up some of its help desk work, and Mississippi's citizen-facing chatbot can respond to over 100 inquiries. The Minnesota Pollution Control Agency is using AI to import real-time weather information, crunch the numbers and develop basic analysis that meteorologists review.
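The citizen-facing chatbots described in the study are, at their simplest, intent matchers over canned answers. A stripped-down sketch of that pattern; the intents and reply strings below are invented for illustration and are not any state's actual system:

```python
import string

# Hypothetical intent table: keyword tuples mapped to canned answers
INTENTS = {
    ("renew", "license"): "Driver's licenses can be renewed through the online portal.",
    ("office", "hours"):  "State offices are open 8am-5pm, Monday through Friday.",
    ("pay", "tax"):       "Property taxes can be paid through the county tax portal.",
}

def answer(question):
    # Normalize: lowercase and strip punctuation before matching keywords
    words = set(question.lower().translate(
        str.maketrans("", "", string.punctuation)).split())
    # Pick the intent whose keywords overlap the question the most
    best = max(INTENTS, key=lambda keywords: len(words & set(keywords)))
    if not words & set(best):
        return "Sorry, let me route you to a human agent."
    return INTENTS[best]

print(answer("What are your office hours?"))
# → State offices are open 8am-5pm, Monday through Friday.
```

Scaling this from three intents to the hundred-plus inquiries the study mentions is mostly a matter of growing the intent table (or swapping keyword overlap for a trained classifier); the fallback to a human agent is what makes this the "split up tasks and supervise" model of automation.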
It's been a long day. As you ride home from the office, you start to nod off. You close your eyes as the self-driving car merges onto the highway. When you're zonked out 15 minutes later, the car changes your route because of traffic, and eventually you wake up at your destination. That's the dream of autonomous cars -- and some very smart people, including Google's Sergey Brin, thought they'd already be driving people around public streets by now.