Robots in the workplace can perform hazardous or even 'impossible' tasks, such as toxic waste clean-up and desert and space exploration. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
While many financial institutions are still only talking about robotics and artificial intelligence, some have already implemented these technologies, and a few have made significant strides, including cost savings of up to 60 per cent in some areas. Over the last two decades, the spread of technology has changed the entire landscape and the way we perceive the world around us. Banking has undergone a paradigm shift in this technological revolution. As new technologies emerge, they are bringing a host of changes to the industry and generating new forms of employment. Because banking is a critical part of the economy, technological progress in the sector has become an important topic.
About a month ago, headlines flashed 'Gujarat doctor makes history', crediting cardiac surgeon Dr Tejas Patel with conducting the world's first telerobotic surgery on a patient in Ahmedabad. Sitting 32 kilometres away from his patient, a middle-aged woman with a blocked artery at Apex Hospital, Dr Patel guided the robotic arms through a joystick to perform the coronary intervention. The surgery signalled the rumblings of a shift in healthcare. Is robotics the way to go? When the trauma caused by incisions in traditional open surgeries became a point of concern, laparoscopic surgeries grew popular in the '90s.
One of the biggest concerns about driverless vehicles is that their actions will be difficult to predict, especially for pedestrians, cyclists and other vulnerable road users. But Jaguar Land Rover has suggested this might not be an issue in the future, with the development of a system that notifies everyone outside the car which way the vehicle is about to go. The concept projects the direction of travel onto the road ahead, which the car maker says will help people develop a level of trust in autonomous technology. Jaguar Land Rover has given plenty of thought to the safety of future vehicles in recent months. Last October it worked with Guide Dogs for the Blind to develop the best sound for electric vehicles to make so they can be heard by those with visual impairments.
The Future of Work is one of the most crucial topics for a successful transition to a new era. As such, it takes center stage at the World Economic Forum (WEF) this week. Picking up from where we left off in the first part on the interplay between data, automation, and the future of work, we highlight ongoing trends and examine how to approach soft skills and re-skilling. Based on the WEF's latest report on the Future of Jobs, we highlight the major forces at play today. We discuss how these affect the technology behind the job market with Panos Alexopoulos, Head of Ontology at Textkernel.
For all the recent progress in artificial intelligence, industrial robots remain amazingly dumb and dangerous. Sure, they can perform arduous tasks precisely and repetitively, but they cannot respond to variations in their environment or tackle something new. That severely limits just how useful robots can be in the workplace. Nvidia wants to use machine learning to help solve this problem. The world's leading producer of the specialist computer chips that are crucial to artificial intelligence is opening a new robotics lab in Seattle to make the robots that work alongside humans, known as co-bots, smarter and more capable.
Techies and gadget geeks alike have been talking about it for years already, but artificial intelligence made serious waves in 2018, showing up prominently in pop culture and our everyday devices. With companies like Apple, Google, Amazon, and Microsoft investing millions in AI, it will be one of the major themes to look out for at the annual Consumer Electronics Show, which kicks off in Las Vegas next week. CES is an opportunity to showcase the consumer use for that technology, so much of what will be displayed are "smart" devices or "smart" products -- take, for instance, this smart bathroom with voice-enabled lighting technology. While there are dozens of players in the AI space, we can expect that Google Assistant and Amazon's Alexa are going to dominate the show this year. Both voice assistants are compatible with more than 10,000 devices, which -- as Wired noted -- will make the showroom floor quite noisy.
Using deep reinforcement learning, we train control policies for autonomous vehicles leading a platoon of vehicles onto a roundabout. Using Flow, a library for deep reinforcement learning in micro-simulators, we train two policies: one with noise injected into the state and action space, and one without any injected noise. In simulation, the autonomous vehicle learns an emergent metering behavior for both policies, in which it slows to allow for smoother merging. We then directly transfer this policy, without any tuning, to the University of Delaware Scaled Smart City (UDSSC), a 1:25 scale testbed for connected and automated vehicles, and characterize the performance of both policies on the scaled city. We show that the noise-free policy winds up crashing and only occasionally metering, whereas the noise-injected policy consistently performs the metering behavior and remains collision-free, suggesting that the noise helps with the zero-shot policy transfer. Additionally, the transferred, noise-injected policy leads to a 5% reduction in average travel time and a 22% reduction in maximum travel time in the UDSSC. Videos of the controllers can be found at https://sites.google.com/view/iccps-policy-transfer.
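The noise-injection idea described in this abstract can be sketched as a simple environment wrapper that perturbs observations and actions during training. This is a minimal illustration, not Flow's actual API: the `NoiseInjectedEnv` wrapper, the Gaussian noise levels, and the `ToyEnv` stand-in environment are all assumptions made for the example.

```python
import numpy as np


class ToyEnv:
    """Trivial stand-in environment used only to demonstrate the wrapper."""

    def reset(self):
        return np.zeros(3)

    def step(self, action):
        # Returns (observation, reward, done, info), ignoring the action.
        return np.zeros(3), 0.0, False, {}


class NoiseInjectedEnv:
    """Wrap an environment so Gaussian noise is added to both the
    observations the policy sees and the actions it emits, which can
    help a policy transfer from simulation to hardware."""

    def __init__(self, env, obs_noise_std=0.1, act_noise_std=0.1, seed=0):
        self.env = env
        self.obs_noise_std = obs_noise_std
        self.act_noise_std = act_noise_std
        self.rng = np.random.default_rng(seed)

    def reset(self):
        obs = self.env.reset()
        return obs + self.rng.normal(0.0, self.obs_noise_std, size=np.shape(obs))

    def step(self, action):
        # Perturb the action before it reaches the underlying simulator.
        noisy_action = action + self.rng.normal(
            0.0, self.act_noise_std, size=np.shape(action)
        )
        obs, reward, done, info = self.env.step(noisy_action)
        # Perturb the observation before it reaches the policy.
        noisy_obs = obs + self.rng.normal(0.0, self.obs_noise_std, size=np.shape(obs))
        return noisy_obs, reward, done, info
```

Training the "noise-injected" policy then amounts to running any standard RL algorithm against the wrapped environment instead of the raw one, while the "noise-free" policy trains against the raw environment.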
Is it Safe to Drive? Abstract: With recent advances in learning algorithms and hardware development, autonomous cars have shown promise when operating in structured environments under good driving conditions. However, for complex, cluttered and unseen environments with high uncertainty, autonomous driving systems still frequently demonstrate erroneous or unexpected behaviors that could lead to catastrophic outcomes. Autonomous vehicles should ideally adapt to driving conditions; while this can be achieved through multiple routes, it would be beneficial as a first step to be able to characterize driveability in some quantified form. To this end, this paper aims to create a framework for investigating the different factors that can impact driveability. One of the main mechanisms for adapting autonomous driving systems to any driving condition is the ability to learn and generalize from representative scenarios. The machine learning algorithms that currently do so learn predominantly in a supervised manner and consequently need sufficient data for robust and efficient learning. Accordingly, we categorize existing datasets according to use cases, and highlight the datasets that capture complicated and hazardous driving conditions, which can be better used for training robust driving models. Furthermore, by discussing which driving scenarios are not covered by existing public datasets and which driveability factors need more investigation and data acquisition, this paper aims to encourage both targeted dataset collection and the proposal of novel driveability metrics that enhance the robustness of autonomous cars in adverse environments. I. Introduction: Despite being tested in highly controlled settings, autonomous cars still occasionally fail to make correct decisions, often with catastrophic results. According to accident records, these failures are most likely to happen in complex or unseen driving environments.
The fact remains that while autonomous cars can operate well in controlled or structured environments such as highways, they are still far from reliable when operating in cluttered, unstructured or unseen environments. These concerns apply to autonomous vehicles in general. These two different application fields also suggest that driveability could be quantified in different forms, either as a single metric or as a composition of metrics. For example, with ADAS and current Level 2 or 3 autonomy, a scene can simply be defined as driveable if the car can operate safely in autonomous mode. When a non-driveable scene is detected, the autonomous car can hand over control to the human driver in a timely manner.
Self-driving cars are being developed by several major technology companies and carmakers. When a driver slams on the brakes to avoid hitting a pedestrian crossing the road illegally, she is making a moral decision that shifts risk from the pedestrian to the people in the car. Self-driving cars might soon have to make such ethical judgments on their own -- but settling on a universal moral code for the vehicles could be a thorny task, suggests a survey of 2.3 million people from around the world. The largest-ever survey of machine ethics, published today in Nature, finds that many of the moral principles that guide a driver's decisions vary by country. For example, in a scenario in which some combination of pedestrians and passengers will die in a collision, people from relatively prosperous countries with strong institutions were less likely to spare a pedestrian who stepped into traffic illegally.
Those expectations are now hitting speed bumps, according to interviews with eight current and former GM and Cruise employees and executives, along with nine autonomous vehicle technology experts familiar with Cruise. These sources say that some unexpected technical challenges - including the difficulty Cruise cars have in identifying whether objects are in motion - mean that putting GM's driverless cars on the road at large scale in 2019 is looking highly unlikely.