How does one deal with the unexpected? Our world is full of surprises, and we humans are often able to correctly identify a problem and respond appropriately. Consider a new driver encountering their first traffic circle, a student experiencing a hard drive failure in the middle of an assignment, or an unexpected question asked during a job interview. In situations where we have a goal (e.g., reach a destination or submit a completed assignment), we may need to alter our original plan when the unexpected occurs. Could we enable autonomous, artificially intelligent agents to do the same?
A long-standing area of artificial intelligence is the field of automated planning. The traditional planning problem is to generate a sequence of actions given a concrete, specific goal (e.g., be home by dinnertime) and a set of specific actions (e.g., drive-car, fill-gas-tank, walk). Generating plans that are efficient, and ideally optimal, from start to finish under different circumstances (e.g., delayed effects) is an active area of research. After a plan has been generated, the environment may change during its execution. For example, a robot retrieving packages in a warehouse may discover it has dropped its package, or another robot may have broken down due to a hardware failure and be blocking its path. How can a robot (or any A.I. agent) know something unexpected has happened without knowing all possible future failures?
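To make the classical setting concrete, here is a minimal sketch of a planner as search over states. The facts and operators (fill-gas-tank, drive-home, at-station, and so on) are invented for this illustration; real planners use richer action languages and far better search algorithms.

```python
from collections import deque

# Each action: (name, preconditions, facts added, facts deleted).
ACTIONS = [
    ("fill-gas-tank", {"at-station", "tank-empty"}, {"tank-full"}, {"tank-empty"}),
    ("drive-home",    {"tank-full"},                {"at-home"},   {"at-station"}),
]

def plan(initial, goal):
    """Breadth-first search from the initial state to any state containing the goal."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # every goal fact holds in this state
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:       # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no plan achieves the goal

print(plan({"at-station", "tank-empty"}, {"at-home"}))
# → ['fill-gas-tank', 'drive-home']
```

Note that the planner commits to this sequence up front; nothing in the search anticipates the tank springing a leak mid-drive, which is exactly the gap the rest of this piece is about.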
Fundamental research on autonomy aims to find general approaches to this problem. One approach is to generate expectations: facts that should be true at different stages of a plan's execution. When an expectation is violated, a discrepancy arises between the expected and perceived facts. A newer trend in autonomy is to include goal reasoning capabilities: in the event of a failure, the original goal may no longer be warranted, so robust autonomous agents may need to generate and change their goals in response to a changing environment.
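Expectation monitoring can be sketched very simply: compare the facts a plan step expects against the agent's percepts. The fact names below (holding-package, gripper-empty, etc.) are hypothetical, chosen to match the warehouse example above.

```python
def detect_discrepancies(expected, perceived):
    """Compare a step's expected facts against the agent's percepts.

    Returns (violated, surprising): expectations that failed to hold,
    and observations that were not anticipated.
    """
    violated = expected - perceived
    surprising = perceived - expected
    return violated, surprising

# After executing "pick-up-package", the robot expects to be holding it.
expected = {"holding-package", "at-shelf-3"}
perceived = {"at-shelf-3", "gripper-empty"}   # in fact, the package was dropped

violated, surprising = detect_discrepancies(expected, perceived)
if violated:
    # Discrepancy detected: trigger replanning, or goal reasoning
    # (e.g., adopt a new goal such as "recover-package").
    print("discrepancy:", violated)
```

The hard research questions sit upstream of this comparison: which expectations to generate for each plan step, and which discrepancies warrant replanning versus changing the goal itself.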
Autonomous systems still have a long way to go and open research questions on autonomous systems remain. Funding agencies consistently seek new research on autonomy for diverse operations ranging from cybersecurity to military and vehicular autonomy. What will autonomous systems be like in the future? Will we achieve autonomous agents that can handle any situation they encounter?
- Dustin Dannenhauer
Janielle and Elijah Gilmour, ages 12 and 10, got the surprise of a lifetime in April when foster parents Courtney and Tom Gilmour announced news of their official adoption date during a visit to Walt Disney World. "We planned it as soon as we got the [official] date, which was the Friday before our trip," Courtney tells Fox News. The Gilmours had been foster parents to Janielle and Elijah for three years. "We are all happy Gilmours," Courtney says.
Capable of both intra-city and inter-city route planning, route optimization is technology's answer to the famous Traveling Salesman Problem. Predicting disruptions, and training AI to learn from contingency plans developed by humans, enables automated corrective action in the future. Driverless vehicles alone have the potential to completely change the way we transport products, and they're closer than we think. AI armed with predictive analytics can analyze the massive amounts of data generated by supply chains and help organizations move to a more proactive form of supply chain management.
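As a toy illustration of the underlying routing problem, here is the nearest-neighbor heuristic, a standard TSP baseline — greedy and far from optimal, and not what any particular vendor ships, but it shows the shape of the computation:

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy nearest-neighbor heuristic for a TSP-style delivery route.

    Always drives to the closest unvisited stop, then returns to the depot.
    Simple and fast, but can be far from the optimal tour.
    """
    route, remaining, current = [depot], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # close the tour
    return route

# Depot at the origin, three delivery stops on a grid.
print(nearest_neighbor_route((0, 0), [(5, 0), (1, 1), (0, 4)]))
# → [(0, 0), (1, 1), (0, 4), (5, 0), (0, 0)]
```

Production route optimizers replace this greedy choice with techniques such as local search or mixed-integer programming, and layer in time windows, vehicle capacities, and live traffic.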
The Apollo 13 mission, which set off on April 11, 1970, was meant to culminate in a third moon landing, with Lovell and Haise voyaging to the lunar surface while Swigert orbited in the command module Odyssey. But just under 56 hours into the flight, an oxygen tank explosion caused a major loss of electrical power to the command and service module, forcing the crew to cancel the lunar landing and move into the Aquarius lunar module to return to Earth. The flight plan for the ill-fated mission, which had to be drastically altered following the 'Houston, we have had a problem' emergency on board, has been unearthed. The drama that unfolded during the mission was re-told in the Hollywood film starring Tom Hanks as Lovell, Kevin Bacon as Swigert and the late Bill Paxton as Haise.
In threat trapping, passive technologies identify malware using models of bad behavior, such as signatures. Unfortunately, developing accurate malware detection products based on good-behavior modeling is not easy, and no company has enough human resources to manually evaluate a large number of alerts about possible security threats. When AI applies both bad- and good-behavior models, it reduces the number of false positives to a manageable amount.
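A toy sketch of how combining the two models can suppress alerts — the signatures, process names, and event fields below are all invented for illustration, not drawn from any real product:

```python
# Hypothetical bad-behavior model: known-malicious command-line fragments.
KNOWN_BAD_SIGNATURES = {"powershell -enc", "mimikatz"}

def baseline(events):
    """Learn 'good behavior' as the set of processes seen during normal operation."""
    return {e["process"] for e in events}

def triage(event, normal_processes):
    """Alert only when a bad signature fires OR the event deviates from baseline."""
    if any(sig in event["cmdline"] for sig in KNOWN_BAD_SIGNATURES):
        return "malicious"                   # bad-behavior model fires
    if event["process"] not in normal_processes:
        return "anomalous"                   # deviates from the good-behavior model
    return "benign"                          # suppressed: fewer false positives

normal = baseline([{"process": "chrome.exe"}, {"process": "excel.exe"}])
print(triage({"process": "chrome.exe", "cmdline": "chrome.exe --new-tab"}, normal))
# → benign
```

The point of the combination is in the last branch: events that match neither a signature nor an anomaly are suppressed automatically instead of landing in an analyst's queue.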
A combination of factors contributes to this renaissance: more access to data, increased computing power to store and process that data, advances in algorithms and techniques, and the rise of open-source tools that lower the barrier to adoption. While the initial waves were about simple automation of existing business processes, with technology playing a supporting role in the organization, the recent disruptions are bringing technology closer and closer to the core of every business. The current trend is the intelligent car, which combines the benefits of sensors, connectivity, and advances in AI algorithms to enable self-driving vehicles. A truly end-to-end planning system goes beyond the existing silos of production planning, material planning, demand planning, transportation planning, and so on.
Not only are HRMS changing talent management practices; they are also helping revolutionize organizational design. HR is seeing a proliferation of social technological systems that help analyze and understand the large amounts of data that social media makes available. One thing is for sure: automation will eliminate transactional work, freeing up human workers' bandwidth for more value-adding work and making HR a more strategic advisor to the business. Only when HR and HR tech meet workforce needs can we leapfrog toward organizational goals.
The actual problem is that I have a 15-element input tensor (describing the scheduling scenario) and I want to generate a 4-element output tensor that gives me some parameters for my algorithm. I randomly generated a data set (labeled, as far as I understand) that contains input and output elements and connects them with a score, which should be as low (good) as possible. So if I want a model that produces output good enough for the algorithm to compute a schedule with as good a score as possible, how can I design a NN that can be trained on this data and later gives me good output for any scenario? The training data has 3 tables (Input, Output, Score) and was generated randomly over the weekend.
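One possible starting point, sketched in plain Python to show the shapes only — the hidden width, activation, and initialization here are arbitrary guesses, and in practice you would use a framework and decide how the score enters the loss (since the label is a score rather than a known-correct output, this is closer to learning a surrogate model than ordinary supervised regression):

```python
import random

random.seed(0)

IN, HIDDEN, OUT = 15, 32, 4  # input/output sizes from the question; hidden width is a guess

def init_layer(n_in, n_out):
    """Small random weight matrix, stored as a list of rows."""
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]

W1, W2 = init_layer(IN, HIDDEN), init_layer(HIDDEN, OUT)

def relu(x):
    return x if x > 0 else 0.0

def forward(x):
    """Map a 15-element scenario vector to a 4-element parameter vector."""
    h = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]   # hidden layer
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]      # linear output

params = forward([0.5] * IN)
print(len(params))  # 4
```

Untrained, this only demonstrates the 15-in, 4-out architecture; training would mean either regressing directly onto the recorded Output rows weighted by Score, or fitting a network that predicts Score from (Input, Output) and then searching that surrogate for low-score outputs.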
But the task he assigned the class is a very real and illustrative type of tech-industry labor, not unlike the work of the Mumbai click farm Jared employed last season to boost Pied Piper's user metrics. Because many modern AI advancements are thanks to neural networks, and because those networks must be trained with countless examples to improve over time, companies often need human beings to help the software make sense of the data. Launched in beta in fall 2015, M acted like a fully automated personal assistant, but it required a team of human contractors down in Menlo Park to take control of the bot when, say, someone asked it to call Amazon customer service. The company's long-term goal is to build AI that can automate away some of the more rote behaviors and routine demands, while humans would increasingly be used only for tasks the software could never perform on its own, like calling Amazon customer service.
Importantly, this enables users to teach robots skills that can be automatically transferred to other robots with different "kinematics" (ways of moving) -- a key time- and cost-saving measure for companies that want a range of robots to perform similar actions. The team tested the system on Optimus, a new two-armed robot designed for bomb disposal, which they programmed to perform tasks such as opening doors, transporting objects, and extracting objects from containers. One challenge was that the constraints that could previously be learned from demonstrations weren't accurate enough to enable robots to precisely manipulate objects. By matching keyframes from a demonstration to situations in the knowledge base, the robot can automatically suggest motion plans for the operator to approve or edit as needed. "By using these learned constraints in a motion planner, we can make systems that are far more flexible than those which just try to mimic what's being demonstrated." Shah says that advanced LfD methods could prove important in time-sensitive scenarios such as bomb disposal and disaster response, where robots are currently tele-operated at the level of individual joint movements.