A small robot is roving around a massive U.S. nuclear waste site to gather critical samples of potential air and water contamination after an emergency was declared Tuesday. The machine was deployed after a tunnel that stores rail cars filled with radioactive waste partially collapsed at the Hanford Nuclear Reservation in Washington state. The mishap raised fears of a radiation leak at the nation's most contaminated nuclear site, though officials said there was no indication of a plutonium radiation release as of 2:20 p.m. PDT. The air- and soil-sampling robot is now monitoring the scene, sampling contamination in the air and on the ground for any changes.
Few people ever need to deal with a stricken nuclear reactor, but that skill could turn out to be important for the evolution of smarter robots. In Pomona, California, this week, 25 of the world's most advanced humanoid robots will take part in a contest inspired by the challenge of stabilizing a nuclear reactor that is leaking dangerous radioactive material. Teams from universities across the U.S., as well as Japan, China, and Europe, are bringing robots that will try to walk across piles of rubble, climb ladders, operate power tools, and drive buggies, among other chores. Each challenge is inspired by something that might have helped stabilize the Fukushima Daiichi nuclear plant in Japan after it was damaged by an earthquake in 2011. Considerable academic kudos will go to whichever team completes the most tasks within the allotted time.
"People love the wizards in Harry Potter or 'Lord of the Rings,' but this is real," said Gary Bradski, a Silicon Valley artificial intelligence specialist and a co-founder of Industrial Perception Inc., a company that is building a robot able to load and unload trucks. "A new species, Robo sapiens, are emerging," he said. The debut of Atlas on Thursday was a striking example of how computers are beginning to grow legs and move around in the physical world. Although robotic planes already fill the air and self-driving cars are being tested on public roads, many specialists in robotics believe that the learning curve toward useful humanoid robots will be steep. Still, many see them fulfilling the needs of humans -- and the dreams of science fiction lovers -- sooner rather than later.
How do you bring a bad drone down? New kinds of drones that can fly autonomously can't be stopped with traditional techniques, the US Air Force has warned. It's put out a call for ideas to yank drones right out of the sky. Millions of drones are sold worldwide each year. Most are flown for fun, but a few have been put to criminal use: carrying cameras to bedroom windows, flying into secure airspace over nuclear power stations, and smuggling contraband into prisons.
Lawrence Livermore National Laboratory (LLNL) has purchased a new brain-inspired supercomputing platform developed by International Business Machines Corp (NYSE:IBM). Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses while consuming only the energy equivalent of a tablet computer. The brain-like, neural network design of the IBM neuromorphic system is able to run complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips. LLNL will receive a 16-chip TrueNorth system representing a total of 16 million neurons and 4 billion synapses. The new system will be used to explore new computing capabilities important to the National Nuclear Security Administration (NNSA) missions in cybersecurity, stewardship of the nation's nuclear weapons stockpile and nonproliferation.
Lawrence Livermore National Laboratory (LLNL) today announced it will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips. The new system will be used to explore new computing capabilities important to the National Nuclear Security Administration's (NNSA) missions in cybersecurity, stewardship of the nation's nuclear weapons stockpile and nonproliferation. NNSA's Advanced Simulation and Computing (ASC) program will evaluate machine-learning applications, deep-learning algorithms and architectures and conduct general computing feasibility studies.
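The announced totals can be put in per-chip terms with a quick back-of-the-envelope calculation (the 16-chip, 16-million-neuron, 4-billion-synapse, and 2.5-watt figures are from the announcement; the per-chip division and efficiency ratio are our own arithmetic):

```python
# Figures stated in the LLNL/IBM announcement.
CHIPS = 16
TOTAL_NEURONS = 16_000_000
TOTAL_SYNAPSES = 4_000_000_000
POWER_WATTS = 2.5

# Derived per-chip and per-watt quantities (our division, not from the source).
neurons_per_chip = TOTAL_NEURONS // CHIPS         # 1,000,000 neurons per chip
synapses_per_chip = TOTAL_SYNAPSES // CHIPS       # 250,000,000 synapses per chip
synapses_per_watt = TOTAL_SYNAPSES / POWER_WATTS  # synaptic scale per watt

print(f"{neurons_per_chip:,} neurons/chip, {synapses_per_chip:,} synapses/chip")
print(f"{synapses_per_watt:.1e} synapses per watt")
```

At 2.5 watts for the whole 16-chip system, the energy budget works out to well under a quarter watt per chip, which is what makes the "hearing aid battery" comparison plausible.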
A new low-power, "brain-inspired" supercomputing platform based on IBM chip technology will soon start exploring deep learning for the U.S. nuclear program. Lawrence Livermore National Laboratory announced on Tuesday that it has purchased the platform, based on the TrueNorth neurosynaptic chip IBM introduced in 2014. It will use the technology to evaluate machine-learning and deep-learning applications for the National Nuclear Security Administration. The computer will process data with the equivalent of 16 million neurons and 4 billion synapses and consume roughly as much energy as a tablet PC. Also included will be an accompanying ecosystem consisting of a simulator; a programming language; an integrated programming environment; a library of algorithms and applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.
Atlas, the humanoid robot created by Alphabet company Boston Dynamics, can open doors, balance while walking through the snow, place objects on a shelf and pick itself up after being knocked down. The new version of Atlas is smaller and more nimble than its predecessor. It's fully mobile too -- the previous version had to be tethered to a computer. Atlas was created to perform disaster recovery in places unsafe for humans, such as damaged nuclear power plants. The robot made its debut in 2013 during a competition held by the Defense Advanced Research Projects Agency.
Branlat, Matthieu (The Ohio State University) | Woods, David D. (The Ohio State University)
A large body of research describes the importance of adaptability for systems to be resilient in the face of disruptions. However, adaptive processes can be fallible, either because systems fail to adapt in situations requiring new ways of functioning, or because the adaptations themselves produce undesired consequences. A central question is then: how can systems better manage their capacity to adapt to perturbations, and constitute intelligent adaptive systems? Based on studies conducted in different high-risk domains (healthcare, mission control, military operations, urban firefighting), we have identified three basic patterns of adaptive failures or traps: (1) decompensation – when a system exhausts its capacity to adapt as disturbances and challenges cascade; (2) working at cross-purposes – when sub-systems or roles exhibit behaviors that are locally adaptive but globally maladaptive; (3) getting stuck in outdated behaviors – when a system over-relies on past successes although conditions of operation change. The identification of such basic patterns then suggests ways in which a work organization, as an example of a complex adaptive system, needs to behave in order to see and avoid or recognize and escape the corresponding failures. The paper will present how expert practitioners exhibit such resilient behaviors in high-risk situations, and how adverse events can occur when systems fail to do so. We will also explore how various efforts in research related to complex adaptive systems provide fruitful directions to advance both the necessary theoretical work and the development of concrete solutions for improving systems’ resilience.
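The first pattern, decompensation, can be illustrated with a deliberately simple toy model (our illustration, not the authors'): a system with a fixed adaptive capacity absorbs disturbances, recovering a little between each one; decompensation occurs when cascading demands outpace recovery and capacity is exhausted.

```python
# Toy model of decompensation (illustrative only; all parameters are invented).
# Each disturbance consumes adaptive capacity; capacity recovers slowly
# between disturbances, up to a fixed maximum.
def simulate(disturbances, capacity=10.0, max_capacity=10.0, recovery_rate=0.5):
    """Return the step at which capacity is exhausted, or None if it never is."""
    for step, demand in enumerate(disturbances):
        capacity = min(max_capacity, capacity + recovery_rate) - demand
        if capacity <= 0:
            return step  # decompensation: no adaptive capacity left
    return None

# A steady trickle of small demands is absorbed indefinitely,
# but a cascade of growing demands exhausts the system.
print(simulate([0.4] * 20))       # → None (recovery keeps pace)
print(simulate([1, 2, 3, 4, 5]))  # → 4 (capacity exhausted at step 4)
```

The same skeleton hints at the other two patterns: working at cross-purposes would appear as sub-systems each minimizing their own local demand while raising the total, and getting stuck in outdated behaviors as a recovery policy tuned to a disturbance pattern that no longer holds.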
Laffey, Thomas J., Cox, Preston A., Schmidt, James L., Kao, Simon M., Read, Jackson Y.
Real-time domains present a new and challenging environment for the application of knowledge-based problem-solving techniques. However, a substantial amount of research is still needed to solve many difficult problems before real-time expert systems can enhance current monitoring and control systems. In this article, we examine how the real-time problem domain differs significantly from those domains that have traditionally been addressed by expert systems. We survey the current state of the art in applying knowledge-based systems to real-time problems and describe the key issues pertinent to a real-time domain. The survey is divided into three areas: applications, tools, and theoretical issues. From the results of the survey, we identify a set of real-time research issues that have yet to be solved and point out limitations of current tools for real-time problems. Finally, we propose a set of requirements that a real-time knowledge-based system must satisfy.
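One requirement the abstract implies, that inference in a real-time domain must respect a response deadline, can be sketched with a toy rule evaluator (a hypothetical illustration; the rule format, function names, and thresholds are ours, not from the article):

```python
import time

# Minimal sketch: rules are (condition, action) pairs evaluated in priority
# order. Evaluation stops when the deadline expires, so the system returns a
# best-effort partial result instead of missing its response window entirely.
def bounded_inference(rules, facts, deadline_s):
    """Fire matching rules until done or the deadline passes; return actions fired."""
    start = time.monotonic()
    fired = []
    for condition, action in rules:
        if time.monotonic() - start > deadline_s:
            break  # deadline reached: degrade gracefully with partial results
        if condition(facts):
            fired.append(action)
    return fired

# Hypothetical monitoring rules, highest priority first.
rules = [
    (lambda f: f["temp_c"] > 90, "open_relief_valve"),
    (lambda f: f["pressure_kpa"] > 500, "trigger_alarm"),
]
print(bounded_inference(rules, {"temp_c": 95, "pressure_kpa": 480}, deadline_s=0.01))
# → ['open_relief_valve']
```

Contrast this with a traditional expert system, which would run its inference engine to quiescence regardless of how long that takes; guaranteed bounded response time is one of the key distinctions the article draws for real-time domains.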