If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Science fiction lovers know Stanley Kubrick's film 2001: A Space Odyssey as one of the defining movies of its genre, not only for its visual effects but also for its plot: HAL, the onboard AI of a spacecraft sent to investigate a possible sign of alien life, becomes problematic when it makes up its own mind and breaks the first law of robotics as stated by Isaac Asimov: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." HAL does so in an attempt to protect itself. The reports above, of AI algorithms finding ways outside the expected bounds and in a sense cheating on the challenges given to them, show that a scenario like the one in Kubrick's film is not far-fetched. This becomes understandable once we consider that many current AI systems place no restrictions on the tools the AI may use.
In the previous article, we studied TensorFlow, its functions, and its Python implementation. In this article, we will study Artificial Intelligence, more popularly known as AI. I believe that if we can relate a concept to ourselves or our lives, we have a much better chance of understanding it, so I will try to explain everything by relating it to humans.
As a proof of concept, engineers used these new actuators to build a soft, battery-powered robot that can walk untethered on flat surfaces and move objects. They also built a soft gripper that can grasp and pick up small objects. The team, led by UC San Diego mechanical and aerospace engineering professor Shengqiang Cai, published the work Oct. 11 in Science Advances. A problem with most soft actuators is that they require bulky setups, because their movements are controlled by pumping air or fluids through internal chambers.
Our work published recently in Science Robotics describes a new form of computer, ideally suited to controlling soft robots. Our Soft Matter Computer (SMC) is inspired by the way information is encoded and transmitted in the vascular system. Soft robotics has exploded in popularity over the last decade. In part, this is because robots made with soft materials can easily adapt and conform to their environment. This makes soft robots particularly suited to tasks that require a delicate touch, such as handling fragile materials or operating close to the (human) body.
YOLO is a social robot designed and developed to stimulate creativity in children through storytelling activities. Children use it as a character in their stories. This article details the artificial intelligence software developed for YOLO. The software schedules among several Creativity Behaviors, searching for those that stimulate creativity most effectively. YOLO can choose between convergent and divergent thinking techniques, two important processes of creative thought. These techniques were developed based on psychological theories of creativity development and on research from creativity experts who work with children. Additionally, the software allows the creation of Social Behaviors that enable the robot to behave as a believable character. On top of our framework, we built three main Social Behaviors: Exuberant, Aloof, and Harmonious. These behaviors are meant to ease immersive play and the process of character creation. The three Social Behaviors were based on psychological theories of personality and developed using children's input during co-design studies. Overall, this work presents an attempt to design, develop, and deploy social robots that nurture intrinsic human abilities, such as the ability to be creative.
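The description above pairs a creativity technique (convergent or divergent) with one of three Social Behaviors. YOLO's actual software is not shown here, so the following is only a minimal illustrative sketch of how such a scheduler might combine the two decisions; the story phases, profile names as dictionary keys, and expression parameters (`motion_energy`, `light_brightness`) are assumptions invented for the example:

```python
# Hypothetical expression parameters for the three Social Behaviors
# described in the article (values are illustrative, not YOLO's real ones).
SOCIAL_PROFILES = {
    "Exuberant":  {"motion_energy": 0.9, "light_brightness": 0.9},
    "Aloof":      {"motion_energy": 0.2, "light_brightness": 0.3},
    "Harmonious": {"motion_energy": 0.6, "light_brightness": 0.6},
}

def schedule_behavior(story_phase, profile="Harmonious"):
    """Pick a creativity technique for the current story phase and
    pair it with the active Social Behavior's expression parameters."""
    # Divergent thinking opens up ideas early in a story;
    # convergent thinking narrows them down later.
    technique = "divergent" if story_phase == "ideation" else "convergent"
    return {"technique": technique, **SOCIAL_PROFILES[profile]}
```

A scheduler like this could then be called once per storytelling phase, e.g. `schedule_behavior("ideation", "Exuberant")`, to produce the robot's next behavior configuration.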
Traditional control methods are inadequate in many deployment settings involving control of Cyber-Physical Systems (CPS). In such settings, CPS controllers must operate and respond to unpredictable interactions, conditions, or failure modes. Dealing with such unpredictability requires executive and cognitive control functions that allow for planning and reasoning. Motivated by the sport of drone racing, this dissertation addresses these concerns for state-of-the-art flight control by investigating the use of deep neural networks to bring essential elements of higher-level cognition to the construction of low-level flight controllers. This thesis reports on the development and release of an open source, full solution stack for building neuro-flight controllers. The stack consists of a methodology for constructing a multicopter digital twin for synthesizing a flight controller unique to a specific aircraft, a tuning framework for implementing training environments (GymFC), and a firmware for the world's first neural-network-supported flight controller (Neuroflight). GymFC's novel approach fuses the digital-twinning paradigm with flight control training to provide seamless transfer to hardware. Additionally, this thesis examines alternative reward functions as well as changes to the software environment that bridge the gap between simulation and real-world deployment environments. The work summarized in this thesis demonstrates that reinforcement learning can be leveraged to train neural network controllers capable not only of maintaining stable flight but also of performing precision aerobatic maneuvers in real-world settings. As such, this work provides a foundation for developing the next generation of flight control systems.
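The core loop behind reinforcement-learning flight control is an agent interacting with a simulated aircraft and receiving a reward that penalizes tracking error. GymFC's real interface is much richer than this, so the sketch below is only a toy stand-in under stated assumptions: the first-order rate dynamics, the stub environment class, and the proportional "policy" (standing in for a trained neural network) are all invented for illustration:

```python
import random

class ToyAttitudeEnv:
    """Toy stand-in for a multicopter rate-control environment.
    (GymFC's real observation/action spaces are richer; illustrative only.)"""
    def __init__(self, target_rate=0.0):
        self.target = target_rate
        self.rate = 0.0

    def reset(self):
        self.rate = random.uniform(-1.0, 1.0)
        return self.rate - self.target  # observation: angular-rate error

    def step(self, action):
        # Crude first-order dynamics: the motor command nudges the rate,
        # with a little process noise.
        self.rate += 0.5 * action + random.gauss(0, 0.01)
        error = self.rate - self.target
        reward = -abs(error)            # reward shaping: penalize tracking error
        done = abs(error) > 2.0         # episode ends if the craft diverges
        return error, reward, done

def run_episode(env, policy, max_steps=50):
    """Roll out one episode and return the cumulative reward."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

# A trivial proportional controller standing in for a trained network.
random.seed(0)
score = run_episode(ToyAttitudeEnv(), lambda err: -0.8 * err)
```

In an actual training setup, the lambda would be replaced by a neural network whose weights are updated by an RL algorithm to maximize the episode return; the reward function is the main lever the thesis describes experimenting with.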
In particular, small soft robots at the millimeter scale are of practical interest as they can be designed as a combination of miniature actuators driven simply by pneumatic pressure. They are also well suited for navigation in confined areas and manipulation of small objects. However, scaling down soft pneumatic robots to millimeters yields features more than an order of magnitude finer. The design complexity of such robots demands great delicacy when they are fabricated with traditional processes such as molding and soft lithography. Although emerging 3D printing technologies like digital light processing (DLP) offer high theoretical resolutions, producing microscale voids and channels without causing clogging remains challenging.
Researchers at the Korea Advanced Institute of Science and Technology, or KAIST, have developed an ultra-thin actuator for soft robotics. The artificial muscles, recently reported in the journal Science Robotics, were demonstrated with a robotic blooming flower brooch, dancing robotic butterflies, and fluttering tree leaves on a kinetic art piece. Actuators are the robotic equivalents of muscles, expanding, contracting, or rotating like muscle fibers in response to a stimulus such as electricity. Engineers around the world are striving to develop more dynamic actuators that respond quickly, can bend without breaking, and are very durable. Soft robotic muscles could have a wide variety of applications, from wearable electronics to advanced prosthetics.
For decades, technological innovation has been revolutionizing businesses, offering them innumerable long-term benefits and growth. Now, these technologies are set to transform structures that form the foundation of cities and their development. Comparing the buildings of the present to what they were even a few years ago will show massive changes. Modern buildings are more than just four walls and a roof. In fact, building walls now even have ears and eyes, all thanks to digital technologies.
An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand. In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators -- devices that mechanically control robotic systems in response to electrical signals -- that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when activated, however, it portrays the famous Edvard Munch painting "The Scream."