If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
This article proposes a method for mathematical modeling of human movements related to patient exercise episodes performed during physical therapy sessions, using artificial neural networks. The generative adversarial network structure is adopted, whereby a discriminative and a generative model are trained concurrently in an adversarial manner. Different network architectures are examined, with the discriminative and generative models structured as deep subnetworks of hidden layers composed of convolutional or recurrent computational units. The models are validated on a data set of human movements recorded with an optical motion tracker. The results demonstrate the networks' ability to classify new instances of motions and to generate motion examples that resemble the recorded motion sequences.
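The adversarial training scheme in the abstract can be illustrated with a deliberately tiny sketch: a one-parameter "generator" and a logistic "discriminator" trained alternately on scalar data standing in for motion features. Everything below — the data distribution, learning rate, and parameterisation — is a hypothetical toy, not the paper's deep convolutional/recurrent networks:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "real motion" data: scalars drawn near 2.0 (hypothetical).
def sample_real():
    return 2.0 + 0.1 * random.gauss(0, 1)

theta = -1.0     # generator parameter: fake sample = theta + noise
w, b = 0.0, 0.0  # logistic discriminator parameters
lr = 0.05

for step in range(2000):
    x_real = sample_real()
    x_fake = theta + 0.1 * random.gauss(0, 1)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * (1 - d_fake) * w

# After training, theta should have drifted from -1.0 toward the
# real-data mean of 2.0 as the two models play the adversarial game.
```

The alternating updates mirror the concurrent adversarial training the abstract describes, just collapsed to one parameter per model.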
We study the fundamental problem of learning an unknown, smooth probability function via point-wise Bernoulli tests. We provide the first scalable algorithm for efficiently solving this problem with rigorous guarantees. In particular, we prove the convergence rate of our posterior update rule to the true probability function in L2-norm. Moreover, we allow the Bernoulli tests to depend on contextual features, and provide a modified inference engine with provable guarantees for this novel setting. Numerical results show that the empirical convergence rates match the theory, and illustrate the superiority of our approach over the state-of-the-art in handling contextual features.
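The abstract does not reproduce the paper's posterior update rule, so the following sketch only illustrates the basic setting: a standard conjugate Beta-Bernoulli update applied independently at each grid point. It deliberately ignores the smoothness structure and contextual features that the actual algorithm exploits; the target function and sample counts are invented:

```python
import math
import random

random.seed(1)

# Hypothetical true (unknown) smooth probability function on [0, 1].
def f(x):
    return 0.5 + 0.4 * math.sin(math.pi * x)

# A point-wise Bernoulli test at location x returns 1 with probability f(x).
def bernoulli_test(x):
    return 1 if random.random() < f(x) else 0

# Beta(1, 1) prior at each grid point; conjugate posterior update per test.
grid = [i / 10 for i in range(11)]
alpha = {x: 1.0 for x in grid}
beta = {x: 1.0 for x in grid}

for _ in range(500):
    x = random.choice(grid)
    y = bernoulli_test(x)
    alpha[x] += y        # observed a success
    beta[x] += 1 - y     # observed a failure

# Posterior mean estimate of f at each grid point, and worst-case error.
est = {x: alpha[x] / (alpha[x] + beta[x]) for x in grid}
err = max(abs(est[x] - f(x)) for x in grid)
```

Note that the prior alone would give a worst-case error of 0.4 here; the tests pull each point-wise posterior toward the truth, which is the behaviour the paper's L2 convergence result makes rigorous (for its smarter, smoothness-aware update).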
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. ESA astronaut Alexander Gerst welcomed a new face to the Columbus laboratory, thanks to the successful commissioning of technology demonstration Cimon. Short for Crew Interactive Mobile CompanioN, Cimon is a 3D-printed plastic sphere designed to test human-machine interaction in space.
The film Robot and Frank imagined a near-future where robots could do almost everything humans could. The elderly title character was given a "robot butler" to help him continue living on his own. The robot was capable of everything from cooking and cleaning to socialising (and, it turned out, burglary). This kind of science fiction may turn out to be remarkably prescient. As growing numbers of elderly people require care, researchers believe that robots could be one way to address the overwhelming demand.
More than one billion people live in slums around the world. In some developing countries, slum residents make up more than half of the population and lack reliable sanitation services, clean water, electricity, and other basic services. Slum rehabilitation and improvement is therefore an important global challenge, and significant effort and resources have been devoted to this endeavor. These initiatives rely heavily on slum mapping and monitoring, so robust and efficient methods for mapping and monitoring existing slum settlements are essential. In this work, we introduce an approach to segment and map individual slums from satellite imagery, leveraging regional convolutional neural networks for instance segmentation using transfer learning. In addition, we introduce a method to perform change detection and monitor slum change over time. We show that our approach effectively learns slum shape and appearance and achieves strong quantitative results, with a maximum AP of 80.0.
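The segmentation network itself is beyond a short sketch, but the change-detection step on its outputs can be illustrated: given binary slum masks predicted for two acquisition dates, compare areas and overlap. The masks and the way change is summarised below are invented for illustration, not the paper's method:

```python
# Hypothetical binary slum masks (1 = slum pixel) for the same tile
# predicted at two acquisition dates.
before = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
after = [
    [0, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]

def area(mask):
    """Number of slum pixels in a binary mask."""
    return sum(sum(row) for row in mask)

def iou(a, b):
    """Intersection-over-union of two binary masks."""
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 0.0

growth = (area(after) - area(before)) / area(before)  # relative area change
overlap = iou(before, after)  # low overlap + area change => settlement grew
```

Here the settlement doubles in area (growth of 1.0) while the IoU of 0.5 shows the old footprint is retained inside the new one — the kind of signal a monitoring pipeline can track over time.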
Robotic devices for clinical rehabilitation of patients with neurological impairments come in a wide variety of shapes and sizes and employ different kinds of actuators. The design process for rehabilitation robots is driven by the intention that the technical system will be paired with a human being; it is therefore of paramount importance that safety and flexibility of operation are ensured. When designing a robotic device for people with paretic limbs, it is usually desirable to specify the actuators and controllers so that a degree of compliance and yielding is retained, rather than forcing the limbs to rigidly follow a pre-programmed trajectory. This reduces the likelihood of injury that might result from forcing a stiff joint to move in a non-physiological manner, and it allows the patient to interact positively with the system and actively guide the therapy. It is not uncommon to encounter the view that electric actuators are poorly suited to applications with compliant design requirements: in traditional control engineering, DC motors are programmed to provide accurate and fast setpoint tracking, and it is often assumed that they are therefore not well suited to clinical rehabilitation tasks where "soft" behavioural characteristics are called for.
The design of touchless user interfaces is gaining popularity in various contexts. Using such interfaces, users can interact with electronic devices even when their hands are dirty or non-conductive. Users with partial physical disabilities can also interact with electronic devices using such systems. Research in this direction has received a major boost from the emergence of low-cost sensors such as the Leap Motion, Kinect, or RealSense devices. In this paper, we propose a Leap Motion controller-based methodology to facilitate rendering of 2D and 3D shapes on display devices. The proposed method tracks finger movements while users perform natural gestures within the field of view of the sensor. In the next phase, the trajectories are analyzed to extract extended Npen++ features in 3D. These features represent finger movements during the gestures, and they are fed to a unidirectional left-to-right Hidden Markov Model (HMM) for training. A one-to-one mapping between gestures and shapes is proposed. Finally, shapes corresponding to these gestures are rendered over the display using the MuPad interface. We have created a dataset of 5400 samples recorded by 10 volunteers. Our dataset contains 18 geometric and 18 non-geometric shapes such as "circle", "rectangle", "flower", "cone", and "sphere". The proposed methodology achieves an accuracy of 92.87% when evaluated using 5-fold cross-validation. Our experiments reveal that the extended 3D features outperform existing 3D features in the context of shape representation and classification. The method can be used for developing useful HCI applications for smart display devices.
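Classification with per-gesture HMMs typically scores an observation sequence under each trained model with the forward algorithm and picks the highest-scoring class. The sketch below uses two invented two-state left-to-right models over a binary feature code — far simpler than the extended 3D Npen++ features used in the paper, but the scoring logic is the same:

```python
import math

def forward_loglik(obs, init, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm)."""
    n = len(init)
    alpha = [init[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return math.log(sum(alpha))

# Two hypothetical left-to-right gesture models over a binary feature code.
# "circle": tends to emit 0s first, then transitions to emitting 1s.
circle = dict(init=[1.0, 0.0],
              trans=[[0.8, 0.2], [0.0, 1.0]],
              emit=[[0.9, 0.1], [0.1, 0.9]])
# "rectangle": the opposite emission pattern.
rectangle = dict(init=[1.0, 0.0],
                 trans=[[0.8, 0.2], [0.0, 1.0]],
                 emit=[[0.1, 0.9], [0.9, 0.1]])

obs = [0, 0, 0, 1, 1, 1]  # quantised trajectory features for one gesture
score_circle = forward_loglik(obs, **circle)
score_rect = forward_loglik(obs, **rectangle)
label = "circle" if score_circle > score_rect else "rectangle"
```

The left-to-right constraint (no transitions back to earlier states) is visible in the `trans` matrices: state 1 can never return to state 0, matching the unidirectional structure the paper uses for gesture trajectories.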
Robotics firm Boston Dynamics has unveiled the latest version of its highly advanced Atlas robot, showing the machine performing parkour tricks over obstacles. Boston Dynamics describes Atlas as the "world's most dynamic humanoid," with previous videos showing the robot performing backflips. "The control software uses the whole body including legs, arms and torso, to marshal the energy and strength for jumping over the log and leaping up the steps without breaking its pace," the video's description states. A caretaker wearing a 'HAL for care support' robot suit pushes a wheelchair at Shin-tomi nursing home in Tokyo. Residents follow moves made by the humanoid robot 'Pepper' during an afternoon exercise routine at Shin-tomi nursing home in Tokyo.
Assistive robotics, and particularly robot coaches, may be very helpful for rehabilitation healthcare. In this context, we propose a method based on the Gaussian Process Latent Variable Model (GP-LVM) to transfer knowledge between a physiotherapist, a robot coach, and a patient. Our model is able to map visual human body features to robot data in order to facilitate robot learning and imitation. In addition, we propose to extend the model to adapt the robot's understanding to the patient's physical limitations during the assessment of rehabilitation exercises. Experimental evaluation demonstrates promising results for both robot imitation and model adaptation according to the patients' limitations.
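As a rough stand-in for the GP-LVM mapping, the sketch below fits a plain Gaussian-process regression (with an RBF kernel and a small jitter term) from a scalar "human feature" to a robot joint angle, then clips predictions to a patient's admissible range — a much simpler model than a GP-LVM, and all data and parameters are hypothetical:

```python
import math

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel with length scale ls."""
    return math.exp(-((a - b) ** 2) / (2 * ls * ls))

# Hypothetical paired data: human body feature value -> robot joint angle.
X = [0.0, 0.5, 1.0, 1.5, 2.0]
Y = [0.0, 0.4, 0.9, 1.3, 1.8]

# Gram matrix with a small jitter on the diagonal for numerical stability.
jitter = 1e-6
K = [[rbf(a, b) + (jitter if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]

def solve(A, y):
    """Solve A w = y by Gauss-Jordan elimination with partial pivoting."""
    n = len(y)
    A = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * z for x, z in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

w = solve(K, Y)  # kernel weights: prediction is a weighted sum of kernels

def predict(x):
    """GP posterior mean at a new human feature value."""
    return sum(wi * rbf(x, xi) for wi, xi in zip(w, X))

def predict_for_patient(x, lo, hi):
    """Adapt the mapped motion to a patient's admissible joint range."""
    return min(max(predict(x), lo), hi)
```

Clipping is of course a crude proxy for the model adaptation the paper proposes; it is included only to show where patient-specific limits would enter the pipeline.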
A robot is set to become the first non-human to appear as a witness before the UK Parliament. The Commons Education Select Committee invited Pepper the robot from Middlesex University to give evidence at a hearing taking place next week about artificial intelligence, robotics and the fourth industrial revolution. "If we've got the march of the robots, we perhaps need the march of the robots to our select committee to give evidence," Committee chair Robert Halfon told Tes. "The fourth industrial revolution is possibly the most important challenge facing our nation over the next 10, 20 to 30 years." The Independent has reached out for more details about the appearance. Despite dystopian predictions and dire warnings of robots and AI taking over people's jobs, the government has previously expressed interest in the potential of robotic technology.