
Collaborating Authors: Capitanelli, Alessio


IFRA: a machine learning-based Instrumented Fall Risk Assessment Scale derived from Instrumented Timed Up and Go test in stroke patients

arXiv.org Artificial Intelligence

Effective fall risk assessment is critical for post-stroke patients. The present study proposes a novel, data-informed fall risk assessment method based on instrumented Timed Up and Go (ITUG) test data, bringing in many mobility measures that traditional clinical scales fail to capture. IFRA, which stands for Instrumented Fall Risk Assessment, was developed in a two-step process: first, machine learning techniques were used to identify the features with the highest predictive power among those collected in an ITUG test; then, a strategy was devised to stratify patients into low-, medium-, or high-risk strata. The dataset used in our analysis consists of 142 participants, of which 93 were used for training (15 synthetically generated), 17 for validation, and 32 to test the resulting IFRA scale (22 non-fallers and 10 fallers). Features considered in the IFRA scale include gait speed, vertical acceleration during the sit-to-walk transition, and turning angular velocity, which align well with the established literature on fall risk in neurological patients. In a comparison with traditional clinical scales such as the Timed Up & Go and the Mini-BESTest, IFRA demonstrates competitive performance, being the only scale to correctly assign more than half of the fallers to the high-risk stratum (Fisher's exact test, p = 0.004). Despite the dataset's limited size, this proof-of-concept study paves the way for future evidence on the use of the IFRA tool for continuous patient monitoring and fall prevention, both in clinical stroke rehabilitation and at home after discharge.

Keywords: Fall Risk, Stroke Rehabilitation, Machine Learning, Mobility Impairment, Instrumented Timed Up and Go test, Inertial Measurement Units, Feature Selection
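
To make the two-step construction concrete, here is a minimal, hypothetical sketch in Python. The scikit-learn estimators, the feature matrix layout, and the probability cut-offs are all illustrative assumptions, not the method actually published in the paper.

```python
# Hedged sketch of the two-step IFRA construction, assuming scikit-learn.
# X_train is an (n_patients, n_features) array of ITUG measures, y_train a
# binary faller label; the 0.33/0.66 cut-offs are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

def fit_ifra(X_train, y_train, feature_names):
    # Step 1: identify the ITUG features with the highest predictive power.
    selector = SelectFromModel(
        RandomForestClassifier(n_estimators=200, random_state=0)
    ).fit(X_train, y_train)
    selected = [n for n, keep in zip(feature_names, selector.get_support()) if keep]
    # Refit a classifier on the retained features only.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(selector.transform(X_train), y_train)
    return selector, model, selected

def stratify(selector, model, X, low=0.33, high=0.66):
    # Step 2: map the predicted fall probability onto three risk strata.
    p = model.predict_proba(selector.transform(X))[:, 1]
    return np.where(p < low, "low", np.where(p < high, "medium", "high"))
```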


A Framework for Neurosymbolic Robot Action Planning using Large Language Models

arXiv.org Artificial Intelligence

Symbolic task planning is a widely used approach to enforce robot autonomy due to its ease of understanding and deployment. However, symbolic task planning is difficult to scale in real-world settings when frequent re-planning is needed, for example, due to human-robot interactions or unforeseen events. Plan length and planning time can hinder the robot's efficiency and negatively affect the fluency of the overall human-robot interaction. We present a framework, Teriyaki, designed to bridge the gap between symbolic task planning and machine learning approaches by training Large Language Models (LLMs), namely GPT-3, into neurosymbolic task planners compatible with the Planning Domain Definition Language (PDDL). Potential benefits include: (i) better scalability as planning domain complexity increases, since LLMs' response time scales linearly with the combined length of the input and the output, rather than super-linearly as with symbolic task planners, and (ii) the ability to synthesize a plan action-by-action instead of end-to-end, making each action available for execution as soon as it is generated, which in turn enables concurrent planning and execution. In the past year, significant efforts have been devoted by the research community to evaluating the overall cognitive abilities of LLMs, with mixed success. Instead, with Teriyaki we aim to provide overall planning performance comparable to traditional planners in specific planning domains, while leveraging LLMs' capabilities on other metrics that are used to build a look-ahead predictive planning model. Preliminary results in selected domains show that our method can: (i) solve 95.5% of problems in a test data set of 1000 samples; (ii) produce plans up to 13.5% shorter than a traditional symbolic planner; (iii) reduce the average overall waiting time for plan availability by up to 61.4%.
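
The action-by-action generation that enables concurrent planning and execution can be illustrated with a short, hypothetical sketch. The token_stream iterator and execute_action callback below are assumed stand-ins for the fine-tuned GPT-3 model and the robot executor; neither comes from the paper.

```python
# Hedged sketch of concurrent plan-and-execute: actions are dispatched as
# soon as they are complete, while the model is still generating the rest.
import re
from typing import Iterator

def stream_actions(token_stream: Iterator[str]) -> Iterator[str]:
    """Yield each flat PDDL action, e.g. '(pick block1 table)', as soon as
    its closing parenthesis arrives, instead of waiting for the full plan."""
    buffer = ""
    for chunk in token_stream:
        buffer += chunk
        # Emit every complete '(...)' group accumulated so far.
        while (match := re.search(r"\([^()]*\)", buffer)):
            yield match.group(0)
            buffer = buffer[match.end():]

def plan_and_execute(token_stream, execute_action):
    # Each action starts executing while the remainder of the plan is
    # still being generated, which is what cuts the waiting time.
    for action in stream_actions(token_stream):
        execute_action(action)

# Usage with a fake chunk stream standing in for the model:
fake_stream = iter(["(pick bl", "ock1 table)", " (place block1 bin)"])
plan_and_execute(fake_stream, execute_action=print)
```

The design point is that waiting time is dominated by the first action's availability rather than by end-to-end plan generation, which is consistent with the reduction in waiting times the abstract reports.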


Manipulation of Articulated Objects using Dual-arm Robots via Answer Set Programming

arXiv.org Artificial Intelligence

The manipulation of articulated objects is of primary importance in robotics and can be considered one of the most complex manipulation tasks. Traditionally, this problem has been tackled by developing ad hoc approaches, which lack flexibility and portability. In this paper, we present a framework based on Answer Set Programming (ASP) for the automated manipulation of articulated objects in a robot control architecture. In particular, ASP is employed for representing the configuration of the articulated object, for checking the consistency of such a representation in the knowledge base, and for generating the sequence of manipulation actions. The framework is exemplified and validated on the Baxter dual-arm manipulator in a first, simple scenario. We then extend this scenario to improve the overall setup accuracy and to introduce a few constraints on robot action execution to enforce feasibility. The extended scenario entails a high number of possible actions that can be fruitfully combined. Therefore, we exploit macro actions from automated planning to provide more effective plans. We validate the overall framework in the extended scenario, thereby confirming the applicability of ASP in more realistic robotics settings as well, and showing the usefulness of macro actions for the robot-based manipulation of articulated objects.
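
As a rough illustration of how ASP can encode an articulated object's configuration and generate a manipulation sequence, here is a toy sketch using the clingo Python API. The two-link object, the angle/3 and rotate/3 predicates, and the fixed horizon are invented for illustration and are far simpler than the paper's actual encoding.

```python
# Hedged sketch: a fixed-horizon ASP planning encoding for a toy two-link
# articulated object, solved through the clingo Python API.
import clingo

PROGRAM = """
#const horizon = 2.
time(0..horizon).
link(l1). link(l2).
direction(0). direction(90). direction(180). direction(270).
angle(l1, 0, 0).   % initial configuration of each link
angle(l2, 90, 0).
goal(l2, 0).       % desired final angle for l2
% choose at most one rotation action per time step
{ rotate(L, A, T) : link(L), direction(A) } 1 :- time(T), T < horizon.
% action effect
angle(L, A, T+1) :- rotate(L, A, T).
% inertia: an untouched link keeps its angle
moved(L, T) :- rotate(L, A, T).
angle(L, A, T+1) :- angle(L, A, T), time(T), T < horizon, not moved(L, T).
% the goal must hold at the horizon
:- goal(L, A), not angle(L, A, horizon).
#show rotate/3.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Plan:", m))  # prints the rotate/3 atoms
```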


A ROS multi-ontology references services: OWL reasoners and application prototyping issues

arXiv.org Artificial Intelligence

The challenge of sharing and communicating information is crucial in complex human-robot interaction (HRI) scenarios. Ontologies and symbolic reasoning are the state-of-the-art approach to a natural representation of knowledge, especially within the Semantic Web domain, and they have been adopted to achieve high expressiveness [2]. Since symbolic reasoning is a high-complexity problem, optimizing its performance requires a careful design of the knowledge resolution. Specifically, a robot architecture requires the integration of several components implementing different behaviors and generating a series of beliefs. Most of the components are expected to access, manipulate, and reason upon a run-time generated representation of knowledge grounding robot behaviors and perceptions through formal axioms, with soft real-time requirements. The Robot Operating System (ROS) is a de facto standard for robot software development, which allows for modular and scalable robot architecture designs.
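
Below is a hypothetical sketch of the kind of component the text describes: a ROS node that holds a run-time OWL knowledge base and exposes reasoning to the other components of the architecture as a service. rospy, std_srvs, owlready2, and the ontology file name are assumptions for illustration, not the actual interfaces of the system discussed.

```python
# Hedged sketch of a run-time knowledge component, assuming rospy, std_srvs,
# owlready2 (whose bundled reasoner needs Java), and a hypothetical
# robot_kb.owl ontology file.
import rospy
from std_srvs.srv import Trigger, TriggerResponse
from owlready2 import (get_ontology, sync_reasoner,
                       OwlReadyInconsistentOntologyError)

onto = get_ontology("file:///tmp/robot_kb.owl").load()

def check_consistency(_request):
    # Run the OWL reasoner over the current beliefs; other components of
    # the architecture would call this service before acting on the KB.
    try:
        with onto:
            sync_reasoner()
        return TriggerResponse(success=True, message="knowledge base consistent")
    except OwlReadyInconsistentOntologyError:
        return TriggerResponse(success=False, message="knowledge base inconsistent")

if __name__ == "__main__":
    rospy.init_node("ontology_server")
    rospy.Service("check_consistency", Trigger, check_consistency)
    rospy.spin()
```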