Crossroads


Large Language Models, Knowledge Graphs and Search Engines: A Crossroads for Answering Users' Questions

Hogan, Aidan, Dong, Xin Luna, Vrandečić, Denny, Weikum, Gerhard

arXiv.org Artificial Intelligence

Much has been discussed about how Large Language Models, Knowledge Graphs and Search Engines can be combined in a synergistic manner. A dimension largely absent from current academic discourse is the user perspective. In particular, there remain many open questions regarding how best to address the diverse information needs of users, incorporating varying facets and levels of difficulty. This paper introduces a taxonomy of user information needs, which guides us to study the pros, cons and possible synergies of Large Language Models, Knowledge Graphs and Search Engines. From this study, we derive a roadmap for future research.


Proceeding of the 1st Workshop on Social Robots Personalisation At the crossroads between engineering and humanities (CONCATENATE)

Tarakli, Imene, Angelopoulos, Georgios, Hellou, Mehdi, Vindolet, Camille, Abramovic, Boris, Limongelli, Rocco, Lacroix, Dimitri, Bertolini, Andrea, Rossi, Silvia, Di Nuovo, Alessandro, Cangelosi, Angelo, Cheng, Gordon

arXiv.org Artificial Intelligence

Nowadays, robots are expected to interact more physically, cognitively, and socially with people. They should adapt to unpredictable contexts alongside individuals with various behaviours. For this reason, personalisation is a valuable attribute for social robots, as it allows them to act according to a specific user's needs and preferences and to achieve robot behaviours that are natural and transparent to humans. If correctly implemented, personalisation could also be the key to the large-scale adoption of social robotics. However, achieving personalisation is arduous, as it requires us to expand the boundaries of robotics by taking advantage of expertise from various domains. Indeed, personalised robots need to analyse and model user interactions while considering users' involvement in the adaptive process. It also requires us to address the ethical and socio-cultural aspects of personalised HRI, to achieve inclusive and diverse interaction and to avoid deception and misplaced trust when interacting with users. At the same time, policymakers need to ensure appropriate regulation in view of possible short-term and long-term adaptive HRI. This workshop aims to raise an interdisciplinary discussion on personalisation in robotics. It brings researchers from different fields together to propose guidelines for personalisation while addressing the following questions: how to define it, how to achieve it, and how it should be guided to fit legal and ethical requirements.


Efficient Open-world Reinforcement Learning via Knowledge Distillation and Autonomous Rule Discovery

Nikonova, Ekaterina, Xue, Cheng, Renz, Jochen

arXiv.org Artificial Intelligence

Deep reinforcement learning suffers from catastrophic forgetting and sample inefficiency, making it less applicable to the ever-changing real world. However, the ability to use previously learned knowledge is essential for AI agents to quickly adapt to novelties. Often, certain spatial information observed by the agent in previous interactions can be leveraged to infer task-specific rules. Inferred rules can then help the agent avoid potentially dangerous situations in previously unseen states and guide the learning process, increasing the agent's novelty adaptation speed. In this work, we propose a general framework that is applicable to deep reinforcement learning agents. Our framework provides the agent with an autonomous way to discover task-specific rules in novel environments and to self-supervise its learning. We provide a rule-driven deep Q-learning agent (RDQ) as one possible implementation of that framework. We show that RDQ successfully extracts task-specific rules as it interacts with the world and uses them to drastically increase its learning efficiency. In our experiments, we show that the RDQ agent is significantly more resilient to novelties than the baseline agents, and is able to detect and adapt to novel situations faster.
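The abstract describes discovering task-specific rules from the agent's own interactions. A minimal sketch of one way such rule discovery could work, inferring "avoid this action when this state feature is present" from repeatedly observed dangerous outcomes (the class, its parameters, and the feature-based rule representation are illustrative assumptions, not the paper's actual method):

```python
from collections import defaultdict


class RuleDiscoverer:
    """Illustrative sketch: infer avoidance rules from transitions that
    ended in a strongly negative reward, observed repeatedly."""

    def __init__(self, danger_threshold=-1.0, min_count=2):
        self.danger_threshold = danger_threshold  # reward at or below this is "dangerous"
        self.min_count = min_count                # evidence needed before a rule is emitted
        self.bad_counts = defaultdict(int)

    def observe(self, state_features, action, reward):
        # Count how often each (feature, action) pair precedes a dangerous outcome.
        if reward <= self.danger_threshold:
            for f in state_features:
                self.bad_counts[(f, action)] += 1

    def rules(self):
        # A rule reads: "avoid `action` whenever feature `f` is present."
        return {pair for pair, n in self.bad_counts.items()
                if n >= self.min_count}
```

An agent would then consult `rules()` before acting, masking out actions that match the current state's features. The real RDQ agent works on deep Q-learning states, but the counting idea above conveys the self-supervised flavour of rule extraction.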


We stand at a crossroads with AI in elections

FOX News

Vice President Kamala Harris on Wednesday discussed artificial intelligence as she convened a roundtable with labor and civil rights leaders to talk about the technology. It appears that there won't be any new regulations on the use of artificial intelligence, or AI, in elections from the Federal Election Commission (FEC) for the 2024 election cycle, based on the recent vote to table the pursuit of regulations on "deepfake" political ads. Some are concerned that the election could become an "AI arms race." The rationale behind this is a common proliferation scenario: each party fears the other having more weapons, so each acquires more itself. Where to draw the line is an important question.


Reporter's Notebook: Italian support for Ukraine on the wane according to recent poll

FOX News

Paolucci co-authored "Oligarchi" or "Oligarchs" in English and "How Putin's Friends are Buying Italy." You will meet people in Italy who are actually pro-Russia. Or at least ready to lay some blame on the United States and/or NATO for provoking Vladimir Putin to attack Ukraine, as if somehow absolving the Russian president. Largely, however, such positions are expressed privately. So when former four-time Prime Minister Silvio Berlusconi, with cameras rolling before him, described his "very, very, very negative view" of Ukrainian President Volodymyr Zelenskyy over the weekend, he set off a firestorm on this side of the Atlantic.


Don't do it: Safer Reinforcement Learning With Rule-based Guidance

Nikonova, Ekaterina, Xue, Cheng, Renz, Jochen

arXiv.org Artificial Intelligence

During training, reinforcement learning systems interact with the world without considering the safety of their actions. When deployed into the real world, such systems can be dangerous and cause harm to their surroundings. Often, dangerous situations can be mitigated by defining a set of rules that the system should not violate under any conditions. For example, in robot navigation, one safety rule would be to avoid colliding with surrounding objects and people. In this work, we define safety rules in terms of the relationships between the agent and objects, and use them to prevent reinforcement learning systems from performing potentially harmful actions. We propose a new safe epsilon-greedy algorithm that uses safety rules to override agents' actions if they are considered unsafe. In our experiments, we show that a safe epsilon-greedy policy significantly increases the safety of the agent during training, improves learning efficiency resulting in much faster convergence, and achieves better performance than the base model.


Striking the Right Balance

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company. In an era where artificial intelligence and ML are becoming second nature to organizations, it is sometimes essential to step back and reflect on the relevance of machine learning to your use case.


At the crossroads of language, technology, and empathy

#artificialintelligence

Rujul Gandhi's love of reading blossomed into a love of language at age 6, when she discovered a book at a garage sale called "What's Behind the Word?" With forays into history, etymology, and language genealogies, the book captivated Gandhi, who as an MIT senior remains fascinated with words and how we use them. Growing up partially in the U.S. and mostly in India, Gandhi was surrounded by a variety of languages and dialects. When she moved to India at age 8, she could already see how knowing the Marathi language allowed her to connect more easily to her classmates -- an early lesson in how language shapes our human experiences. Initially thinking she might want to study creative writing or theater, Gandhi first learned about linguistics as its own field of study through an online course in ninth grade.


The crossroads between artificial intelligence and music production

#artificialintelligence

Questions naturally arise regarding the application and the future of artificial intelligence. AI could not only be used to generate a musical melody, but also to generate lyrics and tempos for new pieces, and entirely new genres of music. Still, AI-generated music might also not be playable by humans. The possibility of AI taking over the music industry sparks philosophical questions of whether AI-generated music could be viewed as creative work or not.


ABBYY Acquires Pericom Singapore to Expand Footprint in Asia Pacific

#artificialintelligence

ABBYY, a Digital Intelligence company, announced that it has acquired Pericom Singapore, part of the Pericom Group, a leading solution provider based in Singapore. The acquisition strengthens ABBYY's presence in Asia Pacific, following the opening of its Hong Kong office in 2019, its long-established office in Japan, and a strong partner network throughout the region. Singapore ranks first in the Asian Digital Transformation Index and is considered the trading crossroads for innovations in cloud computing, artificial intelligence, data analytics and other technologies that span healthcare, security, energy, aviation, defense, smart cities and education. As more Asia Pacific executives look to accelerate their digital business initiatives post-COVID, including the 84% of Singapore businesses that have increased their budgets, ABBYY's growing presence signifies its readiness to meet their digital transformation needs. "We have had many successful large-scale implementations in the Asia Pacific market working closely with our valuable partners and large system integrators," commented Ulf Persson, CEO of ABBYY.