What's Coming Next, from the Apple Developer Conference

Huffington Post - Tech news and opinion

The Mac isn't the only operating system in Apple's lineup to get new options. Siri, which processes 2,000,000 requests per week, is now being opened up to developers, providing voice-control options within applications, a potentially powerful capability for developers to tap into. QuickType now offers more relevant, contextual suggestions. We also get new facial recognition technology for Photos in a feature simply called People. That same deep learning technology also provides object and scene recognition, which is either super-cool or, if you forgot to leave your tin foil hat at home, creepy.

Third International Conference on Artificial Intelligence Planning Systems

AI Magazine

The Third International Conference on Artificial Intelligence Planning Systems (AIPS-96) was held in Edinburgh, Scotland, from 29 to 31 May 1996. As the main gathering of researchers in AI planning and scheduling, the conference promoted the practical applications of planning technologies. Details of the conference papers and sessions are provided, as well as information on the Defense Advanced Research Projects Agency -- Rome Laboratory Planning Initiative.

Expert panel sees permanent Imperial abdication system as difficult

The Japan Times

The six members of a government advisory panel to discuss Emperor Akihito's possible abdication agreed on Wednesday that it is difficult to establish a permanent system for Imperial abdication. "My impression is that this view has been authorized" by the panel, its acting chair, Takashi Mikuriya, professor emeritus at the University of Tokyo, told a news conference after its seventh meeting. The agreement effectively supported the government's policy of enacting a special law allowing abdication only for the current Emperor. "It seems quite difficult to set conditions for abdication" under a permanent system, Mikuriya said. He quoted a panel member as saying at the meeting that it would be hard to set abdication conditions that will remain relevant in the future.

Equity and Artificial Intelligence in Education: Will "AIEd" Amplify or Alleviate Inequities in Education?

Artificial Intelligence

INTRODUCTION

With increasing awareness of the societal risks of algorithmic bias and encroaching automation, issues of fairness, accountability, and transparency in data-driven AI systems have received growing academic attention in multiple high-stakes contexts, including healthcare, loan-granting, and hiring (e.g., Barocas & Selbst, 2016; Holstein, Wortman Vaughan, Daumé III, Dudik, & Wallach, 2019; Veale, Van Kleek, & Binns, 2018). Given these noble intentions, why might AIEd systems have inequitable impacts? In this chapter, we ask whether AIEd systems will ultimately serve to Amplify Inequities in Education, or alternatively, whether they will help to Alleviate existing inequities. We discuss four lenses that can be used to examine how and why AIEd systems risk amplifying existing inequities: (1) factors inherent to the overall socio-technical system design; (2) the use of datasets that reflect historical inequities; (3) factors inherent to the underlying algorithms used to drive machine learning and automated decision-making; and (4) factors that emerge through a complex interplay between automated and human decision-making. Building from these lenses, we then outline possible paths towards more equitable futures for AIEd, while highlighting debates surrounding each proposal. In doing so, we hope to provoke new conversations around the design of equitable AIEd, and to push ongoing conversations in the field forward.

PATHWAYS TOWARD INEQUITY IN AIED

We begin by presenting four lenses to understand how AIEd systems might amplify existing inequities or even create new ones. While each lens provides a different way of examining pathways towards inequity in AIEd, all are pointed at the same underlying socio-technical system. Figure 1 provides a coarse-grained overview of the broader socio-technical systems in which AIEd systems are embedded, and some of the components we will refer to in the four lenses.
The accumulated, collective decisions of designers, researchers, policy-makers, and other stakeholders shape these systems' designs. In addition to using or being affected by AIEd systems, on-the-ground stakeholders such as students, teachers, or school administrators may also play a role in shaping their designs, whether directly, through participatory design processes, or indirectly, through the passive generation of training data while interacting with an AIEd interface. In turn, decisions regarding what data is used in an AIEd system's design (e.g., as training data for machine learning methods) can shape the system's algorithmic behavior (e.g., instructional policies learned from data).

Systems of natural-language-facilitated human-robot cooperation: A review Artificial Intelligence

Natural-language-facilitated human-robot cooperation (NLC), in which natural language (NL) is used to share knowledge between a human and a robot for intuitive human-robot cooperation (HRC), has developed continuously over the past decade. NLC is currently used in several robotic domains, such as manufacturing, daily assistance, and health caregiving. It is therefore necessary to summarize current NLC-based robotic systems and discuss future development trends, providing helpful information for future NLC research. In this review, we first analyze the driving forces behind NLC research. Then, according to the robot's cognition level during cooperation, we categorize NLC implementations into four types (NL-based control, NL-based robot training, NL-based task execution, and NL-based social companionship) for comparison and discussion. Last, based on our perspective and a comprehensive review of the literature, we discuss future research trends.