AAAI Conferences

Based on recent experiences between a laughing virtual agent and a human user at the intersection of AI, humor, and laughter, this paper highlights some of the psychological considerations involved in conducting AI and humor experiments. The systematic, standardized approach outlined in this paper demonstrates how to reduce error variance caused by confounding variables such as poor experimental controls. Solutions offered toward this goal range from the necessity of cover stories, protocols, and procedures to the pros and cons of subjective versus objective measurement and what each requires in order to give valid and reliable results. Furthermore, the paper discusses psychological individual differences that need consideration, such as the appreciation of different types of humor, mood, and personality variables, for example trait and state cheerfulness and gelotophobia, the fear of being laughed at.

Submit an Abstract


For those speaking as part of a pre-organized session on the invited program, SDSS will accept abstracts from October 3 to November 2, 2017 (11:59 p.m. Eastern). For a limited number of concurrent session and e-poster presentations, abstracts will be accepted for consideration from December 5, 2017 to January 18, 2018 (11:59 p.m. Eastern). Only online submissions will be accepted.

Host of Gun Bills up for Legislators' Consideration

U.S. News

Among them is a proposal to allow carrying a concealed firearm in schools, if education officials allow it. The bill is aimed at rural schools without a school resource officer, where it can take law enforcement a while to respond to an emergency.

Jumping Into Artificial Intelligence: Five Considerations For Creating An AI Company


While it's important to have goals, it's also important for AI companies to experiment and allow for failures. As it turns out, data-driven AI projects may not succeed at the same rate as traditional software or web applications. In 2018, Gartner predicted that "through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them." These failures can happen if you don't have the right data, or if you lack the integration needed to act on results and have a real impact. Thus, an AI company may have to restructure its organization and allow for experiments that fail more often than traditional engineering projects do.