This "ridiculously accurate" neural-network AI can tell if you have Covid-19 just by listening to your cough, recognizing 98.5% of coughs from people with confirmed Covid-19 cases, and 100% of coughs from asymptomatic people.

#artificialintelligence

A subreddit devoted to the field of Future(s) Studies and evidence-based speculation about the development of humanity, technology, and civilization. If history studies our past and the social sciences study our present, what is the study of our future? Future(s) Studies (colloquially called "future(s)" by many of the field's practitioners) is an interdisciplinary field that hypothesizes possible, probable, preferable, and alternative futures.


If You Aren't Using AI In Your Marketing Strategy You're WAY Behind The Curve

#artificialintelligence

Everyone has their own definition of what Artificial Intelligence is. In its most basic form, AI is simply our attempt to replicate human intelligence in machines. We program computers to play chess and drive cars, not just at the same level as humans but better. Although we tend to think of AI as something only scientists at MIT have access to, it is actually being integrated into businesses everywhere. Whether it is analyzing consumer trends, predicting future demand, recommending personalized content, or powering customer chatbots, there is an AI solution for it all.



AI Insider: What Is AI and How Does AI Work?

#artificialintelligence

AI is everywhere; it has been incorporated into every aspect of our lives, often without our noticing. It has changed the way we live by simplifying routine tasks like shopping, traveling, and human-machine interaction. AI has almost gained control of our actions: it influences what we buy by showing ads and recommendations while we shop, and AI trip advisors suggest travel destinations and the best vacation packages for our budget. AI helps businesses and financial institutions serve their customers better with automated question-and-answer chatbots. AI also shapes our social media feeds: how many of your Facebook friends have stopped showing up on your wall, even though they are active on social media? That is because AI knows what, and whom, you are interested in.


Facebook wants to make AI better by asking people to break it

#artificialintelligence

Benchmarks can be very misleading, says Douwe Kiela at Facebook AI Research, who led the team behind the tool. Focusing too much on benchmarks can mean losing sight of wider goals. The test can become the task. "You end up with a system that is better at the test than humans are but not better at the overall task," he says. "It's very deceiving, because it makes it look like we're much further than we actually are."


Visual Methods for Sign Language Recognition: A Modality-Based Review

arXiv.org Artificial Intelligence

Sign language visual recognition from continuous multi-modal streams is still one of the most challenging fields. Recent advances in human action recognition exploit the rise of GPU-based learning from massive data and are approaching human-like performance. They thus lend themselves to interactive services for the deaf and hearing-impaired communities, a population expected to grow considerably in the years to come. This paper reviews the human action recognition literature with sign-language visual understanding as its scope. The methods analyzed are organized mainly according to the types of unimodal inputs exploited, their multi-modal combinations, and pipeline steps. In each section, we detail and compare the related datasets and approaches, then distinguish the still-open contribution paths suitable for creating sign-language-related services. Special attention is paid to the approaches and commercial solutions handling facial expressions and continuous signing.


[R] Neural networks vs The Game of Life

#artificialintelligence

I think I see what you're saying, but... The way the paper reads, the problem tackled is: "Given sample trajectories from the Game of Life but no access to or knowledge of the source code, create a model that perfectly predicts the game's dynamics." However, by using n to set the problem difficulty (by using a terminal loss instead of a trajectory loss), the actual problem being tackled is: "Given sample state pairs {y[0], y[n]} spaced n steps apart... create a model that perfectly predicts the game's dynamics." The latter is clearly a much harder problem. I can agree that a narrow network may have difficulty with it (perhaps related to the lottery ticket hypothesis).
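For context, the "dynamics" the network has to learn is just the Game of Life update rule. A minimal pure-Python sketch of one step (illustrative only, not the paper's code; the set-of-live-cells representation is a convenience, not an assumption about the paper's grid format):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life by one step.

    `live` is a set of (row, col) coordinates of live cells on an
    unbounded grid.
    """
    # Count how many live neighbors each candidate cell has by tallying
    # the 8 neighbor positions of every live cell.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth: a dead cell with exactly 3 neighbors becomes alive.
    # Survival: a live cell with 2 or 3 neighbors stays alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

In these terms, the one-step problem asks the network to learn `life_step` itself, while the terminal-loss problem asks it to learn n compositions of `life_step` from endpoint pairs alone, which is where the extra difficulty comes from.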



A Blast From the Past: Personalizing Predictions of Video-Induced Emotions using Personal Memories as Context

arXiv.org Artificial Intelligence

A key challenge in the accurate prediction of viewers' emotional responses to video stimuli in real-world applications is accounting for person- and situation-specific variation. An important contextual influence shaping an individual's subjective experience of a video is the personal memories that it triggers. Prior research has found that this memory influence explains more variation in video-induced emotions than other contextual variables commonly used for personalizing predictions, such as viewers' demographics or personality. In this article, we show that (1) automatic analysis of text describing viewers' video-triggered memories can account for variation in their emotional responses, and (2) combining such an analysis with analysis of a video's audiovisual content enhances the accuracy of automatic predictions. We discuss the relevance of these findings for improving on state-of-the-art approaches to automated affective video analysis in personalized contexts.


SemEval-2020 Task 7: Assessing Humor in Edited News Headlines

arXiv.org Artificial Intelligence

This paper describes the SemEval-2020 shared task "Assessing Humor in Edited News Headlines." The task's dataset contains news headlines in which short edits were applied to make them funny, and the funniness of these edited headlines was rated using crowdsourcing. This task includes two subtasks, the first of which is to estimate the funniness of headlines on a humor scale in the interval 0-3. The second subtask is to predict, for a pair of edited versions of the same original headline, which is the funnier version. To date, this task is the most popular shared computational humor task, attracting 48 teams for the first subtask and 31 teams for the second.
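The structure of the two subtasks can be sketched in a few lines. A hedged illustration (the mean-of-ratings aggregation and the 1/2/0 label encoding for "first funnier / second funnier / tie" are assumptions for this sketch, not details confirmed by the abstract):

```python
def funniness(ratings):
    """Subtask 1 target: aggregate crowd ratings (each on the 0-3 humor
    scale) into a single funniness score for an edited headline."""
    return sum(ratings) / len(ratings)

def funnier(ratings_a, ratings_b):
    """Subtask 2: for two edited versions of the same original headline,
    predict which is funnier. Returns 1, 2, or 0 for a tie."""
    a, b = funniness(ratings_a), funniness(ratings_b)
    return 1 if a > b else 2 if b > a else 0
```

Framed this way, any model that scores headline funniness for subtask 1 induces a pairwise predictor for subtask 2 by comparing its two scores.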