
A short guide for medical professionals in the era of artificial intelligence

#artificialintelligence

Artificial intelligence (A.I.) is expected to significantly influence the practice of medicine and the delivery of healthcare in the near future. While only a handful of practical medical applications are supported by sufficient evidence, hype and attention around the topic are significant. Amid the flood of papers, conference talks, misleading news headlines and study interpretations, a short, visual guide that any medical professional can refer back to in their professional life might be useful. For this, it is critical that physicians understand the basics of the technology so they can see beyond the hype, evaluate A.I.-based studies and clinical validation, and acknowledge the limitations and opportunities of A.I. This paper aims to serve as a short, visual and digestible repository of the information every physician might need to know in the age of A.I. We describe a simple definition of A.I., its levels and methods, the differences between those methods with medical examples, and the potential benefits, dangers and challenges of A.I., and we attempt to provide a futuristic vision of its use in everyday medical practice.


Can artificial intelligence encourage good behaviour among internet users?

#artificialintelligence

SAN FRANCISCO, Sept 25 ― Hostile and hateful remarks are thick on the ground on social networks in spite of persistent efforts by Facebook, Twitter, Reddit and YouTube to tone them down. Now researchers at the OpenWeb platform have turned to artificial intelligence to moderate internet users' comments before they are even posted. The method appears to be effective because one third of users modified the text of their comments when they received a nudge from the new system, which warned that what they had written might be perceived as offensive. The study conducted by OpenWeb and Perspective API analyzed 400,000 comments that some 50,000 users were preparing to post on sites like AOL, Salon, Newsweek, RT and Sky Sports. Some of these users received a feedback message or nudge from a machine learning algorithm to the effect that the text they were preparing to post might be insulting, or against the rules for the forum they were using.


The United States of Amazon and its Flywheel Economy

#artificialintelligence

"The tech giants have as much money and influence as nation states." The tech giants include Apple, Facebook, and Google, but Amazon's unique flywheel makes it the torchbearer. "AWS alone is on track to be worth $1 trillion." The Amazon flywheel fuels a circular, data-driven ecosystem that's bolstered by Open Innovation. This article summarizes two installments from a series called the Tech Nations project.


Facebook wants to make AI better by asking people to break it

#artificialintelligence

Benchmarks can be very misleading, says Douwe Kiela at Facebook AI Research, who led the team behind the tool. Focusing too much on benchmarks can mean losing sight of wider goals. The test can become the task. "You end up with a system that is better at the test than humans are but not better at the overall task," he says. "It's very deceiving, because it makes it look like we're much further than we actually are."


Spectrum Labs raises $10M for its AI-based platform to combat online toxicity – TechCrunch

#artificialintelligence

With the US presidential election now 40 days away, all eyes are focused on how online conversations, in conjunction with other hallmarks of online life like viral videos, news clips, and misleading ads, will be used, and often abused, to influence people's decisions. But political discourse, of course, is just one of the ways that user-generated content on the internet is misused for toxic ends. Today, a startup that's using AI to try to tackle them all is announcing some funding. Spectrum Labs -- which has built algorithms and a set of APIs that can be used to moderate, track, flag and ultimately stop harassment, hate speech, radicalization, and some 40 other profiles of toxic behavior, in English as well as multiple other languages -- has raised $10 million in a Series A round of funding, capital that the company plans to use to continue expanding its platform. The funding is being led by Greycroft, with Wing Venture Capital, Ridge Ventures, Global Founders Capital, and Super{set} also participating.


Go Ahead, Try to Sneak Bad Words Past AI Filters--for Research

WIRED

Facebook's artificial intelligence researchers have a plan to make algorithms smarter by exposing them to human cunning. They want your help to supply the trickery. On Thursday, Facebook's AI lab launched a project called Dynabench that creates a kind of gladiatorial arena in which humans try to trip up AI systems. One challenge involves crafting sentences that cause a sentiment-scoring system to misfire, for example reading a comment as negative when it is actually positive. Another involves tricking a hate-speech filter--a potential draw for teens and trolls.


Facebook's new benchmarking system asks humans to interrogate AIs

Engadget

Benchmarking is a crucial step in developing ever more sophisticated artificial intelligence. It provides a helpful abstraction of an AI's capabilities and gives researchers a firm sense of how well a system is performing on specific tasks. But benchmarks are not without their drawbacks. Once an algorithm masters the static dataset from a given benchmark, researchers have to undertake the time-consuming process of developing a new one to further improve the AI. As AIs have improved over time, researchers have had to build new benchmarks with increasing frequency.


BMF CEO John Kawola on 3D printing parts smaller than a human hair

ZDNet

Ever since I was a boy, I was fascinated by the idea of miniaturization. I read Isaac Asimov's Fantastic Voyage and then, when I finally got my hands on the movie, I probably watched it a dozen times. The premise was that a team of scientists was miniaturized to the point where they could be injected into a person and perform surgery from the inside. Another movie with a similar premise was Innerspace, starring the incredibly well-matched team of Martin Short and Dennis Quaid. There was also the whole Honey, I Shrunk the Kids series of movies and TV shows, and I ate them up as well.


SAP Concur posted on LinkedIn

#artificialintelligence

See why you should be excited about #AI and machine learning coming to the workplace: http://sap.to/6040GaqYe...


Diversity in AI: The Invisible Men and Women

#artificialintelligence

In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous Black, Asian, and Indian people, and other people of color, being turned white. Two well-known AI corporate researchers -- Facebook's chief AI scientist, Yann LeCun, and Google's co-lead of AI ethics, Timnit Gebru -- expressed strongly divergent views about how to interpret the tool's error. A heated, multiday online debate ensued, dividing the field into two distinct camps: Some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.