Can artificial intelligence encourage good behaviour among internet users?

#artificialintelligence

SAN FRANCISCO, Sept 25 ― Hostile and hateful remarks are thick on the ground on social networks in spite of persistent efforts by Facebook, Twitter, Reddit and YouTube to tone them down. Now researchers at the OpenWeb platform have turned to artificial intelligence to moderate internet users' comments before they are even posted. The method appears to be effective: one third of users modified the text of their comments after receiving a nudge from the new system warning that what they had written might be perceived as offensive. The study, conducted by OpenWeb with the Perspective API, analyzed 400,000 comments that some 50,000 users were preparing to post on sites like AOL, Salon, Newsweek, RT and Sky Sports. Some of these users received a feedback message or nudge from a machine learning algorithm to the effect that the text they were preparing to post might be insulting, or against the rules for the forum they were using.
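A nudge system along these lines can be sketched in a few lines: score the draft comment for toxicity, then show the warning only above some cutoff. The request and response shapes below follow the public Perspective API's TOXICITY attribute; the 0.8 threshold is an assumption for illustration, not a value reported in the study.

```python
import json
import urllib.request

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key={api_key}"
)

def toxicity_request(text):
    """Build the JSON body for a Perspective API TOXICITY query."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def score_toxicity(text, api_key):
    """Call the Perspective API and return the summary TOXICITY score (0-1)."""
    body = json.dumps(toxicity_request(text)).encode("utf-8")
    req = urllib.request.Request(
        PERSPECTIVE_URL.format(api_key=api_key),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def should_nudge(score, threshold=0.8):
    """Decide whether to show the 'this may be offensive' feedback message."""
    return score >= threshold
```

In a live comment form, `score_toxicity` would run before submission and `should_nudge` would gate whether the user sees the warning and gets a chance to edit.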


Go Ahead, Try to Sneak Bad Words Past AI Filters--for Research

WIRED

Facebook's artificial intelligence researchers have a plan to make algorithms smarter by exposing them to human cunning. They want your help to supply the trickery. Thursday, Facebook's AI lab launched a project called Dynabench that creates a kind of gladiatorial arena in which humans try to trip up AI systems. One challenge involves crafting sentences that cause a sentiment-scoring system to misfire, for example by reading a comment as negative when it is actually positive. Another involves tricking a hate speech filter, a potential draw for teens and trolls.
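The core of this kind of human-in-the-loop collection can be sketched simply: keep only the submissions where the model's prediction disagrees with the human's gold label, i.e. where the human actually fooled the model. The keyword "model" below is a deliberately naive stand-in, not anything Dynabench uses.

```python
def model_predict(text):
    """Stand-in sentiment model: a naive keyword rule (purely illustrative)."""
    negative_words = {"bad", "awful", "terrible"}
    words = set(text.lower().split())
    return "negative" if words & negative_words else "positive"

def collect_adversarial(submissions):
    """Keep only (text, gold_label) pairs where the model's prediction
    disagrees with the human label, meaning the human fooled the model."""
    fooled = []
    for text, gold in submissions:
        if model_predict(text) != gold:
            fooled.append((text, gold))
    return fooled
```

A sentence like "not bad at all" trips the keyword rule (it sees "bad" and predicts negative), so it would be harvested as a new adversarial training example; straightforwardly negative text would not.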


Facebook's new benchmarking system asks humans to interrogate AIs

Engadget

Benchmarking is a crucial step in developing ever more sophisticated artificial intelligence. It provides a helpful abstraction of the AI's capabilities and gives researchers a firm sense of how well the system is performing on specific tasks. But benchmarks are not without their drawbacks. Once an algorithm masters the static dataset from a given benchmark, researchers have to undertake the time-consuming process of developing a new one to further improve the AI. As AIs have improved over time, researchers have had to build new benchmarks with increasing frequency.
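The saturation problem described above can be made concrete with a tiny check: once the best model score meets or exceeds the human baseline, the static benchmark stops discriminating progress and a new one is needed. The margin parameter is an assumption added for illustration.

```python
def is_saturated(model_scores, human_baseline, margin=0.0):
    """A benchmark is 'saturated' once the best model matches or exceeds
    the human baseline (minus an optional tolerance margin); at that point
    it no longer distinguishes better systems from worse ones."""
    return max(model_scores) >= human_baseline - margin
```

This is the kind of condition that forces the cycle the article describes: each time it trips, researchers must build a harder benchmark, which Dynabench tries to escape by collecting examples dynamically.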


SAP Concur posted on LinkedIn

#artificialintelligence

See why you should be excited about #AI and machine learning coming to the workplace: http://sap.to/6040GaqYe...


Diversity in AI: The Invisible Men and Women

#artificialintelligence

In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous Black, Asian, and Indian people, and other people of color, being turned white. Two well-known AI corporate researchers -- Facebook's chief AI scientist, Yann LeCun, and Google's co-lead of AI ethics, Timnit Gebru -- expressed strongly divergent views about how to interpret the tool's error. A heated, multiday online debate ensued, dividing the field into two distinct camps: Some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.


#iiot_2020-09-22_14-06-41.xlsx

#artificialintelligence

The graph represents a network of 2,121 Twitter users whose tweets in the requested range contained "#iiot", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Tuesday, 22 September 2020 at 21:13 UTC. The requested start date was Tuesday, 22 September 2020 at 00:01 UTC and the maximum number of tweets (going backward in time) was 7,500. The tweets in the network were tweeted over the 2-day, 16-hour, 57-minute period from Saturday, 19 September 2020 at 07:03 UTC to Tuesday, 22 September 2020 at 00:00 UTC. Additional tweets that were mentioned in this data set were also collected from prior time periods.
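The network described above (users as nodes, replies and @mentions as directed edges) can be sketched as follows. The `(author, text)` input format is a simplified assumption; NodeXL pulls the underlying tweets from the Twitter API.

```python
import re
from collections import defaultdict

MENTION = re.compile(r"@(\w+)")

def build_mention_graph(tweets):
    """Build a directed user network in the spirit of the NodeXL graph:
    an edge author -> user for every @mention in a matching tweet.

    `tweets` is an iterable of (author, text) pairs."""
    edges = defaultdict(set)
    nodes = set()
    for author, text in tweets:
        nodes.add(author)
        for mentioned in MENTION.findall(text):
            nodes.add(mentioned)
            edges[author].add(mentioned)
    return nodes, dict(edges)
```

Note that, as in the NodeXL description, users appear as nodes either because they tweeted the hashtag or merely because they were mentioned or replied to.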


Top Data Science Influencers to Follow in 2020

#artificialintelligence

Social media and information sharing are familiar to every internet user. The presence and popularity of Twitter, LinkedIn, and many other platforms have made it convenient to spread knowledge around the globe in a couple of clicks. Data science and AI knowledge has spread across the globe largely because thought leaders, achievers, and change-makers make such extensive use of these networking sites. IPFC online has recently come up with a list of the top 50 digital influencers, from which we are going to talk about the ones concerned with machine learning and AI. Additionally, we have provided some more influencers worth following.


According to Facebook, Chatbots are Retail's Killer App (Infographic) : Fanatics Media

#artificialintelligence

Facebook took major steps to announce its all-out commitment to chatbots. The first is a chatbot training ground called ParlAI, a play on words that stems from its primarily French-speaking researchers. Moreover, Facebook is sharing ParlAI with the world as an open source tool. Facebook is offering the training software so that developers and researchers can use it to train their chatbot "agents."
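Frameworks of this kind typically train and evaluate "agents" through a small observe/act interface: the environment hands the agent a message, and the agent produces a reply. The toy agent below illustrates that pattern only; it is not ParlAI's actual API.

```python
class EchoAgent:
    """Toy chatbot agent with an observe/act interface, a common pattern
    in dialogue frameworks (illustrative only, not ParlAI's actual API)."""

    def __init__(self):
        self.last_message = None

    def observe(self, message):
        """Receive the latest message from the dialogue partner."""
        self.last_message = message

    def act(self):
        """Produce a reply based on what was observed."""
        return "You said: " + self.last_message

def run_dialogue(agent, turns):
    """Feed each human turn to the agent and collect its replies."""
    replies = []
    for turn in turns:
        agent.observe(turn)
        replies.append(agent.act())
    return replies
```

Swapping `EchoAgent` for a learned model is the point of such a training ground: the dialogue loop stays the same while the agent improves.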


Google's Area 120 launches Tables, a rules-based automation platform for documents

#artificialintelligence

Google's Area 120 incubator today launched Tables, a work-tracking tool with IFTTT-like automation features and support for Google products, including Google Groups, Google Sheets, and more. Currently in beta in the U.S., Tables automates actions like collating data, checking multiple sources of data, and pasting data into other docs for handoff. "Tracking work with existing tech solutions meant building a custom in-house solution or purchasing an off-the-shelf product, but these options are time-consuming, inflexible, and expensive," Tables general manager Tim Gleason explained in a blog post. "Tables helps teams track work and automate tasks to save time and supercharge collaboration -- without any coding required." Using Tables, teams can program bots to schedule recurring email reminders when tasks are overdue, message a Slack or Google Chat room when new form submissions are received, or move a task to someone else's work queue when the status changes.
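The IFTTT-style bots described above boil down to rules of the form "if this condition holds for a record, fire that action." A minimal sketch, with an assumed task schema (`status`, `assignee` fields) that is illustrative rather than Tables' actual data model:

```python
def make_rule(condition, action):
    """A rule pairs a predicate over a record with an action to fire."""
    return (condition, action)

def run_bots(records, rules):
    """IFTTT-style evaluation in the spirit of Tables' bots: for each
    record, fire every action whose condition matches, and collect the
    resulting notifications."""
    fired = []
    for record in records:
        for condition, action in rules:
            if condition(record):
                fired.append(action(record))
    return fired

# Example rule: remind the assignee when a task is overdue (assumed schema).
overdue_rule = make_rule(
    lambda r: r["status"] == "overdue",
    lambda r: "Reminder sent to " + r["assignee"],
)
```

In Tables the conditions and actions are configured without code; the Slack-message and reassignment bots in the article are just further (condition, action) pairs in the same scheme.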


#futureofwork_2020-09-21_11-35-07.xlsx

#artificialintelligence

The graph represents a network of 4,813 Twitter users whose tweets in the requested range contained "#futureofwork", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Monday, 21 September 2020 at 18:47 UTC. The requested start date was Monday, 21 September 2020 at 00:01 UTC and the maximum number of days (going backward) was 14. The maximum number of tweets collected was 7,500. The tweets in the network were tweeted over the 3-day, 16-hour, 29-minute period from Thursday, 17 September 2020 at 07:31 UTC to Monday, 21 September 2020 at 00:00 UTC.