Facebook's artificial intelligence researchers have a plan to make algorithms smarter by exposing them to human cunning, and they want your help to supply the trickery. On Thursday, Facebook's AI lab launched a project called Dynabench that creates a kind of gladiatorial arena in which humans try to trip up AI systems. Challenges include crafting sentences that cause a sentiment-scoring system to misfire -- for example, reading a comment as negative when it is actually positive. Another involves tricking a hate speech filter -- a potential draw for teens and trolls.
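To see why humans can trip up such systems, consider a deliberately naive toy scorer (this is an illustrative sketch, not Dynabench's actual models, which are far more sophisticated): a lexicon-based counter has no notion of negation, so a human can craft a positive sentence it reads as negative.

```python
# Toy illustration only -- NOT Dynabench's models: a naive lexicon-based
# sentiment scorer that human adversaries can easily fool with negation.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def naive_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A human-crafted adversarial sentence: clearly positive to a reader,
# but the counter sees only "bad" and scores it negative.
print(naive_sentiment("this movie was not bad at all"))  # negative (misfire)
print(naive_sentiment("a great film"))                   # positive
```

Dynabench's premise is that sentences which fool the current model in this way become training data for the next, harder-to-fool model.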
Benchmarking is a crucial step in developing ever more sophisticated artificial intelligence. It provides a helpful abstraction of an AI's capabilities and gives researchers a firm sense of how well a system performs on specific tasks. But benchmarks are not without drawbacks. Once an algorithm masters the static dataset behind a given benchmark, researchers have to undertake the time-consuming process of developing a new one to drive further improvement. As AI systems have improved over time, researchers have had to build new benchmarks with increasing frequency.
In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous people of color -- Black, Asian, and Indian -- being turned white as well. Two well-known corporate AI researchers -- Facebook's chief AI scientist, Yann LeCun, and Google's co-lead of AI ethics, Timnit Gebru -- expressed strongly divergent views about how to interpret the tool's error. A heated, multiday online debate ensued, dividing the field into two distinct camps: some argued that the bias in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including which data to consider.
Social media and information sharing are familiar to every internet user. The presence and popularity of Twitter, LinkedIn, and many other platforms have made it convenient to spread knowledge around the globe in a couple of clicks. It is thanks to the extensive use of these networking sites by thought leaders, achievers, and change-makers that data science and AI knowledge has spread across the globe. IPFC online recently published a list of the top 50 digital influencers to follow, from which we will highlight those concerned with machine learning and AI. Additionally, we have provided some more influencers worth following.
Facebook took major steps to announce its all-out commitment to chatbots. The first is a chatbot training ground called ParlAI -- a play on words stemming from its primarily French-speaking researchers. The second is that Facebook is sharing ParlAI with the world as an open-source tool, offering the training software so that developers and researchers can use it to train their own chatbot "agents."
Google's Area 120 incubator today launched Tables, a work-tracking tool with IFTTT-like automation features and support for Google products, including Google Groups, Google Sheets, and more. Currently in beta in the U.S., Tables automates actions like collating data, checking multiple sources of data, and pasting data into other docs for handoff. "Tracking work with existing tech solutions meant building a custom in-house solution or purchasing an off-the-shelf product, but these options are time-consuming, inflexible, and expensive," Tables general manager Tim Gleason explained in a blog post. "Tables helps teams track work and automate tasks to save time and supercharge collaboration -- without any coding required." Using Tables, teams can program bots to schedule recurring email reminders when tasks are overdue, message a Slack or Google Chat room when new form submissions are received, or move a task to someone else's work queue when the status changes.
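Tables' internals are not public, but the bot behavior the article describes follows the familiar trigger/action pattern. A minimal sketch of that pattern (the `Task` and `Bot` names here are hypothetical, not Tables' API) might look like:

```python
# Hypothetical sketch of the IFTTT-style trigger/action pattern described
# in the article; this is NOT Google Tables' actual implementation or API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    title: str
    status: str = "open"

@dataclass
class Bot:
    trigger: Callable[[Task], bool]   # e.g. "task is overdue"
    action: Callable[[Task], None]    # e.g. "send a reminder"

def run_bots(task: Task, bots: list[Bot]) -> None:
    # Fire every bot whose trigger condition matches the task's state.
    for bot in bots:
        if bot.trigger(task):
            bot.action(task)

notifications: list[str] = []
overdue_reminder = Bot(
    trigger=lambda t: t.status == "overdue",
    action=lambda t: notifications.append(f"Reminder: '{t.title}' is overdue"),
)

task = Task(title="Ship beta")
task.status = "overdue"
run_bots(task, [overdue_reminder])
print(notifications)  # ["Reminder: 'Ship beta' is overdue"]
```

The appeal of the no-code version is that users configure the trigger and action from menus instead of writing the lambdas themselves.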
New York – Social media giant Twitter said Monday it would investigate its image-cropping function after users complained it favored white faces over Black ones. The image preview feature of Twitter's mobile app automatically crops pictures that are too big to fit on the screen, selecting which parts of the image to display and which to conceal. Prompted by a graduate student who found an image he was posting cropped out the face of a Black colleague, a San Francisco-based programmer found Twitter's system would crop out images of President Barack Obama when posted together with images of Republican Senate Leader Mitch McConnell. "Twitter is just one example of racism manifesting in machine learning algorithms," the programmer, Tony Arcieri, wrote on Twitter. Twitter is one of the world's most popular social networks, with nearly 200 million daily users.
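Twitter has said its auto-crop relies on a saliency model that predicts where viewers are likely to look. The one-dimensional toy below (an assumption-laden sketch, not Twitter's system) shows the core mechanic: when a fixed-size window is placed wherever predicted saliency is highest, everything the model scores low gets cropped out, so any bias in the saliency scores becomes bias in the crop.

```python
# Hedged 1-D toy of saliency-based cropping -- NOT Twitter's actual model.
# saliency: per-row scores for a tall image; window: crop height in rows.
def best_crop(saliency: list[float], window: int) -> int:
    best_start, best_sum = 0, float("-inf")
    # Slide the crop window and keep the placement with the highest total score.
    for start in range(len(saliency) - window + 1):
        s = sum(saliency[start:start + window])
        if s > best_sum:
            best_start, best_sum = start, s
    return best_start

# The model's attention concentrates near the bottom rows, so the
# chosen crop starts at row 3 and the top of the image is discarded.
rows = [0.1, 0.1, 0.2, 0.9, 0.8]
print(best_crop(rows, 2))  # 3
```

In a real system the scores come from a neural network over two dimensions, which is exactly why auditing what the saliency model finds "interesting" matters.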
Concerns about bias or unfair results in AI systems have come to the fore in recent years as the technology has infiltrated hiring, insurance, law enforcement, advertising, and other aspects of society. Prejudiced code may be a source of indignation on social media, but it affects people's access to opportunities and resources in the real world, and it needs to be dealt with at both national and international levels. A variety of factors go into making insufficiently neutral systems: unrepresentative training data, lack of testing on diverse subjects at scale, lack of diversity among research teams, and so on. But among those who developed Twitter's cropping algorithm, several expressed frustration about the assumptions being made about their work. Ferenc Huszár, a former Twitter employee, a co-author of Twitter's image cropping research, and now a senior lecturer in machine learning at the University of Cambridge, acknowledged there is reason to look into the results people have been reporting, though he cautioned against jumping to conclusions about negligence or lack of oversight. Some of the outrage, he noted, was based on a small number of reported failure cases; while those failures look very bad, there is work to be done to determine the degree to which they are associated with race or gender.
There is no doubt that, on the whole, the economic impact of the lockdown and pandemic will be devastating. But while most leisure activities were throttled by the lockdown, others thrived -- just ask any of your friends who did Yoga With Adriene (probably the same mates who brew their own kombucha). Tinder and Bumble usage alone spiked by over 20%, with Tinder registering 3 billion swipes on 28 March alone. However, the pandemic only accelerated a trend that was already in full force: finding love via apps. "Met online" is now the most common way people report finding their significant other, streets ahead of boring old classics like "met in church" or "met in the neighbourhood". While there is a range of massively popular dating apps, including Bumble and Grindr, Tinder remains the most popular platform by a significant margin.