Machine learning engineer Ari Font was worried about the future of Twitter's algorithms. It was mid-2020, and the leader of the team researching ethics and accountability for the company's ML had just left Twitter. For Font, the future of that ethics research was unclear. At the time, Font managed Twitter's machine learning platform teams -- part of Twitter Cortex, the company's central ML organization -- but she believed that ethics research could transform the way Twitter relies on machine learning. She'd always felt that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical AI applications.
Just as the world's countries should pay attention to ocean ecosystem collapse to prevent ecological catastrophe, they should pay attention to technological disruption, which is a continuing, undeclared world war. A.I. and quantum computing capabilities are its two major branches. A.I. applications date back to 1951, when programs ran on the University of Manchester's Ferranti Mark 1 computer. The main issue has been a lack of processing capacity to execute increasingly complicated A.I. algorithms in a shorter amount of time.
Years ago, LinkedIn discovered that the recommendation algorithms it uses to match job candidates with opportunities were producing biased results. The algorithms were ranking candidates partly on the basis of how likely they were to apply for a position or respond to a recruiter. The system wound up referring more men than women for open roles simply because men are often more aggressive at seeking out new opportunities. LinkedIn discovered the problem and built another AI program to counteract the bias in the results of the first. Meanwhile, some of the world's largest job search sites--including CareerBuilder, ZipRecruiter, and Monster--are taking very different approaches to addressing bias on their own platforms, as we report in the newest episode of MIT Technology Review's podcast "In Machines We Trust." Since these platforms don't disclose exactly how their systems work, though, it's hard for job seekers to know how effective any of these measures are at actually preventing discrimination.
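LinkedIn has not published how its corrective AI works. One common family of techniques, though, is fairness-aware re-ranking: take the scores the original model produced and reorder candidates so that each group's representation near the top of the list matches a target share. The sketch below is a hypothetical illustration of that idea, not LinkedIn's system; the function name and the greedy strategy are assumptions.

```python
# Hypothetical sketch of fairness-aware re-ranking. This is NOT LinkedIn's
# actual system (its details are not public); it only illustrates the idea of
# a second pass that counteracts bias in a first model's scores.

def fairness_rerank(candidates, target_share):
    """candidates: list of (name, group, score) tuples.
    target_share: dict mapping group -> desired fraction of the ranking.
    At each rank, greedily pick the top-scored candidate from whichever
    group is currently most under-represented relative to its target."""
    # Bucket candidates by group, best score first.
    pools = {}
    for name, group, score in sorted(candidates, key=lambda c: -c[2]):
        pools.setdefault(group, []).append((name, score))

    ranked, counts = [], {g: 0 for g in target_share}
    while any(pools.values()):
        def deficit(g):
            # How far the group would lag its target if we skip it this round.
            return target_share[g] - counts[g] / (len(ranked) + 1)
        eligible = [g for g in pools if pools[g]]
        # Largest deficit wins; break ties by the group's best remaining score.
        g = max(eligible, key=lambda g: (deficit(g), pools[g][0][1]))
        name, _ = pools[g].pop(0)
        ranked.append(name)
        counts[g] += 1
    return ranked
```

With scores skewed toward one group (as in the job-seeking behaviour LinkedIn observed), the output interleaves groups instead of front-loading the higher-scoring one, while preserving score order within each group.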
In the years since the ubiquitous dating app Tinder launched in 2012, its interface has stayed largely the same. Today Tinder is launching a slew of new features to serve Gen Z, which accounts for more than half of its users, according to its press release. With the introduction of video and an Explore page, the app is going to look a lot more like TikTok and Snapchat. Tinder wants to "bring the main character energy" to the app, invoking a popular TikTok meme, by letting users put videos on their profiles. The company says this is part of making Tinder a "multi-dimensional experience."
Facebook's new artificial intelligence technology not only identifies deepfakes, it can also give hints about their origin. AI-generated videos and pictures have become very popular, and they can create serious problems, because fake videos and manipulated images of any type can be used to put anyone in trouble. Deepfakes use deep learning models to create fictitious photos, videos, and events. These days, deepfakes look so realistic that it is very difficult for the human eye to tell a real picture from a fake one. Facebook's AI team, in collaboration with a group at Michigan State University, has therefore created a model that can not only identify fabricated pictures and videos but also trace their origin. The technology checks for resemblances across a collection of deepfake datasets to find out whether they share a common source, looking for distinctive patterns such as small specks of noise or minor quirks in an image's color range. By spotting these small fingerprints, the new AI model can discern details of how the neural network that produced the image was designed, such as how large the model is and how it was trained.
The annual production of data follows an exponential curve; it is no longer possible for a person, or even a group of people, to assimilate it all. To get the most out of it, we need help from computers. But because most of this data is unstructured, classical algorithms cannot do the job. Only artificial intelligence, and in particular NLP with sentiment analysis, can. We created ElligencIA with the aim of giving meaning to this ocean of data and taking advantage of this collective intelligence. ElligencIA, operational since January 1st, 2021, is an AI consulting and solutions company for the BFSI sector.
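The sentiment-analysis idea mentioned above can be sketched at its simplest as a lexicon-based scorer: count positive and negative words in a piece of text and normalise. This is a toy illustration only; the word lists and function below are invented for the example, and production NLP systems (including whatever ElligencIA actually uses) rely on far richer models, but the principle of turning unstructured text into a signed score is the same.

```python
# Toy lexicon-based sentiment scorer, illustrative only. Real systems use
# trained language models; the word lists here are invented for the example.

POSITIVE = {"gain", "growth", "profit", "strong", "beat", "upgrade"}
NEGATIVE = {"loss", "decline", "weak", "miss", "downgrade", "risk"}

def sentiment_score(text):
    """Return a score in [-1, 1]: +1 if all matched words are positive,
    -1 if all are negative, 0 if none match."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0
    return (pos - neg) / (pos + neg)
```

Run over a stream of news headlines or filings, even this crude score lets a machine triage far more text than any group of people could read.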
If you're into the market and investment side of things, how does a Series F funding round as part of a $325 million investment led by Eurazeo and GV (formerly Google Ventures), bringing Neo4j's valuation to over $2 billion, sound? If you're into the technology and applications side of things, how does a Neo4j demo of a social network application with 3 billion people, running queries designed to test the limits of graph query languages and databases across a 1,000-node cluster, sound? Graph database vendor Neo4j's CEO and co-founder, Emil Eifrem, is announcing the funding and showcasing the demo today at NODES, the company's annual virtual conference. We caught up with Eifrem to get a taste of things to come. Truth be told, we were not entirely surprised to learn about Neo4j's funding round.
Facebook has developed an artificial intelligence that it claims can detect deepfake images and even reverse-engineer them to figure out how they were made and perhaps trace their creators. Deepfakes are wholly artificial images created by an AI. Facebook's new AI looks at similarities among a collection of deepfakes to see if they have a shared origin, looking for unique patterns such as small speckles of noise or slight oddities in the colour spectrum of an image. By identifying the minor fingerprints in an image, Facebook's AI is able to discern details of how the neural network that created the image was designed, such as how large the model is or how it was trained. "I thought there's no way this is going to work," says Tal Hassner at Facebook.
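Facebook and MSU have not released their model, which is a trained neural network, but the core idea of a generator "fingerprint" can be sketched: high-pass filter each image to strip scene content and keep the residual noise, average residuals across many images from the same suspected source so that content washes out and systematic artefacts remain, then compare fingerprints. Everything below, including the simple four-neighbour filter standing in for learned residual extraction, is an illustrative assumption, not the actual method.

```python
# Illustrative sketch of noise-fingerprint matching. The real Facebook/MSU
# system is a trained neural network; here a hand-written high-pass filter
# stands in for residual extraction.

def residual(img):
    """High-pass residual: each pixel minus the mean of its 4 neighbours.
    img is a 2D list of floats; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]) / 4
            out[y][x] = img[y][x] - neigh
    return out

def fingerprint(images):
    """Average the residuals of many images from one suspected source:
    scene content averages toward zero, systematic generator noise remains."""
    h, w = len(images[0]), len(images[0][0])
    fp = [[0.0] * w for _ in range(h)]
    for img in images:
        r = residual(img)
        for y in range(h):
            for x in range(w):
                fp[y][x] += r[y][x] / len(images)
    return fp

def similarity(a, b):
    """Cosine similarity between two flattened fingerprints."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    dot = sum(x * y for x, y in zip(fa, fb))
    na = sum(x * x for x in fa) ** 0.5
    nb = sum(y * y for y in fb) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

Two batches of images carrying the same hidden noise pattern produce highly correlated fingerprints, while a batch from a different "generator" does not; that shared-origin signal is what lets such a system group deepfakes by source.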