US researchers found attractiveness and race preferences were the top predictors of whether people would swipe left or right – and nearly twice as important as any other factor. Other individual characteristics – such as personality and hobbies – were poor predictors of which way someone would swipe. On dating apps, a swipe left means you're not interested in the person, while a swipe right means you are. The average time for a right swipe was just under one second; when a swiper didn't like someone, the decision was even faster, at about half a second.
An artificial intelligence system has been developed that can delve into your mind and learn which faces and types of visage you find most attractive. Finnish researchers wanted to find out whether a computer could identify facial features we find attractive without any verbal or written input guiding it. The team strapped 30 volunteers to an electroencephalography (EEG) monitor that tracks brain waves, then showed them images of 'fake' faces generated from 200,000 real images of celebrities stitched together in different ways. They didn't have to do anything - no swiping right on the ones they liked - as the team could determine their 'unconscious preference' through their EEG readings. They then fed that data into an AI, which learnt the preferences from the brain waves and created whole new images tailored to the individual volunteer.
Influencer marketing has grown significantly due to the pervasive use of social media platforms in promoting products and services. In 2019, spending on the practice reached $6.5 billion, and it is projected to reach $15 billion by 2022. Marketing today is all about algorithms, data and analytics to reach a targeted audience, rather than the traditional spray-and-pray approach. The major success factor is figuring out how influencer marketing can become more effective by targeting the right audience to increase customer engagement. Technological advancements such as machine learning (ML), natural language processing (NLP) and artificial intelligence (AI) are changing how brands enhance influencer marketing. ML is assisting organizations in three areas: creating relevant copy to reach the intended audience, identifying the right content creators for various marketing segments and recommending impactful workflow processes.
Most artificial intelligence is still built on a foundation of human toil. Peer inside an AI algorithm and you'll find something constructed using data that was curated and labeled by an army of human workers. Now, Facebook has shown how some AI algorithms can learn to do useful work with far less human help. The company built an algorithm that learned to recognize objects in images with little help from labels. The Facebook algorithm, called Seer (for SElf-supERvised), fed on more than a billion images scraped from Instagram, deciding for itself which objects look alike. Images with whiskers, fur, and pointy ears, for example, were collected into one pile.
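The grouping step described above, in which similar-looking images are collected into "piles" without any human labels, can be illustrated with a minimal unsupervised clustering sketch. This is a toy k-means over hypothetical 2-D feature vectors, purely illustrative; SEER's actual self-supervised method is far more sophisticated.

```python
def kmeans(points, k, iters=10):
    """Minimal k-means: group feature vectors into k 'piles' by similarity."""
    # Naive deterministic init: take the first k points as starting centroids.
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Move each centroid to the mean of its assigned points.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(dim) / len(members) for dim in zip(*members)]
    return clusters

# Toy 2-D "image features", interleaved so the first two points span both groups.
features = [(0.10, 0.20), (5.0, 5.1), (0.20, 0.10), (5.2, 4.9), (0.15, 0.15), (4.9, 5.0)]
piles = kmeans(features, k=2)
```

The two clearly separated groups end up in separate piles without any label ever being supplied, which is the essence of the "deciding for itself which objects look alike" step.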
The technology sector has been hit hard as of late, as the impending economic reopening has gotten more attention, and rising long-term bond rates have hit growth stocks particularly hard. As rates go up, future earnings are discounted more, harming valuations for growth stocks and increasing attention on value stocks that make profits today. And yet, technology will still play an ever-increasing role in society even post-pandemic. AI helps businesses make sense of their vast troves of data, glean insights, and react quickly in an automated fashion. As AI helps grow revenue and cut costs at the same time, it will be a mission-critical capability for any large company, even post-pandemic. But are there really any AI stocks that still trade at reasonable valuations, and which can handle the market's current value rotation?
The news: Facebook revealed a self-supervised artificial intelligence model it claims can accurately learn to categorize Instagram images with less human assistance than before. Here's how it works: Researchers at Facebook fed the AI, called SEER, over 1 billion unlabeled images extracted from public Instagram accounts. Using self-supervised learning (a method in which a machine learns to train itself without human data labeling), SEER achieved a classification accuracy of 84.2%, outperforming "the most advanced, state-of-the-art self-supervised systems," per Facebook. What's next?: While SEER is still in its early stages, Facebook believes it can bring about real-world benefits and has outlined several possible use cases. The bigger picture: Ever-increasing data sharing by users will likely lead to rapid AI advancement.
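The key idea behind self-supervised learning, as described above, is that the training signal comes from the data itself rather than from human annotators. A minimal illustration of this (toy data and an instance-discrimination pretext task, not Facebook's actual SwAV-based training) is building positive/negative pairs whose labels are derived automatically from whether two "augmented views" share a source item:

```python
import random

def augment(item, rng):
    """Toy 'augmentation': jitter each feature slightly."""
    return [x + rng.uniform(-0.05, 0.05) for x in item]

def make_pairs(items, rng):
    """Build (view_a, view_b, label) triples. The label is derived from
    the data itself: 1 if both views come from the same source item,
    else 0. No human annotation is involved at any point."""
    pairs = []
    for i, item in enumerate(items):
        # Positive pair: two augmented views of the same item.
        pairs.append((augment(item, rng), augment(item, rng), 1))
        # Negative pair: views of two different items.
        other = items[(i + 1) % len(items)]
        pairs.append((augment(item, rng), augment(other, rng), 0))
    return pairs

rng = random.Random(42)
data = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
pairs = make_pairs(data, rng)
```

A model trained to predict these free labels learns useful representations of the data, which is what lets systems like SEER scale to a billion unlabeled images.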
The company also employs a Pin model, trained on a mathematical, model-friendly representation of Pins based on their keywords and images, whose output is aggregated with another model to generate scores indicating which Pinterest boards might be in violation. When enforcing policies across Pins, the platform groups together Pins with similar images and identifies them by a unique hash called an "image-signature." Models generate scores for each image-signature, and based on these scores, the same content moderation decision is applied to all Pins with the same image-signature. Since users usually save thematically related Pins together as a collection on boards around topics like recipes, Pinterest deployed a machine learning model to produce scores for boards and enforce board-level moderation. A Pin model trained using only embeddings (i.e., representations) generates content safety scores for each Pinterest board.
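The image-signature mechanism described above can be sketched as a simple lookup: each Pin is keyed by a hash of its image, a model score is computed once per signature, and a single moderation decision is fanned out to every Pin sharing that signature. The hash function, threshold, and scoring function below are illustrative assumptions, not Pinterest's actual implementation (their image-signature is a perceptual hash, not a raw content hash):

```python
import hashlib

def image_signature(image_bytes: bytes) -> str:
    """Stand-in for Pinterest's 'image-signature': here just a
    content hash of the raw bytes, for illustration."""
    return hashlib.sha256(image_bytes).hexdigest()

def moderate(pins, score_fn, threshold=0.8):
    """Score each unique image-signature once, then apply the same
    moderation decision to all Pins sharing that signature."""
    scores = {}      # signature -> model score, computed once per signature
    decisions = {}   # pin_id -> decision
    for pin_id, image_bytes in pins.items():
        sig = image_signature(image_bytes)
        if sig not in scores:
            scores[sig] = score_fn(image_bytes)
        decisions[pin_id] = "remove" if scores[sig] >= threshold else "keep"
    return decisions

# Two Pins share the same image; a third differs.
pins = {"pin1": b"cat.jpg", "pin2": b"cat.jpg", "pin3": b"dog.jpg"}
decisions = moderate(pins, score_fn=lambda img: 0.9 if img == b"cat.jpg" else 0.1)
```

Keying decisions on the signature rather than the individual Pin is what lets one model score cover every copy of the same image across the platform.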
Artificial intelligence researchers at Facebook claim they have developed software that can predict the likelihood of a Covid patient deteriorating or needing oxygen based on their chest X-rays. Facebook, which worked with academics at NYU Langone Health's predictive analytics unit and department of radiology on the research, says that the software could help doctors avoid sending at-risk patients home too early, while also helping hospitals plan for oxygen demand. The 10 researchers involved in the study -- five from Facebook AI Research and five from the NYU School of Medicine -- said they developed three machine-learning models in total, all slightly different. One tries to predict patient deterioration based on a single chest X-ray, another does the same with a sequence of X-rays, and a third uses a single X-ray to predict how much supplemental oxygen (if any) a patient might need. "Our model using sequential chest X-rays can predict up to four days (96 hours) in advance if a patient may need more intensive care solutions, generally outperforming predictions by human experts," the authors said in a blog post published Friday.
Krish Ashok, a technologist and food science enthusiast based in Chennai, used a new artificial intelligence (AI) tool to animate old pictures of his father and great-grandmother; when he showed the results to the senior members of his family, they were deeply moved. "They were shaken by what they saw," Ashok told Zenger News. Deep Nostalgia, which animates old photos, is the latest AI tool to take the Internet by storm. But it has also sparked a conversation about the ethical implications of such tools -- and not everyone is comfortable with it. The video reenactment technology's web-based version was overrun by eager users within a week of its launch on Feb. 25, and the site had to undergo maintenance to keep up.