NotCo taps AI to develop new plant-based alternatives - Verdict

#artificialintelligence

Chilean food-tech start-up NotCo uses artificial intelligence (AI) to identify the optimum combinations of plant proteins when creating vegan alternatives to animal-based food products. The company, set up in 2015, has attracted investment from Amazon founder Jeff Bezos and Future Positive, a US investment fund founded by Biz Stone, the co-founder of Twitter. NotCo's machine learning algorithm compares the molecular structure of dairy or meat products to plant sources, searching for proteins with similar molecular components. NotCo has a database containing over 400,000 different plants, including macronutrient breakdown and chemical composition. These factors are used to predict novel food combinations with the target flavour, texture, and functionality.
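
As a rough illustration of this kind of molecular matching (a minimal sketch, not NotCo's actual system), the example below scores plant ingredients against a target animal-product profile using cosine similarity over feature vectors; the feature values, ingredient list, and the `match_plants` helper are all hypothetical.

```python
import numpy as np

# Hypothetical feature vectors: each ingredient described by the same
# molecular/nutritional features (e.g. protein, fat, sugar fractions).
PLANTS = {
    "pea protein":   np.array([0.80, 0.08, 0.05]),
    "chickpea":      np.array([0.20, 0.06, 0.10]),
    "cabbage juice": np.array([0.02, 0.01, 0.04]),
}
COW_MILK = np.array([0.26, 0.30, 0.38])  # illustrative target profile

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_plants(target, plants, k=2):
    """Rank plant ingredients by similarity to the target product."""
    scored = sorted(plants.items(), key=lambda kv: -cosine(target, kv[1]))
    return scored[:k]

print(match_plants(COW_MILK, PLANTS))
```

A production system would operate over thousands of molecular features and predict flavour and texture as well, but the core idea is the same: find plant sources whose feature profile lies closest to the animal product's.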


Machine Learning Market Outlook 2021: Big Things are Happening - Digital Journal

#artificialintelligence

The Global Machine Learning Market Report 2021 is the latest research study released by HTF MI, evaluating market risk, highlighting opportunities, and providing support for strategic and tactical decision-making. The report provides information on market trends and development, growth drivers, technologies, and the changing investment structure of the global machine learning market. Some of the key players profiled in the study are Microsoft Corporation, IBM Corporation, SAP SE, SAS Institute, Google, Amazon Web Services, Baidu, BigML, Fair Isaac Corporation (FICO), Hewlett Packard Enterprise Development LP (HPE), Intel Corporation, KNIME.com AG, RapidMiner, Angoss Software Corporation, H2O.ai, Alpine Data, Domino Data Lab, Dataiku, Luminoso Technologies, TrademarkVision, Fractal Analytics, TIBCO Software, Teradata, Dell, and Oracle Corporation. The study provides a comprehensive outlook, segmented by organization size (SMEs and large enterprises), deployment (cloud and on-premise), and 18 countries across the globe, along with insights on emerging and major players.


Case Study 1: Customer satisfaction prediction on Olist Brazilian Dataset

#artificialintelligence

The Olist store is an e-commerce business headquartered in Sao Paulo, Brazil. The firm acts as a single point of contact between various small businesses and the customers who wish to buy their products. Recently, it uploaded a dataset on Kaggle containing information about 100k orders made at multiple marketplaces between 2016 and 2018. What we purchase on e-commerce websites is strongly influenced by the reviews posted about those products, so the firm can leverage these reviews to remove products that consistently receive negative ones.
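
A minimal sketch of the kind of satisfaction model such a case study might build: binarize the review score into satisfied/unsatisfied and fit a simple classifier on order features. The file name and the columns `price`, `freight_value`, `delivery_days`, and `review_score` are assumptions about a pre-merged version of the Kaggle data, not its exact schema.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Assumed pre-merged table; the real Kaggle dataset spreads these
# fields across several CSVs that must be joined first.
df = pd.read_csv("olist_orders_merged.csv")

# Treat review scores of 4-5 as "satisfied", 1-3 as "unsatisfied".
df["satisfied"] = (df["review_score"] >= 4).astype(int)

X = df[["price", "freight_value", "delivery_days"]]
y = df["satisfied"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```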


Watershed of Artificial Intelligence: Human Intelligence, Machine Intelligence, and Biological Intelligence

arXiv.org Artificial Intelligence

This article reviews the "Once learning" mechanism that was proposed 23 years ago and the subsequent successes of "One-shot learning" in image classification and "You Only Look Once - YOLO" in object detection. Analyzing the current development of Artificial Intelligence (AI), the proposal is that AI should be clearly divided into the following categories: Artificial Human Intelligence (AHI), Artificial Machine Intelligence (AMI), and Artificial Biological Intelligence (ABI), which will also be the main directions of theory and application development for AI. As a watershed for the branches of AI, some classification standards and methods are discussed: 1) Human-oriented, machine-oriented, and biological-oriented AI R&D; 2) Information input processed by dimensionality-up or dimensionality-reduction; 3) The use of one/few or large samples for knowledge learning.


Gaining the Enterprise Edge in AI Products - insideBIGDATA

#artificialintelligence

In this contributed article, Taggart Bonham, Product Manager of Global AI at F5 Networks, discusses GPT-3, the text-generating AI model OpenAI released last June. As seen in the deluge of Twitter demos, GPT-3 works so well that people have used it to generate text-based DevOps pipelines, complex SQL queries, Figma designs, and even code. In the article, Taggart explains how enterprises need to prepare for the AI economy by standardizing data collection processes across their organizations, as OpenAI did for GPT-3, so that their data can be properly leveraged.


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Unlocking eCommerce growth with machine learning and behavioural psychology

#artificialintelligence

Olist is the largest eCommerce website in Brazil. It connects small retailers from all over the country to sell directly to customers. The business has generously shared a large dataset containing 110k orders placed on its site from 2016 to 2018. The SQL-style relational database covers customers and their orders, around 100k unique orders across 73 product categories, and also includes item prices, timestamps, reviews, and the geolocation associated with each order.
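
To give a feel for working with this relational layout, the sketch below joins orders to their items and reviews with pandas; the file and column names loosely follow the shape of the public Olist Kaggle dataset but should be treated as assumptions.

```python
import pandas as pd

# Assumed file and column names, loosely following the public Olist dataset.
orders  = pd.read_csv("olist_orders_dataset.csv")         # order_id, customer_id, timestamps
items   = pd.read_csv("olist_order_items_dataset.csv")    # order_id, price, freight_value
reviews = pd.read_csv("olist_order_reviews_dataset.csv")  # order_id, review_score

# One row per order: total basket value plus the review it received.
basket = items.groupby("order_id", as_index=False)["price"].sum()
merged = (
    orders
    .merge(basket, on="order_id", how="left")
    .merge(reviews[["order_id", "review_score"]], on="order_id", how="left")
)
print(merged[["order_id", "price", "review_score"]].head())
```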


High-level Approaches to Detect Malicious Political Activity on Twitter

arXiv.org Artificial Intelligence

Our work represents another step toward the detection and prevention of these ever-more-present political manipulation efforts. We therefore start by focusing on understanding what state-of-the-art approaches lack -- since the problem remains, this is a fair assumption. We find concerning issues within the current literature and follow a diverging path: notably, we place emphasis on data features that are less susceptible to malicious manipulation, and on high-level approaches that avoid a level of granularity biased towards easy-to-spot, low-impact cases. We designed and implemented a framework -- Twitter Watch -- that performs structured Twitter data collection, and applied it to the Portuguese Twittersphere. We investigate a data snapshot taken in May 2020, with around 5 million accounts and over 120 million tweets (a figure that has since grown to over 175 million). The analyzed time period stretches from August 2019 to May 2020, with a focus on the Portuguese elections of October 6th, 2019; however, the Covid-19 pandemic showed itself in our data, and we also delve into how it affected typical Twitter behavior. We pursued three main approaches: content-oriented, metadata-oriented, and network-interaction-oriented. We learn that Twitter's suspension patterns are not well matched to the type of political trolling found in the Portuguese Twittersphere -- identified both by this work and by an independent peer -- nor to fake-news-posting accounts. Through two distinct analyses, we also inferred that the different types of malicious accounts we independently gathered are very similar in terms of both content and interaction, while being simultaneously very distinct from regular accounts.
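
As a toy illustration of a network-interaction-oriented approach (a sketch of the general idea, not the paper's Twitter Watch pipeline), the example below builds a weighted retweet graph and computes per-account interaction features that could feed a downstream detector; the edge list is fabricated for the example.

```python
import networkx as nx

# Fabricated retweet interactions: (retweeter, original_author).
retweets = [
    ("troll_a", "troll_b"), ("troll_b", "troll_a"), ("troll_a", "troll_b"),
    ("user_1", "news_outlet"), ("user_2", "news_outlet"), ("user_3", "user_1"),
]

g = nx.DiGraph()
for src, dst in retweets:
    if g.has_edge(src, dst):
        g[src][dst]["weight"] += 1
    else:
        g.add_edge(src, dst, weight=1)

# Interaction-level features per account: who amplifies whom, and how
# reciprocally -- tight mutual-amplification loops are a common troll signature.
pagerank = nx.pagerank(g, weight="weight")
for node in g.nodes:
    reciprocal = sum(1 for nbr in g.successors(node) if g.has_edge(nbr, node))
    print(f"{node}: out={g.out_degree(node)}, in={g.in_degree(node)}, "
          f"reciprocal={reciprocal}, pagerank={pagerank[node]:.3f}")
```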


Google Maps Keep Getting Better, Thanks To DeepMind's Machine Learning

#artificialintelligence

Google users contribute more than 20 million pieces of information on Maps every day – that's more than 200 contributions every second. The unpredictability of traffic can trip up the algorithms that estimate the best ETA, and new roads and buildings are being constructed all the time. Though Google Maps gets its ETA right most of the time, there is still room for improvement. Researchers at Alphabet-owned DeepMind have partnered with the Google Maps team to improve the accuracy of real-time ETAs by up to 50% in places like Berlin, Jakarta, São Paulo, Sydney, Tokyo, and Washington D.C.


Google Maps and DeepMind enhance AI capabilities to improve route calculations

ZDNet

It has been nearly 13 years since Google Maps started providing traffic data to help people navigate their way around, along with details about whether the traffic along the route is heavy or light, the estimated travel time, and the estimated time of arrival (ETA). In a bid to further enhance those traffic prediction capabilities, Google and Alphabet's AI research lab DeepMind have improved real-time ETAs by up to 50% in places such as Sydney, Tokyo, Berlin, Jakarta, Sao Paulo, and Washington DC by using a machine learning technique known as graph neural networks. Google Maps product manager Johann Lau said Google Maps uses aggregate location data and historical traffic patterns to determine current traffic estimates, but these previously did not account for what traffic may look like if a jam were to occur mid-journey. "Our ETA predictions already have a very high accuracy bar -- in fact, we see that our predictions have been consistently accurate for over 97% of trips … this technique is what enables Google Maps to better predict whether or not you'll be affected by a slowdown that may not have even started yet," he said in a blog post. The researchers at DeepMind said that using graph neural networks allows Google Maps to incorporate "relational learning biases to model the connectivity structure of real-world road networks."
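
For intuition about how a graph neural network can model a road network (a toy sketch, not DeepMind's production model), the example below treats road segments as graph nodes and runs one round of neighbor message passing with numpy to refine per-segment travel-time estimates; all weights and data are made up and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy road network: 4 segments, adjacency[i, j] = 1 if segment j connects to i.
adjacency = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Node features per segment: [length_km, current_speed_kmh].
x = np.array([
    [1.2, 30.0],
    [0.8, 15.0],   # congested segment
    [2.0, 50.0],
    [1.5, 45.0],
])

w_self, w_nbr = rng.normal(size=(2, 8)), rng.normal(size=(2, 8))
w_out = rng.normal(size=8)

# One message-passing layer: combine each segment's own features with the
# mean of its neighbors', so congestion information propagates along the road.
deg = adjacency.sum(axis=1, keepdims=True)
nbr_mean = adjacency @ x / np.maximum(deg, 1.0)
h = np.maximum(x @ w_self + nbr_mean @ w_nbr, 0.0)  # ReLU activation

eta_seconds = h @ w_out  # per-segment travel-time estimate (illustrative only)
print(eta_seconds)
```

In a trained model the weights would be learned from historical trips, and stacking more message-passing layers lets information from segments several hops away influence each estimate.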