Machine Learning


How to Use AI & Machine Learning to Make Social Media Marketing Decisions

#artificialintelligence

Northern Light CEO C. David Seuss presented a virtual session at The Market Research Event (TMRE) Digital Week on June 24, about the value of new, AI-driven tools for "decision-oriented analysis" of social media posts to help set and refine an organization's product marketing strategy. Seuss' talk, entitled "Using Machine Learning to Make Social Media Marketing Decisions," focused on analyzing Twitter – the most text content-rich social media platform – for the specific purpose of gleaning business insights valuable to marketing professionals. "Assessing simple co-occurrence of Twitter hashtags is insufficient, and often downright misleading, for marketers of complex products," Seuss asserted in his presentation. "Understanding the context of the social media conversation is vital to derive a truly meaningful analysis of hashtag and keyword overlaps." Seuss explained that using AI and machine learning techniques to measure the semantic similarity of hashtags leads to far more accurate analysis that gets at the importance, from a business perspective, of seemingly related terms.
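As a rough illustration of the distinction Seuss draws, here is a minimal Python sketch that compares hashtags by the semantic similarity of their embeddings rather than by raw co-occurrence counts. The toy vectors and hashtag names are invented for the example; in practice the embeddings would come from a trained model such as word2vec or GloVe fitted on the tweet corpus.

```python
import numpy as np

# Toy vectors standing in for learned hashtag embeddings
# (in practice these would come from a model trained on tweets).
embeddings = {
    "#machinelearning": np.array([0.90, 0.80, 0.10]),
    "#ai":              np.array([0.85, 0.75, 0.20]),
    "#coffee":          np.array([0.10, 0.05, 0.95]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic similarity as the cosine of the angle between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hashtags that merely co-occur in the same tweets can still be
# semantically distant; embedding similarity captures that distinction.
print(cosine_similarity(embeddings["#machinelearning"], embeddings["#ai"]))      # high
print(cosine_similarity(embeddings["#machinelearning"], embeddings["#coffee"]))  # low
```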


Stanford 'SIRENs' Apply Periodic Activation Functions to Implicit Neural Representations

#artificialintelligence

The challenge of how best to represent signals is at the core of a host of science and engineering problems. In a new paper, Stanford University researchers propose that implicit neural representations offer a number of benefits over conventional continuous and discrete representations and could be used to address many of these problems. The researchers introduce sinusoidal representation networks (SIRENs), a method that leverages periodic activation functions for implicit neural representations, and demonstrate their suitability for representing complex natural signals and their derivatives. Traditionally, discrete representations are used when modelling signals such as images and video, audio waveforms, and 3D shapes represented as point clouds. SIRENs can also be used to solve more general boundary value problems such as the Poisson, Helmholtz, or wave equations.
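For readers curious how a SIREN differs from a standard MLP, below is a minimal PyTorch sketch of a sine-activated layer, following the periodic activation and initialization scheme described in the paper (a frequency factor omega_0 of 30, with uniform weight bounds chosen to keep activations well-distributed through depth). The layer widths and the coordinate-to-intensity usage are illustrative assumptions, not the paper's exact experimental setup.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: a linear map followed by sin(omega_0 * x)."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            # First layer: U(-1/fan_in, 1/fan_in); later layers:
            # U(-sqrt(6/fan_in)/omega_0, sqrt(6/fan_in)/omega_0), per the paper.
            bound = (1.0 / in_features) if is_first else math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A tiny SIREN mapping 2D pixel coordinates to a grayscale intensity;
# the final layer is left linear, as is common when fitting images.
model = nn.Sequential(
    SineLayer(2, 64, is_first=True),
    SineLayer(64, 64),
    nn.Linear(64, 1),
)
coords = torch.rand(16, 2) * 2 - 1  # coordinates normalized to [-1, 1]
print(model(coords).shape)          # torch.Size([16, 1])
```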


AI For All: The US Introduces New Bill For Affordable Research

#artificialintelligence

Yesterday, AIM published an article on how difficult it is for small labs and individual researchers to persevere in the high-compute, high-cost industry of deep learning. Today, US policymakers introduced a new bill that aims to make deep learning research affordable for all. The National AI Research Resource Task Force Act was introduced in the House by Representative Anna G. Eshoo (D-CA) and her colleagues. The bill was met with broad support from top universities and companies engaged in artificial intelligence (AI) research. Well-known supporters include Stanford University, Princeton University, UCLA, Carnegie Mellon University, Johns Hopkins University, OpenAI, Mozilla, Google, Amazon Web Services, Microsoft, IBM and NVIDIA, among others.


Hot papers on arXiv from the past month – June 2020

AIHub

We ask whether recent progress on the ImageNet classification benchmark continues to represent meaningful generalization, or whether the community has started to overfit to the idiosyncrasies of its labeling procedure. We therefore develop a significantly more robust procedure for collecting human annotations of the ImageNet validation set. Using these new labels, we reassess the accuracy of recently proposed ImageNet classifiers, and find their gains to be substantially smaller than those reported on the original labels. Furthermore, we find the original ImageNet labels to no longer be the best predictors of this independently-collected set, indicating that their usefulness in evaluating vision models may be nearing an end. Nevertheless, we find our annotation procedure to have largely remedied the errors in the original labels, reinforcing ImageNet as a powerful benchmark for future research in visual recognition.


Learn to analyze and visualize data with Python during this $30 training

Mashable

As 2020 has clearly shown us, nobody can actually predict the future. But there are some people who come pretty close, and their profession may surprise you. Data scientists (yes, you read that right) can practically predict the future of certain industries using big data and a coding language called Python. Knowing that, it's not hard to see why Glassdoor named data scientist the third most desired job in the US, with over 6,500 openings, a median base salary of $107,801, and a job satisfaction rating of 4.0 out of 5. If you're looking for a new career path with a handsome salary and the ability to basically predict the future, check out this e-book and course bundle to get started.


50 Machine Learning and Data Science Companies That Are Revolutionizing Industries

#artificialintelligence

Nowadays it's hard to find a single industry where machine learning and data science aren't being used to improve productivity and deliver results. Indeed, that is why people are so excited about the promise of artificial intelligence: it can be applied effectively to so many diverse problem spaces, and it works. This list was aggregated after analyzing over 200 company descriptions; we've broadly organized the companies by the problem domain being tackled and included a brief description of each mission. Example entries include a framework for providing data integrations and web interfaces for trained machine learning models, and a company that develops AI-powered medical imaging tools to help improve the efficacy of radiologists in detecting illnesses.


How Intelligent Is Your AI? – MIT Sloan Management Review

#artificialintelligence

Ask these four questions to tell if your AI solution is really AI. In a world where buzzwords come and go, artificial intelligence has been remarkably durable. Since it first emerged as a concept in the 1950s, there has been a relatively constant flow of technologies, products, services, and companies that purport to be AI. It is quite likely that a solution you are investing in today is being referred to as AI-enabled or machine-learning-driven. The reality today for most organizations is that AI and machine learning form a rather small piece of the overall analytics pie.


How to build a machine learning model in 7 steps

#artificialintelligence

All types of organizations are implementing AI projects for numerous applications in a wide range of industries. These applications include predictive analytics, pattern recognition systems, autonomous systems, conversational systems, hyper-personalization activities and goal-driven systems. These projects have something in common: each is predicated on understanding the business problem, and on applying data and machine learning algorithms to that problem, resulting in a machine learning model that addresses the project's needs. Deploying and managing machine learning projects typically follows the same pattern. However, existing app development methodologies don't apply, because AI projects are driven by data, not programming code.
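As a concrete illustration of that common pattern (not the article's specific seven steps), here is a minimal scikit-learn sketch that runs through data identification, preparation, training, and evaluation before any deployment decision. The dataset and model choices are placeholder assumptions for the example.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Data identified and collected (a bundled dataset stands in here).
X, y = load_breast_cancer(return_X_y=True)

# Hold out data so evaluation reflects unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Prepare data and train: feature scaling plus a simple classifier,
# wrapped in one pipeline so preprocessing travels with the model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate on the held-out set before considering deployment.
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```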


Security Think Tank: AI cyber attacks will be a step-change for criminals

#artificialintelligence

Whether or not your organisation suffers a cyber attack has long been considered a case of 'when, not if', with cyber attacks having a huge impact on organisations. In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. By 2019, this had risen to 4.1 billion exposed records. While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, their use and capabilities are growing and becoming more sophisticated. In time, cyber criminals will inevitably take advantage of AI, and such a move will increase threats to digital security and increase the volume and sophistication of cyber attacks.


COVID-19: China's digital health strategies against the global pandemic

#artificialintelligence

Digital health technologies are critical tools in the ongoing fight against the global COVID-19 pandemic. Artificial Intelligence (AI), big data, 5G and robotics can provide valuable and innovative solutions for patient treatment, frontline protection, risk reduction, communications and improved quality of living under lockdown as the world continues to battle the pandemic. Last week's AI for Good webinar, 'COVID-19: China's digital health strategies against the global pandemic', presented different use cases from China's digital health strategy and provided context for how AI and information and communication technologies (ICT) have supported healthcare and citizen needs in the world's most populous nation. Following the start of the COVID-19 outbreak in January 2020, China implemented a wide-reaching strategy to control and contain the virus. "With various available technologies, we [ICT engineers] can actually play a very positive supporting role in fighting the current virus," said Shan Xu, an engineer in the Smart Health Department at the China Academy of Information and Communications Technology (CAICT).