IoT-Enhanced Processors Increase Performance, AI, Security

#artificialintelligence

What's New: Today at the Intel Industrial Summit 2020, Intel announced new enhanced internet of things (IoT) capabilities. The 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series bring new artificial intelligence (AI), security, functional safety and real-time capabilities to edge customers. With a robust hardware and software portfolio, an unparalleled ecosystem and 15,000 customer deployments globally, Intel is providing robust solutions for the edge silicon market, a $65 billion opportunity by 2024.

"By 2023, up to 70% of all enterprises will process data at the edge. 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors represent our most significant step forward yet in enhancements for IoT, bringing features that address our customers' current needs, while setting the foundation for capabilities with advancements in AI and 5G." – John Healy, Intel vice president of the Internet of Things Group and general manager of Platform Management and Customer Engineering

Why It's Important: Intel works closely with customers to build proofs of concept, optimize solutions and collect feedback along the way. Innovations delivered with 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors are a response to challenges felt across the IoT industry: edge complexity, total cost of ownership and a range of environmental conditions.


Can RL from pixels be as efficient as RL from state?

Robohub

A remarkable characteristic of human intelligence is our ability to learn tasks quickly. Most humans can learn reasonably complex skills like tool use and gameplay within just a few hours, and understand the basics after only a few attempts. This suggests that data-efficient learning may be a meaningful part of developing broader intelligence. On the other hand, deep reinforcement learning (RL) algorithms can achieve superhuman performance on games like Atari, StarCraft, Dota, and Go, but require large amounts of data to get there. Achieving superhuman performance on Dota took over 10,000 human years of gameplay. Unlike simulation, skill acquisition in the real world is constrained to wall-clock time.


Automation Continuum - Leveraging AI and ML to Optimise RPA

#artificialintelligence

Over the past year, the adoption of robotic process automation (RPA), especially advanced macros or "robotic workers" intended to automate the most mundane, repetitive and time-consuming tasks, has seen significant growth. As the technology matures alongside artificial intelligence and machine learning, the most promising future for knowledge workers is one where the ease of deployment of RPA and the raw power of machine learning combine to create more productive, more intelligent robotic workers. One key to adoption is that companies prefer not to burden people with a lot of new tools, and instead let their existing environments learn. In each case, companies work within whatever UI their people already use: perhaps they build a widget, or add a panel containing the required data to an existing dashboard. Extending the current UI, or adding a layer above it that routes work to the right person, means workers never see the 80% of cases that were automatically delegated and never reached them.


Quantifying Quantum computing's value in financial services - Fintech News

#artificialintelligence

The next great leap for computing may be a bit closer with the help of joint efforts between the U.S. government, the private sector -- and hundreds of millions of dollars. And along the way, we might see a benefit for the financial services sector in the form of reduced false positives in fraud detection. The U.S. Department of Energy said this week that it will spend $625 million over the next five years to develop a dozen research centers devoted to artificial intelligence (AI) and quantum computing. Another $340 million will come from the private sector and academia, bringing Uncle Sam together with the likes of IBM, Amazon and Google to apply the highest of high tech to a variety of verticals and applications. In an interview with Karen Webster, Dr. Stefan Wörner, global leader for quantum finance and optimization at IBM, said we're getting closer to crossing the quantum-computing Rubicon from concept to real-world applications. The basic premise behind quantum computing is that it can tackle, with blinding speed and pinpoint accuracy, tasks that aren't possible on "regular" computers.


Executive Forum: Machine Learning & AI

#artificialintelligence

Although machine learning and artificial intelligence (AI) are terms that are often used interchangeably, they are quite different. That difference becomes more important as applications for these technologies become more prevalent. Tech Briefs posed questions to machine learning/AI industry executives to get their views on issues such as machine learning platform selection, interpreting data created by these platforms, and pros and cons of implementing machine learning. Our participants are Dr. Florian Baumann, Chief Technology Officer - Automotive & AI, at Dell Technologies; Mario Bergeron, Technical Marketing Engineer at Averna Technologies; Zach Mayer, Vice President of Data Science at DataRobot; George Rendell, Senior Director of NX Design at Siemens Digital Industries Software; and Rajesh Ramachandran, Chief Digital Officer - Industrial Automation, at ABB Inc. Tech Briefs: Machine learning is a term that has confused many people, partly because its definition has taken on multiple forms. How do you define machine learning and how do you see it being used in manufacturing, medical, transportation, or other industrial applications?


4 Python AutoML Libraries Every Data Scientist Should Know

#artificialintelligence

With the use of recent methods like Bayesian optimization, auto-sklearn is built to navigate the space of possible models and learns to infer whether a specific configuration will work well on a given task. Created by Matthias Feurer et al., the library's technical details are described in the paper "Efficient and Robust Automated Machine Learning." In addition to discovering data preparation steps and model selections for a dataset, it learns from models that perform well on similar datasets. Top-performing models are aggregated into an ensemble. On top of an efficient implementation, auto-sklearn requires minimal user interaction.
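The "top-performing models are aggregated in an ensemble" step can be illustrated with a minimal, library-free sketch. This is not auto-sklearn's API (which internally uses SMAC-style Bayesian optimization and greedy ensemble selection); all function and variable names below are illustrative. Candidate configurations are scored on held-out data, and the best few are averaged:

```python
import random

# Toy data: y = 2*x + noise, split into train and validation sets.
random.seed(0)
data = [(x, 2 * x + random.uniform(-0.5, 0.5)) for x in range(20)]
train, valid = data[:15], data[15:]

def fit_slope(points, ridge):
    # One "configuration" = a ridge penalty for a slope-only linear model.
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points) + ridge
    return num / den

def mse(slope, points):
    # Validation score for a fitted model.
    return sum((slope * x - y) ** 2 for x, y in points) / len(points)

# Evaluate each candidate configuration on the validation set.
configs = [0.0, 0.1, 1.0, 10.0, 100.0]
scored = sorted(
    (mse(fit_slope(train, r), valid), fit_slope(train, r)) for r in configs
)

# Aggregate the top 3 models into a simple averaging ensemble,
# mirroring the "top performers are aggregated" idea.
top_slopes = [slope for _, slope in scored[:3]]
ensemble_slope = sum(top_slopes) / len(top_slopes)
print(round(ensemble_slope, 2))
```

The averaged slope lands near the true value of 2 because the best-scoring configurations dominate the ensemble, which is the intuition behind ensembling search results rather than keeping only the single winner.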


These 10 AI Companies Are Transforming Marketing In 2020

#artificialintelligence

Traditional marketing tools lack the flexibility, scalability, and comprehensiveness to address many of the challenges faced by modern companies. With growing digitization and an always-online audience, more marketing teams now require artificial intelligence (AI) to stay competitive. Before rushing to hire a data science team, you need to evaluate the third-party AI solutions already available on the market. Many vendors use "AI" in their sales pitches, but lack credible research and engineering teams that can productize and operationalize cutting-edge AI research. In this article, we feature companies with proven AI and ML expertise that are transforming marketing activities with state-of-the-art AI-driven solutions.


HyperOpt for Automated Machine Learning With Scikit-Learn

#artificialintelligence

Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement. HyperOpt is an open-source Python library for Bayesian optimization developed by James Bergstra, and HyperOpt-Sklearn is a wrapper that brings HyperOpt-based AutoML to the popular scikit-learn machine learning library, including its suite of data preparation transforms and classification and regression algorithms. In this tutorial, you will discover how to use HyperOpt for automated machine learning with scikit-learn in Python. HyperOpt for Automated Machine Learning With Scikit-Learn. Photo by Neil Williamson, some rights reserved.
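HyperOpt's core contract -- minimize an objective function over a search space via a sequential evaluate-and-update loop (its `fmin` function) -- can be sketched with the standard library alone. The real library uses smarter proposal algorithms such as Tree-structured Parzen Estimators; this hedged sketch substitutes plain random search, and every name below (`fmin_random`, `sample_space`) is illustrative rather than HyperOpt's API:

```python
import random

def objective(params):
    # Toy loss surface: minimized at C=1.0, gamma=0.1, standing in for
    # hyperparameters you might tune in a scikit-learn model.
    return (params["C"] - 1.0) ** 2 + (params["gamma"] - 0.1) ** 2

def sample_space(rng):
    # A search space, loosely analogous to HyperOpt space definitions.
    return {"C": rng.uniform(0.0, 10.0), "gamma": rng.uniform(0.0, 1.0)}

def fmin_random(objective, sample_space, max_evals, seed=0):
    # Sequential loop: propose a candidate, evaluate, keep the best.
    # Real HyperOpt replaces the random proposal with a TPE model that
    # learns from past evaluations.
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(max_evals):
        params = sample_space(rng)
        loss = objective(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

best, loss = fmin_random(objective, sample_space, max_evals=500)
print(best, loss)
```

With enough evaluations the loop converges toward the optimum; the value of a smarter proposal strategy is reaching the same quality in far fewer objective evaluations, which matters when each evaluation trains a full model.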


Robotics is the Future of Manufacturing: Impactful Tech on the Horizon

#artificialintelligence

Industrial manufacturing as a sector has been an early adopter of robotics and other forms of technological improvement for decades. Robotics has been one of the best options for increasing production efficiency in large and often highly repetitive manufacturing processes. But the era of producing large quantities of just a few products with low mix is coming to an end, giving way to increased product personalization that requires a more flexible production process with less waste than ever before. Fortunately, the future of manufacturing is brimming with opportunity. It is full of new technologies designed to reduce waste and maximize process efficiency and flexibility through software and hardware capabilities.