

Machine Learning at HPC User Forum: Drilling into Specific Use Cases

@machinelearnbot

Dr. Weng-Keen Wong of the NSF echoed much the same distinction between specific- and general-case algorithms during his talk "Research in Deep Learning: A Perspective From NSF," a point also raised by Nvidia's Dale Southard during the disruptive technology panel. Tim Barr's (Cray) talk, "Perspectives on HPC-Enabled AI," showed how Cray's HPC technologies can be leveraged for machine and deep learning in vision, speech, and language. Fresh off its integration of SGI technology into its stack, HPE gave a talk that not only highlighted the newer software platforms these learning systems leverage, but also demonstrated that HPE's portfolio of systems and experience in both HPC and hyperscale environments is impressive indeed. Stand-alone image recognition is really cool, but as expounded upon above, the true benefit of deep learning is an integrated workflow in which data sources are ingested by a general-purpose deep learning platform, with outcomes that benefit business, industry, and academia.


Artificial Intelligence: The Challenge to Keep It Safe - Future of Life Institute

#artificialintelligence

Safety Principle: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible. "Applying traditional security techniques to AI gives us a concrete path to achieving AI safety," Goodfellow explains. Any consideration of AI safety must also include value alignment: how can we design artificial intelligence that aligns with the global diversity of human values, especially given that, often, what we ask for is not necessarily what we want? "It's a good start, it's a good big-picture goal to make AI safe, and the technical element is a big part of it; but again, I think safety also means policy and norm-setting."


The Complete Amazon Machine Learning Developer Course

@machinelearnbot

The complexity of discovering, understanding, analyzing, and predicting outcomes on data using machine learning algorithms is a challenge. The course begins with the Amazon Machine Learning platform, where you will implement core data science concepts such as classification, regression, regularization, overfitting, model selection, and evaluation. Then, you will learn to leverage the Amazon Web Services (AWS) ecosystem for extended access to data sources, implement real-time predictions, and run Amazon Machine Learning projects via the command line and the Python SDK. By the end of this course, you will have mastered Amazon Machine Learning and have enough expertise to build complex machine learning projects using AWS.
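As a taste of the SDK side of that workflow, here is a minimal sketch of requesting a real-time prediction from an already-trained Amazon Machine Learning model with the Python SDK (boto3); the model ID, record fields, and endpoint URL are placeholders for illustration, not values from the course.

# Minimal sketch: real-time prediction from a trained Amazon Machine Learning
# model via boto3. The model ID, record, and endpoint below are placeholders.
import boto3

client = boto3.client("machinelearning", region_name="us-east-1")

# A real-time endpoint must already exist for the model
# (e.g. created with create_realtime_endpoint or from the console).
response = client.predict(
    MLModelId="ml-EXAMPLEMODELID",                 # hypothetical model ID
    Record={"age": "42", "plan": "premium"},       # feature name -> string value
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)
print(response["Prediction"])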


14 Benefits and Forces That Are Driving The Internet of Things

@machinelearnbot

For example: it is already true that sensors on a single Boeing jet engine can generate 20 terabytes of data per hour; the future optical astronomy telescope LSST (Large Synoptic Survey Telescope) will produce about 200 petabytes of data over its survey lifetime; and the future radio astronomy telescope ensemble SKA (Square Kilometre Array) will by itself produce several exabytes per day as it senses the changes and behaviors of objects in the Universe. Supply Chain Analytics – delivering just-in-time products at the point of need (including the use of RFID-based tracking). One of the major developers of IoT in the industrial environment is GE – check out the excellent recent article on "GE's Vision for the Industrial Internet of Things". Several big data platforms are beginning to investigate the data challenges, communication standards, analytics requirements, and technology responses that the Internet of Things will bring to operational analytics and supply chain environments, but very few are architected to handle IoT.


The Sky's No Limit

@machinelearnbot

Terwilliger – an Embry-Riddle Aeronautical University Worldwide assistant professor of aeronautics and program chair for the Master of Science in Unmanned Systems degree in the College of Aeronautics – investigates outreach and engagement efforts between airports and the unmanned aerial systems (UAS) operational communities. Richard Stansbury, an associate professor and coordinator of the unmanned and autonomous systems engineering master's program at Embry-Riddle's Daytona Beach Campus in Florida, serves as the university's principal investigator on the project, while Terwilliger leads stakeholder engagement efforts and more. After graduation, he worked for more than 10 years in aviation and aerospace – leading integration testing, simulation, and training development – and developed documentation as a software/test engineer at Rockwell Collins Simulation and Training Solutions and ENSCO, Inc. Named Embry-Riddle Worldwide Campus Faculty Member of the Year (2013-2014), Terwilliger served as lead of the Real World Design Challenge Development Team (2013-2015). He currently chairs the UAS subcommittee of the National Business Aviation Association's Business Aviation Management Committee and sits on the editorial board of the Journal of Unmanned Aerial Systems.



The current state of applied data science

#artificialintelligence

Moreover, since we're dealing mainly with supervised learning, it's no surprise that a lack of training data remains the primary bottleneck in machine learning projects. There are some good research projects and tools for quickly creating large training data sets (or augmenting existing ones). Preliminary work on generative models (by deep learning researchers) has produced promising results in unsupervised learning for computer vision and other areas. With the recent rise of deep learning, I'm seeing companies use tools that explain how models produce their predictions, as well as tools that explain where a model comes from by tracing predictions back to the learning algorithm and training data.
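The piece doesn't name specific tools, but permutation importance is one simple, widely used way to explain which inputs drive a trained model's predictions; the sketch below uses synthetic data and scikit-learn purely for illustration.

# Permutation importance: score each feature by how much shuffling it
# degrades held-out accuracy. Data and feature indices here are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")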


Can you solve these mathematical / statistical problems?

@machinelearnbot

I recently posted an article featuring a non-traditional approach to finding large prime numbers. The research section of that article offers interesting challenges, both for data scientists interested in mathematics and for mathematicians interested in data science and big data. In this article, we show how big data, statistical science (more specifically, pattern recognition), and new, efficient distributed algorithms could lead to an original research path for discovering large primes. For another interesting challenge, read the section "Potential Areas of Research" in my article How to detect if numbers are random or not.
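The statistical approach described in the article is not reproduced here; as a minimal illustration of the kind of number-theoretic check any large-prime search ultimately leans on, here is a standard Miller-Rabin probabilistic primality test.

# Standard Miller-Rabin test: returns True if n is probably prime,
# False if n is definitely composite.
import random

def is_probable_prime(n, rounds=40):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: composite
    return True  # probably prime

print(is_probable_prime(2**127 - 1))  # a known Mersenne prime -> True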


Machine learning and Industrial IoT: Now and into the future

@machinelearnbot

Support vector machines (SVM), logistic regression, and artificial neural networks are commonly used supervised ML algorithms. By using multiple hidden layers, deep learning (DL) algorithms learn the features that need to be extracted from the input data without requiring those features to be specified explicitly. DL has seen recent success in IIoT applications mainly because several technological components have come of age: more compute power in hardware, large repositories of labeled training data, breakthroughs in learning algorithms and network initialization, and the availability of open-source software frameworks. Using transfer learning, you can start with a pre-trained neural network (most DL software frameworks provide fully trained models that you can download) and fine-tune it with data from your application.
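As a rough sketch of that transfer-learning recipe (not taken from the article), the snippet below loads a pre-trained ResNet-18 from torchvision, freezes its feature layers, and fine-tunes a new two-class output layer on hypothetical labeled IIoT images (say, good versus defective parts); train_loader is a placeholder for your own DataLoader.

# Transfer learning sketch: reuse ImageNet features, retrain only the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)       # downloaded, fully trained weights

for param in model.parameters():               # freeze the feature extractor
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)  # new 2-class output layer

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(train_loader, epochs=5):
    # train_loader: hypothetical DataLoader yielding (image batch, label batch)
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()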


The AI that can turn any selfie into a 3D image

Daily Mail

Typically, 3D face reconstruction poses 'extraordinary difficulty,' as it requires multiple images and must work around varying poses and expressions, along with differences in lighting, according to the team. The system developed by researchers at the University of Nottingham and Kingston University relies on a convolutional neural network (CNN) to overcome some of the challenges of 3D face reconstruction.
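The researchers' actual network is not described here in enough detail to reproduce; the toy sketch below only illustrates the general idea of a CNN that regresses a coarse 3D occupancy volume directly from a single face image, with all layer sizes chosen arbitrarily.

# Toy volumetric regressor: one face image in, a stack of depth slices out.
import torch
import torch.nn as nn

class ToyVolumetricRegressor(nn.Module):
    def __init__(self, depth=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 96x96 -> 48x48
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 48x48 -> 24x24
            nn.Conv2d(64, depth, 3, padding=1),                    # one channel per depth slice
        )

    def forward(self, image):
        return torch.sigmoid(self.encoder(image))  # (batch, depth, 24, 24) occupancy scores

volume = ToyVolumetricRegressor()(torch.randn(1, 3, 96, 96))
print(volume.shape)  # torch.Size([1, 64, 24, 24])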