Collaborating Authors


Building a Self-Service, Secure, & Continually Compliant Environment on AWS


If you're an enterprise organization, especially in a highly regulated sector, you understand the struggle to innovate and drive change while maintaining your security and compliance posture. In particular, your banking customers' expectations and needs are changing, and there is a broad move away from traditional branch and ATM-based services towards digital engagement. With this shift, customers now expect personalized product offerings and services tailored to their needs. To achieve this, a broad spectrum of analytics and machine learning (ML) capabilities are required. With security and compliance at the top of financial service customers' agendas, being able to rapidly innovate and stay secure is essential.

LOOCV for Evaluating Machine Learning Algorithms


The Leave-One-Out Cross-Validation, or LOOCV, procedure is used to estimate the performance of machine learning algorithms when they are used to make predictions on data not used to train the model. It is computationally expensive to perform, although it results in a reliable and unbiased estimate of model performance. Although it is simple to use and requires no configuration, there are times when the procedure should not be used, such as when you have a very large dataset or a computationally expensive model to evaluate. In this tutorial, you will discover how to evaluate machine learning models using leave-one-out cross-validation.
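The procedure described above can be sketched with scikit-learn's built-in support for LOOCV; the dataset and model below are illustrative choices (a synthetic classification task and logistic regression), not taken from the tutorial itself.

```python
# Minimal sketch of LOOCV with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Small synthetic dataset: 100 samples, so LOOCV fits the model 100 times.
X, y = make_classification(n_samples=100, n_features=10, random_state=1)
model = LogisticRegression()

cv = LeaveOneOut()  # one fold per sample, each held out exactly once
scores = cross_val_score(model, X, y, scoring="accuracy", cv=cv)
print(f"Mean accuracy over {len(scores)} folds: {scores.mean():.3f}")
```

Because every fold holds out a single sample, each fold's accuracy is either 0 or 1, and the mean over all folds is the LOOCV performance estimate; this also makes the cost visible, since the model is refit once per sample.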

5G And Machine Learning: Taking Cellular Base Stations From Smart To Genius


An illuminated 5G sign hangs behind a weave of electronic cables on the opening day of MWC Barcelona in Barcelona, Spain, on Monday, Feb. 25, 2019. At the wireless industry's biggest conference, more than 100,000 people are set to see the latest innovations in smartphones, artificial intelligence devices and autonomous drones exhibited by more than 2,400 companies. At the core of this evolutionary step is the use of machine learning algorithms. The ability to be more dynamic with real-time network optimization capabilities such as resource loading, power budget balancing and interference detection is what made networks "smart" in the 4G era. While there are many uses of machine learning across all layers of a 5G network, from the physical layer through to the application layer, the base station is emerging as a key application for machine learning.

The Race for Quantum Supremacy and the Quantum Artificial Intelligence of Things


Both races are setting the stage for the next dominant world power. While research into AI and quantum technologies is being developed on a worldwide scale, with advances coming from different countries, China and the United States (US) are at the forefront of both races, with these technologies forming important stepping stones for geopolitical power accumulation. Indeed, China is currently playing the game for supremacy in both quantum technologies and AI, trying to surpass the US and become the leading world power (Smith-Goodson, 2019). If China wins the race for quantum supremacy, it will be in a leading geostrategic position, since it will become the dominant power in the next technological infrastructure. If, along with quantum supremacy, China achieves AI supremacy (both classical and quantum), then it may topple the US, Russia, Europe and Asian geopolitical competition vectors. On the other hand, this race is not restricted to countries; it is a global geostrategic and geoeconomic race that also includes cooperative networks involving academia and the private sector. Indeed, the US geostrategic position depends strongly upon the investment in quantum technologies by US-based large technology companies in the private sector. Regarding the issue of quantum supremacy, it is relevant to consider Kirkland's (2020) reflection, quoting: "(…) One thing remains unchanged (…) and that is the glaring reality that those who manage to successfully harness the power of quantum mechanics will have supremacy over the rest of the world. How do you think they will use it?"

This AI Can Detect If Planets Will Collide Into Each Other


In the two decades since the first exoplanet was discovered, scientists have identified more than 4,000 planets orbiting other stars, of which half are in multi-planet systems. Of these, at least 700 systems have planets at potential risk of a devastating collision. Researchers even believe that many collisions may already have taken place without our knowledge. Questions such as how planets organise themselves, how they avoid colliding into each other and how they remain stable have been at the centre of research for many years. One of the requirements for answering them is being able to determine whether a planetary system is stable.

Navigating the potential of Artificial Intelligence (AI) in Space Sciences


What was once a sci-fi concept is no longer fiction. Scientists around the world are using AI algorithms to predict the possibility of life on other planets in the solar system, detect the presence of water, assess the possibility of a black hole, or determine the orbital curve of a celestial object. According to NASA officials, AI could also aid in the search for life on alien planets and the detection of nearby asteroids in space. What took years for earlier astronomers to discover can now be done in a much shorter time using machine learning models. Now researchers from Princeton University claim to have found a way to predict if a planet will collide with another in its path.

AI chip startup Graphcore enters the system business claiming economics vastly better than Nvidia's


Graphcore's 1U rack-mounted M2000 "IPU Machine" is a server dedicated to AI algorithm processing. For $32,450, you get a petaflop of processing power in 4 chips, a networking connection of 2.8 terabits per second, and up to 450 gigabytes of memory. In what appears to be a trend in the world of artificial intelligence hardware and software, AI chip designer Graphcore on Wednesday morning said that its latest very large chip for AI will be sold in a four-chip server computer that sits in a rack, putting Graphcore into the expanding market for dedicated AI server computers. Graphcore, based in Bristol, U.K., which has received over $300 million in venture capital, unveiled what it calls the Mk2 GC200, or Mark-2, as the company refers to it, its latest processor dedicated to handling machine learning operations of neural networks. It also said that it will begin selling a four-chip computer called the M2000 that is housed in a standard 1U pizza box chassis.

Artificial Intelligence Predicts Which Planetary Systems Will Survive 100,000 Times Faster


While three planets have been detected in the Kepler-431 system, little is known about the shapes of their orbits. On the left are a large number of superimposed orbits for each planet that are consistent with observations. An international team of astrophysicists led by Princeton's Daniel Tamayo removed all the unstable configurations that would have already collided and couldn't be observed today. Doing this with previous methods would take over a year of computer time. With their new model SPOCK, it takes 14 minutes.

Artificial intelligence predicts which planetary systems will survive


How do planetary systems--like our solar system or multi-planet systems around other stars--organize themselves? Of all of the possible ways planets could orbit, how many configurations will remain stable over the billions of years of a star's life cycle? Rejecting the large range of unstable possibilities--all the configurations that would lead to collisions--would leave behind a sharper view of planetary systems around other stars, but it's not as easy as it sounds. "Separating the stable from the unstable configurations turns out to be a fascinating and brutally hard problem," said Daniel Tamayo, a NASA Hubble Fellowship Program Sagan Fellow in astrophysical sciences at Princeton. To make sure a planetary system is stable, astronomers need to calculate the motions of multiple interacting planets over billions of years and check each possible configuration for stability--a computationally prohibitive undertaking.
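To see why astronomers want shortcuts around billion-year integrations, here is a hedged toy illustration of a quick screening heuristic rather than the article's actual method: planet pairs spaced by fewer than roughly ten mutual Hill radii are often flagged as likely unstable. The ~10 threshold and the example masses are illustrative assumptions, and real stability analysis (as the article notes) requires far more than this.

```python
# Toy stability screen using mutual Hill radii (a rule-of-thumb heuristic,
# NOT the SPOCK model from the article). Masses are in solar masses,
# semi-major axes in AU.
def mutual_hill_radius(m1, m2, a1, a2, m_star=1.0):
    """Mutual Hill radius of two neighboring planets around a star."""
    return ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * (a1 + a2) / 2.0

def likely_unstable(planets, m_star=1.0, min_spacing=10.0):
    """planets: list of (mass, semi_major_axis) pairs, sorted by axis.

    Flags the system if any adjacent pair is closer than ``min_spacing``
    mutual Hill radii -- a crude proxy for long-term instability.
    """
    for (m1, a1), (m2, a2) in zip(planets, planets[1:]):
        spacing = (a2 - a1) / mutual_hill_radius(m1, m2, a1, a2, m_star)
        if spacing < min_spacing:
            return True
    return False

# Two Earth-mass planets (~3e-6 solar masses): tightly packed vs. well spaced.
tight = [(3e-6, 1.00), (3e-6, 1.02)]
wide = [(3e-6, 1.00), (3e-6, 1.50)]
print(likely_unstable(tight), likely_unstable(wide))  # True False
```

A heuristic like this runs in microseconds but misses resonant and chaotic effects, which is exactly the gap that direct N-body integration (slow) or a trained classifier (fast) is meant to close.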

Why Tesla Invented A New Neural Network


Tesla recently filed a patent called 'Systems and methods for adapting a neural network on a hardware platform.' In the patent, they describe systems and methods for selecting a neural network model configuration that satisfies all constraints. According to the patent, the approach mainly comprises an embodiment that computes a list of valid configurations and a constraint satisfaction solver that classifies the valid configurations for the particular platform on which the neural network model will run efficiently. Neural network models are increasingly relied upon for different problems due to the ease with which they can label or classify input data. Different neural networks are trained with different hyperparameters and then used to analyse the same validation set.
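The idea of enumerating valid configurations under hardware constraints can be sketched as below. This is a hypothetical illustration in the spirit of the patent's description, not Tesla's actual method: the hyperparameter grid, the parameter-count and latency cost models, and the constraint budgets are all invented for the example.

```python
# Hypothetical sketch: filter a grid of network configurations against
# platform constraints, then pick the largest model that still fits.
from itertools import product

# Candidate hyperparameter grid (illustrative values).
widths = [64, 128, 256]
depths = [4, 8, 16]
precisions = ["fp16", "fp32"]

# Platform constraints (illustrative budgets).
MAX_PARAMS = 500_000
MAX_LATENCY_MS = 10.0

def estimate_params(width, depth):
    # Rough parameter count for a stack of square dense layers.
    return depth * width * width

def estimate_latency_ms(width, depth, precision):
    # Toy cost model: assume fp16 runs ~2x faster than fp32.
    base = depth * width / 4000.0
    return base / 2 if precision == "fp16" else base

# Compute the list of valid configurations for this platform.
valid = [
    (w, d, p)
    for w, d, p in product(widths, depths, precisions)
    if estimate_params(w, d) <= MAX_PARAMS
    and estimate_latency_ms(w, d, p) <= MAX_LATENCY_MS
]

# Rank valid configurations; here, prefer the highest-capacity model.
best = max(valid, key=lambda c: estimate_params(c[0], c[1]))
print(f"{len(valid)} valid configurations; best: {best}")
```

A real system would replace the toy cost functions with measured or compiled estimates per hardware platform, but the shape of the problem (enumerate, filter by constraints, rank the survivors) is the same.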