This is a guest post by Kirk Borne, Ph.D., Chief Science Officer at DataPrime.ai. Kirk is also a consultant, astrophysicist, data scientist, blogger, data literacy advocate, and renowned speaker, and is one of the most recognized names in the industry. A survey of 1,100 data practitioners and business leaders reported that 84% of organizations consider data literacy a core business skill, agreeing with the statement that a workforce's inability to use and analyze data effectively can hamper business success. In addition, 36% said data literacy is crucial to future-proofing their business. Another survey found that 75% of employees are not comfortable using data.
It's 2030, and an SUV driven by an Autonomous Driving System (ADS) is heading west on a highway. The SUV carries two parents in the front seats and two small children in the back seat, and is travelling at the speed limit of 100 km/h. As the SUV rounds the final bend of a tight corner, a large bull moose weighing over six hundred kilograms shambles onto the road. The ADS was trained to select the best alternative out of a set of possible outcomes, so the SUV abruptly swerves into the left lane, which is currently occupied by a small sedan travelling at the same speed. The ADS had determined that saving the lives of two adults and two children was the greater good, even though there was a significant risk that the small sedan would be forced into oncoming eastbound traffic, putting its two adult occupants at mortal risk.
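The decision logic in this scenario — choosing the alternative with the best expected outcome — can be sketched as a toy expected-harm calculation. Everything below (the action names, probabilities, and harm weights) is invented for illustration and is not drawn from any real ADS planner:

```python
# Illustrative toy only: a minimal expected-harm calculation of the kind an
# ADS planner might perform. All actions, probabilities, and harm weights
# here are invented for this sketch, not taken from any real system.

def expected_harm(outcomes):
    """outcomes: list of (probability, lives_at_risk) pairs."""
    return sum(p * lives for p, lives in outcomes)

# Hypothetical alternatives facing the SUV:
actions = {
    # braking straight: high chance of hitting the moose, 4 occupants at risk
    "brake_straight": [(0.8, 4), (0.2, 0)],
    # swerving left: moderate chance the sedan is forced into oncoming traffic
    "swerve_left":    [(0.3, 2), (0.7, 0)],
}

# The planner picks the action that minimizes expected harm.
best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)  # -> swerve_left (expected harm 0.6 vs 3.2 for braking straight)
```

The ethical controversy in the vignette is exactly that such a calculation treats lives as weights in an optimization, and the "best" action can still put bystanders at mortal risk.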
In a perfect world, what you see is what you get. If this were the case, the job of Artificial Intelligence systems would be refreshingly straightforward. Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action--steer right, steer left, or continue straight--to avoid hitting a pedestrian that its cameras see in the road. But what if there's a glitch in the cameras that slightly shifts an image by a few pixels? If the car blindly trusted these so-called 'adversarial inputs,' it might take unnecessary and potentially dangerous action.
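The fragility described here can be demonstrated with a deliberately tiny toy: a naive "classifier" that steers based on where the brightest column of an image lies. A shift of just two pixels flips its decision entirely. This is a minimal sketch for intuition, not any real perception stack:

```python
# Toy illustration of input fragility (not a real perception system):
# the "classifier" steers away from the brightest column of a tiny image.

def steer_decision(image):
    """image: list of pixel rows; steer away from the brightest column."""
    cols = list(zip(*image))
    brightest = max(range(len(cols)), key=lambda i: sum(cols[i]))
    mid = len(cols) // 2
    if brightest < mid:
        return "steer right"
    elif brightest > mid:
        return "steer left"
    return "continue straight"

clean = [
    [0, 9, 0, 0, 0],   # bright "pedestrian" in the left columns
    [0, 9, 0, 0, 0],
]
# A camera glitch shifts every row right by two pixels:
shifted = [row[-2:] + row[:-2] for row in clean]

print(steer_decision(clean))    # -> steer right
print(steer_decision(shifted))  # -> steer left  (the decision flips)
```

A two-pixel shift that a human would never notice sends this toy system in the opposite direction, which is the essence of why adversarial inputs are dangerous.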
The connectivity benefits of 5G are expected to make businesses more competitive and give consumers access to more information faster than ever before. Connected cars, smart communities, industrial IoT, healthcare, immersive education--they will all rely on the unprecedented opportunities that 5G technology will create. The enterprise market opportunity is driving many telecoms operators' strategies for, and investments in, 5G. Companies are accelerating investment in core and emerging technologies such as cloud, the internet of things, robotic process automation, artificial intelligence, and machine learning. The Internet of Things (IoT), for example, improves connectivity and data sharing between devices and enables biometric-based transactions; combined with blockchain, it enables use cases such as trade transactions, remittances, payments, and investments; and combined with deep learning and artificial intelligence, it allows advanced algorithms to deliver highly personalized experiences.
Turbine maintenance is an expensive, high-risk task. According to a recent analysis from a news website, wind farm owners are expected to spend more than $40 billion on operations and maintenance over a decade. Another recent study finds that using drone-based inspection instead of traditional rope-based inspection can reduce operational costs by 70% and cut revenue lost to downtime by up to 90%. This blog post will present how drones, machine learning (ML), and the Internet of Things (IoT) can be used at the edge and in the cloud to make turbine maintenance safer and more cost-effective. First, we trained the machine learning model in the cloud to detect hazards on the turbine blades, including corrosion, wear, and icing.
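The post describes a cloud-trained model whose inference runs against drone imagery of blades. As a stand-in for that model, the toy below classifies a blade patch from simple color statistics; the thresholds, labels, and pixel values are all invented for this sketch and only show the shape of an edge-side inference step, not the actual detector:

```python
# Illustrative toy only: the real pipeline uses a cloud-trained ML model on
# blade imagery. This stand-in thresholds mean color values to show what an
# edge-side inference call might look like. All numbers are invented.

def mean_channel(pixels, channel):
    return sum(p[channel] for p in pixels) / len(pixels)

def classify_patch(pixels):
    """pixels: list of (r, g, b) tuples from one image patch of a blade."""
    r, g, b = (mean_channel(pixels, c) for c in range(3))
    if r > 150 and g < 100 and b < 100:
        return "corrosion"   # rust-like reddish patch
    if r > 200 and g > 200 and b > 230:
        return "icing"       # bright, blue-tinted patch
    return "healthy"

rusty_patch = [(180, 70, 60)] * 16   # hypothetical reddish patch
print(classify_patch(rusty_patch))   # -> corrosion
```

In the architecture the post describes, a real classifier like this would run on the drone or an edge gateway, with only flagged patches uploaded to the cloud for review.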
Wind River today revealed a wave of new features designed to automate and accelerate DevSecOps and other "pipelines" across the lifecycle of intelligent systems. The latest release of its platform focuses on transformational automation technologies, including a customizable automation engine, a digital feedback loop, enhanced security, and analytics with machine learning capabilities. The announcement also included the addition of industry-proven technologies from ecosystem partners to the Wind River Studio Marketplace, which makes available solutions developed and delivered on Wind River Studio, the "cloud-native platform for the development, deployment, operations, and servicing of mission-critical intelligent systems from devices to cloud." The company claims the platform "enables dramatic improvements in productivity, agility, and time-to-market, with seamless technology integration that includes far edge cloud compute, data analytics, security, 5G, and AI/ML." "The next generation of cloud-connected intelligent systems require the right software infrastructure to securely capture and process real-time machine data with digital feedback from a multitude of embedded systems, enabling advanced automated and autonomous scenarios," said Kevin Dallas, president and CEO of Wind River.
Artificial intelligence (AI) is proving very adept at certain tasks – like inventing human faces that don't actually exist, or winning games of poker – but these networks still struggle when it comes to something humans do naturally: imagine. Once human beings know what a cat is, we can easily imagine a cat of a different color, or a cat in a different pose, or a cat in different surroundings. For AI networks, that's much harder, even though they can recognize a cat when they see it (with enough training). To try and unlock AI's capacity for imagination, researchers have come up with a new method for enabling artificial intelligence systems to work out what an object should look like, even if they've never actually seen one exactly like it before. "We were inspired by human visual generalization capabilities to try to simulate human imagination in machines," says computer scientist Yunhao Ge from the University of Southern California (USC).
Explaining, interpreting, and understanding the human mind presents a unique set of challenges. Doing the same for the behaviors of machines, meanwhile, is a whole other story. As artificial intelligence (AI) models are increasingly used in complex situations -- approving or denying loans, helping doctors with medical diagnoses, assisting drivers on the road, or even taking complete control -- humans still lack a holistic understanding of their capabilities and behaviors. Existing research focuses mainly on the basics: How accurate is this model? Centering on that simple notion of accuracy can lead to dangerous oversights.
Over the last few months here at Carnegie Mellon University (Australia campus) I've been giving a set of talks on AI and the great leaps it has made in the last 5 or so years. I focus on disruptive technologies and give examples ranging from smart fridges and jackets to autonomous cars, robots, and drones. The title of one of my talks is "AI and the 4th Industrial Revolution". Indeed, we are living in the 4th industrial revolution – a significant time in the history of mankind. The first revolution occurred in the 18th century with the advent of mechanisation and steam power; the second came about 100 years later with the discovery of electrical energy (among other things); and the big one, the 3rd industrial revolution, occurred another 100 years after that (roughly around the 1970s) with things like nuclear energy, space expeditions, electronics, telecommunications, etc. coming to the fore. So, yes, we are living in a significant time.