"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
We take a closer look at some of the more unusual security research presented at this year's virtual Hacker Summer Camp. The annual Hacker Summer Camp moved from Las Vegas into the wilds of cyberspace this year, thanks to the coronavirus pandemic, but security researchers still rose to the challenge of maintaining the event's traditions in 2020. As well as tackling core enterprise and web security threats, presenters at both Black Hat and DEF CON 2020 took hacking to weird and wonderful places. Anything with a computer inside was a target – a definition that these days includes cars, ATMs, medical devices, traffic lights, voting systems and much, much more. Security researcher Alan Michaels brought a new meaning to the phrase "insider threat" with a Black Hat 2020 talk about the potential risk posed by implanted medical devices in secure spaces. An aging national security workforce combined with a burgeoning market for medical devices means that the risk is far from theoretical.
In the third edition of Deloitte's "State of AI in the Enterprise" survey, conducted between October and December 2019, the authors suggest that businesses are entering an age of pervasive AI, in which its use is increasingly widespread. In fact, 74% of the businesses surveyed think that AI will be fully integrated into all aspects of their business within the next three years, and 64% say it enables them to gain a competitive edge. As AI becomes more pervasive, Deloitte's survey claims, we are moving from the "early adopter" phase of AI's use to the "early majority" phase, in which many more businesses are starting to invest in AI and are increasingly convinced of its benefits. The businesses surveyed were split into three types of AI adopter: starters (27%), skilled (47%), and seasoned (26%). So how do different adopters use AI, and what are their reasons for integrating it into their business operations?
Imagine that a few days before an election, a video of a candidate is released showing them using hate speech, racial slurs, and epithets that undercut their image as pro-minority. Imagine a teenager watching in embarrassment as an explicit video of themselves goes viral on social media. Imagine a CEO on the road to raise money when an audio clip stating her fears and anxieties about the product is sent to the investors, ruining her chances of success. All of these scenarios are fictitious, but each could be made real by AI-generated synthetic media, also called deepfakes. The same technology that can enable a mother who is losing her voice to Lou Gehrig's disease to talk to her family using a synthetic voice can also be used to generate a fake speech by a political candidate to damage their reputation.
When opportunity knocks, open the door: No one has taken heed of that adage like Nvidia, which has transformed itself from a company focused on catering to the needs of video gamers to one at the heart of the artificial-intelligence revolution. In 2001, no one predicted that the same processor architecture developed to draw realistic explosions in 3D would be just the thing to power a renaissance in deep learning. But when Nvidia realized that academics were gobbling up its graphics cards, it responded, supporting researchers with the launch of the CUDA parallel computing software framework in 2006. Since then, Nvidia has been a big player in the world of high-end embedded AI applications, where teams of highly trained (and paid) engineers have used its hardware for things like autonomous vehicles. Now the company claims to be making it easy for even hobbyists to use embedded machine learning, with its US $100 Jetson Nano dev kit, which was originally launched in early 2019 and rereleased this March with several upgrades.
Artificial intelligence (AI) took a great leap forward and saw broader adoption across industry verticals with the introduction of machine learning (ML). ML learns the behavior of an entity using pattern-detection and interpretation methods. However, despite its enormous potential, the conundrum lies in how machine learning algorithms arrive at a decision in the first place. Questions such as "What processes did they follow, and at what speed?" and "How did they make such autonomous decisions?" remain hard to answer.
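One common way to make such a decision less opaque is to break a model's score down into per-feature contributions. The sketch below does this for a simple linear model; the feature names, weights, and loan-approval framing are hypothetical, purely for illustration, not any particular production system:

```python
# Minimal explainability sketch: decompose a linear model's score into
# per-feature contributions. All names and weights are hypothetical.

def explain_linear(weights, bias, features):
    """Return the raw score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-approval model.
weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}
bias = -0.3

score, contributions = explain_linear(
    weights, bias, {"income": 1.0, "debt": 0.5, "years_employed": 2.0}
)
# The contributions answer the "how did it decide?" question: income and
# tenure pushed the score up, debt pulled it down.
```

For nonlinear models the same idea survives in more elaborate forms (local surrogate models, Shapley-value attributions), but the principle is identical: attribute the decision back to its inputs.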
AI Outside In is a series of columns from PAIR's writer-in-residence, David Weinberger, who offers his outsider perspective on key ideas in machine learning. His opinions are his own and do not necessarily reflect those of Google. When we humans argue over what's fair, sometimes it's about principles, sometimes about consequences, and sometimes about trade-offs.
One of the most important reasons businesses, especially consumer-facing businesses, want lots of data is to know as much about the market – that is, about us – as possible. Artificial intelligence (AI) has made that focus on customers more and more accurate. While business has become more invasive, governments have begun to examine and pass regulations that set certain limits. Privacy matters to the electorate, and smart businesses look at how to use data to extract information while remaining in compliance with regulatory rules. Almost ten years ago, Target created an algorithm that inferred whether customers were pregnant based on their purchase patterns, and the company then sent coupons to those customers' addresses.
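The mechanics behind this kind of purchase-pattern prediction can be sketched with a toy logistic model: certain items shift the log-odds of the target attribute up or down. The products, weights, and baseline below are entirely made up for illustration; Target's actual model and features are not public:

```python
import math

# Toy purchase-pattern propensity model (hypothetical weights throughout).
WEIGHTS = {                      # made-up log-odds contributions per item
    "unscented_lotion": 1.4,
    "vitamin_supplements": 1.1,
    "cotton_balls": 0.6,
    "beer": -0.9,
}
BIAS = -2.0                      # made-up baseline log-odds

def propensity(basket):
    """Logistic score: probability-like estimate from basket contents."""
    z = BIAS + sum(WEIGHTS.get(item, 0.0) for item in basket)
    return 1.0 / (1.0 + math.exp(-z))

signal_basket = ["unscented_lotion", "vitamin_supplements", "cotton_balls"]
neutral_basket = ["beer"]
# A basket full of signal items scores far higher than a neutral one.
```

A real system would learn those weights from millions of labeled baskets rather than hand-setting them, which is exactly why regulators now scrutinize what retailers infer from seemingly innocuous purchases.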
The interaction between AI and automation could transform the way we work. In healthcare, these tools offer much more than commercial value. Here, there's a potential to improve the cost, accessibility, and efficacy of patient care and have a positive impact on people's quality of life. Of course, automation is not a new concept in the medical field. Robots have been laboring away in labs and operating theaters for quite some time now.
Splice Machine develops a machine learning-enabled SQL database built on a tightly engineered collection of distributed components, including HBase, Spark, and ZooKeeper, not to mention H2O, TensorFlow, and Jupyter. Customers use it to build complex AI apps that include transactional, analytical, and ML components. The company just announced a Kubernetes operator for customers running in private cloud environments, Zweben said during a demo of Splice Machine's Kubernetes Ops Center. "When you pause on Splice Machine, it drains Kubernetes nodes and makes them available for other applications to use," he said. Support for Kubernetes is not new at Splice Machine.
By combining purpose-built materials and neural networks, researchers at EPFL have shown that sound can be used in high-resolution imaging. Imaging allows us to depict an object through far-field analysis of the light and sound waves it transmits or radiates. However, the level of detail has been limited by the size of the wavelength in question – until now. Researchers at EPFL's Laboratory of Wave Engineering have successfully demonstrated that a long, and therefore imprecise, wave (in this case a sound wave) can resolve details 30 times smaller than its wavelength. Their research, which has just been published in Physical Review X, opens up exciting new possibilities, particularly in the fields of medical imaging and bioengineering.
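To put that factor of 30 in perspective: a wavelength is just propagation speed divided by frequency (λ = c/f). The quick sketch below uses illustrative numbers (audible sound in air), not the paper's actual experimental parameters:

```python
# Back-of-the-envelope scale for 30x sub-wavelength resolution.
# Frequency and sound speed are illustrative, not the EPFL setup.

speed_of_sound = 343.0      # m/s, in air at roughly 20 degrees C
frequency = 1_000.0         # Hz, an audible tone (illustrative)

wavelength = speed_of_sound / frequency   # lambda = c / f -> 0.343 m
conventional_limit = wavelength           # detail ~ wavelength, classically
claimed_detail = wavelength / 30          # 30x smaller, per the article
# claimed_detail is about 0.011 m: centimeter-scale detail from a
# wave a third of a meter long.
```

In other words, a wave that would classically blur anything smaller than tens of centimeters can, with the EPFL approach, pick out features on the order of a centimeter.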