If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
One of the most popular tools on Apple's new iPhone X is its facial recognition system. This latest iPhone gives users the power to open the device just by looking at it. The smartphone has performed well in tests set up to trick it into opening for an unapproved user. The same kind of facial recognition system is also used for other purposes.
Apple Inc. has had a "will they or won't they" relationship with self-driving car development over the last few years, and unlike companies like Alphabet Inc. or Uber Technologies Inc., Apple has been fairly tight-lipped about most of its work. From what little is known, Apple has been focusing more on the software side rather than on hardware, and a new research paper published Friday by the company on Cornell University's arXiv repository seems to confirm that theory. In the paper, Apple describes a new method for getting more out of a self-driving car's LiDAR sensors using machine learning. LiDAR uses pulses of laser light to create digital maps of objects using point clouds, which are a sort of 3D version of a connect-the-dots puzzle. Denser point clouds offer a clearer, more accurate picture of an object, but Apple's researchers say their new method, which they call VoxelNet, makes even sparse point clouds useful for object detection.
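The core idea behind voxel-based point-cloud processing can be illustrated with a short sketch: group raw 3D points into fixed-size grid cells ("voxels") so that downstream detection can reason about occupied regions of space. This is a toy illustration of the general concept only, not Apple's VoxelNet implementation; the function name and sample points are invented for the example.

```python
from collections import defaultdict

def voxelize(points, voxel_size=1.0):
    """Group (x, y, z) points into fixed-size 3D grid cells (voxels)."""
    voxels = defaultdict(list)
    for x, y, z in points:
        # Integer grid coordinates identify which voxel the point falls in.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    return dict(voxels)

# A tiny, sparse "point cloud": three points, two of them close together.
cloud = [(0.2, 0.4, 0.1), (0.9, 0.5, 0.3), (2.1, 0.1, 0.0)]
grid = voxelize(cloud, voxel_size=1.0)
# The first two points share voxel (0, 0, 0); the third lands in (2, 0, 0).
```

A real pipeline would then compute a feature vector per occupied voxel and feed the resulting grid to a neural network; this sketch only shows the spatial grouping step.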
Computer scientists at Apple have released a research paper online on how autonomous cars can better detect cyclists and pedestrians while using fewer sensors, as first spotted by Reuters. The paper comes after Apple CEO Tim Cook clarified in June that the company was not building its own self-driving vehicle, but was instead focusing on an autonomous car system. It also follows a recent sighting of Apple's self-driving test Lexus SUV last month. Apple has been low-key about its autonomous technology plans, but the research paper finally shows how invested the company is in self-driving cars. The research paper, submitted last week to the online repository arXiv, was written by Apple scientists Yin Zhou and Oncel Tuzel.
Fraugster, a German and Israeli startup that has developed Artificial Intelligence (AI) technology to help eliminate payment fraud, has raised $5 million in funding. Earlybird led the round, alongside existing investors Speedinvest, Seedcamp and an unnamed large Swiss family office. The new capital will be used to add to Fraugster's headcount as it expands internationally. Founded in 2014 by Max Laemmle, who previously co-founded payment gateway company Better Payment, and Chen Zamir, who I'm told has spent more than a decade in different analytics and risk management roles including five years at PayPal, Fraugster says it's already handling almost $15 billion in transaction volume for "several thousand" international merchants and payment service providers, including (and most notably) Visa. Its AI-powered fraud detection technology learns from each transaction in real-time and claims to be able to anticipate fraudulent attacks even before they happen.
Apple's work on self-driving cars has been more secretive than just about every other project in the autonomous car space -- but now, two of the company's scientists have published some of their auto-focused research for the first time. The paper, authored by Apple engineers Yin Zhou and Oncel Tuzel and posted to the independent online repository arXiv, details a new computer imaging software technique called "VoxelNet" that could improve a driverless car system's ability to detect pedestrians and cyclists. The scientists claim their new method could be even more effective than the two-tiered LiDAR and camera systems that have become the industry standard for object detection in self-driving cars. Those expensive systems depend on cameras to help determine the small or faraway objects (like pedestrians or cyclists) detected by LiDAR sensors, which use light beams to detect and map 3D obstacles in the world around the vehicle. The VoxelNet system -- which was named after the "voxel," a unit of volume in a three-dimensional grid (the 3D analogue of a pixel) -- eliminates the need for a camera to help identify the objects detected by LiDAR sensors, allowing the autonomous platform to work on LiDAR alone.
The terms "artificial intelligence" and "machine learning" are often used interchangeably, but there's a huge technical difference between them. While the former is what Hollywood depicts as self-aware machines, the latter consists of finely tuned single-task algorithms that are nowhere near self-aware. In cyber security, machine learning algorithms can learn by themselves to make predictions based on previous experience and from daily analysis of millions of malicious programs. Practically, a machine learning algorithm is trained to identify a new or unknown threat based on similarities with known threats. For example, feeding a machine learning algorithm with all known variants of the CryptoLocker ransomware family will give it the ability to estimate whether an unknown sample is statistically likely – based on the features it shares with known CryptoLocker samples – to be part of the same ransomware family.
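The ransomware-family example above can be sketched as a simple feature-similarity check: compare the behavioral features of an unknown sample against those of known family members. Everything here is invented for illustration (the feature names, the samples, the Jaccard-similarity threshold); production systems extract thousands of features and use trained statistical models rather than a hand-written rule.

```python
# Known behavioral feature sets for members of a (hypothetical) ransomware family.
KNOWN_CRYPTOLOCKER_FEATURES = [
    {"encrypts_files", "contacts_c2", "demands_ransom", "deletes_shadow_copies"},
    {"encrypts_files", "contacts_c2", "demands_ransom"},
]

def jaccard(a, b):
    """Similarity of two feature sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def likely_same_family(sample_features, known, threshold=0.6):
    """Flag a sample if it is similar enough to any known family member."""
    return any(jaccard(sample_features, k) >= threshold for k in known)

unknown = {"encrypts_files", "contacts_c2", "demands_ransom", "spreads_via_email"}
print(likely_same_family(unknown, KNOWN_CRYPTOLOCKER_FEATURES))  # True
```

The unknown sample shares three of four features with a known variant (Jaccard similarity 0.75), so it clears the threshold; a sample with no shared features would not.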
WEST LAFAYETTE, Ind. – A system under development at Purdue University uses artificial intelligence to detect cracks captured in videos of nuclear reactors and represents a future inspection technology to help reduce accidents and maintenance costs. "Regular inspection of nuclear power plant components is important to guarantee safe operations," said Mohammad R. Jahanshahi, an assistant professor in Purdue's Lyles School of Civil Engineering. "However, current practice is time-consuming, tedious, and subjective and involves human technicians reviewing inspection videos to identify cracks on reactors." Complicating the inspection process is that nuclear reactors are submerged in water to maintain cooling. Consequently, direct manual inspection of a reactor's components is not feasible due to high temperatures and radiation hazards.
An algorithm developed by researchers at Stanford University proved more effective than human radiologists in diagnosing cases of pneumonia. Much research has been shared on the potential of Artificial Intelligence applied to medicine, which in some cases can reach a level of accuracy that exceeds the performance of professionals. Following this line, Stanford researchers published a paper on CheXNet, a convolutional neural network they developed with the ability to detect pneumonia symptoms. To do this, it analyzes the traditional diagnostic medium: chest radiographs. It was trained on 112,120 chest X-ray images labeled with 14 types of disease.
Apple started using deep learning for face detection in iOS 10. With the release of the Vision framework, developers can now use this technology and many other computer vision algorithms in their apps. We faced significant challenges in developing the framework so that we could preserve user privacy and run efficiently on-device. This article discusses these challenges and describes the face detection algorithm. Apple first released face detection in a public API in the Core Image framework through the CIDetector class.
Recently there has been a great buzz around the words "neural network" in the field of computer science, and they have attracted a great deal of attention from many people. But what is this all about, how do they work, and are these things really beneficial? Essentially, neural networks are composed of layers of computational units called neurons, with connections between the layers. These networks transform data until they can classify it as an output. Each neuron multiplies an initial value by some weight, sums the results with other values coming into the same neuron, adjusts the resulting number by the neuron's bias, and then normalizes the output with an activation function.
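The per-neuron computation described above (weighted inputs, summed, shifted by a bias, passed through an activation function) can be written out directly. This is a minimal sketch; the weights, bias, and choice of sigmoid activation are illustrative assumptions, and real networks stack many such neurons into layers.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus bias, squashed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps any z into (0, 1)

out = neuron(inputs=[0.5, -1.0], weights=[2.0, 0.5], bias=0.1)
# z = 0.5*2.0 + (-1.0)*0.5 + 0.1 = 0.6, and sigmoid(0.6) ≈ 0.646
```

A full layer applies this computation with a different weight vector per neuron, and the network chains layers so that each one transforms the previous layer's outputs.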