Results


AI's impact on network engineering now and in the future

#artificialintelligence

If nothing else, AI continues to climb the technology hype curve. It is impossible to read the news, browse the web, attend a conference, or even watch television without seeing a reference to how AI is making our lives better. Since Alan Turing declared "what we want is a machine that can learn from experience" in a 1947 lecture to the London Mathematical Society, the imaginations of computer scientists and engineers have run wild with visions of a computer that can answer questions on par with a human. Today, almost everyone in business is looking at how to leverage AI, and there is no shortage of vendors looking to capitalize on the trend. Venture Scanner currently tracks more than 2,000 AI startups that have received more than $26 billion in funding.


Scaling deep learning for science

#artificialintelligence

Deep neural networks--a form of artificial intelligence--have demonstrated mastery of tasks once thought uniquely human. Their triumphs have ranged from identifying animals in images, to recognizing human speech, to winning complex strategy games, among other successes. Now, researchers are eager to apply this computational technique--commonly referred to as deep learning--to some of science's most persistent mysteries. But because scientific data often looks very different from the images and speech recordings those networks were built for, developing the right artificial neural network can feel like an impossible guessing game for nonexperts. To expand the benefits of deep learning for science, researchers need new tools to build high-performing neural networks that don't require specialized knowledge.


At least 16 companies developing Deep Learning chips NextBigFuture.com

@machinelearnbot

There are many established companies and startups developing deep learning chips. Google and Wave Computing have working silicon and are conducting customer trials. Chinese AI chip startup Cambricon Technologies has received $100 million in funding and aims to have one billion smart devices using its AI processor and to own 30% of China's high-performance AI chip market within three years. Huawei estimates that Cambricon chips are six times faster than a GPU for deep-learning tasks such as training image-recognition algorithms.


Conventional computer vision coupled with deep learning makes AI better

#artificialintelligence

Computer vision is fundamental for a broad set of Internet of Things (IoT) applications. Household monitoring systems use cameras to give family members a view of what's going on at home. Robots and drones use vision processing to map their environment and avoid obstacles. Augmented reality glasses use computer vision to overlay important information on the user's view, and cars stitch images from multiple vehicle-mounted cameras into a surround or "bird's eye" view, which helps drivers avoid collisions. Over the years, exponential improvements in device capabilities, including computing power, memory capacity, power efficiency, image sensor resolution, and optics, have improved the performance and cost-effectiveness of computer vision in IoT applications.
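To make the pairing concrete, here is a minimal sketch (in Python with OpenCV) of the kind of hybrid pipeline the article describes: conventional vision operations handle preprocessing and geometry, while a deep network handles recognition. The file names and the ONNX model path are illustrative assumptions, not details from the article.

```python
# Hypothetical hybrid pipeline: classical OpenCV preprocessing plus a deep
# network for classification. "camera.jpg" and "classifier.onnx" are placeholders.
import cv2
import numpy as np

def classify_frame(image_path: str, model_path: str):
    # Conventional computer vision: load the frame and compute an edge map,
    # the kind of cheap structural cue a classical pipeline might use for
    # region selection or image stitching.
    frame = cv2.imread(image_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)

    # Deep learning: run an (assumed) ONNX image classifier on the same frame.
    net = cv2.dnn.readNetFromONNX(model_path)
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1.0 / 255,
                                 size=(224, 224), swapRB=True)
    net.setInput(blob)
    scores = net.forward()
    return int(np.argmax(scores)), edges

if __name__ == "__main__":
    label_id, edge_map = classify_frame("camera.jpg", "classifier.onnx")
    print("predicted class id:", label_id)
```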


With IBM POWER9, we're all riding the AI wave - IBM Systems Blog: In the Making

#artificialintelligence

There's a big connection between my love for water sports and hardware design -- both involve observing waves and planning several moves ahead. Four years ago, when we started sketching the POWER9 chip from scratch, we saw an upsurge of modern workloads driven by artificial intelligence and massive data sets. We are now ready to ride this new tide of computing with POWER9. It is a transformational architecture and an evolutionary shift from the archaic ways of computing promoted by x86. POWER9 is loaded with industry-leading new technologies designed for AI to thrive.


Lenovo says AI crucial for enterprise as it announces new tech for training machine-learning systems

ZDNet

Lenovo has announced new hardware and software for firms building machine-learning systems, as the Chinese tech giant doubles down on AI. Lenovo expects firms to rely increasingly on AI systems to make rapid decisions based on the vast amounts of data being generated, predicting that 44 trillion gigabytes of data will exist by 2020. To serve the fast-growing market, Lenovo today announced new hardware and software for streamlining machine learning on high-performance computing systems. The ThinkSystem SD530, a two-socket server in a 0.5U rack form factor, is now available with the latest NVIDIA GPU accelerators and Intel Xeon Scalable family CPUs. By including the option of adding NVIDIA's Tesla V100 GPU accelerator, Lenovo is giving businesses the ability to massively boost the performance of AI-related tasks.


Dell EMC high performance computing bundles aimed at AI, deep learning

ZDNet

Dell EMC has introduced systems that aim to meld high-performance computing and data analytics for mainstream enterprises. These systems are designed for fraud detection, image processing, financial analysis, and personalized medicine, and are aimed at industries such as scientific imaging, oil and gas, and financial services.


The Role of Hadoop in Digital Transformations and Managing the IoT

@machinelearnbot

The digital transformation underway at Under Armour is erasing any stale stereotypes that athletes and techies don't mix. While hardcore runners sporting the company's latest microthread singlet can't see Hadoop, Apache Hive, Apache Spark, or Presto, these technologies are teaming up to track some serious mileage. Under Armour is working on a "connected fitness" vision that connects body, apparel, activity level, and health. By combining the data from all these sources into an app, consumers will gain a better understanding of their health and fitness, and Under Armour will be able to identify and respond to customer needs more quickly with personalized services and products. The company stores and analyzes data about food and nutrition, recipes, workout activities, music, sleep patterns, purchase histories, and more.
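As an illustration of the kind of analysis such a "connected fitness" platform enables, here is a minimal PySpark sketch that joins hypothetical workout and sleep datasets per user; the paths, table layout, and column names are assumptions for illustration, not Under Armour's actual schema.

```python
# Minimal PySpark sketch: combine hypothetical workout and sleep data per user.
# Dataset paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("connected-fitness-sketch").getOrCreate()

workouts = spark.read.json("s3://example-bucket/workouts/")  # user_id, miles, calories
sleep = spark.read.json("s3://example-bucket/sleep/")        # user_id, hours_slept

summary = (
    workouts.groupBy("user_id")
            .agg(F.sum("miles").alias("weekly_miles"),
                 F.avg("calories").alias("avg_calories"))
            .join(sleep.groupBy("user_id")
                       .agg(F.avg("hours_slept").alias("avg_sleep")),
                  on="user_id", how="inner")
)

summary.show(10)
```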


AI-driven Diagnostics for Network of Boston-based Healthcare Providers

#artificialintelligence

Founded by Massachusetts General Hospital and later joined by Brigham & Women's Hospital, CCDS today announced it has received what it calls a purpose-built AI supercomputer from the portfolio of Nvidia DGX systems with Volta, said by Nvidia to be the biggest GPU on the market. Later this month, CCDS will also receive a DGX Station, which Nvidia calls "a personal AI supercomputer," that the organization will use to develop new training algorithms "and bring the power of AI directly to doctors" in the form of a desk-side system. With 640 Tensor Cores (8 per SM), the Tesla V100 delivers 120 teraflops of deep learning performance, providing 6-12 times higher peak teraflops for Tensor operations compared with previous-generation silicon, according to Nvidia. Nvidia said the new DGX-1 with Volta delivers AI computing performance three times faster than the prior DGX generation, providing the performance of up to 800 CPUs in a single system.
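Tensor Cores reach those deep-learning teraflops only when the matrix math runs in reduced precision, so frameworks expose mixed-precision training to take advantage of them. Below is a minimal PyTorch sketch of that pattern; the toy model, random data, and shapes are placeholder assumptions, not anything from the CCDS deployment.

```python
# Minimal mixed-precision training sketch (PyTorch AMP). The toy model and random
# data are placeholders; the point is the autocast/GradScaler pattern that lets
# Tensor Cores execute the matrix math in FP16.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(10):
    x = torch.randn(64, 1024, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)   # forward pass runs largely in FP16

    scaler.scale(loss).backward()     # loss scaling avoids FP16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
```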


Running Hadoop on a Raspberry Pi 2 cluster ZDNet

@machinelearnbot

It is a shared-nothing cluster, which means that as you add cluster nodes, performance scales up smoothly. In the paper Performance of a Low Cost Hadoop Cluster for Image Analysis, researchers Basit Qureshi, Yasir Javed, Anis Koubaa, Mohamed-Foued Sriti, and Maram Alajlan built a 20-node Raspberry Pi Model 2 cluster, brought up Hadoop on it, and used it for surveillance drone image analysis. You'd expect a cluster of 64-bit, 3GHz x86 CPUs to be much faster than 700MHz, 32-bit ARM CPUs, and you'd be right. The team ran a series of tests that were a) compute-intensive (calculating Pi), b) I/O-intensive (document word counts), and c) both (large image file pixel counts).
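The I/O-intensive word-count test is the canonical Hadoop example; a minimal Hadoop Streaming version in Python looks roughly like the sketch below (the file names, HDFS paths, and streaming jar location are assumptions about a typical installation, not details from the paper).

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming word count (mapper and reducer in one file, selected
# by a command-line argument). A typical invocation might look like:
#   hadoop jar hadoop-streaming.jar \
#     -input /data/docs -output /data/wordcount \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce" -file wordcount.py
# (paths and jar location are illustrative assumptions)
import sys

def mapper():
    # Emit "word<TAB>1" for each word read from stdin.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Hadoop delivers mapper output sorted by key, so counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, _, value = line.strip().partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value or 1)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if len(sys.argv) > 1 and sys.argv[1] == "map" else reducer()
```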