Artificial Intelligence

AI Magazine

These technologies would not exist today without the sustained federal support of fundamental AI research over the past three decades. This article was written for inclusion in the booklet "Computing Research: A National Investment for Leadership in the 21st Century," available from the Computing Research Association, cra.org/research.impact. Early work in AI focused on using cognitive and biological models to simulate and explain human information processing skills, on "logical" systems that perform commonsense and expert reasoning, and on robots that perceive and interact with their environment. This early work was spurred by visionary funding from the Defense Advanced Research Projects Agency (DARPA) and the Office of Naval Research (ONR), which began on a large scale in the early 1960s and continues to this day. By the early 1980s, an "expert systems" industry had emerged, and Japan and Europe dramatically increased their funding of AI research.


AI - Technology of the year

#artificialintelligence

As 2017 comes to a close, I have been noodling about what deserves the title of "Technology of the Year." Clearly, Artificial Intelligence (AI) is the winner! Quite a few terms are used interchangeably when discussing AI, including deep learning, machine learning, neural networks, graph theory, random forests, and the list goes on. AI is the broad subject: it describes how machines gain intelligence through machine learning, using various algorithmic approaches such as graph theory, neural networks, and random forests. Deep learning is a specialized form of machine learning that applies many-layered neural networks to large sample data sets. I first worked on Artificial Intelligence during my final semester of engineering school.
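
To make those terms concrete, here is a small illustrative sketch (my example, not the author's) that fits two of the algorithm families named above, a random forest and a multi-layer neural network, on the same toy data using scikit-learn:

```python
# Illustrative only: two of the algorithm families named above, applied to
# the same toy problem. Multi-layer networks are the basis of deep learning.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                    random_state=0).fit(X, y)

print("random forest accuracy:", forest.score(X, y))
print("multi-layer network accuracy:", net.score(X, y))
```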


We want to democratise artificial intelligence: Google exec Fei-Fei Li - ETtech

#artificialintelligence

Google, a pioneer in AI, has been focusing on four key components -- computing, algorithms, data and expertise -- to organise all the data and make it accessible. "Google as a company has always been at the forefront of computing AI," Fei-Fei Li, Chief Scientist of Google Cloud AI and ML, told reporters during a press briefing. Earlier this year, Google announced the second-generation Tensor Processing Units (TPUs), now called the Cloud TPU, at the annual Google I/O event in the US. The company offers computing power including graphics processing units (GPUs), central processing units (CPUs) and tensor processing units (TPUs) to power machine learning.
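
As a rough illustration (not from the briefing) of how code targets those different processors, TensorFlow, Google's open-source ML framework, lets a program pin the same operation to a CPU or to an accelerator when one is detected; the device names below depend on the hardware actually present:

```python
# Illustrative sketch: the same matrix multiply pinned to different devices.
import tensorflow as tf

print("accelerators found:", tf.config.list_physical_devices("GPU"))

with tf.device("/CPU:0"):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c_cpu = tf.matmul(a, b)
print("CPU result shape:", c_cpu.shape)

# If a GPU (or, on Cloud TPU VMs, a TPU device) is present,
# the same op can run there instead.
if tf.config.list_physical_devices("GPU"):
    with tf.device("/GPU:0"):
        c_gpu = tf.matmul(a, b)
    print("GPU result shape:", c_gpu.shape)
```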


Avoiding industrial IoT digital exhaust with machine learning - IoT Agenda

#artificialintelligence

With the Industry 4.0 factory automation trend catching on, data-driven artificial intelligence promises to create cyber-physical systems that learn as they grow, predict failures before they impact performance, and connect factories and supply chains more efficiently than we could ever have imagined. To avoid IIoT digital exhaust and preserve the latent value of IIoT data, enterprises need to develop long-term IIoT data retention and governance policies; these ensure they can evolve and enrich their IoT value proposition over time and harness IIoT data as a strategic asset. A practical compromise IoT architecture must first employ some centralized (cloud) aggregation and processing of raw IoT sensor data for training useful machine learning models, followed by far-edge execution and refinement of those models. A multi-tiered architecture (involving far-edge, private cloud and public cloud) can provide an excellent balance between local responsiveness and consolidated machine learning, while maintaining privacy for proprietary data sets.
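
As a hedged sketch of that multi-tiered pattern (illustrative, not from the article), the snippet below trains a model centrally on aggregated data and then refines it incrementally at the far edge; scikit-learn's SGDClassifier and its partial_fit stand in for whatever training and edge runtimes a real IIoT deployment would use:

```python
# A minimal sketch of cloud-train / edge-refine. Data and labels are synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# 1) Central (cloud) training on aggregated raw sensor data.
X_cloud = rng.normal(size=(10_000, 8))        # aggregated sensor features
y_cloud = (X_cloud[:, 0] > 0).astype(int)     # e.g., "failure imminent" label
model = SGDClassifier()
model.fit(X_cloud, y_cloud)

# 2) The trained model is shipped to the far edge, where it scores local
#    readings and is refined incrementally as new labeled data arrives.
X_edge = rng.normal(size=(100, 8))
y_edge = (X_edge[:, 0] > 0).astype(int)
print("edge predictions:", model.predict(X_edge[:5]))
model.partial_fit(X_edge, y_edge)             # on-device refinement
```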


Artificial Intelligence and Moore's law - Technowize

#artificialintelligence

In 1965, Gordon Moore observed that since the invention of the first integrated circuit in 1958, the number of components (transistor density) on an integrated circuit had doubled every year. When Intel, the pioneer of chip development, adopted Moore's law as the standard principle for advancing computing power, the whole semiconductor industry followed the same outline for its chips. The electronics industry went on to benefit from Moore's standard method of designing processor chips for some 50 years. Today, the industry is trying to design artificial intelligence technology that matches the intelligence of the human brain.
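
The doubling claim is easy to check with back-of-the-envelope arithmetic (the starting count is illustrative, not from the article):

```python
# Annual doubling over the 1958-1965 window the article describes.
# Starting from a single component, n years of doubling gives 2**n parts.
start_year, end_year = 1958, 1965
for year in range(start_year, end_year + 1):
    components = 2 ** (year - start_year)
    print(year, components)
# 7 doublings => 128 components by 1965. The exact counts are illustrative;
# the exponential shape is the point of Moore's observation.
```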


Exponential Intelligence: Microsoft Is Building the First AI Supercomputer

#artificialintelligence

Artificial intelligence (AI) is the next big thing in the world of computing, and it is expected to change the way we look at computers. Even the tools and methods used to create AI (machine learning, deep neural networks, etc.) are starting to power everything from search engines to your Facebook feed. Microsoft wants to stay ahead in that game and has announced the way it plans to do so: via the cloud. Microsoft CEO Satya Nadella has boasted that Microsoft's Azure Cloud will be the world's first AI supercomputer, and that sounds like a match made in heaven. AI, machine learning, neural networks…all of those require massive processing power, and the cloud is the perfect vehicle to deliver that power as it can utilize multiple processors and devices instead of relying on one big processor.
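
As a minimal sketch of that idea (mine, not Microsoft's architecture), the snippet below shards one computation across several worker processes instead of running it on a single processor:

```python
# Illustrative sketch: a dot product split across worker processes,
# standing in for cloud workloads spread over many processors.
from multiprocessing import Pool

def partial_dot(chunk):
    xs, ys = chunk
    return sum(x * y for x, y in zip(xs, ys))

if __name__ == "__main__":
    x = list(range(1_000_000))
    y = list(range(1_000_000))
    n_workers = 4
    step = len(x) // n_workers
    chunks = [(x[i:i + step], y[i:i + step]) for i in range(0, len(x), step)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_dot, chunks))  # combine partial results
    print(total)
```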


Azure is becoming the first AI supercomputer, says Microsoft

ZDNet

You may have thought it was just a cloud computing service, but Microsoft's Azure Cloud is on its way to becoming the first artificial intelligence supercomputer, according to the company's CEO, Satya Nadella. At an event in Dublin, Nadella discussed how Microsoft's cloud computing offering underpins a new wave of applications that use AI technologies. "Ultimately the cloud is about powering the next generation of applications," he said. "It is always the next generation applications that have driven infrastructure and when we look at this current generation of applications that people are building, the thing that is going to define these applications, that characterises these applications, is machine learning and artificial intelligence. Therefore we are building out Azure as the first AI supercomputer."


AWS Announces Availability of P2 Instances for Amazon EC2

@machinelearnbot

With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud. "The massive parallel floating point performance of Amazon EC2 P2 instances, combined with up to 64 vCPUs and 732 GB host memory, will enable customers to realize results faster and process larger datasets than was previously possible." P2 instances allow customers to build and deploy compute-intensive applications using the CUDA parallel computing platform or the OpenCL framework without up-front capital investments. To offer the best performance for these high-performance computing applications, the largest P2 instance offers 16 GPUs with a combined 192 gigabytes (GB) of video memory, 40,000 parallel processing cores, 70 teraflops of single-precision floating point performance, over 23 teraflops of double-precision floating point performance, and GPUDirect technology for higher-bandwidth, lower-latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors, dedicated network capacity for I/O operations, and enhanced networking through the Amazon EC2 Elastic Network Adapter.
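
To illustrate the "without up-front capital investments" point, a P2 instance can be requested programmatically; the sketch below uses boto3, AWS's Python SDK, with a placeholder AMI ID and an assumed region:

```python
# Hedged sketch: launching the largest P2 instance via the EC2 API.
# ImageId is a hypothetical placeholder; substitute a real CUDA-enabled AMI.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="p2.16xlarge",       # 16 K80 GPUs, 64 vCPUs, 732 GB RAM
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```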


Microsoft Azure networking is speeding up, thanks to custom hardware

PCWorld

Networking among virtual machines in Microsoft Azure is going to get a whole lot faster thanks to some new hardware that Microsoft has rolled out across its fleet of data centers. The company announced Monday that it has deployed hundreds of thousands of FPGAs (Field-Programmable Gate Arrays) across servers in 15 countries on five different continents. The chips have been put to use in a variety of first-party Microsoft services, and they're now starting to accelerate networking on the company's Azure cloud platform. In addition to improving networking speeds, the FPGAs (which sit on custom, Microsoft-designed boards connected to Azure servers) can also be used to improve the speed of machine-learning tasks and other key cloud functionality. Microsoft hasn't said exactly what the boards contain, other than revealing that they hold an FPGA, static RAM chips and hardened digital signal processors.


Microsoft's FPGA-powered supercomputers can translate Wikipedia faster than you can blink

PCWorld

Microsoft's servers are now powered by optimized custom chips that, working together, translated the entirety of Wikipedia in less than the blink of an eye. In a demonstration at Microsoft's Ignite conference in Orlando, Microsoft tapped what it called its "global hyperscale" cloud to translate 3 billion words across 5 million articles in less than a tenth of a second. Microsoft helped custom-design the programmable logic components, or Field-Programmable Gate Arrays (FPGAs), that it has added to each of its computing nodes. The company recognizes that smarter, more computationally intensive technologies will require more computing power on the back end, whether those technologies revolve around Microsoft's own Cortana digital assistant--which can now intelligently reschedule your workout to meet your fitness goals--or something that can recognize a distracted driver, as the automobile manufacturer Volvo is researching.
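
Quick arithmetic on the demo's claimed figures puts that throughput in perspective:

```python
# Back-of-the-envelope check using only the numbers quoted in the article.
words = 3_000_000_000   # 3 billion words translated
seconds = 0.1           # "less than a tenth of a second"
articles = 5_000_000    # 5 million articles

print(f"{words / seconds:.1e} words/second")    # 3.0e+10, a lower bound
print(f"{words / articles:.0f} words/article")  # ~600 words per article
```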