Results


To democratise artificial intelligence, Intel launches educational programme for developers

#artificialintelligence

Reiterating its commitment to boosting adoption of artificial intelligence (AI), Intel India today announced a developer community initiative – the AI Developer Education Programme – aimed at educating 15,000 scientists, developers, analysts, and engineers in deep learning and machine learning, the tech major said in a statement. The programme was announced at the first AI Day held in Bengaluru, where thought leaders from government, industry, and academia congregated to discuss the potential of accelerating the AI revolution in the country. Under the programme, Intel will run 60 programmes across the year, ranging from workshops and roadshows to user-group meetings and round-tables with senior technology leaders. Announcing the programme, Intel South Asia managing director Prakash Mallya said data centres, and the intelligence derived from the data they collect, can enable government and industry to make effective, algorithm-driven decisions.


Wanted: Toolsmiths

Communications of the ACM

"As we honor the more mathematical, abstract, and scientific' parts of our subject more, and the practical parts less, we misdirect the young and brilliant minds away from a body of challenging and important problems that are our peculiar domain, depriving these problems of the powerful attacks they deserve." I have the privilege of working at the Defense Advanced Research Projects Agency (DARPA) and currently serve as the Acting Deputy Director of the Defense Sciences Office (DSO). Our goal at DARPA is to create and prevent technological surprise through investments in science and engineering, and our history and contributions are well documented. The DSO is sometimes called "DARPA's DARPA," because we strive to be at the forefront of all of science--on the constant lookout for opportunities to enhance our national security and collective well-being, and our projects are very diverse. One project uses cold atoms to measure time with 10 18th precision; another is creating amazing composite materials that can change the way in which we manufacture.


The Algorithms Of Life

#artificialintelligence

In fact, similar kinds of biological algorithms might exist in people and govern not only how we learn and act but also how our species evolved. That is the firm belief of Professor Leslie Valiant, whose ground-breaking research has been fundamental to the development of machine learning, artificial intelligence, and the broader field of computer science. Valiant's PAC ("probably approximately correct") model has become one of the most important contributions to machine learning and is the foundation of the modern field of computational learning theory, in which scientists study the design and analysis of machine learning algorithms. Cracking the codes of such "ecorithms" (Valiant's term for learning algorithms that run in, and adapt to, an environment) could also help scientists develop more advanced robots that learn better from their environment, allowing the machines to evolve in a manner similar to people and become more useful.
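As a rough, hand-made illustration of what the PAC framework quantifies, the classic sample-complexity bound for a finite hypothesis class estimates how many labelled examples a consistent learner needs; the short Python sketch below simply evaluates that textbook bound and is not drawn from Valiant's own work.

    import math

    def pac_sample_bound(num_hypotheses, epsilon, delta):
        # Classic bound for a finite hypothesis class (realizable case):
        # m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples suffice for a
        # consistent learner to reach error <= epsilon with probability >= 1 - delta.
        return math.ceil((math.log(num_hypotheses) + math.log(1.0 / delta)) / epsilon)

    # e.g. 1,000 candidate hypotheses, 5% error, 95% confidence
    print(pac_sample_bound(1000, 0.05, 0.05))  # -> 199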


The best technology breakthroughs in 2016 from quantum computing to AI

#artificialintelligence

This year has been a rollercoaster for many, with numerous tragedies and crises occurring all over the world, but that doesn't mean everything was grim in 2016. Join IBTimes UK as we take a closer look at the many new developments across various fields of technological research, each with the potential to revolutionise human life for the better. This section is devoted to computer science research into replicating the human mind and helping computers solve complex tasks. For developments concerning the machines themselves, see our articles on robotics. In 2016, computer scientists concentrated more effort on building deep-learning neural networks: large webs of simple artificial neurons, run on classical computers and trained by algorithms to solve complex problems in a way loosely inspired by the human central nervous system, with different layers examining different parts of a problem and combining their results into an answer.
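To make the layered picture concrete, here is a minimal sketch of our own (not taken from any of the systems the article covers): a tiny two-layer network in which the first layer extracts intermediate features from the raw input and the second layer combines them into an answer.

    import numpy as np

    # A toy two-layer network: layer 1 turns raw inputs into intermediate features,
    # layer 2 combines those features into an answer (here, 3 output scores).
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 8))            # one input with 8 raw features
    W1 = rng.normal(size=(8, 16))
    W2 = rng.normal(size=(16, 3))

    hidden = np.maximum(0.0, x @ W1)       # first level of abstraction (ReLU)
    output = hidden @ W2                   # layers combine into an answer
    print(output.shape)                    # (1, 3)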


The best technology breakthroughs in 2016 from quantum computing to AI

#artificialintelligence

For example, Google researchers have been working with physicists around the world to build a quantum version of a neural network that could help accurately predict chemical reactions. This year alone, Australian scientists demonstrated the first qubit Fredkin gate (a controlled-swap operation that is universal for reversible computing), while Oxford University developed quantum logic gates that perform with record-breaking 99.9% precision. And finally, as mentioned above, Google researchers teamed up with US and UK scientists to demonstrate the first completely scalable quantum simulation of a chemical reaction, showcasing a real-world use for quantum computers that could revolutionise multiple areas of research into medicine and materials. Researchers have also set records for data transmission over millimetre waves: electrical engineers from the University of Southern California Viterbi School of Engineering twisted radio beams to transmit data at 32 Gbps, albeit over a distance of only 2.5 m across a laboratory.
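For readers unfamiliar with the gate named above, the following sketch is a purely classical truth-table illustration of what a Fredkin (controlled-swap) gate does; it is not a model of the quantum experiments themselves.

    # The Fredkin gate is a controlled swap: the two target bits are exchanged
    # only when the control bit is 1; otherwise everything passes through unchanged.
    def fredkin(control, a, b):
        return (control, b, a) if control == 1 else (control, a, b)

    for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 1)]:
        print(bits, "->", fredkin(*bits))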


The death of the statistician

@machinelearnbot

Given that, in general, algorithmic (and ML) approaches to extracting information can be iterated in smaller chunks more easily than the production of a robust statistical model, industry was naturally quick to adopt the idea. Someone who is sufficiently trained in statistics *should* be able to answer such questions clearly, as statistical work includes developing expertise in the subject under study. With the introduction of new data architectures (column-based, streaming, batch/file-based, direct I/O) and the exponential increase in the power of computer hardware (cloud computing, GPU computing, RAM speeds, solid-state storage, CPU capacity, etc.), the tooling matured as well: PMML helped create model portability, and computing algorithms began to leverage GPU processing and multiple CPU threads and to manage memory better.
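To show what model portability via PMML can look like in practice, here is a minimal sketch assuming the open-source sklearn2pmml package (an illustration of the general idea, not something the post prescribes): a model fitted in Python is exported to a PMML file that other scoring engines can load.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn2pmml import sklearn2pmml          # assumed third-party package
    from sklearn2pmml.pipeline import PMMLPipeline

    # Hypothetical illustration: fit a small model and export it as PMML so that a
    # different runtime (e.g. a Java scoring engine) can execute the same model.
    X, y = load_iris(return_X_y=True)
    pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
    pipeline.fit(X, y)
    sklearn2pmml(pipeline, "iris_tree.pmml")       # export requires a Java runtime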


The Pint-Sized Supercomputer That Companies Are Scrambling to Get

MIT Technology Review

To companies grappling with complex data projects powered by artificial intelligence, a system that Nvidia calls an "AI supercomputer in a box" is a welcome development. Early customers of Nvidia's DGX-1, which combines machine-learning software with eight of the chip maker's highest-end graphics processing units (GPUs), say the system lets them train their analytical models faster, enables greater experimentation, and could facilitate breakthroughs in science, health care, and financial services. Data scientists have been leveraging GPUs to accelerate deep learning--an AI technique that mimics the way human brains process data--since 2012, but many say that current computing systems limit their work. Faster computers such as the DGX-1 promise to make deep-learning algorithms more powerful and let data scientists run deep-learning models that previously weren't possible. The DGX-1 isn't a magical solution for every company.
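As a reminder of how that GPU acceleration shows up in everyday code, here is a minimal, generic PyTorch sketch (illustrative only; it does not reflect Nvidia's DGX-1 software stack): the model and its data are moved to the CUDA device so the heavy linear algebra of training runs on the GPU.

    import torch
    import torch.nn as nn

    # Generic pattern for training on a GPU: move the model and the data to the
    # CUDA device so the heavy linear algebra runs there instead of on the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
    batch = torch.randn(32, 128, device=device)    # a batch of 32 synthetic examples
    loss = model(batch).sum()
    loss.backward()                                # gradients are computed on the device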


This is why dozens of companies have bought Nvidia's $129,000 deep-learning supercomputer in a box

#artificialintelligence

To companies grappling with complex data projects powered by artificial intelligence, a system that Nvidia calls an "AI supercomputer in a box" is a welcome development. Early customers of Nvidia's DGX-1, which combines machine-learning software with eight of the chip maker's highest-end graphics processing units (GPUs), say the system lets them train their analytical models faster, enables greater experimentation, and could facilitate breakthroughs in science, health care, and financial services. Data scientists have been leveraging GPUs to accelerate deep learning--an AI technique that mimics the way human brains process data--since 2012, but many say that current computing systems limit their work. Faster computers such as the DGX-1 promise to make deep-learning algorithms more powerful and let data scientists run deep-learning models that previously weren't possible. The DGX-1 isn't a magical solution for every company.


WTF is machine learning?

#artificialintelligence

While the terms are often poorly understood, neural networks, deep learning, and reinforcement learning are all forms of machine learning. Each layer of a deep learning model lets the computer identify another level of abstraction of the same object. Reinforcement learning takes ideas from game theory and includes a mechanism that assists learning through rewards. Researchers refer to the difficulty of explaining how such models reach their decisions as the black box problem of machine learning.
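To make the reward mechanism concrete, here is a minimal tabular Q-learning sketch (a standard textbook update rule, not an example from the article): the agent nudges its estimate of a state-action value toward the observed reward plus the discounted value of the best action available in the next state.

    # Tabular Q-learning: nudge the value of (state, action) toward the observed
    # reward plus the discounted value of the best action in the next state.
    alpha, gamma = 0.1, 0.9            # learning rate and discount factor
    q = {}                             # (state, action) -> estimated value

    def update(state, action, reward, next_state, actions):
        best_next = max(q.get((next_state, a), 0.0) for a in actions)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    update("s0", "right", 1.0, "s1", ["left", "right"])
    print(q)                           # {('s0', 'right'): 0.1}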

