Neural Networks


Deep Learning on the Edge

#artificialintelligence

Scalable Deep Learning services are subject to several constraints. Depending on your target application, you may require low latency, enhanced security, or long-term cost effectiveness. In such cases, hosting your Deep Learning model in the cloud may not be the best solution. Deep Learning on the edge alleviates these issues and provides other benefits. "Edge" here refers to computation performed locally on the consumer's device.
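
A hedged illustration (not from the article): a typical edge deployment ships a pre-converted model to the device and runs inference with a lightweight runtime, avoiding the network round-trip entirely. The sketch below assumes a TensorFlow Lite model file named model.tflite; the file name and input shape are placeholders.

```python
# Minimal sketch of on-device ("edge") inference with TensorFlow Lite.
# Assumes a converted model file "model.tflite" exists on the device;
# the file name and input shape are illustrative, not from the article.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake input with whatever shape the model expects (e.g. one 224x224 RGB image).
dummy_input = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()  # runs locally; no network round-trip, lower latency
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```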


Microsoft acquires Lobe, an AI startup working on easy-to-use deep-learning development tools

#artificialintelligence

Artificial intelligence tools will never be widely used if it takes decades of expertise to put them into action, which is why cloud companies have been working hard to make them easier to use and more accessible. Microsoft took another step in that direction Thursday with the acquisition of San Francisco-based Lobe. Terms of the deal were not disclosed in the blog post announcing the acquisition, written by Kevin Scott, Microsoft's executive vice president and chief technology officer. Founded by Mike Matas, Adam Menges, and Markus Beissinger in 2015, Lobe created visual tools that can build deep-learning models with a drag-and-drop user interface, rather than lines of code. Lobe will continue to operate its service under Microsoft, according to its website.


Artificial intelligence: The king of disruptors

#artificialintelligence

He predicts computers will have human-level intelligence by 2029, and that by 2045 computers will surpass human intelligence. He and I agree that artificial intelligence is a positive force to augment human capacity. Like eyeglasses and hearing aids, we will come to see AI as an extension of the human experience. AI may be the biggest disruptor society has ever experienced. But it's not just a disruptor; AI is also an accelerant with the potential to enrich human learning, discovery, and productivity personally and professionally.


Harvard scientists probe aftershocks with AI

#artificialintelligence

In the weeks and months following a major earthquake, the surrounding area is often wracked by powerful aftershocks that can leave an already damaged community reeling and can significantly hamper recovery efforts. While scientists have developed empirical laws to describe the likely size and timing of those aftershocks, such as Båth's Law and Omori's Law, forecasting their location has been harder. But sparked by a suggestion from researchers at Google, Brendan Meade, professor of earth and planetary sciences, and Phoebe DeVries, a postdoctoral fellow working in his lab, are using artificial intelligence technology to try to get a handle on the problem. Using deep-learning algorithms, the pair analyzed a database of earthquakes from around the world to try to predict where aftershocks might occur, and developed a system that, while still imprecise, was able to make significantly better forecasts than random assignment. The work is described in an Aug. 30 paper published in the journal Nature.
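
Roughly speaking, the approach amounts to a classifier over a spatial grid: per-cell features derived from the mainshock's stress change go in, and a probability that the cell hosts aftershocks comes out, evaluated against random assignment. The sketch below is a simplified stand-in with synthetic features and labels; the feature count, layer sizes, and data are assumptions, not the authors' code.

```python
# Hedged sketch of a grid-cell aftershock classifier in the spirit of the
# study: a small fully connected network that maps per-cell stress-change
# features to a probability of aftershock occurrence. Feature count, layer
# sizes, and the synthetic data are assumptions, not the paper's setup.
import numpy as np
import tensorflow as tf

n_cells, n_features = 10000, 12          # grid cells x stress-derived features
X = np.random.randn(n_cells, n_features).astype("float32")
y = (np.random.rand(n_cells) < 0.1).astype("float32")  # 1 = aftershock in cell

model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="tanh", input_shape=(n_features,)),
    tf.keras.layers.Dense(50, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])  # random guessing gives AUC 0.5
model.fit(X, y, epochs=3, batch_size=256, verbose=0)
print(model.evaluate(X, y, verbose=0))
```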


5 tech trends that blur the lines between human and machine

#artificialintelligence

"CIOs and technology leaders should always be scanning the market along with assessing and piloting emerging technologies to identify new business opportunities with high impact potential and strategic relevance for their business," says Gartner research vice president Mike J. Walker. In Gartner's latest Hype Cycle for Emerging Technologies, Walker reports on these must-watch technologies, listing five that will "blur the lines" between human and machine. They will profoundly create new experiences, with unrivaled intelligence, and offer platforms that allow organisations to connect with new business ecosystems, he states. AI technologies will be virtually everywhere over the next 10 years, reports Gartner. While these technologies enable early adopters to adapt to new situations and solve problems that have not been encountered previously, these technologies will become available to the masses -- democratised.


The Artificial Neural Networks Handbook: Part 1 - DZone AI

#artificialintelligence

I have written several articles on Artificial Neural Networks before, but they were just standalone articles on assorted concepts. This series will give you a detailed picture of Artificial Neural Networks and the concepts related to them. The resources and references for all the content will be listed at the end of the series so you can study each concept in depth. So, let's start with a very basic question: "What is AI, and what are Artificial Neural Networks?" In this first article of the series I will try to answer these basic questions, and then we will go into greater depth in later articles.
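
As a minimal, illustrative answer to that question (not code from the article), an artificial neural network is built from neurons that each take a weighted sum of their inputs, add a bias, and pass the result through a nonlinearity:

```python
# A single artificial neuron: weighted sum of inputs + bias, passed through
# a nonlinearity. The weights and inputs are arbitrary illustrative values.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4,  0.7, -0.2])  # learned weights (made up here)
b = 0.1                          # bias

activation = sigmoid(np.dot(w, x) + b)
print(activation)  # the neuron's output, between 0 and 1
```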


Dynamic Self-Attention : Computing Attention over Words Dynamically for Sentence Embedding

arXiv.org Machine Learning

In this paper, we propose Dynamic Self-Attention (DSA), a new self-attention mechanism for sentence embedding. We design DSA by modifying dynamic routing in capsule networks (Sabour et al., 2017) for natural language processing. DSA attends to informative words with a dynamic weight vector. We achieve new state-of-the-art results among sentence-encoding methods on the Stanford Natural Language Inference (SNLI) dataset with the fewest parameters, while showing comparable results on the Stanford Sentiment Treebank (SST) dataset.
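
A hedged reading of the abstract: the attention weights over words are not computed in a single pass but refined over a few routing-style iterations, echoing dynamic routing in capsule networks. The sketch below is a simplified NumPy interpretation of that idea, not the authors' implementation.

```python
# Simplified, illustrative take on "dynamic" self-attention for sentence
# embedding: attention logits over words are refined for a few iterations,
# as in dynamic routing. This is an interpretation of the abstract, not the
# paper's exact formulation.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_self_attention(H, n_iter=3):
    """H: (n_words, dim) matrix of word representations."""
    b = np.zeros(H.shape[0])                  # routing logits, one per word
    for _ in range(n_iter):
        a = softmax(b)                        # attention weights over words
        s = a @ H                             # candidate sentence vector
        v = s / (np.linalg.norm(s) + 1e-9)    # normalize (squash-like step)
        b = b + H @ v                         # words agreeing with v gain weight
    return v, a

H = np.random.randn(7, 16)                    # e.g. 7 words, 16-dim vectors
sentence_vec, attn = dynamic_self_attention(H)
print(sentence_vec.shape, attn.round(3))
```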


Gartner says AI and biohacking will shape the future of tech - SiliconANGLE

#artificialintelligence

Artificial intelligence and "biohacking" will be among the key trends guiding the future of technology, according to one of Gartner Inc.'s most eagerly anticipated reports. The report, released Monday, is based on Gartner's famous "hype cycle," which plots the lifespan of new technologies as they emerge from mere concepts, all the way through to their mass adoption, at which point they're finally considered to be mainstream. But that only happens if they survive what is typically a roller-coaster ride. In this year's report, Gartner's researchers are pretty confident that AI, at least, will emerge from the hype cycle unscathed. And it won't be just data scientists and other nerdy types who get to enjoy it, as Gartner is confidently predicting that the "democratization" of AI will take place within the next few years.


Deep Learning Stretches Up to Scientific Supercomputers

#artificialintelligence

Researchers delivered 15-petaflop deep-learning software and ran it on Cori, a supercomputer at the National Energy Research Scientific Computing Center, a Department of Energy Office of Science user facility. Machine learning, a form of artificial intelligence, enjoys unprecedented success in commercial applications. However, the use of machine learning in high-performance computing for science has been limited. Why? Advanced machine learning tools weren't designed for big data sets, like those used to study stars and planets. A team from Intel, the National Energy Research Scientific Computing Center (NERSC), and Stanford changed that situation.
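
The article gives no code, but the core pattern behind scaling training across a machine like Cori is synchronous data parallelism: each worker computes gradients on its own shard of data, and the gradients are averaged with an allreduce before every update. A bare-bones sketch, assuming mpi4py is available and using a toy linear model in place of a real deep network:

```python
# Bare-bones sketch of synchronous data-parallel training: each MPI rank
# computes gradients on its own data shard, and gradients are averaged with
# Allreduce. A toy linear least-squares model stands in for a real network;
# this illustrates the pattern, not the NERSC/Intel/Stanford code.
# Run with e.g.: mpirun -n 4 python data_parallel_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)              # each rank gets its own shard
X = rng.standard_normal((1024, 10))
y = X @ np.arange(10.0) + 0.1 * rng.standard_normal(1024)

w = np.zeros(10)
lr = 0.05
for step in range(200):
    grad_local = 2.0 * X.T @ (X @ w - y) / len(y)    # gradient on local shard
    grad_global = np.empty_like(grad_local)
    comm.Allreduce(grad_local, grad_global, op=MPI.SUM)
    w -= lr * grad_global / size                     # average across ranks
if rank == 0:
    print("learned weights:", w.round(2))
```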