Agence Is A Fascinating VR Deep Dive Into Evolving AI

#artificialintelligence

I watch as he toys with the small group of lifeforms named Agents that curiously ramble around a tiny planet. They're odd three-legged things that go from goldfishing their way from one side of their tennis ball-sized existence to another, to staring at me in bemusement, to accidentally – and then angrily – bumping into each other. When the call ends, Gagliano – perhaps unknowingly – leaves the stream going for another 10 or so minutes. I sit, slightly transfixed, continuing to observe the Agents that go on existing in the absence of their newfound virtual deity. Agence is a hard thing to pin down. Gagliano, the piece's director, and Oppenheim, the creative producer, label it as a 'looping' and 'dynamic film', something that starts right back up again the moment it ends.


Artificial intelligence: deep learning

#artificialintelligence

In last month's column, "Artificial intelligence: machine learning," I tackled the subject of machine learning (ML), and I suppose one obvious question would be "What's the difference between machine learning and deep learning?" Well, both are subsets of artificial intelligence (AI), although deep learning is itself a subset of machine learning. In the meantime, let's touch upon machine learning to re-establish what we have already covered: ML is a broad field of study in which software (a computer program) improves automatically through experience, based on the data it has received. Nowadays we associate the term with "big data," "data modelling" and "data science": retailers, for example, like to collect information about your shopping habits so that, in turn, they can target their advertising more accurately. And then there are the likes of Amazon and Netflix, who use data analytics to predict or suggest what you might want to view or purchase next.
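
To make that definition concrete, here is a minimal sketch of a program that "improves through experience": a scikit-learn classifier whose predictions are derived from past data rather than hand-written rules. The shopping-habit numbers and feature names below are invented purely for illustration.

```python
# A toy "shopping habits" model: [visits_per_month, avg_basket_size]
# paired with whether the customer bought again. Data is invented.
from sklearn.linear_model import LogisticRegression

X = [[1, 10], [2, 15], [8, 40], [9, 55], [3, 20], [10, 60]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                   # the "experience": learn from past data
print(model.predict([[7, 45]]))   # predict for a new, unseen customer
```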


Adobe's DL-Based 'HDMatt' Handles Image Details Thinner Than Hair

#artificialintelligence

Image matting plays a key role in image and video editing and composition. Although existing deep learning approaches can produce acceptable image matting results, their performance suffers in real-world applications, where the input images are mostly high resolution. To address this, a group of researchers from UIUC, Adobe Research and the University of Oregon have proposed HDMatt, the first deep learning-based image matting approach for high-resolution image inputs. Generally, deep learning approaches take an entire input image and an associated trimap to infer the alpha matte using convolutional neural networks. Such methods, however, may fail when dealing with high-resolution input images of 5000 × 5000 pixels or larger due to hardware limitations. The researchers therefore designed HDMatt to crop an input image and trimap into patches, then estimate the alpha values of each patch.
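
A minimal sketch of that patch-based idea, assuming NumPy arrays for the image and trimap; `estimate_alpha` is a hypothetical stand-in for the actual HDMatt network, which is not reproduced here.

```python
# Crop a high-resolution image and its trimap into aligned patches so
# each fits in memory, then stitch the per-patch alpha estimates back
# into a full-resolution matte.
import numpy as np

def crop_into_patches(image, trimap, patch=512):
    """Yield aligned (image_patch, trimap_patch, y, x) tuples."""
    h, w = image.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            yield (image[y:y + patch, x:x + patch],
                   trimap[y:y + patch, x:x + patch], y, x)

def matte_high_res(image, trimap, estimate_alpha):
    """Assemble a full-resolution alpha matte from per-patch estimates."""
    alpha = np.zeros(image.shape[:2], dtype=np.float32)
    for img_p, tri_p, y, x in crop_into_patches(image, trimap):
        # estimate_alpha stands in for the HDMatt network's per-patch
        # prediction; it should return an alpha patch of matching size
        alpha[y:y + img_p.shape[0], x:x + img_p.shape[1]] = \
            estimate_alpha(img_p, tri_p)
    return alpha
```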


Eight Things Differentiating Rasa From Other Chatbot Platforms

#artificialintelligence

While I was building prototypes with different chatbot platforms and environments, I saw clear patterns starting to emerge. One could see how most platforms take a very similar approach to Conversational AI. Even though one might lead another in certain elements, each was trying to solve common problems in a very similar fashion. In this pursuit of solving Conversational AI problems, Rasa stands alone in many areas with its unique approach. Here are eight things they do differently, and do exceptionally well.


Deep learning to estimate RECIST in patients with NSCLC treated with PD-1 blockade – IAM Network

#artificialintelligence

This article was originally published in Cancer Discovery. ABSTRACT Real-world evidence (RWE) – conclusions derived from analysis of patients not treated in clinical trials – is increasingly recognized as an opportunity for discovery, to reduce disparities, and to contribute to regulatory approval. The maximal value of RWE may be facilitated through machine learning techniques that integrate and interrogate large and otherwise underutilized data sets. In cancer research, an ongoing challenge for RWE is the lack of reliable, reproducible, scalable assessment of treatment-specific outcomes. We hypothesized that a deep learning model could be trained to use radiology text reports to estimate gold-standard Response Evaluation Criteria in Solid Tumors (RECIST)-defined outcomes.
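
As a rough, hypothetical illustration of that text-to-outcome framing: the study trained a deep learning model, but even a simple text classifier shows the shape of the task of mapping radiology report language to RECIST categories. The reports and labels below are invented.

```python
# Sketch: map free-text radiology reports to RECIST-style outcome
# categories. A TF-IDF + linear classifier stands in for the study's
# deep learning model; the example reports are fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "Marked decrease in size of the dominant lung mass.",
    "Target lesions are unchanged compared with prior CT.",
    "Interval enlargement of hepatic metastases with new nodules.",
]
labels = ["partial_response", "stable_disease", "progressive_disease"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)
print(clf.predict(["Slight interval growth of the target lesion."]))
```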


Feature Extraction for Graphs

#artificialintelligence

Heads up: I've structured the article similarly to the Graph Representation Learning book by William L. Hamilton [1]. One of the simplest ways to capture information from graphs is to create individual features for each node. These features can capture information both from a node's close neighbourhood and, using iterative methods, from a more distant K-hop neighbourhood. Node degree is a simple metric, defined as the number of edges incident to a node. It is often used to initialize algorithms that generate more complex graph-level features, such as the Weisfeiler-Lehman kernel.
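
A minimal sketch of both ideas, assuming networkx: read off the 1-hop degree directly, then run one iterative aggregation pass to fold in 2-hop neighbourhood information, the same flavour of update that Weisfeiler-Lehman-style procedures repeat.

```python
# Node-level features on a toy graph.
import networkx as nx

G = nx.karate_club_graph()       # small built-in example graph

# 1-hop feature: node degree (number of incident edges)
degree = dict(G.degree())

# one iteration of neighbourhood aggregation (2-hop information):
# each node's new feature is the sum of its neighbours' degrees,
# a simplified version of the iterative WL-style update
agg = {n: sum(degree[m] for m in G.neighbors(n)) for n in G.nodes()}

print(degree[0], agg[0])
```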


Why "AI" Is A Fraud

#artificialintelligence

Who will be crowned the world's first trillionaire? Mark Cuban says there's one disruptive trend that'll bring a whole new meaning to the word "rich." You likely know Cuban as a star investor on ABC's Shark Tank. The billionaire also owns the Dallas Mavericks. "The world's first trillionaires are going to come from somebody who masters AI." Artificial Intelligence (AI) is the most hyped-up trend on the planet.


Gartner Top 10 Strategic Technology Trends for 2020

#artificialintelligence

Human augmentation conjures up visions of futuristic cyborgs, but humans have been augmenting parts of the body for hundreds of years. Glasses, hearing aids and prosthetics evolved into cochlear implants and wearables. Even laser eye surgery has become commonplace. But what if scientists could augment the brain to increase memory storage, or implant a chip to decode neural patterns? What if exoskeletons became a standard uniform for autoworkers, enabling them to lift superhuman weights?


10 Best Python Libraries For Computer Vision Tasks

#artificialintelligence

One of the favourite languages amongst developers, Python is well known for its abundance of tools and libraries available to the community. The language also provides several computer vision libraries and frameworks to help developers automate tasks, including detection and visualisation. Below, we list the 10 best Python libraries that developers can use for computer vision. It also provides researchers with low-level components that can be mixed and matched to build new approaches. IPSDK is an image processing library in C and Python.
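
As an example of the detection-and-visualisation tasks these libraries automate, here is a minimal sketch using OpenCV (cv2), one of the most widely used Python computer vision libraries; "input.jpg" is a placeholder path.

```python
# Detect faces in an image with a bundled Haar cascade and draw boxes.
import cv2

image = cv2.imread("input.jpg")            # placeholder input path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", image)         # save the visualisation
```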


Transformers in NLP: Creating a Translator Model from Scratch

#artificialintelligence

Transformers have now become the de facto standard for NLP tasks. Originally developed for sequence transduction tasks such as speech recognition, translation, and text-to-speech, transformers rely on attention mechanisms in place of the recurrent and convolutional networks used by earlier architectures, making them much more efficient. And although transformers were developed for NLP, they've also been applied in the fields of computer vision and music generation. However, for all their wide and varied uses, transformers are still very difficult to understand, which is why I wrote a detailed post describing how they work at a basic level. It covers the encoder and decoder architecture, and the whole flow of data through the different pieces of the neural network.
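
For a taste of what that post covers, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the transformer; it shows only the core operation, not the full encoder-decoder.

```python
# Scaled dot-product attention: weight each value by how well its key
# matches the query.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # attention-weighted values

# toy example: 3 tokens, 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```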