AI-Alerts


Time Series Analysis - Theory and Practice SkillsCast

#artificialintelligence

Tetiana is a mathematician turned data scientist, currently working with NanoTechGalaxy on developing machine learning algorithms for medical image processing. She is also working on AI risk research as part of the Pareto Fellowship awarded by the Centre for Effective Altruism.


Siri has some fresh thoughts on love just in time for Valentine's Day

Mashable

If you're the kind of person who likes to talk love on Valentine's Day, you can now stop annoying your Valentine's Day-hating peers (we know they're out there). Apple has equipped Siri with some custom answers to Valentine's Day queries, from booking a romantic restaurant to choosing an appropriately saccharine playlist. It can even give you some ideas for (intentionally cringeworthy) pick-up lines, like: "Is your name Bluetooth?" Things take a turn for the awkward and strange when you ask Siri for its thoughts on love. When we asked, "Who is your valentine?"


Joint Attention and Brain Functional Connectivity in Infants and Toddlers Cerebral Cortex

#artificialintelligence

Initiating joint attention (IJA), the behavioral instigation of coordinated focus of 2 people on an object, emerges over the first 2 years of life and supports social-communicative functioning related to the healthy development of aspects of language, empathy, and theory of mind. Deficits in IJA provide strong early indicators for autism spectrum disorder, and therapies targeting joint attention have shown tremendous promise. However, the brain systems underlying IJA in early childhood are poorly understood, due in part to significant methodological challenges in imaging localized brain function that supports social behaviors during the first 2 years of life. Herein, we show that the functional organization of the brain is intimately related to the emergence of IJA using functional connectivity magnetic resonance imaging and dimensional behavioral assessments in a large semilongitudinal cohort of infants and toddlers. In particular, though functional connections spanning the brain are involved in IJA, the strongest brain-behavior associations cluster within connections between a small subset of functional brain networks; namely between the visual network and dorsal attention network and between the visual network and posterior cingulate aspects of the default mode network. These observations mark the earliest known description of how functional brain systems underlie a burgeoning fundamental social behavior, may help improve the design of targeted therapies for neurodevelopmental disorders, and, more generally, elucidate physiological mechanisms essential to healthy social behavior development.

The emergence of joint attention (JA), the coordinated orienting of 2 people toward an object or event, occurs during the first 2 years of life, arguably the most dynamic and important period of early child development (Scaife and Bruner 1975).
It is theorized that engaging in JA lays the foundation for prosocial cooperative behavior, from basic social-communicative functioning and language development (Premack 2004) to sophisticated forms of empathy (Mundy and Jarrold 2010) and theory of mind (Adolphs 2003). In fact, early exhibition of joint attention is strongly associated with later language ability (Morales et al. 2000; Mundy et al. 2007), and atypical development of the initiation of joint attention (IJA) is strongly indicative of autism spectrum disorder (ASD) (Bruinsma et al. 2004). The neural substrates underlying IJA in early childhood are poorly understood (Barak and Feng 2016), due in part to significant methodological challenges in imaging localized brain function that supports social behaviors in children during the first 2 years of life.


Probabilistic Pentesting

@machinelearnbot

Pentesting tools like Metasploit, Burp, ExploitPack, BeEF, etc. are used by security practitioners to identify possible vulnerability points and to assess compliance with security policies. Pentesting tools come with a library of known exploits that have to be configured or customized for your particular environment. This configuration typically takes the form of a DSL or a set of fairly complex UIs for configuring individual attacks. There are two major shortcomings with this approach: (1) scanning doesn't yield perfect knowledge, and (2) scanning generates significant network traffic and can run for a very long time on a large network (Sarraute). It is perhaps due to these shortcomings (and maybe 0day exploits) that "most testing tools provide no guarantee of soundness."
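The point that "scanning doesn't yield perfect knowledge" is the core of the probabilistic framing: a scan result only makes a host *probably* vulnerable. A minimal sketch of that reasoning, using Bayes' rule with entirely made-up prior and error-rate numbers (not values from any real tool):

```python
# Toy Bayesian update illustrating imperfect scan knowledge: a scan
# flags a host as running a vulnerable service version, but the scan
# itself can be wrong, so we only obtain a posterior probability.
# All numbers below are invented priors for illustration.

p_vuln = 0.30              # prior: host runs the vulnerable version
p_flag_if_vuln = 0.90      # scan flags the host when it is vulnerable
p_flag_if_not = 0.10       # false-positive rate of the scan

# Bayes' rule, given that the scan flagged the host:
p_flag = p_flag_if_vuln * p_vuln + p_flag_if_not * (1 - p_vuln)
posterior = p_flag_if_vuln * p_vuln / p_flag

print(round(posterior, 3))
```

Even with a fairly accurate scanner, the posterior stays well below certainty, which is why attack planning over scan data is naturally a probabilistic problem.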


An extensive list of European AI tech startups to watch in 2017

#artificialintelligence

We have seen fast-growing interest in current activities around AI startups and research in the last couple of months. Headlines like "2016 was the year AI came of age", "AI was everywhere in 2016", and "The Great A.I. Awakening" were all over the media in the closing weeks of 2016, and we are curious about what 2017 will bring. I found it particularly interesting that the current applications, future potential, and possible risks attracted interest even beyond the tech community, through TV shows like Westworld, coverage in traditional media, and even Obama's farewell address. Sadly, many of us tech enthusiasts here in Europe sometimes feel like there is far less movement on this side of the Atlantic than in Silicon Valley. However, with major acquisitions like DeepMind, Magic Pony Technology, Movidius, Vision Factory, and Dark Blue Labs, Europe has shown that it is actually leading the way in AI and machine learning.


Code-Dependent: Pros and Cons of the Algorithm Age

#artificialintelligence

Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms, as are math equations. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. Online dating and book-recommendation and travel websites would not function without algorithms. GPS mapping systems get people from point A to point B via algorithms. Artificial intelligence (AI) is naught but algorithms. The material people see on social media is brought to them by algorithms. In fact, everything people see and do on the web is a product of algorithms. Every time someone sorts a column in a spreadsheet, algorithms are at play, and most financial transactions today are accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms.
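One of the everyday examples the passage mentions is sorting a column in a spreadsheet. That, too, is just an algorithm; a minimal sketch in Python, with hypothetical row data and field names:

```python
# An everyday algorithm in miniature: sorting spreadsheet-style rows
# by one column. The rows and field names below are made up for
# illustration.

rows = [
    {"name": "Carol", "amount": 42.0},
    {"name": "Alice", "amount": 17.5},
    {"name": "Bob",   "amount": 99.9},
]

# "Sort the column": reorder the rows by the chosen field.
by_amount = sorted(rows, key=lambda r: r["amount"])

print([r["name"] for r in by_amount])  # names in order of amount
```

Behind the one-line `sorted` call sits a concrete sequence of instructions (a comparison sort), which is exactly what makes it an algorithm rather than magic.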


What Is Computer Vision?

#artificialintelligence

An introduction to the field of computer vision and image recognition, and how deep learning is fueling the fire of this hot topic. Computer vision is an interdisciplinary field that focuses on how machines can emulate the way human brains and eyes work together to visually process the world around them. Research on computer vision can be traced back to the 1960s. The 1970s saw the foundations laid for many of the computer vision algorithms used today, such as the shift from basic digital image processing to understanding the 3D structure of scenes, edge extraction, and line labelling. Over the years, computer vision has developed many applications: 3D imaging, facial recognition, autonomous driving, drone technology, and medical diagnostics, to name a few.
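Edge extraction, one of the 1970s-era techniques mentioned above, can be sketched in a few lines: an edge shows up wherever neighbouring pixel intensities differ sharply. The tiny image and the simple difference operator below are illustrative only; real systems use operators such as Sobel on full-size images:

```python
# Minimal edge-extraction sketch: compute the absolute horizontal
# intensity difference between neighbouring pixels of a tiny
# grayscale image. Pixel values are made up for illustration.

image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]

def horizontal_edges(img):
    """Return |I(x+1, y) - I(x, y)| for each adjacent pixel pair."""
    return [
        [abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
        for row in img
    ]

edges = horizontal_edges(image)
print(edges[0])  # the dark-to-bright boundary yields a large value
```

The large response in the middle of each output row marks the vertical boundary between the dark and bright halves of the image, which is all an "edge" is at the pixel level.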


TensorFlow Fold: Deep Learning With Dynamic Computation Graphs - ADR Toolbox

#artificialintelligence

In much of machine learning, data used for training and inference undergoes a preprocessing step, where multiple inputs (such as images) are scaled to the same dimensions and stacked into batches. This lets high-performance deep learning libraries like TensorFlow run the same computation graph across all the inputs in the batch in parallel. Batching exploits the SIMD capabilities of modern GPUs and multi-core CPUs to speed up execution. However, there are many problem domains where the size and structure of the input data varies, such as parse trees in natural language understanding, abstract syntax trees in source code, DOM trees for web pages and more. In these cases, the different inputs have different computation graphs that don't naturally batch together, resulting in poor processor, memory, and cache utilization.
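The fixed-shape batching described above can be sketched without any deep learning library: inputs scaled to the same dimensions stack into one batch, and a single operation sweeps across all of them, while variable-shaped trees have no common layout to stack. The data and the operation below are illustrative assumptions, not TensorFlow Fold's API:

```python
# Why same-shape inputs batch well: three "images" scaled to the
# same 2x2 size stack into one batch, and one operation covers the
# whole batch in a single pass (a stand-in for the SIMD-style
# parallelism the text describes). Values are made up.

images = [
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]],
    [[9, 10], [11, 12]],
]

def scale_batch(batch, factor):
    """Apply one elementwise operation across every input at once."""
    return [[[px * factor for px in row] for row in img] for img in batch]

doubled = scale_batch(images, 2)
print(doubled[0])

# By contrast, parse trees of different shapes and depths, like the
# (made-up) pair below, do not line up into one rectangular batch,
# so each would naively need its own computation graph:
trees = [("likes", ("Sam", "ML")), ("runs", "Pat")]
```

TensorFlow Fold's contribution is recovering batched execution even in the second case, by dynamically grouping identical sub-computations across differently shaped inputs.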


Bixby vs. Siri vs. Google Assistant: Samsung Galaxy S8's AI Can't Beat Apple's Technology But Trumps Google's Voice Assistant In This Aspect

International Business Times

It's impossible to talk about Samsung's upcoming Galaxy S8 flagship device without mentioning the South Korean tech giant's advanced AI voice assistant, Bixby, which will come with it. Though Apple's biggest rival already had an intelligent personal assistant, called S Voice, on a number of premium devices it launched in the past, the company decided to develop a more advanced voice assistant that would compete directly with Apple's Siri and Google's Google Assistant. The move isn't surprising at all, for Samsung bought AI and assistant-system firm Viv late last year. Viv was founded by people responsible for the creation and success of Apple's Siri. Hence, many reports are putting emphasis on the idea that Bixby will be a strong rival to Apple's famous voice assistant.


oxford-cs-deepnlp-2017/lectures

#artificialintelligence

This repository contains the lecture slides and course description for the Deep Natural Language Processing course offered in Hilary Term 2017 at the University of Oxford. This is an advanced course on natural language processing. Automatically processing natural language inputs and producing language outputs is a key component of Artificial General Intelligence. The ambiguities and noise inherent in human communication render traditional symbolic AI techniques ineffective for representing and analysing language data. This is an applied course focussing on recent advances in analysing and generating speech and text using recurrent neural networks.
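The recurrent neural networks the course builds on share one core idea: each step combines the current input with a hidden state carried over from the previous step. A toy sketch of that recurrence, with made-up scalar weights and no training (real RNNs use learned weight matrices; this is not any architecture from the course):

```python
import math

# Toy recurrent step: h_t = tanh(W_x * x_t + W_h * h_prev).
# Scalar weights and the input "sequence" are invented values
# purely to show the recurrence.

W_x, W_h = 0.5, 0.8

def rnn_step(x_t, h_prev):
    """One recurrent update: mix new input with the carried state."""
    return math.tanh(W_x * x_t + W_h * h_prev)

h = 0.0                          # initial hidden state
for x in [1.0, 0.5, -1.0]:       # a tiny input sequence
    h = rnn_step(x, h)

print(round(h, 4))
```

Because each `h` depends on every earlier input, the hidden state acts as a running summary of the sequence, which is what lets such models analyse and generate text one token at a time.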