video


Google using artificial intelligence to fight terrorist propaganda

#artificialintelligence

This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits and redirect them towards anti-terrorist videos that can change their minds about joining. According to the company, in previous deployments of this system potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.


Learning about the world through video – twentybn – Medium

#artificialintelligence

A few notable exceptions, like DeepMind's recently released Kinetics dataset, try to alleviate this by focusing on shorter clips, but since they show high-level human activities taken from YouTube videos, they fall short of representing the simplest physical object interactions that will be needed for modeling visual common sense. To generate the complex, labelled videos that neural networks need to learn, we use what we call "crowd acting". The videos show human actors performing generic hand gestures in front of a webcam, such as "Swiping Left/Right," "Sliding Two Fingers Up/Down," or "Rolling Hand Forward/Backward." Predicting the textual labels from the videos therefore requires strong visual features that are capable of representing a wealth of physical properties of the objects and the world.
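As a rough illustration of what predicting those textual labels involves, here is a minimal sketch of a gesture classifier over short video clips. It uses PyTorch with toy tensor shapes and an invented class count; it is not TwentyBN's actual model, just a picture of the setup.

```python
# A minimal sketch (assumptions: PyTorch; toy shapes, hypothetical label
# subset) of predicting a gesture label from a short video clip with a
# small 3D convolutional network.
import torch
import torch.nn as nn

NUM_CLASSES = 6  # e.g. "Swiping Left", "Swiping Right", ... (hypothetical subset)

class TinyVideoNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # 3D convolutions see (channels, time, height, width), so the
        # learned features can capture motion across frames, not just
        # the appearance of a single frame.
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global average over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3, frames, height, width)
        x = self.features(clip).flatten(1)
        return self.classifier(x)

model = TinyVideoNet(NUM_CLASSES)
fake_clip = torch.randn(1, 3, 16, 112, 112)  # one 16-frame RGB clip
logits = model(fake_clip)
print(logits.argmax(dim=1))  # predicted gesture class index
```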


How video games help improve real-world AI

#artificialintelligence

Take, for example, Artur Filipowicz, an AI researcher at Princeton University who has been trying to develop software for autonomous vehicles. DeepMind's agents can now beat just about any top score on any Atari video game. The privately funded organization OpenAI has taken video game-based AI development to new levels with a piece of software it calls Universe. The future of video games in AI development is rich with potential, and we are only beginning to explore what is possible.
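To make the "video games as AI training grounds" idea concrete, here is a minimal sketch of the standard agent-environment interaction loop, written against the classic Gym API. The `gym` package and the CartPole environment are assumptions standing in for a real game; this is not Universe's own code.

```python
# A minimal sketch of the game-as-training-loop idea behind systems like
# DeepMind's Atari agents and OpenAI's Universe, using the classic Gym
# API (assumption: the `gym` package is installed). A random policy
# stands in for a real agent.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # a learned policy would choose here
    observation, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode finished with total reward {total_reward}")
```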


MIT and Google researchers have made AI that can link sound, sight, and text to understand the world

#artificialintelligence

AI research has typically treated the ability to recognize images, identify noises, and understand text as three different problems, and built algorithms suited to each individual task. But two new papers from MIT and Google describe first steps toward making AI see, hear, and read in a holistic way, an approach that could upend how we teach our machines about the world. To train this system, the MIT group first showed the neural network video frames that were associated with audio. One algorithm that can align its idea of an object across sight, sound, and text can automatically transfer what it has learned from what it hears to what it sees.
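One way to picture "aligning an idea of an object across sight, sound, and text" is a shared embedding space. The sketch below is a hedged toy version of that idea, not the MIT or Google architecture: two small PyTorch encoders (both invented here, with toy shapes) map video frames and audio spectrograms into one space, and a contrastive loss pulls matching pairs together.

```python
# A toy sketch (not the actual MIT/Google system) of cross-modal
# alignment: separate encoders map each modality into one shared
# embedding space, and a contrastive loss makes paired frame/audio
# examples land close together, so knowledge can transfer across senses.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 128

image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, EMBED_DIM))
audio_encoder = nn.Sequential(nn.Flatten(), nn.Linear(1 * 128 * 64, EMBED_DIM))

def contrastive_loss(img_emb, aud_emb, temperature=0.1):
    # Similarity of every image to every audio clip in the batch;
    # the matching (diagonal) pairs should score highest.
    img_emb = F.normalize(img_emb, dim=1)
    aud_emb = F.normalize(aud_emb, dim=1)
    logits = img_emb @ aud_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return F.cross_entropy(logits, targets)

frames = torch.randn(8, 3, 64, 64)   # batch of video frames
sounds = torch.randn(8, 1, 128, 64)  # matching audio spectrograms
loss = contrastive_loss(image_encoder(frames), audio_encoder(sounds))
loss.backward()  # gradients flow into both encoders
```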


Video Friday: Self-Driving Potato, NASA at Mars, and Autonomous Sumo Robots

IEEE Spectrum Robotics Channel

This biologically inspired approach, they hope, could help robots navigate dynamic environments without requiring advanced, costly sensors and computationally intensive algorithms. More specifically, we propose a new methodology that enables the first demonstration of high-resolution 3D through-wall imaging of completely unknown areas, using only WiFi signals and unmanned aerial vehicles. From this point cloud, the garment is segmented and a custom Wrinkleness Local Descriptor (WiLD) is computed to determine the location of the present wrinkles. If you haven't seen this 1911 film featuring a humanoid robot driving a car, you've been missing out on learning about all of the potential self-driving car catastrophes that could happen to you: Did you know that if you let a robot drive your car, you could end up in space?


Amazon Echo Show Launching With Alexa Support, Touchscreen, Smart Camera Support June 28

International Business Times

Amazon announced its first touchscreen smart speaker, the Amazon Echo Show, in May. The device will cost $229.99. Just like other products in the Echo range, the Echo Show will have artificial intelligence-based voice command support from the company's Alexa voice assistant. Alexa support makes it capable of multiple functions, including letting it act as an intercom and letting users make hands-free calls to others simply by giving a voice command. Smart camera connectivity: the speaker can be connected to other smart cameras and show you a feed from them.


Amazon Fire 7 tablet review: still a lot of tablet for just £50

The Guardian

It looks quite different to the traditional Android experience from Google, lacks Google apps and only has access to the Amazon App Store, not the Google Play Store. Navigating it is easy, with clearly marked panes filled with apps, games, books, video, music, magazines, audio books and so on. The jewel in the crown for Fire OS 5.4 is Alexa, Amazon's voice-enabled smart digital assistant. It's the same Alexa that's found in the company's Fire TV and Echo smart speaker devices, and it has access to the same information.


Is Python or Perl faster than R?

@machinelearnbot

A lot of statistical / machine learning algorithms are now being implemented in Python (see the Python and R articles), and it seems that Python is more appropriate for production code and big data flowing in real time, while R is often used for EDA (exploratory data analysis) in manual mode. My question is: if you make a true apples-to-apples comparison, what kinds of computations does Python perform much faster than R (or the other way around), depending on data size / memory size? Here I have in mind algorithms such as classifying millions of keywords, something requiring trillions of operations that is not easy to do with Hadoop and that requires very efficient algorithms designed for sparse data (sometimes called sparse computing). For instance, the following article (see data science book pp. 118-122) shows a Perl script running 10 times faster than the R equivalent at producing R videos, but it's not because of a language or compiler issue: the Perl version pre-computes all the video frames very fast and loads them into memory, then the video is displayed (using R, ironically), while the R version produces (and displays) one frame at a time and does the whole job in R. What about accelerating tools, such as the CUDA accelerator for R?
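One way to make such a comparison concrete, in the spirit of the Perl/R video example above, is to time the same computation done element by element versus precomputed in bulk. The sketch below (Python, with NumPy assumed) illustrates that the algorithmic strategy often matters far more than the language:

```python
# A small apples-to-apples timing sketch: the same sum-of-squares
# computed one value at a time versus in one bulk (vectorized) call.
# As in the Perl/R video example, the big win comes from precomputing
# in bulk, not from the language itself. (NumPy assumed.)
import timeit
import numpy as np

data = np.random.rand(1_000_000)

def one_at_a_time():
    total = 0.0
    for x in data:  # processes a single value per iteration
        total += x * x
    return total

def precomputed():
    return float(np.dot(data, data))  # does everything in one bulk call

print("loop:      ", timeit.timeit(one_at_a_time, number=3))
print("vectorized:", timeit.timeit(precomputed, number=3))
```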


Great Artists Steal: The Promise Of Creative AI

#artificialintelligence

In the realm of music, the company Jukedeck has advanced its technology to the point that most listeners assume that the short musical pieces created by its AI are, in fact, created the old-fashioned way. Yet defining creativity in objective terms turns out to be quite difficult. As an expedient oversimplification, we can speak of two types: generative creativity and combinatorial creativity. Combinatorial creativity is the novel combination of pre-existing ideas or objects.
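As a toy illustration of that definition, combinatorial creativity can be mimicked by randomly recombining pre-existing elements. The idea lists below are invented for the example:

```python
# A toy illustration of "combinatorial creativity": every output is a
# novel combination, even though every individual part already existed.
import random

qualities = ["melancholy", "glitchy", "baroque", "minimalist"]
forms = ["waltz", "drone piece", "pop hook", "fugue"]
instruments = ["synthesizer", "string quartet", "prepared piano"]

def combine():
    # Recombine pre-existing ideas into a new prompt-like description.
    return (f"a {random.choice(qualities)} {random.choice(forms)} "
            f"for {random.choice(instruments)}")

for _ in range(3):
    print(combine())
```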


VidCon is more than just online videos

USATODAY

USA TODAY's Jefferson Graham talks to people attending VidCon about how the annual convention of online video content makers has evolved.