

Some Startups Use Fake Data to Train AI

WIRED

Berlin startup Spil.ly had a problem last spring. The company was developing an augmented-reality app akin to a full-body version of Snapchat's selfie filters--hold up your phone and see your friends' bodies transformed with special effects like fur or flames. To make it work, Spil.ly needed to train machine-learning algorithms to closely track human bodies in video. But the scrappy startup didn't have the resources to collect the tens or hundreds of thousands of hand-labeled images typically needed to teach algorithms in such projects. "It's really hard being a startup in AI, we couldn't afford to pay for that much data," says CTO Max Schneider.


Comcast's Q1 strong as business and high-speed Internet services offset cord cutting

ZDNet

Comcast continued to shed video customers in the first quarter, but is more than offsetting the slide with high-speed Internet and business services. In the first quarter, Comcast reported net income of $3.12 billion, or 66 cents a share, on revenue of $22.79 billion, up 10.7 percent from a year ago. Excluding items, Comcast reported earnings of 62 cents a share in the first quarter. Wall Street was expecting Comcast to report first quarter earnings of 59 cents a share on revenue of $22.75 billion. There are multiple moving parts in Comcast, but Comcast Business is growing the fastest.


Machine Learning Invades Embedded Applications

#artificialintelligence

Two things have moved deep-neural-network-based (DNN) machine learning (ML) from research to mainstream. The first is improved computing power, especially general-purpose GPU (GPGPU) improvements. The second is wider distribution of ML software, especially open-source software. Quite a few applications are driving adoption of ML, including advanced driver-assistance systems (ADAS) and self-driving cars, big-data analysis, surveillance, and processes ranging from audio noise reduction to natural language processing. Many of these applications rely on arrays of GPGPUs and special ML hardware, particularly for training, which uses large amounts of data to create models; the resulting models then require significantly less processing power to perform a range of recognition and other ML-related tasks.


Watch: Haunting video of 3D-printed morphing matter that folds to assemble itself

ZDNet

It looks like something out of a dream, and it could be the future of manufacturing. Researchers at Carnegie Mellon University have created a process that allows plastic printed with a cheap 3D printer to fold itself into predetermined shapes with the application of heat. The complexity of the origami-like shapes being produced in the Morphing Matter Lab, even in early tests, gives researchers hope that the material may one day be used to produce flat-pack products that can be assembled quickly with a heat gun. Last week I wrote about a robot that was able to assemble a flat-pack chair from Ikea in minutes. Professor Yao's material would eliminate the need for complex assembly altogether.


Nvidia's AI reconstructs partially erased images with jaw-dropping accuracy

#artificialintelligence

Nvidia this week unveiled its newest AI breakthrough in the form of a mind-blowing computer vision technique that can 'inpaint' parts of an image that have been deleted or modified. If you're thinking Photoshop already does this, think again. This is something you have to see to believe. Nvidia's researchers explain the difference between their novel deep-learning method for inpainting images and currently existing tech in a whitepaper published earlier this week: "Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing. The goal of this work is to propose a model for image inpainting that operates robustly on irregular hole patterns, and produces semantically meaningful predictions that incorporate smoothly with the rest of the image without the need for any additional post-processing or blending operation."
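The whitepaper's key mechanism is the partial convolution: the kernel is applied only to valid (unmasked) pixels, the result is renormalized by the fraction of valid pixels under the window, and the hole mask shrinks after each layer. The sketch below is an illustrative single-channel, plain-NumPy formulation, not Nvidia's implementation; the function name and loop-based layout are assumptions for clarity.

```python
import numpy as np

def partial_conv2d(x, mask, weight, bias=0.0):
    """Single-channel partial convolution on a 2D image.

    x      : (H, W) image containing holes
    mask   : (H, W) binary mask, 1 = valid pixel, 0 = hole
    weight : (k, k) convolution kernel
    Returns the convolved image and the updated (shrunken-hole) mask.
    """
    k = weight.shape[0]
    pad = k // 2
    xp = np.pad(x * mask, pad)            # zero out hole pixels before padding
    mp = np.pad(mask, pad)
    H, W = x.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    ones = k * k                          # normalization numerator: full window size
    for i in range(H):
        for j in range(W):
            m_win = mp[i:i + k, j:j + k]
            valid = m_win.sum()
            if valid > 0:
                x_win = xp[i:i + k, j:j + k]
                # renormalize by the fraction of valid pixels under the kernel
                out[i, j] = (weight * x_win).sum() * (ones / valid) + bias
                new_mask[i, j] = 1.0      # any valid input marks this output valid
            # windows with no valid pixels stay zero and remain masked
    return out, new_mask
```

Stacking such layers lets valid-pixel information propagate inward until even large irregular holes are filled, which is why no separate blending pass is needed.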


Appian gives low-code software developers new AI capabilities - SiliconANGLE

#artificialintelligence

"Low-code" software development platform provider Appian Corp. is branching out into the contact center with a new version of its product catering specifically to contact centers' needs. The company also announced it's updating its main offering, enabling users to add new artificial intelligence capabilities to the applications they build. The announcements were made during a keynote today by Appian Chief Executive Officer Matt Calkins (pictured) at the company's annual Appian World event in Miami. Having launched its initial public offering last year, Appian has become one of the leaders in the rapidly emerging field of low-code software development. The company's main product is a software-as-a-service offering tailored for everyday business users.


YouTube says computers are catching problem videos

#artificialintelligence

In December, Google said it was hiring 10,000 people in 2018 to address policy violations across its platforms. The vast majority of videos removed from YouTube toward the end of last year for violating the site's content guidelines had first been detected by machines instead of humans, the Google-owned company said. YouTube said it took down 8.28 million videos during the fourth quarter of 2017, and about 80 per cent of those videos had initially been flagged by artificially intelligent computer systems. The new data highlighted the significant role machines, not just users, government agencies and other organisations, were taking in policing the service as it faced increased scrutiny over the spread of conspiracy videos, fake news and violent content from extremist organisations. Those videos are sometimes promoted by YouTube's recommendation system and unknowingly financed by advertisers, whose ads are placed next to them through an automated system.


Frontier AI: How far are we from artificial "general" intelligence, really?

#artificialintelligence

Some call it "strong" AI, others "real" AI, "true" AI or artificial "general" intelligence (AGI)... whatever the term (and important nuances), there are few questions of greater importance than whether we are collectively in the process of developing generalized AI that can truly think like a human -- possibly even at a superhuman intelligence level, with unpredictable, uncontrollable consequences. This has been a recurring theme of science fiction for many decades, but given the dramatic progress of AI over the last few years, the debate has flared anew with particular intensity, with an increasingly vocal stream of media and conversations warning us that AGI (of the nefarious kind) is coming, and much sooner than we'd think. The latest example: the new documentary Do You Trust This Computer?, which streamed last weekend for free courtesy of Elon Musk, and features a number of respected AI experts from both academia and industry. The documentary paints an alarming picture of artificial intelligence, a "new life form" on planet Earth that is about to "wrap its tentacles" around us. There is also an accelerating flow of stories pointing to ever scarier aspects of AI, with reports of alternate-reality creation (fake celebrity face generators and deepfakes, with full video generation and speech synthesis likely in the near future), the ever-so-spooky Boston Dynamics videos (latest one: robots cooperating to open a door) and reports about Google's AI getting "highly aggressive". However, as an investor who spends a lot of time in the "trenches" of AI, I have been experiencing a fair amount of cognitive dissonance on this topic.


YouTube Touts Machine Learning In Battle Over Inappropriate Content

#artificialintelligence

YouTube parent company Google on Monday released what it said would be the first quarterly report outlining efforts to enforce its community guidelines. The report, which covered the last quarter of 2017, said that it removed eight million videos from YouTube during the quarter, adding that the videos it removed "were mostly spam or people attempting to upload adult content." Of note, however, is that YouTube's machine-learning algorithm spotted the overwhelming majority of the content. During the company's quarterly earnings call Monday, Google CEO Sundar Pichai said that "over six million videos removed in Q4 were first flagged by our machine systems, and over 75% of those videos were removed before receiving a single view." The company introduced its machine flagging in June 2017.


Is AI-powered video search becoming inevitable to security? - asmag.com

#artificialintelligence

Given the increasing affordability of equipment and growing awareness of security requirements, more and more cameras are being installed across the globe every day. While this is a good thing, the sheer volume of footage that comes in makes it difficult for operators to find specific objects or people when needed. This is one area where artificial intelligence (AI) is set to play a key role, and several security companies are already working on it, with a shared goal: make searching through videos as simple as using Google.