Rosetta: Understanding text in images and videos with machine learning - Facebook Code

#artificialintelligence

Understanding the text that appears on images is important for improving experiences, such as a more relevant photo search or the incorporation of text into screen readers that make Facebook more accessible for the visually impaired. Understanding text in images along with the context in which it appears also helps our systems proactively identify inappropriate or harmful content and keep our community safe. A significant number of the photos shared on Facebook and Instagram contain text in various forms. It might be overlaid on an image in a meme, or inlaid in a photo of a storefront, street sign, or restaurant menu. Taking into account the sheer volume of photos shared each day on Facebook and Instagram, the number of languages supported on our global platform, and the variations of the text, the problem of understanding text in images is quite different from those solved by traditional optical character recognition (OCR) systems, which recognize the characters but don't understand the context of the associated image.
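Systems like the one described split the problem into two stages: first detect where text appears in the image, then recognize what the text says. The toy sketch below illustrates that detect-then-recognize split on a text grid standing in for an image; real systems use convolutional detectors and sequence models, and all function names here are illustrative, not from any actual OCR library.

```python
# Toy sketch of a two-stage "detect, then recognize" text pipeline.
# Rows of characters stand in for image pixels; a run of non-blank
# characters stands in for a detected text region.

def detect_text_regions(image):
    """Return (row, start_col, end_col) spans of non-blank runs per row."""
    regions = []
    for r, row in enumerate(image):
        c = 0
        while c < len(row):
            if row[c] != " ":
                start = c
                while c < len(row) and row[c] != " ":
                    c += 1
                regions.append((r, start, c))
            else:
                c += 1
    return regions

def recognize(image, region):
    """'Recognize' a region by reading its characters back out."""
    r, start, end = region
    return "".join(image[r][start:end])

image = ["  SALE  ", "OPEN 9-5"]
regions = detect_text_regions(image)
words = [recognize(image, reg) for reg in regions]
# words == ["SALE", "OPEN", "9-5"]
```

The separation matters in practice: detection can be retrained for new layouts (memes, storefronts, menus) without touching recognition, and recognition can be swapped per language without touching detection.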


Digging Deep into the Deepfake Appeal - OxGadgets

#artificialintelligence

The term Deepfake itself is a combination of 'Deep Learning' and 'Fake'. Belonging to the larger body of Machine Learning, Deep Learning depends on artificial neural networks to process raw information. Deepfake is an AI-dependent technology used to create or modify video that depicts false situations. The term first came into being back in 2017, when a Reddit user (called deepfakes) began applying deep learning technology to swap celebrity faces onto people performing in pornographic videos.


Google and Our Collective AI Future

#artificialintelligence

The pace of change in the artificial intelligence (AI) and machine learning arena is already breathtaking, and it promises to continue to upend conventional wisdom and surpass some of our wildest expectations as it proceeds on what appears at times to be an unalterable and pre-ordained course. Along the way, much of what we now consider to be "normal" or "acceptable" will change. Some technology companies are already envisioning what our collective AI future will look like and just how far the boundaries of normality and acceptability can be stretched. In 2016, for example, Google produced a video that provided a stunningly ambitious and unsettling look at how some people within the company envision using the information it collects in the future. Shared internally at Google at the time, the video imagines a future of total data collection, in which Google subtly nudges users into alignment with the company's own objectives, custom-prints personalized devices to collect more data, and even guides the behavior of entire populations to help solve global challenges such as poverty and disease.


Autocompletion with deep learning

#artificialintelligence

Update (July 18): We've sent beta invites to all existing customers of TabNine who signed up for the beta. Follow us on Twitter for more updates. TL;DR: TabNine is an autocompleter that helps you write code faster. We're adding a deep learning model which significantly improves suggestion quality. You can see videos below and you can sign up for it here.


TRUTH

#artificialintelligence

Liked videos include "Artificial Intelligence: it will kill us" by Jay Tuck at TEDxHamburgSalon (17:33); Tuck is a US defense expert who was news director of the daily news program ARD-Tagesthemen and a combat correspondent (more at www.tedxhamburg.de). Also listed: "Depression, the secret we share" by Andrew Solomon (29:22): "The opposite of depression is not happiness, but vitality, and it was vitality that seemed to seep away from me in that moment."


Is FaceApp an evil plot by 'the Russians' to steal your data? Not quite Arwa Mahdawi

The Guardian

Over the last few days the #faceappchallenge has taken over social media. This "challenge" involves downloading a selfie-editing tool called FaceApp and using one of its filters to digitally age your face. You then post the photo of your wizened old self on the internet and everyone laughs uproariously. You get a small surge of dopamine from gathering a few online likes before existential ennui sets in once again. On Monday, as the #faceappchallenge went viral, Joshua Nozzi, a software developer, warned people to "BE CAREFUL WITH FACEAPP…". Some media outlets picked up this claim, and privacy concerns about the app began to mount. Concern escalated further when people started to point out that FaceApp is Russian. "The app that you're willingly giving all your facial data to says the company's location is in Saint-Petersburg, Russia," tweeted the New York Times's Charlie Warzel. And we all know what those Russians are like, don't we? They want to harvest your data for nefarious ...


This AI magically removes moving objects from videos

#artificialintelligence

We've previously seen developers harness the power of artificial intelligence (AI) to turn pitch-black pics into bright, colorful photos, flat images into complex 3D scenes, and selfies into moving avatars. Now, there's an AI-powered software tool that effortlessly removes moving objects from videos. All you need to do to wipe an object from footage is draw a box around it, and the software takes care of the rest for you. As you will notice, while the algorithm gets rid of the person crossing the street in a rather convincing way, it leaves some traces of foul play. The software was built by a developer going by the pseudonym zllrunning, who has since uploaded it to GitHub.
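The "draw a box, the software does the rest" workflow boils down to masking the boxed region and filling it in from surrounding content. The sketch below shows that mask-and-fill idea on a single toy frame of numeric pixels; the actual tool uses deep video inpainting across frames, and `remove_box` is a hypothetical name for illustration, not part of any real library.

```python
# Minimal sketch of mask-and-fill object removal on one frame.
# A 2D grid of ints stands in for pixels; the "object" is the block
# of 9s, and the masked box is naively filled from row neighbors.

def remove_box(frame, top, left, bottom, right):
    """Return a copy of frame with the boxed region filled from
    the nearest pixel outside the box on the same row."""
    out = [row[:] for row in frame]
    for r in range(top, bottom):
        for c in range(left, right):
            src = left - 1 if left > 0 else right
            out[r][c] = frame[r][src]
    return out

frame = [
    [1, 1, 9, 9, 1],
    [1, 1, 9, 9, 1],
    [1, 1, 1, 1, 1],
]
clean = remove_box(frame, 0, 2, 2, 4)
# the 9s (the "object") are replaced by the background value 1
```

The naive row-neighbor fill is also why such tools leave the "traces of foul play" the article mentions: copied background rarely matches the occluded scene exactly, which is what the deep inpainting models try to improve on.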



How Artificial Intelligence Can Detect – And Create – Fake News - Liwaiwai

#artificialintelligence

When Mark Zuckerberg told Congress Facebook would use artificial intelligence to detect fake news posted on the social media site, he wasn't particularly specific about what that meant. Given my own work using image and video analytics, I suggest the company should be careful. Despite some basic potential flaws, AI can be a useful tool for spotting online propaganda – but it can also be startlingly good at creating misleading material. Researchers already know that online fake news spreads much more quickly and more widely than real news. My research has similarly found that online posts with fake medical information get more views, comments and likes than those with accurate medical content.


Neuralink Livestream

#artificialintelligence
