Information Technology


Ink Lines and Machine Learning

#artificialintelligence

Pav Grochola is an Effects Supervisor at Sony Pictures Imageworks (SPI) and was co-effects supervisor on the Oscar-winning Spider-Man: Into the Spider-Verse (along with Ian Farnsworth). He was tasked with working out how to produce natural-looking line work for the film. A critical visual component in achieving the comic-book illustrative style in CGI was the creation of line work, or "ink lines". In testing, SPI discovered that any approach that creates ink lines from procedural "rules" (for example, toon shaders) was ineffective at achieving the natural look they wanted. The fundamental problem is that artists simply do not draw according to limited 'rule sets' or guidelines.
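For illustration only, here is a minimal sketch of the kind of procedural "rule" a simple toon shader typically uses to place line work: a silhouette test that marks a pixel as an ink line wherever the surface normal is nearly perpendicular to the view direction. This is not SPI's pipeline (the article does not describe it); the threshold value and the NumPy setup below are assumptions made for the example.

```python
import numpy as np

def silhouette_mask(normals, view_dir, threshold=0.2):
    """Procedural ink-line 'rule' in the style of a basic toon shader.

    normals:   (H, W, 3) array of per-pixel unit surface normals
    view_dir:  (3,) unit vector pointing from the surface toward the camera
    threshold: facing-ratio cutoff below which a pixel counts as an edge

    Returns a boolean (H, W) mask of pixels where a line would be drawn.
    """
    # Facing ratio |n . v| is near 0 on silhouettes and near 1 on surfaces
    # that face the camera directly.
    facing = np.abs(np.einsum("hwc,c->hw", normals, view_dir))
    return facing < threshold

# Example: per-pixel normals of a unit sphere viewed straight down the +z axis.
h, w = 256, 256
ys, xs = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
inside = xs**2 + ys**2 < 1.0
zs = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0.0, None))
normals = np.dstack([xs, ys, zs]) * inside[..., None]

mask = silhouette_mask(normals, view_dir=np.array([0.0, 0.0, 1.0]))
print("edge pixels on the sphere:", int(mask[inside].sum()))
```

The article's point is that rules like this threshold test, however carefully tuned, do not reproduce the judgment an artist exercises when deciding where a line belongs, which is why SPI looked to machine learning instead.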


When AI Becomes a Part of Our Daily Lives

#artificialintelligence

As we live longer and technology continues its rapid arc of development, we can imagine a future where machines will augment our human abilities and help us make better life choices, from health to wealth. Instead of conducting a question-and-answer exchange with a device on the countertop, we will be able to converse naturally with a virtual assistant fully embedded in our physical environment. Through our dialogue and digital breadcrumbs, it will understand our life goals and aspirations, our obligations and limitations. It will seamlessly and automatically help us budget and save for different life events, so we can spend more time enjoying life's moments. While we can imagine this future, the technology itself is not without challenges -- at least for now.


Artists Machine Intelligence Grants by Google Arts & Culture Lab, Google AI Experiments with Google

#artificialintelligence

As part of Google's ongoing commitment to support ambitious computer science research and the arts, Google Arts & Culture, in collaboration with Google AI, invites proposals from contemporary artists working with machine learning in their art practices. Artists Machine Intelligence (AMI) grants will support six artists with technical mentorship, core Google research, and funding. Artists will have the opportunity to work with Google creative technologists to develop and produce artworks over the course of a five-month period. Mentorship may cover technical processes from data collection and analysis to pipeline design and model deployment, and includes access to core Google UX and technical research in generative and decentralized machine learning, computer vision, and natural language processing. Apart from any Google background IP (if relevant), artists will own the IP of their artwork.


How AI Could Track Allergens on Every Block - NVIDIA Blog

#artificialintelligence

As seasonal allergy sufferers will attest, the concentration of allergens in the air varies every few paces. A nearby blossoming tree or sudden gust of pollen-tinged wind can easily set off sneezing and watery eyes. But concentrations of airborne allergens are reported city by city, at best. A network of deep learning-powered devices could change that, enabling scientists to track pollen density block by block. Researchers at the University of California, Los Angeles, have developed a portable AI device that identifies levels of five common allergens from pollen and mold spores with 94 percent accuracy, according to the team's recent paper.


IBM's AI performs state-of-the-art broadcast news captioning

#artificialintelligence

Two years ago, researchers at IBM claimed state-of-the-art transcription performance with a machine learning system trained on two public speech recognition data sets, which was more impressive than it might seem. The AI system had to contend not only with distortions in the training corpora's audio snippets, but also with a range of speaking styles, overlapping speech, interruptions, restarts, and exchanges among participants. In pursuit of an even more capable system, researchers at the Armonk, New York-based company recently devised an architecture detailed in a paper ("English Broadcast News Speech Recognition by Humans and Machines") that will be presented at the International Conference on Acoustics, Speech, and Signal Processing in Brighton this week. They say that in preliminary experiments it achieved industry-leading results on broadcast news captioning tasks. The broadcast news task came with its own set of challenges, such as audio signals with heavy background noise and presenters speaking on a wide variety of news topics.


What Roles Will Human Workers Play In The AI Economy Of The Future?

#artificialintelligence

What roles will human workers play in the AI economy of the future? The question is no longer whether AI can coexist with humans; it is now a question of how (and what) they can achieve together. As we assess the economy of the future, it's important to understand the value people bring to AI – and that value is data. Creating royalties and residuals based on the value and information that people add to AI – paying them for their data, information, knowledge, and expertise – is the fundamental shift needed to account for how people will work alongside AI in the economy of the future. Otherwise, the economy will be split between those who are working with AI and those who are not, and it's imperative that our economic models avoid that.


The future of AI is collaborative

#artificialintelligence

Jordan French is a multimedia journalist on the editorial staff at TheStreet.com. He is also the Founder and Executive Editor at Grit Daily News. Formerly an engineer and attorney, he represented the "People of the United States" in energy market manipulation cases as an enforcement attorney at the Federal Energy Regulatory Commission. As an engineer, he worked on the Mars Gravity Biosatellite Program and later co-founded BeeHex, Inc., the personalized nutrition and robotics company that popularized 3D-printed pizza. The author of the forthcoming book The Gritty Entrepreneur, he is a frequent public speaker, technology evangelist, and media moderator.


Google deploys AI software to clean 'trashy videos' from YouTube - Express Computer

#artificialintelligence

YouTube is littered with extreme and misleading videos, and the company has been criticised for not doing enough to limit the dreck. But one place the Google unit has managed to clean up is YouTube's homepage. Behind the scenes, Google has deployed artificial intelligence software that analyses reams of video footage without human help, deciphers troubling clips and blocks them from the homepage and home screen of the app. Its internal name is the "trashy video classifier," according to three people familiar with the project. The system, which has not been reported before, plays a key role in attracting and keeping viewers on YouTube's homepage, building a foundation for a flurry of new advertising coming to the video service.



How AI Is Powering Customer Service, Engagement, And The Customer Experience

#artificialintelligence

AI is making inroads into enhancing customer service and the overall customer experience, sometimes in a directly customer-facing role, sometimes behind the scenes in support of human customer service agents, and often in a hybridization of these roles. Paddy Srinivasan is professionally involved in bringing AI to all things customer service- and customer experience-related, and I enjoyed speaking with him recently. Micah Solomon: How much of a mark is AI making today in customer service and support, and how is it helping to optimize the customer experience overall? Paddy Srinivasan, SVP and GM, Customer Engagement and Support at LogMeIn: AI is changing the face of customer service as we know it. Whether it's a customer-facing chatbot helping customers answer simple questions or an agent-facing [internal] bot offering assistance with more complicated questions, almost every interaction can now benefit from having some level of AI around it – even if the customer doesn't directly see it.