
Collaborating Authors: hanif


YouTube's new tool can automatically dub videos into other languages

Engadget

YouTube has plans to go beyond translated subtitles by allowing creators to dub videos into other spoken languages. At VidCon, the company announced yesterday that it's testing an AI-powered dubbing service called Aloud, developed at Google's Area 120 incubator, The Verge reported. The tool would eliminate the time and often great expense required to create a dub the usual way (with human translators and narrators), allowing creators to reach a wider global audience. Aloud promises a "quality dub in just a few minutes" using AI. The tool first creates a text-based translation that creators can check and edit, then generates a dub.


Best practices for bolstering machine learning security

MIT Technology Review

ML security has the same goal as all cybersecurity measures: reducing the risk of sensitive data being exposed. If a bad actor interferes with your ML model or the data it uses, that model may output incorrect results that, at best, undermine the benefits of ML and, at worst, negatively impact your business or customers. "Executives should care about this because there's nothing worse than doing the wrong thing very quickly and confidently," says Zach Hanif, vice president of machine learning platforms at Capital One. And while Hanif works in a regulated industry--financial services--requiring additional levels of governance and security, he says that every business adopting ML should take the opportunity to examine its security practices. Devon Rollins, vice president of cyber engineering and machine learning at Capital One, adds, "Securing business-critical applications requires a level of differentiated protection. It's safe to assume many deployments of ML tools at scale are critical given the role they play for the business and how they directly impact outcomes for users."


Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework

Shafique, Muhammad, Marchisio, Alberto, Putra, Rachmad Vidya Wicaksana, Hanif, Muhammad Abdullah

arXiv.org Artificial Intelligence

Security and privacy concerns, along with the amount of data that must be processed on a regular basis, have pushed processing to the edge of computing systems. Deploying advanced neural networks (NNs), such as deep neural networks (DNNs) and spiking neural networks (SNNs), that offer state-of-the-art results on resource-constrained edge devices is challenging due to stringent memory and power/energy constraints. Moreover, these systems must maintain correct functionality under diverse security and reliability threats. This paper first discusses existing approaches to address energy efficiency, reliability, and security issues at different system layers, i.e., hardware (HW) and software (SW). Afterward, we discuss how to further improve the performance (latency) and energy efficiency of Edge AI systems through HW/SW-level optimizations such as pruning, quantization, and approximation. To address reliability threats (like permanent and transient faults), we highlight cost-effective mitigation techniques such as fault-aware training and mapping. Moreover, we briefly discuss effective detection and protection techniques to address security threats (like model and data corruption). Toward the end, we discuss how these techniques can be combined in an integrated cross-layer framework for realizing robust and energy-efficient Edge AI systems.
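To make the quantization optimization mentioned in the abstract concrete, here is a minimal sketch of symmetric post-training int8 quantization of a weight tensor, the kind of memory-saving transform used on edge devices. The function names and the per-tensor scaling scheme are illustrative choices, not from the paper.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 using a single per-tensor scale
    (symmetric scheme: the range [-max|w|, +max|w|] maps to [-127, 127])."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32,
# and the per-element rounding error is at most half a quantization step.
print(q.nbytes, w.nbytes)                      # 4096 16384
print(float(np.abs(w - w_hat).max()), scale / 2)
```

The 4x memory reduction (and cheaper int8 arithmetic on hardware that supports it) is what makes this attractive under the memory and energy constraints the paper targets; finer-grained per-channel scales or quantization-aware training reduce the accuracy loss further.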


Technology is the engine of change, say Tech Leaders Summit CTOs

#artificialintelligence

"Technology is the engine of change," opened the CTO panel's moderator, Sandeep Roychowdhury, founder and director at SMEinnovation, during Information Age's Tech Leaders Summit. To navigate this "dark forest," as Roychowdhury put it, industry leaders are diversifying. Sharon Moore, CTO for Public Sector at IBM, explained that while IBM is a leader in cloud and infrastructure, "there was public opinion that we were behind with regards to public cloud, and I believe the Red Hat acquisition has assisted with removing that perception." IBM has used collaboration to stake a claim in this environment, and in Moore's view it's clear that "multi is the way forward": businesses must manage across multiple cloud providers.


Capital One Machine Learning Lead on Lessons at Scale

#artificialintelligence

Machine learning has moved from prototype to production across a wide range of business units at financial services giant Capital One, due in part to a centralized approach to evaluating and rolling out new projects. This is no easy task given the scale and scope of the enterprise, but according to Zachary Hanif, director of Capital One's machine learning center of excellence, the trick is to define use cases early that touch as broad a base within the larger organization as possible, then build outwards. This is encapsulated in the philosophy Hanif spearheads: locating machine learning talent in one central group that can branch out and work with experts across the many business divisions. Hanif shared these and other lessons for building a machine learning hub inside a large enterprise, where dedicated machine learning experts work with the various domain and departmental teams to roll new services into production, at the GPU Technology Conference (GTC18). While GPUs were not the topic of the talk, Hanif did say the team has quite a number of them alongside standard CPU-based clusters, and, as at any enterprise or academic center with many mission-critical R&D projects underway, resource contention is a constant struggle, especially for the scarcer and more expensive GPUs.