Collaborating Authors: Google


Why Facial Recognition Providers Must Take Consumer Privacy Seriously

#artificialintelligence

Consumer privacy has made big headlines in recent years, with the Facebook-Cambridge Analytica scandal, Europe's GDPR and high-profile breaches at companies like Equifax. It's clear that the data of millions of consumers is at risk every day, and that any company handling that data must protect both its security and its privacy to the highest degree -- especially companies that build and sell AI-enabled facial recognition solutions. As CEO of an AI-enabled software company specializing in facial recognition solutions, I've made data security and privacy among my top priorities. Our pro-privacy stance goes beyond a privacy-by-design engineering methodology: we regularly provide our customers with education and best practices, and we have even reached out to US lawmakers, lobbying for sensible pro-privacy regulations governing the technology we sell.


The 5 Components Towards Building Production-Ready Machine Learning System

#artificialintelligence

The biggest issue facing machine learning is how to put systems into production. A significant paper from Google, The ML Test Score -- A Rubric for ML Production Readiness and Technical Debt Reduction, conceptualizes this problem as an exhaustive framework/checklist from practitioners at Google. It is a follow-up to previous work from Google, such as (1) Hidden Technical Debt in ML Systems, (2) ML: The High-Interest Credit Card of Technical Debt, and (3) Rules of ML: Best Practices for ML Engineering. As Figure 1 of the paper shows, testing an ML system is a more complex challenge than testing a manually coded system, since ML system behavior depends strongly on data and models that cannot be sharply specified a priori. One way to see this is to consider ML training as analogous to compilation, where the source is both code and training data -- so the training data needs to be tested just as the code does.
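To make that checklist concrete, here is a minimal, hypothetical sketch -- not taken from the Google paper -- of two of the kinds of checks it advocates: a test on the training data and a test on the trained model. The feature ranges, accuracy threshold, and synthetic dataset are assumptions for illustration only.

```python
# Sketch of two ML-readiness checks: a data test and a model-quality test.
# All names, ranges, and thresholds below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def test_feature_ranges(X: np.ndarray) -> None:
    # Data test: features should be finite and within an expected range.
    assert np.isfinite(X).all(), "training data contains NaN/inf"
    assert X.min() >= -10 and X.max() <= 10, "feature values outside expected range"

def test_model_beats_baseline(X: np.ndarray, y: np.ndarray) -> None:
    # Model test: a freshly trained model must clearly beat a majority-class baseline.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    baseline = max(np.mean(y), 1 - np.mean(y))
    assert accuracy_score(y, model.predict(X)) > baseline + 0.05

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                          # synthetic features
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
    test_feature_ranges(X)
    test_model_beats_baseline(X, y)
    print("all ML readiness checks passed")
```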


Introduction to Federated Learning

#artificialintelligence

There are over 5 billion mobile device users around the world. These users generate massive amounts of data -- via cameras, microphones, and other sensors like accelerometers -- which can, in turn, be used to build intelligent applications. Traditionally, such data is collected in data centers to train machine/deep learning models. However, due to data privacy concerns and bandwidth limitations, this centralized approach isn't always appropriate: users are much less willing to share their data, so the data remains available only on their devices. This is where federated learning comes into play. In the paper Communication-Efficient Learning of Deep Networks from Decentralized Data [1], Google researchers provide the following high-level definition of federated learning: a learning technique that allows users to collectively reap the benefits of shared models trained from [this] rich data, without the need to centrally store it.
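As a rough illustration of that definition (not the paper's actual algorithm or code), the sketch below simulates federated averaging on a toy linear-regression task: each simulated device updates a copy of the model on its own local data, and only the weights -- never the raw data -- are sent back and averaged. The model, client data, and hyperparameters are assumptions for demonstration.

```python
# Toy federated-averaging sketch: clients train locally, server averages weights.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # Each client runs a few gradient steps on its own data only.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_w, client_data):
    # Collect each client's updated weights and average them,
    # weighting by how many examples each client holds.
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(3):                            # three simulated devices
        X = rng.normal(size=(50, 2))
        clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
    w = np.zeros(2)
    for _ in range(20):                           # communication rounds
        w = federated_average(w, clients)
    print("learned weights:", w.round(2))         # approaches [2.0, -1.0]
```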


How Having Bigger AI Models Can Have A Detrimental Impact On Environment

#artificialintelligence

The COVID-19 crisis has accelerated the adoption of artificial intelligence -- from tackling the global pandemic to serving as a vital tool for managing various business processes. Despite its benefits, AI has long been scrutinised over ethical concerns like bias and privacy. However, the technology also has a significant sustainability problem: it is known to consume a massive amount of energy, creating a negative impact on the environment. As AI advances in predicting weather, understanding human speech, enhancing banking payments and revolutionising healthcare, the models involved not only need to be trained on large datasets but also require massive computing power to improve their accuracy. Such heavy computing and processing consumes a tremendous amount of energy and emits carbon dioxide, which has become an environmental concern. According to one report, training a single large AI model can emit approximately 626,000 pounds (284 tonnes) of carbon dioxide -- roughly five times the lifetime emissions of the average US car.


Voice + AI Is Coming To The Workplace Loud And Clear

#artificialintelligence

Virtual assistants turn 16 this year, and you don't have to look too hard -- or speak too loudly -- to find them. In fact, there will be around 8 billion voice-based devices by 2023 -- more than the world's population today. From Amazon's Echo and Google's Assistant to Apple's Siri, Samsung's Bixby and Microsoft's Cortana, billions of people around the world use their voices every day to schedule appointments, get directions, play music or get answers quickly -- all things that once required us to tediously type or write. Even Twitter recently announced that users can now audio-tweet their inner musings. And yet, despite the widespread adoption of voice-based devices in our personal lives, voice applications are nowhere near as pervasive in our professional lives as they are in our homes.


R&D Roundup: Tech giants unveil breakthroughs at computer vision summit – TechCrunch

#artificialintelligence

The computer vision summit CVPR has just (virtually) taken place, and like other CV-focused conferences, it produced quite a few interesting papers -- more than I could possibly write up individually, in fact, so I've collected the most promising ones from major companies here. Facebook, Google, Amazon and Microsoft all shared papers at the conference -- as did others, I'm sure -- but I'm sticking to the big hitters for this column. Redmond has the most interesting papers this year, in my opinion, because they cover several non-obvious real-life needs. One is digitizing that shoebox we, or perhaps our parents, filled with old 3x5s and other film photos.


Three Ways Artificial Intelligence Is Changing Medicine

#artificialintelligence

We may not be at the point where you overhear your surgeon saying, "Hey, Google, pass the scalpel," but artificial intelligence (AI) is gradually making its way into the healthcare industry and, by extension, dermatology and plastic surgery practices. Even in its limited use, AI is already helping providers offer their patients better care, whether pre-op, in the OR or during recovery. Your experience with a medical practice starts as soon as you look for information online. You might have questions for the practitioner or want to book an appointment. In the past, you would have emailed or called the practice, but now you may find yourself speaking to an AI assistant on the practice's website.


YouTube to let creators use Google AI to automatically reply to comments

The Independent - Tech

Google is rolling out its "SmartReply" technology to YouTube, meaning that some replies to comments you see on the site might not actually have been written by a human. The technology analyses messages and then uses artificial intelligence to guess what a person might want to say in response to them. Users can then select one of the suggested responses and post it, without ever having to write anything out themselves. SmartReply has already appeared within Gmail and Android's Messages app, and it is open to developers who can integrate it into their own apps. But it is now coming to YouTube, the most public place where messages written by the SmartReply software will be seen.
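Google has not published the YouTube implementation, but the general idea behind suggested replies can be illustrated with a toy retrieval-style sketch: rank a fixed pool of candidate responses against the incoming comment and let the user pick one. The candidate pool and the TF-IDF scoring below are purely illustrative assumptions, not Google's SmartReply model.

```python
# Toy suggested-reply ranker: score canned replies against an incoming comment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pool of creator replies to choose from.
CANDIDATE_REPLIES = [
    "Thanks so much for watching!",
    "Glad you enjoyed the video.",
    "Great question -- I'll cover that in a future video.",
    "Sorry to hear that, I'll look into it.",
]

def suggest_replies(comment: str, top_k: int = 3) -> list:
    # Embed the comment and candidates with TF-IDF and rank by cosine similarity.
    vectorizer = TfidfVectorizer().fit(CANDIDATE_REPLIES + [comment])
    candidate_vecs = vectorizer.transform(CANDIDATE_REPLIES)
    comment_vec = vectorizer.transform([comment])
    scores = cosine_similarity(comment_vec, candidate_vecs)[0]
    ranked = sorted(zip(scores, CANDIDATE_REPLIES), reverse=True)
    return [reply for _, reply in ranked[:top_k]]

if __name__ == "__main__":
    print(suggest_replies("Loved the video, when is the next one coming?"))
```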


Computer vision(CV): Leading public companies named

#artificialintelligence

CV is a nascent market, but it contains a plethora of both big technology companies and disruptors. Players with large sets of visual data are leading the pack, with Chinese and US tech giants dominating each segment of the value chain. Google has been at the forefront of CV applications since 2012. Over the years the company has hired numerous ML experts, and in 2014 it acquired the deep learning start-up DeepMind. Google's biggest asset is its wealth of customer data, provided by its search business and YouTube.


New AI technique speeds up language models on edge devices

#artificialintelligence

Researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT-IBM Watson AI Lab recently proposed Hardware-Aware Transformers (HAT), a technique for finding variants of Google's Transformer architecture that run efficiently on specific target hardware. They claim that HAT can achieve a 3x inference speedup on devices like the Raspberry Pi 4 while reducing model size 3.7x compared with a baseline Transformer. Google's Transformer is widely used in natural language processing (and even some computer vision) tasks because of its cutting-edge performance. Nevertheless, Transformers remain challenging to deploy on edge devices because of their computation cost: on a Raspberry Pi, translating a sentence of only 30 words requires roughly 13 billion floating-point operations and takes 20 seconds. This obviously limits the architecture's usefulness for developers and companies integrating language AI into mobile apps and services.
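The HAT authors' own search code isn't reproduced here, but the underlying "hardware-aware" idea -- benchmark candidate architectures on the target device and pick one that fits a latency budget -- can be sketched roughly as follows in PyTorch. The candidate configurations, latency budget, and selection rule are illustrative assumptions, not the paper's method.

```python
# Rough sketch of hardware-aware model selection: time candidate Transformer
# encoders on the current device and keep the largest one within a latency budget.
import time
import torch
import torch.nn as nn

# Candidate encoder configurations to benchmark (illustrative assumptions).
CANDIDATES = [
    {"d_model": 128, "nhead": 4, "layers": 2},
    {"d_model": 256, "nhead": 4, "layers": 4},
    {"d_model": 512, "nhead": 8, "layers": 6},
]
LATENCY_BUDGET_S = 0.05   # assumed per-sentence latency budget on this device
SEQ_LEN, BATCH = 30, 1    # e.g. a single 30-token sentence

def measure_latency(cfg, repeats=10):
    # Average forward-pass time of a Transformer encoder with this configuration.
    layer = nn.TransformerEncoderLayer(
        d_model=cfg["d_model"], nhead=cfg["nhead"], batch_first=True)
    model = nn.TransformerEncoder(layer, num_layers=cfg["layers"]).eval()
    x = torch.randn(BATCH, SEQ_LEN, cfg["d_model"])
    with torch.no_grad():
        model(x)                                  # warm-up pass
        start = time.perf_counter()
        for _ in range(repeats):
            model(x)
    return (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    timed = [(measure_latency(cfg), cfg) for cfg in CANDIDATES]
    feasible = [tc for tc in timed if tc[0] <= LATENCY_BUDGET_S]
    # Prefer the largest model that meets the budget; otherwise fall back to the fastest.
    if feasible:
        latency, choice = max(feasible, key=lambda tc: tc[1]["d_model"])
    else:
        latency, choice = min(timed, key=lambda tc: tc[0])
    print(f"chosen config: {choice}  ({latency * 1000:.1f} ms per sentence)")
```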