Uncle Sam Wants Your Deep Neural Networks

#artificialintelligence

Earlier this year, Kaggle ran a $1 million contest to build algorithms capable of identifying signs of lung cancer in CT scans, helping to fuel a larger effort to apply neural networks to health care.
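The excerpt only names the task, but a minimal sketch helps show what this kind of entry looks like in code. The following is an illustrative, generic 3D convolutional network for classifying resampled CT volumes as malignant or not; it is not the contest-winning approach, and every shape, layer size, and name in it is an assumption for demonstration.

```python
# Minimal sketch (not any contestant's model): a small 3D CNN that takes a
# resampled CT volume and predicts a cancer probability.
import numpy as np
import tensorflow as tf

def build_ct_classifier(volume_shape=(64, 64, 64, 1)):
    """Tiny 3D CNN: CT volume in, probability of malignancy out."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv3D(16, 3, activation="relu", input_shape=volume_shape),
        tf.keras.layers.MaxPooling3D(2),
        tf.keras.layers.Conv3D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling3D(2),
        tf.keras.layers.GlobalAveragePooling3D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

model = build_ct_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in data: in the actual contest, volumes come from patients' DICOM CT series.
x = np.random.rand(4, 64, 64, 64, 1).astype("float32")
y = np.array([0, 1, 0, 1], dtype="float32")
model.fit(x, y, epochs=1, batch_size=2)
```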


Minority Report at 15 years: what did it get right?

#artificialintelligence

When "Minority Report" hit theaters on June 21, 2002, it arrived to an America -- and a world -- that feels equal parts familiar and alien. There are little minidiscs everywhere, from Anderton's home-based video player to the video records the precrime operatives use in their office. Perhaps the best vision of our future (or present) that "Minority Report" offers is its all-seeing world of eye-scanning subways, ads, store windows and cars. Amazon's test convenience store lets you walk in and out, automatically using facial recognition and automated cameras to charge your account.


31 Must Know Stats About Mobile Voice Usage Trends

#artificialintelligence

We included in the survey a block of 10 questions focused on understanding how perceived social pressure affects people's willingness to use voice commands with their smartphones. It is also interesting to see the impact of the respondent's age on their propensity to use voice commands: in most venues there doesn't appear to be much difference, but in more public settings, such as at a restaurant with friends, at the gym, in a public restroom, or in a theater, there is a definite tendency for those under 24 to use voice commands quite a bit more than the other age groups (51.6% vs. 38.6%). The willingness of those with an income over $100K to use voice commands with their devices in public places, as compared to other income categories, is startling. Single males also skew high for using voice commands to play music, at 54% vs. 38.6%. Those with high incomes are more likely to get annoyed by people using voice commands with their phones in public (50.8% vs. 41.8% for all responses), but in stark contrast they're also far more likely to do it themselves (42.5% vs. 26.9%). Note that 65.9% of women use spoken commands to text, whereas 54.6% of men do so. Most people agree or strongly agree that voice commands make their smartphone easier to use, with men coming in at 63.8% and women at 56%.


Interact or Die Trying – ThoughtWorks Featured Insights – Medium

#artificialintelligence

Fast forward another 20 years to the advent of personal computers and then to the internet -- technologies that utterly transformed our human ability to perceive, act on and share information and content of all kinds. We believe 2017 will see both voice- and text-based interaction start to fulfill the promise of conversational interfaces -- approaching the naturalness we expect from human interactions. As the timeline illustrates, the answer is clearly not one thing -- it's an accelerating diversification of interaction technologies, each contributing to the ongoing convergence of digital and physical. The opportunity this represents is ultimately to create profoundly natural human interaction with technology as a whole -- even as everything in our world, including ourselves, is increasingly augmented with this technology.


IBM Watson and LivePerson Partner to Transform Customer Care

#artificialintelligence

NEW YORK CITY - 15 Jun 2017: LivePerson, Inc. (Nasdaq: LPSN), a leading provider of cloud-based mobile and online business messaging solutions, and IBM (NYSE: IBM) have announced LiveEngage with Watson, the first global, enterprise-scale, out-of-the-box integration of Watson-powered bots with human agents. The new offering combines IBM's Watson Virtual Agent technology with LivePerson's LiveEngage platform, allowing brands to rapidly and easily deploy conversational bots that get smarter with each interaction, and letting consumers message those brands from their smartphones - via the brand's app, SMS, Facebook Messenger, or even the brand's mobile site - instead of having to call an 800 number. IBM Global Business Services, the company's consulting unit, is providing a set of strategy and implementation services to help companies integrate LiveEngage with Watson as part of their broader business transformation. LivePerson, Inc. (NASDAQ: LPSN) is the leading provider of mobile and online business messaging solutions, enabling a meaningful connection between brands and consumers.


Face recognition system 'K-Eye'

#artificialintelligence

A research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. To accomplish this, the research team proposed two key technologies: an image sensor with "Always-on" face detection and the CNNP face recognition chip. The face detection sensor combines analog and digital processing to reduce power consumption. The second key technology, CNNP, achieved incredibly low power consumption by optimizing a convolutional neural network (CNN) in the areas of circuitry, architecture, and algorithms.
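The article describes a hardware design, but the underlying idea is easy to show in software: a cheap, always-on detector gates an expensive CNN recognizer, so the costly stage runs only when a face is actually present. The sketch below is a software analogue of that two-stage pipeline; all function names and the trivial detector are hypothetical placeholders, not KAIST's design or API.

```python
# Illustrative software analogue of the K-Eye two-stage idea: run the
# expensive recognizer only on frames the cheap, always-on detector flags.
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    pixels: np.ndarray  # grayscale image, H x W

def cheap_face_detector(frame: Frame) -> bool:
    """Stage 1 stand-in: a trivial variance test playing the role of the
    low-power 'always-on' analog/digital face detection sensor."""
    return frame.pixels.std() > 20.0

def cnn_recognizer(frame: Frame) -> str:
    """Stage 2 stand-in for the CNNP chip: pretend to run a CNN and
    return an identity label."""
    return "person_%d" % (int(frame.pixels.mean()) % 10)

def k_eye_like_pipeline(frames):
    for frame in frames:
        if cheap_face_detector(frame):   # always on, cheap to evaluate
            yield cnn_recognizer(frame)  # expensive stage, woken on demand
        else:
            yield None                   # recognizer stays idle, saving power

frames = [Frame(np.random.randint(0, 255, (64, 64)).astype(np.float32)) for _ in range(3)]
print(list(k_eye_like_pipeline(frames)))
```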


Meet the Chinese finance giant that's secretly an AI company

#artificialintelligence

It shows how the company--which already operates a hugely successful smartphone payments business in China--aims to upend many areas of personal finance using machine learning and AI. The e-commerce giant Alibaba created Ant in 2014 to operate Alipay, a ubiquitous mobile payments service in China. It has become possible to automate this kind of image processing in recent years using a machine-learning technology known as deep learning. Once there, he developed Alibaba's first voice-recognition system for automating customer calls.


How Google is powering its next-generation AI

#artificialintelligence

The type of AI Google is most interested in is known as machine learning, where computers learn for themselves based on huge banks of sample data. There's a branch of machine learning that Google's engineers are particularly interested in, called deep learning - that's where AI systems try to mimic the human brain to deal with vast amounts of information. Deep learning relies less on code and instructions written by humans, and deep learning systems are known as neural networks, named after the neurons in the human brain. We've also been treated to a glimpse of an app called Google Lens, a smart camera add-on that means your phone will know what it's looking at and can act on what it sees.
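To make the "learning from sample data" idea concrete, here is a toy illustration: a tiny neural network that is never told the XOR rule explicitly but infers it from examples. This is generic Keras usage chosen for demonstration, not Google's internal code; the layer sizes and epoch count are arbitrary assumptions.

```python
# Toy neural network: learn XOR purely from example inputs and outputs.
import numpy as np
import tensorflow as tf

# Sample data: inputs and the answers we want the network to learn.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),  # hidden "neurons"
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=500, verbose=0)  # the network adjusts its own weights

print(model.predict(x).round())  # approximately [[0], [1], [1], [0]]
```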


Google releases new TensorFlow Object Detection API

#artificialintelligence

Google is releasing a new TensorFlow object detection API to make it easier for developers and researchers to identify objects within images. This spring at I/O, Google released TensorFlow Lite, its streamlined version of the machine learning framework. And most recently at WWDC, Apple pushed out Core ML, its attempt to reduce the difficulty of running machine learning models on iOS devices. Today's TensorFlow object detection API can be found here.
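For a sense of what using the API looks like, here is a minimal inference sketch against a pre-trained detector from its model zoo. It assumes a frozen graph exported by the API, which conventionally exposes an 'image_tensor' input and 'detection_boxes'/'detection_scores'/'detection_classes' outputs; the model path and image filename are placeholders, and the code follows the TensorFlow 1.x style current at the time of the release.

```python
# Sketch: run a frozen TensorFlow object detection model on one image.
import numpy as np
import tensorflow as tf
from PIL import Image

GRAPH_PATH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"  # placeholder path

# Load the frozen detection graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# Run detection on a single image (batch of one).
image = np.array(Image.open("example.jpg"))[None, ...]  # shape: [1, H, W, 3]
with tf.Session(graph=graph) as sess:
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image},
    )

# Keep detections above a confidence threshold.
for box, score, cls in zip(boxes[0], scores[0], classes[0]):
    if score > 0.5:
        print("class %d at %s (score %.2f)" % (int(cls), box, score))
```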

