Google opens first developer hub in Singapore

ZDNet

Google has opened a physical space that gives developers in Southeast Asia the resources they need to build products and grow their businesses, including access to the vendor's technologies and engineers, hands-on mentorship, and networking opportunities. Occupying 7,200 square feet within Google's Singapore office, the Developer Space @ Google Singapore is the company's first facility worldwide "dedicated to developers", according to the US tech giant. The new hub will build on the training workshops Google has hosted for developers and startups in the region, said Sami Kizilbash, Google's developer relations program manager. He pointed to a four-day machine learning bootcamp held last November that helped participants understand how Google Cloud could be tapped to structure data more effectively for analytics. Amid Alibaba's increased efforts to expand its cloud footprint, Google is also beefing up its coverage in Asia-Pacific, where it says it will operate seven cloud regions by early 2019, up from just one region two years ago.


What Bank Customers Actually Want From Big Data

#artificialintelligence

Say the phrase "big data," and people tend to picture the TV show Black Mirror. They imagine a creepy dystopian future in which robot overlords control everything. But those fears are overblown. What people should think of when they think of big data is Netflix or Amazon: personalized recommendations and a customized experience that make it easier and faster for the consumer to find what they're looking for. In fact, you could say that, when it comes to big data, consumers worry about Black Mirror but hope for more Netflix.


SearchChat Podcast: Is AI Bigger than the Internet? - Biznology

#artificialintelligence

In a recent study, 63% of CEOs agreed that AI will have more impact on their business than the internet. Think about that for a minute. And yet, 23% said they had no plans to do anything about it. Why? Partly because people tend to overestimate how much data they need before AI can deliver a reliable result. Steve and I think it's possible for most businesses to start implementing machine learning.


A peek at living room decor suggests how decorations vary around the world

#artificialintelligence

In a study that used artificial intelligence to analyze design elements, such as artwork and wall colors, in pictures of living rooms posted to Airbnb, a popular home rental website, the researchers found that people tended to follow cultural trends when they decorated their interiors. In the United States, where the researchers had economic data from the U.S. Census, they also found that people across socioeconomic lines put similar effort into interior decoration. "We were interested in seeing how other cultures decorated," said Clio Andris, assistant professor of geography at Penn State and an Institute for CyberScience associate. "We see maps of the world and wonder, 'What's it like living there?' but we don't really know what it's like to be in people's living rooms and in their houses. This was like people around the world inviting us into their homes."


Better Language Models and Their Implications

#artificialintelligence

Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper. GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data. GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data.
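
As a concrete illustration of the next-word objective described above, here is a minimal sketch of conditional generation with the smaller, publicly released GPT-2 checkpoint. The use of the Hugging Face transformers library, the prompt, and the sampling settings are assumptions for exposition, not part of OpenAI's release.

```python
# Minimal sketch; assumes the Hugging Face `transformers` package and the
# small released GPT-2 checkpoint, not the withheld 1.5B-parameter model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Prime the model with an input and have it generate a continuation.
prompt = "In a distant valley, researchers discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model was trained only to predict the next word given all previous
# words; sampling token by token yields a synthetic continuation.
output_ids = model.generate(
    input_ids, max_length=60, do_sample=True, top_k=40, temperature=0.8
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```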


Computers are getting better than humans at reading

#artificialintelligence

The robots are coming, and they can read. Artificial intelligence programs built by Alibaba (BABA) and Microsoft (MSFT) have beaten humans on a Stanford University reading comprehension test. "This is the first time that a machine has outperformed humans on such a test," Alibaba said in a statement Monday. The test was devised by artificial intelligence experts at Stanford to measure computers' growing reading abilities. Alibaba's software was the first to beat the human score.


Coursera Coupons Min 10% off 100% Free Courses Student Offer

#artificialintelligence

Learn machine learning from a Stanford University professor and earn a certificate to future-proof your career. Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI.


Understanding Supply & Demand in Ride-hailing Through the Lens of Grab Data

#artificialintelligence

Grab's ride-hailing business, in its simplest form, is about matching passengers looking for a comfortable mode of transport with drivers looking for a flexible earning opportunity. Over the last six years, Grab has repeatedly fine-tuned its machine learning algorithms with the goal of ensuring that passengers get a ride when they want it, and that they are matched to the drivers closest to them. But drivers are constantly on the move, and at any one point there could be hundreds of passengers requesting a ride within the same area. This means that sometimes the closest available drivers may still be too far away. The Analytics team at Grab analyzes these instances at scale via clearly defined metrics.
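
To make that supply-and-demand framing concrete, the sketch below is illustrative only: the coordinates, the pickup radius, and the "closest driver too far" label are assumptions for exposition, not Grab's actual metrics or pipeline. It simply measures, for each ride request, how far away the nearest available driver is.

```python
import math

# Illustrative sketch only; the 2 km radius and sample points are assumptions.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_driver_km(request, drivers):
    """Distance from a ride request to the closest available driver."""
    return min(haversine_km(request[0], request[1], d[0], d[1]) for d in drivers)

requests = [(1.3521, 103.8198), (1.3000, 103.8500)]   # passenger pickup points
drivers = [(1.3600, 103.8300), (1.2900, 103.8600)]    # available driver positions

MAX_PICKUP_KM = 2.0  # assumed acceptable pickup radius
for req in requests:
    d = nearest_driver_km(req, drivers)
    status = "served" if d <= MAX_PICKUP_KM else "closest driver too far"
    print(f"request at {req}: nearest driver {d:.2f} km away ({status})")
```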


AI Biweekly: Trending StyleGAN Raises Concerns; AI Applications in the Finance Industry

#artificialintelligence

StyleGAN is an open-source, hyperrealistic human face generator with easy-to-use tools and models. An Uber engineer has now used StyleGAN to create the website ThisPersonDoesNotExist.com, which generates a new fake face every time the page is refreshed. The site is trending, but the tech behind it is not without controversy. GANs (Generative Adversarial Networks) are among the most interesting technologies in artificial intelligence and computer vision. GANs have, however, also been used to build "deepfakes", in which an individual's face is realistically superimposed onto a target video, effectively making them appear to be another person; very often, that other person is appearing in a porn film.
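
As background on the adversarial setup mentioned above, the original GAN formulation (Goodfellow et al., 2014; StyleGAN builds on later refinements of this idea) trains a generator $G$ against a discriminator $D$:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

The discriminator learns to tell real photographs from generated ones, while the generator learns to produce samples the discriminator cannot distinguish from real images; that arms race is what makes both the faces on the site above and deepfake videos look so convincing.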


The risks and advantages of artificial intelligence

#artificialintelligence

What should companies know about AI and the future of work, and what risks can harm the advantages of artificial intelligence?