
Will Artificial Intelligence (AI) Steal Our Jobs? GetSmarter Blog


As artificial intelligence develops and disrupts more industries, working professionals are becoming increasingly concerned about its implications for the future of work. According to a 2017 Pew Research Center survey, 72% of Americans fear AI technology is capable of replacing jobs, with 25% feeling exceptionally worried.1 The industries predicted to be most at risk include science, healthcare, security, farming, construction, transport, and banking.2 While it's speculated AI will take over 1.8 million human jobs by the year 2020,4 the technology is also expected to create 2.3 million new kinds of jobs, many of which will involve collaboration between humans and AI.5 Research shows artificial intelligence can perform certain tasks better than humans in specific occupations, but it cannot perform all the tasks a job requires better than a human can.6 In other words, most jobs will be affected by AI, but in such a way that a partnership forms between humans and machines, an alliance more powerful than either working alone.7 What will this look like?

U.K. Government To Fund AI University Courses With £115m


The U.K. government is planning to fund thousands of postgraduate students who want to study for a master's degree or a PhD in artificial intelligence as it looks to keep pace with the U.S. and China. AI is poised to become the most significant technology of a generation, but only a limited number of people know how to develop it, and the technology could have a huge impact on industries such as healthcare, energy, and automotive. Business Secretary Greg Clark and Digital Secretary Jeremy Wright announced on Thursday that the government will commit up to £115 million towards training the next generation of AI talent. In a press release, the government said 1,000 students will receive funding to complete PhDs at 16 UK Research and Innovation AI Centres for Doctoral Training (CDTs) located across the country. The full list of centres can be found at the end of this article.

k-NN Embedding Stability for word2vec Hyper-Parametrisation in Scientific Text


Word embeddings are increasingly attracting the attention of researchers dealing with semantic similarity and analogy tasks. However, finding the optimal hyper-parameters remains an important challenge because of their impact on the revealed analogies, particularly for domain-specific corpora. Since analogies are widely used for hypothesis synthesis, it is crucial to optimise word embedding hyper-parameters so that hypotheses can be synthesised precisely. We therefore propose a methodological approach for tuning word embedding hyper-parameters that uses the stability of the k-nearest neighbours of word vectors within scientific corpora, specifically Computer Science corpora, with machine learning adopted as a case study. The approach is tested on a dataset created from NIPS (Conference on Neural Information Processing Systems) publications, and evaluated against a curated ACM hierarchy and the Wikipedia Machine Learning outline as the gold standard.
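The core idea, measuring how stable each word's k-nearest-neighbour set is across embedding runs, can be sketched in a few lines. This is a minimal illustration of the general technique, not the paper's exact metric: it assumes two embedding matrices over the same vocabulary (e.g. from two word2vec runs with different hyper-parameters or seeds) and scores stability as the mean Jaccard overlap of cosine-similarity neighbour sets.

```python
import numpy as np

def knn_sets(emb, k):
    """Top-k cosine-similarity neighbours for each row of an embedding matrix."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # a word is not its own neighbour
    # argsort of -sims puts the most similar words first
    return [set(np.argsort(-row)[:k]) for row in sims]

def knn_stability(emb_a, emb_b, k=5):
    """Mean Jaccard overlap of k-NN sets between two embedding runs (0..1)."""
    sets_a, sets_b = knn_sets(emb_a, k), knn_sets(emb_b, k)
    overlaps = [len(a & b) / len(a | b) for a, b in zip(sets_a, sets_b)]
    return float(np.mean(overlaps))

# Toy stand-ins for two training runs: 100 "words", 50 dimensions.
rng = np.random.default_rng(0)
emb1 = rng.normal(size=(100, 50))
emb2 = emb1 + rng.normal(scale=0.01, size=emb1.shape)  # slightly perturbed run

print(knn_stability(emb1, emb1))  # identical runs -> 1.0
print(knn_stability(emb1, emb2))  # close to 1.0 for a tiny perturbation
```

In a hyper-parameter sweep one would train word2vec under each candidate setting, compute this stability score between runs, and favour settings whose neighbourhoods stay consistent.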

Artificial Intelligence: The Revolution for SMEs - A Business Knowledge Network Event


AI - 'artificial intelligence' - promises to revolutionise many parts of our lives: smart assistants, fully robotic workplaces, driverless cars, and "fake news" propaganda. As the digital world around us becomes smarter, what are the social and economic implications? And what does the future really hold for us in a world of AI? This interesting and informative talk is delivered by Sven Latham from Noggin. Sven is a self-confessed data and computer geek who uses big data and AI to analyse town centres.

Scientists call for rules on evaluating predictive artificial intelligence in medicine


The FDA tells Axios it is working on a framework to handle advances in AI and medicine, as Commissioner Scott Gottlieb indicated last year. Meanwhile, Ravi B. Parikh, co-author of the paper and a fellow at the University of Pennsylvania's School of Medicine, tells Axios that the FDA needs to set standards to evaluate the "staggering" pace of AI development.

Why it matters: Advanced algorithms present both opportunities and challenges, says Amol S. Navathe, co-author and assistant professor at Penn's School of Medicine.

Details: The authors list the following as recommended standards...

Outside comment: Eric Topol, founder and director of the Scripps Research Translational Institute, who was not part of this paper, says the timing of the proposed standards is "very smart", coming before advanced algorithms are built into too many devices.

What's next: The scientists hope the FDA will consider integrating the proposed standards alongside its current pre-certification program under the Digital Health Innovation Act to study the clinical outcomes of AI-based tools, Parikh says.

Google opens first developer hub in Singapore


Google is offering a physical space that provides developers in Southeast Asia the resources they need to build products and grow their businesses, including access to the vendor's technologies and engineers, hands-on mentorship, and networking opportunities. Occupying 7,200 square feet within its Singapore office, the Developer Space @ Google Singapore is the company's first such facility worldwide that is "dedicated to developers", according to the US tech giant. The new hub will support the training workshops Google has hosted for developers and startups in the region, said Sami Kizilbash, Google's developer relations program manager. He pointed to a four-day machine learning bootcamp held last November, which gave participants a platform to understand how Google Cloud could be tapped to better structure data for analytics. Amid Alibaba's increased efforts to build up its cloud footprint, Google is also expanding its coverage in Asia-Pacific, where it says it will operate seven cloud regions by early 2019, up from just one region two years ago.

What Bank Customers Actually Want From Big Data


Say the phrase "big data," and people tend to picture the TV show Black Mirror. They imagine a creepy dystopian future in which robot overlords control everything. But those fears are overblown. What people should think of when they think of big data is Netflix or Amazon: personalized recommendations and a customized experience that make it easier and faster for the consumer to find what they're looking for. In fact, you could say that, when it comes to big data, consumers worry about Black Mirror but hope for more Netflix.

SearchChat Podcast: Is AI Bigger than the Internet? - Biznology


In a recent study, 63% of CEOs agreed that AI will have more impact on their business than the internet. Think about that for a minute. And yet 23% said they had no plans to do anything about it. Why? Partly because people tend to overestimate how much data they need to get reliable results from AI. Steve and I think it's possible for most businesses to start implementing machine learning.

A peek at living room decor suggests how decorations vary around the world


In a study that used artificial intelligence to analyze design elements, such as artwork and wall colors, in pictures of living rooms posted to Airbnb, a popular home rental website, the researchers found that people tended to follow cultural trends when they decorated their interiors. In the United States, where the researchers had economic data from the U.S. Census, they also found that people across socioeconomic lines put similar effort into interior decoration. "We were interested in seeing how other cultures decorated," said Clio Andris, assistant professor of geography at Penn State and an Institute for CyberScience associate. "We see maps of the world and wonder, 'What's it like living there?' but we don't really know what it's like to be in people's living rooms and in their houses. This was like people around the world inviting us into their homes."

Better Language Models and Their Implications


Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper. GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data. GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data.
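The training objective described above, predict the next word given all previous words, can be illustrated with a deliberately tiny stand-in. The sketch below is not GPT-2 (which uses a 1.5-billion-parameter transformer); it is a toy bigram counter over a hypothetical two-sentence corpus, shown only to make the "next-word prediction" objective concrete.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows another. GPT-2 optimises the same
    objective (predict the next token) but with a large transformer that
    conditions on the entire preceding context, not just one word."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None."""
    return model[word].most_common(1)[0][0] if model[word] else None

# Hypothetical toy corpus; real training data was 8 million web pages.
corpus = ["the cat sat on the mat", "the cat ran on the grass"]
model = train_bigram(corpus)

print(predict_next(model, "the"))  # -> "cat" ("the cat" occurs most often)
print(predict_next(model, "grass"))  # -> None (never seen mid-sentence)
```

The gap between this toy and GPT-2 is the point of the announcement: when the same simple objective is scaled to a huge, diverse dataset and a large model, the learned continuations start to encode question answering, summarization, and translation without task-specific training data.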