Big Data


Why businesses must prepare for hyper automation now

#artificialintelligence

Automation has been used for decades in a wide range of industries to boost efficiency and productivity, reduce waste, and ensure quality and safety. Emerging technologies such as Artificial Intelligence (AI), Natural Language Processing (NLP) and big data analytics are now being combined with automation to tackle more complex problems and bring further improvements to business processes. This convergence of automation and intelligence is known as hyper automation. Also known as cognitive or smart automation, hyper automation is at the forefront of the Fourth Industrial Revolution and is gradually making its way into every aspect of business, delivering unprecedented results. A number of factors are driving the adoption of hyper automation among enterprises, including the ability to improve operational and service performance.
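As a rough illustration of that convergence (not taken from the article), the sketch below pairs a fixed automation rule with a small learned text classifier, so that items the hard-coded rule cannot handle are routed by a model instead of a person. All names and data are hypothetical.

```python
# Illustrative sketch only: a deterministic routing rule augmented with a
# learned text classifier, the kind of pairing "hyper automation" describes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set of support tickets (hypothetical data).
tickets = [
    "invoice total is wrong, please re-issue",
    "cannot log in to my account",
    "charge appeared twice on my card",
    "password reset email never arrived",
]
labels = ["billing", "access", "billing", "access"]

# Learned component: handles tickets the fixed rule does not cover.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

def route(ticket: str) -> str:
    """Rule first (classic automation), classifier second (added intelligence)."""
    if "refund" in ticket.lower():
        return "billing"                    # deterministic rule, traditional automation
    return model.predict([ticket])[0]       # ML fallback for the messier cases

print(route("refund my last order"))             # caught by the rule
print(route("my login keeps getting rejected"))  # routed by the classifier
```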


An Essential Component In Any Insurtech Solution Tech-stack - Suyati Technologies

#artificialintelligence

The insurance industry is well past the time when a timely response and a balanced price-quality relationship were enough to define customer experience. The advent of Artificial Intelligence, Machine Learning, and Advanced Analytics has disrupted the insurance industry and reshaped the way it operates. Insurtech firms are now using their AI and ML capabilities to deliver high-quality customer experiences, increase loyalty, and generate new revenue while simultaneously reducing costs. The vision of insurance firms today and for the future is one where customers and the customer experience come first. AI and ML models built on top of the Customer Data Platform lead to improved customer experience through hyper-personalization.


[L4-BD] Introduction to Big Data with KNIME Analytics Platform - Online

#artificialintelligence

This course focuses on how to use KNIME Analytics Platform for in-database processing and for writing/loading data into a database. Get an introduction to the Apache Hadoop ecosystem and learn how to write/load data into your big data cluster running on premises or in the cloud on Amazon EMR, Azure HDInsight, Databricks Runtime, or Google Dataproc. Learn about the KNIME Spark Executor, preprocessing with Spark, machine learning with Spark, and how to export data back into KNIME or your big data cluster. The course lets you put everything you've learnt into practice in a hands-on session based on one use case: eliminating missing values by predicting them from the other attributes. The course consists of four 75-minute online sessions run by one of our KNIME data scientists. Each session includes an exercise to complete at home, and together we will go through the solution at the start of the following session.
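Outside of KNIME itself, the same missing-value use case can be sketched in a few lines of PySpark: fit a model on the rows where the target column is present, then predict it for the rows where it is missing. The column names and data below are hypothetical, and this is only an illustration of the idea, not the course material.

```python
# Minimal PySpark sketch (hypothetical columns/data): impute a missing numeric
# column by predicting it from the other attributes.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("missing-value-imputation").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 10.0, 100.0), (2.0, 20.0, 200.0), (3.0, 30.0, None), (4.0, 40.0, 400.0)],
    ["f1", "f2", "target"],
)

# Assemble the predictor columns into a feature vector.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
known = assembler.transform(df.where(df.target.isNotNull()))
missing = assembler.transform(df.where(df.target.isNull()))

# Fit on rows that have a value, predict the rows that do not.
model = LinearRegression(featuresCol="features", labelCol="target").fit(known)
model.transform(missing).select("f1", "f2", "prediction").show()
```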


Global Big Data Conference

#artificialintelligence

A major marketing firm has turned to IBM Watson Studio, and its data, to create an interactive platform that predicts the risk, readiness and recovery periods for counties hit by the coronavirus. Global digital marketing firm Wunderman Thompson launched its Risk, Readiness and Recovery map, an interactive platform that helps enterprises and governments make market-level decisions, amid the coronavirus pandemic. The platform, released May 21, uses Wunderman Thompson's data, as well as machine learning technology from IBM Watson, to predict state and local government COVID-19 preparedness and estimated economic recovery timetables for businesses and governments. The idea for the Risk, Readiness and Recovery map, a free version of which is available on Wunderman Thompson's website, originated two months ago as the global pandemic accelerated, said Adam Woods, CTO at Wunderman Thompson Data. "We were looking at some of the visualizations that were coming in around COVID-19, and we were inspired to really say, let's look at the insight that we have and see if that can make a difference," Woods said.


Global Big Data Conference

#artificialintelligence

B2B software sales and marketing teams love hearing the term "artificial intelligence" (AI). AI has a smoke-and-mirrors effect. But when we say "AI is doing this," our buyers often know so little about AI that they don't ask the hard questions. In industries like the DevTools space, it is crucial that buyers understand both what products do and what their limitations are, to ensure that these products meet their needs. If the purpose of AI is to make good decisions for humans, to accept that "AI is doing this" is to accept that we don't really know how the product works or whether it is making good decisions for us.


Global Big Data Conference

#artificialintelligence

Last Tuesday, Google shared a blog post highlighting the perspectives of three employees who are women of color on fairness and machine learning. I suppose the comms team saw trouble coming: the next day, NBC News reported that diversity initiatives at Google are being scrapped over concern about conservative backlash, according to eight current and former employees speaking on condition of anonymity. The news led members of the House Tech Accountability Caucus to send a letter to CEO Sundar Pichai on Monday. Citing Google's role as a leader in the U.S. tech community, the group of 10 Democrats questioned why, despite years of corporate commitments, Google's diversity still lags behind that of the U.S. population. The 10-member caucus specifically questioned whether Google employees working with AI receive additional bias training.


Coles shuffles data management into the cloud

ZDNet

Machine learning might be high on the agenda for the data science team at Coles, but according to Richard Glew, Coles head of engineering and operations, they are currently limited by the existing on-premise environment. "Even if we can do something, being able to do something quickly is another matter. We've got a lot of issues [like] where is our data, do we have the right hardware, how long does it take to get it … all the usual stuff with an on-prem environment," he said, speaking as part of the Databricks Data and AI APAC virtual conference. In a move to expand the possibility of enabling machine learning, advanced analytics, and data exchange, the company is currently developing an electronic data processing platform (EDP) to change the way it manages and stores data. "Our EDP platform is designed to be a universal data repository for all the data we want to share internally or externally as an organisation, and we fully catalogue that," Glew said.


Announcing the First ODSC Europe 2020 Virtual Conference Speakers

#artificialintelligence

ODSC's first virtual conference is a wrap, and now we've started planning for our next one, the ODSC Europe 2020 Virtual Conference from September 17th to the 19th. We're thrilled to announce the first group of expert speakers to join. During the event, speakers will cover topics such as NLP, machine learning, quant finance, deep learning, data visualization, data science for good, image classification, transfer learning, recommendation systems, and much, much more. Dr. Jiahong Zhong is the Head of Data Science at Zopa Ltd, which facilitates peer-to-peer lending and is one of the United Kingdom's earliest fintech companies. Before joining Zopa, Zhong worked as a researcher on the Large Hadron Collider project at CERN, focusing on statistics, distributed computing, and data analysis.


Global Big Data Conference

#artificialintelligence

Space-specific silicon company Xilinx has developed a new processor for in-space and satellite applications that records a number of firsts: it's the first 20nm process rated for use in space, offering power and efficiency benefits, and it's the first to offer specific support for high-performance machine learning through neural network-based inference acceleration. The processor is a field programmable gate array (FPGA), meaning that customers can tweak the hardware to suit their specific needs, since the chip is essentially user-configurable hardware. On the machine learning side, Xilinx says the new processor will offer up to 5.7 tera operations per second of "peak INT8 performance optimized for deep learning," an improvement of as much as 25x versus the previous generation. Xilinx's new chip has a lot of potential for the satellite market for a couple of reasons. First, it's a huge leap in terms of processor size, since the company's existing radiation-tolerant silicon was offered in a 65nm spec only. That means big improvements in size, weight and power efficiency, all of which translate into very important savings for in-space applications, since satellites are designed to be as lightweight and compact as possible to help defray launch costs and in-space propellant needs, both of which represent major expenses in their operation.
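For readers unfamiliar with the INT8 figure, the sketch below shows the basic idea behind INT8 inference: 32-bit float weights are mapped to 8-bit integers with a scale factor, which is what lets hardware quote far higher operations-per-second for deep learning workloads. This is an illustration of the general technique only, not Xilinx's actual toolchain, and the code and names are hypothetical.

```python
# Illustrative-only sketch of symmetric INT8 quantization, the numeric format
# behind "peak INT8 performance" figures; unrelated to Xilinx's tooling.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 values plus a per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0          # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights to inspect the quantization error."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max quantization error:", np.max(np.abs(w - dequantize(q, s))))
```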


Big Data's Deal with the Devil

#artificialintelligence

I keep thinking of Agnieszka Kurant's liquid crystal paintings. I can't help but wonder if their forms are still changing, their gasoline-rainbow palettes still mutating, or whether they've gone quiet like the rest of us. Kurant's work sits in a central gallery of 'Uncanny Valley', the Bay Area's first major exhibition to focus explicitly on how artists today are grappling with technologies that have – for the most part – come out of the region. The show's title is a nod, of course, to nearby Silicon Valley – of which San Francisco has increasingly become an annex – but also reflects the show's intent: to look not broadly at how technology has seeped into art, but at how the definition of what constitutes 'humanness' has been blurred by advancements in artificial intelligence and how artists are metabolizing these developments. Kurant's work – which, for me, was the soul of the exhibition – relies heavily on technology that reflects human emotions in real time.