
big data


[L4-BD] Introduction to Big Data with KNIME Analytics Platform - Online

#artificialintelligence

This course focuses on how to use KNIME Analytics Platform for in-database processing and for writing/loading data into a database. Get an introduction to the Apache Hadoop ecosystem and learn how to write/load data into your big data cluster running on-premises or in the cloud on Amazon EMR, Azure HDInsight, Databricks Runtime, or Google Dataproc. Learn about the KNIME Spark Executor, preprocessing with Spark, machine learning with Spark, and how to export data back into KNIME or your big data cluster. This course lets you put everything you've learnt into practice in a hands-on session based on the use case: eliminating missing values by predicting them from the other attributes. The course consists of four 75-minute online sessions run by one of our KNIME data scientists. Each session has an exercise for you to complete at home, and together we will go through the solution at the start of the following session.
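The hands-on use case, predicting missing values from the other attributes, can also be sketched outside KNIME. Below is a minimal Python illustration (assuming pandas and scikit-learn, with a made-up table and column names), not the course's actual workflow:

```python
# Minimal sketch: impute missing values in one column by predicting them
# from the other attributes. The DataFrame and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.DataFrame({
    "age":    [25, 32, 47, 51, 38, 29],
    "tenure": [1, 4, 12, 15, 7, 2],
    "income": [30000, 45000, None, 90000, None, 38000],
})

target = "income"
features = [c for c in df.columns if c != target]

known = df[df[target].notna()]    # rows where the value is present
missing = df[df[target].isna()]   # rows whose value we want to impute

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(known[features], known[target])

# Replace the missing entries with the model's predictions.
df.loc[df[target].isna(), target] = model.predict(missing[features])
print(df)
```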


Global Big Data Conference

#artificialintelligence

A major marketing firm has turned to IBM Watson Studio, and its data, to create an interactive platform that predicts the risk, readiness and recovery periods for counties hit by the coronavirus. Global digital marketing firm Wunderman Thompson launched its Risk, Readiness and Recovery map, an interactive platform that helps enterprises and governments make market-level decisions, amid the coronavirus pandemic. The platform, released May 21, uses Wunderman Thompson's data, as well as machine learning technology from IBM Watson, to predict state and local government COVID-19 preparedness and estimated economic recovery timetables for businesses and governments. The idea for the Risk, Readiness and Recovery map, a free version of which is available on Wunderman Thompson's website, originated two months ago as the global pandemic accelerated, said Adam Woods, CTO at Wunderman Thompson Data. "We were looking at some of the visualizations that were coming in around COVID-19, and we were inspired to really say, let's look at the insight that we have and see if that can make a difference," Woods said.


Global Big Data Conference

#artificialintelligence

B2B software sales and marketing teams love hearing the term "artificial intelligence" (AI). AI has a smoke-and-mirrors effect: when we say "AI is doing this," our buyers often know so little about AI that they don't ask the hard questions. In industries like the DevTools space, it is crucial that buyers understand both what products do and what their limitations are to ensure that these products meet their needs. If the purpose of AI is to make good decisions for humans, to accept that "AI is doing this" is to accept that we don't really know how the product works or if it is making good decisions for us.


Global Big Data Conference

#artificialintelligence

Last Tuesday, Google shared a blog post highlighting the perspectives of three women of color employees on fairness and machine learning. I suppose the comms team saw trouble coming: The next day NBC News broke the news that diversity initiatives at Google are being scrapped over concern about conservative backlash, according to eight current and former employees speaking on condition of anonymity. The news led members of the House Tech Accountability Caucus to send a letter to CEO Sundar Pichai on Monday. Citing Google's role as a leader in the U.S. tech community, the group of 10 Democrats questioned why, despite corporate commitments over years, Google diversity still lags behind the diversity of the population of the United States. The 10-member caucus specifically questioned whether Google employees working with AI receive additional bias training.


Coles shuffles data management into the cloud

ZDNet

Machine learning might be high on the agenda for the data science team at Coles, but according to Richard Glew, Coles head of engineering and operations, the team is currently limited by its existing on-premises environment. "Even if we can do something, being able to do something quickly is another matter. We've got a lot of issues [like] where is our data, do we have the right hardware, how long does it take to get it … all the usual stuff with an on-prem environment," he said, speaking as part of the Databricks Data and AI APAC virtual conference. In a move to enable machine learning, advanced analytics, and data exchange, the company is currently developing an electronic data processing platform (EDP) to change the way it manages and stores data. "Our EDP platform is designed to be a universal data repository for all the data we want to share internally or externally as an organisation, and we fully catalogue that," Glew said.


Announcing the First ODSC Europe 2020 Virtual Conference Speakers

#artificialintelligence

ODSC's first virtual conference is a wrap, and now we've started planning for our next one, the ODSC Europe 2020 Virtual Conference from September 17th to the 19th. We're thrilled to announce the first group of expert speakers to join. During the event, speakers will cover topics such as NLP, machine learning, quant finance, deep learning, data visualization, data science for good, image classification, transfer learning, recommendation systems, and much, much more. Dr. Jiahong Zhong is the Head of Data Science at Zopa LTD, which facilitates peer-to-peer lending and is one of the United Kingdom's earliest fintech companies. Before joining Zopa, Zhong worked as a researcher on the Large Hadron Collider Project at CERN, focusing on statistics, distributed computing, and data analysis.


Global Big Data Conference

#artificialintelligence

Silicon company Xilinx has developed a new space-grade processor for in-space and satellite applications that records a number of firsts: it's the first 20nm process rated for use in space, offering power and efficiency benefits, and it's the first to offer specific support for high-performance machine learning through neural network-based inference acceleration. The processor is a field programmable gate array (FPGA), meaning that customers can tweak the hardware to suit their specific needs, since the chip is essentially user-configurable hardware. On the machine learning side, Xilinx says that the new processor will offer up to 5.7 tera operations per second of "peak INT8 performance optimized for deep learning," an improvement of as much as 25x over the previous generation. Xilinx's new chip has a lot of potential for the satellite market for a couple of reasons. First, it's a huge leap in terms of processor size, since the company's existing radiation-tolerant silicon was offered only in a 65nm spec. That means big improvements in size, weight, and power efficiency, all of which translate to very important savings for in-space applications, since satellites are designed to be as lightweight and compact as possible to help defray launch costs and in-space propellant needs, both of which represent major expenses in their operation.
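For context, INT8 refers to 8-bit integer arithmetic commonly used for quantized neural-network inference. A rough, generic illustration of the idea in plain NumPy (not Xilinx-specific code or any FPGA toolchain):

```python
# Rough illustration of symmetric INT8 quantization as used in quantized
# inference; a generic NumPy sketch, not Xilinx-specific code.
import numpy as np

def quantize_int8(x):
    """Map a float tensor to int8 values plus a per-tensor scale factor."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

weights = np.random.randn(4, 4).astype(np.float32)
activations = np.random.randn(4, 4).astype(np.float32)

qw, sw = quantize_int8(weights)
qa, sa = quantize_int8(activations)

# Integer matrix multiply accumulated in int32, then rescaled back to float.
acc = qw.astype(np.int32) @ qa.astype(np.int32)
approx = acc * (sw * sa)

exact = weights @ activations
print("max abs error:", np.abs(exact - approx).max())
```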


An AI future set to take over post-Covid world

#artificialintelligence

Rabindranath Tagore once said, "Faith is the bird that feels the light when the dawn is still dark". The darkness that looms over the world at this moment is the curse of the COVID-19 pandemic, while the bird of human freedom finds itself caged under lockdown, unable to fly. Enthused by the beacon of hope, human beings will soon start picking up the pieces of a shared future for humanity, but perhaps, it will only be to find a new, unfamiliar world order with far-reaching consequences for us that transcend society, politics and economy. Crucially, a technology that had till now been crawling -- or at best, walking slowly -- will now start sprinting. In fact, a paradigm shift in the economic relationship of mankind is going to be witnessed in the form of accelerated adoption of artificial intelligence (AI) technologies in the modes of production of goods and services.


Big Data's Deal with the Devil

#artificialintelligence

I keep thinking of Agnieszka Kurant's liquid crystal paintings. I can't help but wonder if their forms are still changing, their gasoline-rainbow palettes still mutating, or whether they've gone quiet like the rest of us. Kurant's work sits in a central gallery of 'Uncanny Valley', the Bay Area's first major exhibition to focus explicitly on how artists today are grappling with technologies that have – for the most part – come out of the region. The show's title is a nod, of course, to nearby Silicon Valley – of which San Francisco has increasingly become an annex – but also reflects the show's intent: to look not broadly at how technology has seeped into art, but at how the definition of what constitutes 'humanness' has been blurred by advancements in artificial intelligence and how artists are metabolizing these developments. Kurant's work – which, for me, was the soul of the exhibition – relies heavily on technology that reflects human emotions in real time.


The Key to Building Data Pipelines for Machine Learning: Support for Multiple Engines - NASSCOM Community

#artificialintelligence

As a consumer of goods and services, you experience the results of machine learning (ML) whenever the institutions you rely on use ML processes to run their operations. You may receive a text message from a bank requiring verification after the bank has paused a credit card transaction. The work that happens behind the scenes to facilitate these experiences can be difficult to fully realize or appreciate. An important portion of that work is done by the data engineering teams that build the data pipelines to help train and deploy those ML models. Once focused on building pipelines to support traditional data warehouses, today's data engineering teams now build more technically demanding continuous data pipelines that feed applications with artificial intelligence and ML algorithms.
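As a generic illustration of what one stage of such a continuous pipeline can look like, here is a short Python sketch; the record schema, names, and the toy scoring function are hypothetical and not from the article, and a real deployment would call a model trained offline rather than the stand-in below:

```python
# Generic sketch of one stage of a continuous data pipeline feeding an ML
# model: take a micro-batch of raw records, derive features, score them,
# and emit the results downstream. All names and the toy "model" are
# hypothetical placeholders.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Transaction:
    account_id: str
    amount: float
    foreign: bool  # was the card used outside the home country?


def featurize(tx: Transaction) -> List[float]:
    # Turn a raw record into the numeric features the model expects.
    return [tx.amount, 1.0 if tx.foreign else 0.0]


def score_batch(batch: Iterable[Transaction]) -> List[float]:
    # Stand-in for a trained model; in practice this stage would load and
    # apply a model produced by an offline training pipeline.
    return [min(1.0, 0.001 * amount + 0.3 * foreign)
            for amount, foreign in map(featurize, batch)]


if __name__ == "__main__":
    batch = [Transaction("a1", 42.0, False), Transaction("a2", 950.0, True)]
    for tx, risk in zip(batch, score_batch(batch)):
        print(tx.account_id, f"risk={risk:.2f}")
```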