Big Data


AI Is Compelling, But AI And Data Science Operations Must Improve

#artificialintelligence

AI technology is starting to work really well. Unfortunately, I've found that the management of machine learning code, data sets and models -- and the integration of these into operational processes -- falls well short of enterprise standards. This can create blockers to adoption and reduce successful outcomes, even in organizations that have adopted AI. But organizations can take specific measures to mitigate the difficulties. I'll identify some wish-list items that could improve things.


Scaling versus failing Data Science as a business case

#artificialintelligence

The quest to empower talent to become data scientists has led to a follow-on observation: many companies are struggling to establish the business case for machine learning. Failing the data science business case has repercussions. If the company, the management, and the employees keep failing, they will lose trust in machine learning and stop embracing data-driven approaches. In turn, this creates the existential risk of no longer being competitive. So, how can companies fail forward to success?


6 Tech Trends for the Enterprise in 2019 - InformationWeek

#artificialintelligence

Moshe Kranc, chief technology officer at Ness Digital Engineering, worked with his team to identify macro trends and specific technologies that will be game-changers in 2019. Their prediction: Newer technologies like blockchain and machine learning will make inroads, familiar solutions like the cloud and big data will solidify their standing, and cyber threats will continue to bedevil corporations. Here, Kranc discusses the specific impacts these trends will have on enterprise IT managers. "Machine learning is real," Kranc says. "A number of things on the hype curve are not yet ready for prime time, but machine learning is, and there are a lot of applications for it. I think we're just at the beginning of figuring out all the ways to use it, and there's still a lot of room for improvement in the algorithms themselves."


Seebo pioneers process-based Industrial AI to Predict & Prevent Manufacturing Disruptions - Seebo Blog

#artificialintelligence

TEL AVIV, November 15, 2018 – Seebo today announced the launch of its unique process-based artificial intelligence (AI) technology. The new AI-based capabilities for production line data bring unmatched accuracy and ease of use to the company's predictive quality, predictive maintenance, and production line intelligence solutions. Process manufacturers today face rising demands on production capacity and continuous disruptions that affect uptime, quality, and throughput. Increasingly, they are turning to machine-generated data to investigate and solve their production line problems. But finding meaningful insights entails applying sophisticated machine learning technologies to a carefully engineered big data repository – a process beyond the technical and financial reach of most manufacturers.


CSIRO coughs up AU$35m for Australia's space and AI efforts

ZDNet

The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has made AU$35 million available for research into new and emerging technologies. According to CSIRO, the funding will be available specifically for use in the areas of space technology and artificial intelligence, including on the development of advanced imaging of Earth from satellites and data science through AI and machine learning. With AU$16 million invested, the space technology segment will be charged with identifying and developing "science to leapfrog traditional technologies" and finding new areas Australia can focus on. CSIRO said it will initially focus on advanced technologies for Earth observation, and then address challenges such as space object tracking, resource utilisation in space, and developing manufacturing and life support systems for missions to the Moon and Mars. AU$19 million will be used to target AI-driven solutions for areas including food security and quality, health and wellbeing, sustainable energy and resources, resilient and valuable environments, and Australian and regional security, CSIRO explained.


Using big data and artificial intelligence to accelerate global development

#artificialintelligence

When U.N. member states unanimously adopted the 2030 Agenda in 2015, the narrative around global development embraced a new paradigm of sustainability and inclusion--of planetary stewardship alongside economic progress, and inclusive distribution of income. This comprehensive agenda--merging social, economic and environmental dimensions of sustainability--is not supported by current modes of data collection and data analysis, so the report of the High-Level Panel on the post-2015 development agenda called for a "data revolution" to empower people through access to information. Today, a central development problem is that high-quality, timely, accessible data are absent in most poor countries, where development needs are greatest. In a world of unequal distributions of income and wealth across space, age and class, gender and ethnic pay gaps, and environmental risks, data that provide only national averages conceal more than they reveal. This paper argues that spatial disaggregation and timeliness could permit a process of evidence-based policy making that monitors outcomes and adjusts actions in a feedback loop that can accelerate development through learning. Big data and artificial intelligence are key elements in such a process. Emerging technologies could lead to the next quantum leap in (i) how data is collected; (ii) how data is analyzed; and (iii) how analysis is used for policymaking and the achievement of better results. Big data platforms expand the toolkit for acquiring real-time information at a granular level, while machine learning permits pattern recognition across multiple layers of input. Together, these advances could make data more accessible, scalable, and finely tuned. In turn, the availability of real-time information can shorten the feedback loop between results monitoring, learning, and policy formulation or investment, accelerating the speed and scale at which development actors can implement change.


How artificial intelligence is transforming the insurance sector

#artificialintelligence

The following is an opinion piece written by Carlos Somohano from WHISHWORKS, who shares his insights into how big data can bring benefits to insurers, and to the sector as a whole. The views expressed within the article are not necessarily reflective of those of Insurance Business. Big data has the power to revolutionise any industry and it is certainly changing the world of insurance; improving processes and offerings for providers and making it more accessible, accurate and affordable for consumers. Brokers and insurers are having to adapt quickly to an evolving marketplace where technology and data proliferation are radically transforming back-end operations and customer-facing tasks. The idea of open data, where anyone can access, use and share certain information, is increasingly embraced by governments, industry bodies and businesses, revolutionising the way markets operate.


Top 5 Trends in Data Science - Data Science Blog - Dimensionless

#artificialintelligence

Data science is a field that deals with the identification, representation, and extraction of meaningful information from data, which can be collected from different sources and used for business purposes. With enormous amounts of data generated every minute, extracting useful insights is a must for businesses; it helps them stand out from the crowd. Data engineers set up data storage to facilitate data mining and data munging activities.


Anticipating the next move in data science – my interview with Thomson Reuters

#artificialintelligence

Thomson Reuters has a series, AI experts, where they interview thought leaders from different areas - including technology executives, researchers, robotics experts and policymakers - on what we might expect as we move towards AI. As part of that series I recently spoke to Paul Thies of Thomson Reuters, and here are excerpts from the interview: Thomson Reuters: For timely information concerning developments in data science, data mining and business analytics, KDnuggets is widely regarded as a leading outlet in the field. Created in 1993 by founder, editor and president Gregory Piatetsky-Shapiro, it is frequently cited as one of the top sources of data science news and influence by various industry watchers. Thomson Reuters: What are some use cases of data science that you find to be particularly valuable to organizations in this age of Big Data? GREGORY: Where people typically apply data science, probably not surprisingly, are in the areas of customer relationship management (CRM) and consumer analytics.


Anscombe Quartet and use of Exploratory Data Analysis - WeirdGeek

#artificialintelligence

Whether you are working as a data scientist or looking to build a career in data science, the pipeline of your work includes extracting a dataset, loading it, cleansing and munging the data, computing summary statistics, doing some exploratory data analysis (EDA), and after all these steps building a model using machine learning. The Anscombe Quartet is one demonstration that depending only on summary statistics can be troublesome and can badly affect a machine learning model. It is a group of four subsets that appear to be similar when using typical summary statistics, but when you plot all the groups using the Matplotlib package, you'll see a different story. Each dataset consists of eleven (x, y) pairs, labelled as (X, Y), (X.1, Y.1), (X.2, Y.2), and (X.3, Y.3). To calculate the mean, variance, and correlation coefficient, we can write a small Python function which takes a group as input and returns its mean, variance, and correlation coefficient.
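To make that concrete, here is a minimal sketch of the idea (not the original post's code; it assumes numpy and matplotlib are installed and hard-codes Anscombe's published values) that computes each group's summary statistics with a small helper function and then plots the four groups side by side:

```python
# Minimal sketch: compute per-group summary statistics for Anscombe's
# quartet, then scatter-plot the four groups to show how they diverge.
import numpy as np
import matplotlib.pyplot as plt

# Anscombe's quartet; groups I-III share the same x values.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

def summary(x, y):
    """Return means and variances of x and y, plus their correlation."""
    x, y = np.asarray(x), np.asarray(y)
    return (x.mean(), y.mean(), x.var(ddof=1), y.var(ddof=1),
            np.corrcoef(x, y)[0, 1])

fig, axes = plt.subplots(2, 2, figsize=(8, 6), sharex=True, sharey=True)
for ax, (name, (x, y)) in zip(axes.flat, quartet.items()):
    mx, my, vx, vy, r = summary(x, y)
    print(f"{name}: mean_x={mx:.2f} mean_y={my:.2f} "
          f"var_x={vx:.2f} var_y={vy:.2f} r={r:.3f}")
    ax.scatter(x, y)
    ax.set_title(f"Group {name}")
plt.tight_layout()
plt.show()
```

All four groups print nearly identical statistics (means of roughly 9.0 and 7.5, and a correlation around 0.816), yet the scatter plots look completely different -- a linear trend, a curve, a line skewed by one outlier, and a vertical cluster -- which is exactly why EDA should accompany summary statistics.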