Interview with Prof. Dr. Bart Baesens - Author of Multiple Business Analytics Books

@machinelearnbot

Professor Bart Baesens is a professor at KU Leuven (Belgium) and a lecturer at the University of Southampton (United Kingdom). He has done extensive research on analytics, customer relationship management, web analytics, fraud detection, and credit risk management. His findings have been published in well-known international journals (e.g. Machine Learning, Management Science, IEEE Transactions on Neural Networks, IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Evolutionary Computation, Journal of Machine Learning Research, …) and presented at top international conferences. He is also the author of the books Credit Risk Management: Basic Concepts (Oxford University Press, 2008) and Analytics in a Big Data World (Wiley, 2014).


UK Government Invests $28m In AI, IoT And High-Tech Farming Projects

#artificialintelligence

The UK Government has invested $28 million in several high-tech farming projects aimed at cutting pollution, minimizing waste and producing more food. The investment is part of the Government's modern Industrial Strategy, under which the UK has committed to boost R&D spending to 2.4 percent of GDP by 2027. The projects include Warwickshire-based Rootwave, which will use an $875,000 grant to kill weeds from the roots with electricity instead of chemicals, avoiding damage to crops. Tuberscan, in Lincolnshire, will use $496,000 to develop ground-penetrating radar, underground scans and artificial intelligence (AI) to monitor potato crops and identify when they are ready to harvest. The government hopes the technology will increase the usable crop by an estimated 5 to 10 percent, as well as reduce food waste with minimal additional costs.


Predictive maintenance for the Oxford Data Science for IoT Course

@machinelearnbot

Following my first post on Anomaly Detection for Time Series, I would like to continue presenting what I did during the Data Science for IoT course at the Department for Continuing Education of the University of Oxford with Ajit Jaokar. In line with what I wrote previously, this second post will be about predictive maintenance, and it concludes the initial exploration of the topics I covered at Oxford. When researching materials for this course, I had a general idea of what to look for: having already worked in industrial environments, I had a good sense of what predictive maintenance should be and how it could be used.


Bowel cancer: Artificial intelligence can reduce overtreatment and wrong treatment

#artificialintelligence

In 2018, more than 4,500 people in Norway were treated for colon cancer. It is the most common cancer in Norway, with a rapid increase over the past 50 years, according to the Norwegian Cancer Society (NCS). A new method based on artificial intelligence can now help ensure that many of these patients are not overtreated or wrongly treated. "For many people, the treatment has no effect and is just a nuisance," said Håvard Danielsen, professor at the University of Oxford and the Department of Informatics at the University of Oslo. "We want to stop treating these patients, or give them another treatment."


First Data Science Research Center Created in Seattle

@machinelearnbot

Issaquah, Washington (PRWEB) June 13, 2014 - Data Science Central, LLC, has created the first research lab focused entirely on modern data science, big data and business analytics. The research is led by co-founder Dr. Vincent Granville, a former post-graduate of Cambridge University with more than 20 years of cross-industry experience at large and small companies including eBay, Wells Fargo, Visa, and Microsoft, and a long list of publications and start-ups. The research center, also called the data science research lab and abbreviated DSRC, produces intellectual property (open patents that anyone can use), designs APIs and machine learning algorithms prototyped on real data, and publishes articles on Map-Reduce, robust scoring techniques for big data, clustering and the creation of large taxonomies with natural language processing, detection of spurious correlations, new types of noise-resistant regression, and new synthetic metrics that redefine correlation, variance and other statistical indicators in a more robust way, especially in the context of big data. The DSRC's mission also includes automating data science and statistical analyses by producing efficient, scalable techniques that can be used as black boxes, in batch mode, by non-experts, or integrated into existing platforms.