Artificial intelligence: a new paradigm in the swine industry - Pig Progress

#artificialintelligence

Machine learning is one of the artificial intelligence approaches frequently used for modeling, prediction, and management in swine farming. These models mainly include decision tree, clustering, support vector machine, and Markov chain algorithms, applied to disease detection, behaviour recognition for postural classification, and animal sound detection. Researchers from North Carolina State University and Smithfield Premium Genetics demonstrated the application of machine learning algorithms to estimate body weight in growing pigs from feeding behaviour and feed intake data. Feed intake, feeder occupation time, and body weight information were collected from 655 pigs of three breeds (Duroc, Landrace, and Large White) from 75 to 166 days of age. Two machine learning algorithms (a long short-term memory network and random forest) were selected to forecast the body weight of pigs under four scenarios. Long short-term memory was chosen for its ability to learn and store long-term patterns in a sequence-dependent order, making it well suited to time series prediction, while random forest served as a representative conventional machine learning algorithm.
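As a rough illustration of the random forest branch of this approach (not the authors' actual pipeline; the feature names and data below are made up), a regressor can be fit on feeding features such as daily feed intake and feeder occupation time:

```python
# Minimal sketch: random forest regression of body weight on hypothetical
# feeding features. Synthetic data, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(2.5, 0.5, n),    # daily feed intake (kg), assumed
    rng.normal(60, 15, n),      # feeder occupation time (min), assumed
    rng.integers(75, 167, n),   # age (days), matching the study's age range
])
# Hypothetical target: body weight (kg), loosely driven by age and intake
y = 0.9 * X[:, 2] + 8 * X[:, 0] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE (kg):", mean_absolute_error(y_test, model.predict(X_test)))
```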


CO2 emissions dataset in USA: a statistical analysis, using Python

#artificialintelligence

Disclaimer: This notebook has not been written by a climate scientist! Everything is analyzed exclusively from a data scientist's point of view. All the statistical analyses are meant to be used as tools for a time series analysis of any kind. Let's start by stating the obvious: the job of a data scientist is to extract insights. The complexity of the tool you are using is not really relevant.
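As a minimal illustration of that kind of pass (the file name and column names below are assumptions, not the notebook's actual dataset), a first look at the series might be:

```python
# Sketch: basic descriptive statistics and a rolling mean on a hypothetical
# US CO2 emissions time series. File and columns are assumed.
import pandas as pd

df = pd.read_csv("us_co2_emissions.csv", parse_dates=["year"])  # hypothetical file
series = df.set_index("year")["emissions"].sort_index()

print(series.describe())                      # summary statistics
print(series.rolling(window=5).mean().tail()) # 5-year rolling mean to expose the trend
```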


12 Best Deep Learning Courses on Coursera

#artificialintelligence

This is another specialization program offered on Coursera, aimed at both computer science professionals and healthcare professionals. In this specialization program, you will learn how to identify problems in healthcare that can be solved by machine learning. You will also learn the fundamentals of the U.S. healthcare system, the framework for successful and ethical medical data mining, the fundamentals of machine learning as it applies to medicine and healthcare, and much more. This specialization program has 5 courses. Let's see the details of the courses.


Supervised Machine Learning: Regression and Classification

#artificialintelligence

In this beginner-friendly program, you will learn the fundamentals of machine learning and how to use these techniques to build real-world AI applications. This Specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field. This 3-course Specialization is an updated and expanded version of Andrew's pioneering Machine Learning course, rated 4.9 out of 5 and taken by over 4.8 million learners since it launched in 2012. It provides a broad introduction to modern machine learning, including supervised learning (multiple linear regression, logistic regression, neural networks, and decision trees), unsupervised learning (clustering, dimensionality reduction, recommender systems), and some of the best practices used in Silicon Valley for artificial intelligence and machine learning innovation (evaluating and tuning models, taking a data-centric approach to improving performance, and more). By the end of this Specialization, you will have mastered key concepts and gained the practical know-how to quickly and powerfully apply machine learning to challenging real-world problems.


Estimating Lake Water Volume With Regression and Machine Learning Methods

#artificialintelligence

The volume of a lake is a crucial component in understanding environmental and hydrologic processes. The State of Minnesota (USA) has tens of thousands of lakes, but only a small fraction has readily available bathymetric information. In this paper we develop and test methods for predicting water volume in the lake-rich region of Central Minnesota. We used three different published regression models for predicting lake volume from available data. The first model used lake surface area as the sole independent variable. The second model used lake surface area but included an additional independent variable, the average change in land surface area in a designated buffer area surrounding a lake. The third model also used lake surface area but assumed the land surface to be a self-affine surface, allowing the surface area-lake volume relationship to be governed by a scale defined by the Hurst coefficient. These models all used bathymetric data available for 816 lakes across the region of study. The models explained over 80% of the variation in lake volumes. The difference between the total predicted lake volume and the total known volume was <2%. We applied these models to predict lake volumes from available independent variables for over 40,000 lakes within the study region. The total lake volumes estimated by the methods ranged from 1,180,000 to 1,200,000 hectare-meters. We also investigated machine learning models for estimating the individual lake volume...
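As a sketch of the simplest of the three models (surface area as the sole predictor), one common form is a power law V = a·A^b fit in log space. The data and coefficients below are synthetic and illustrative, not the paper's fitted values:

```python
# Illustrative power-law fit V = a * A**b on synthetic lake data;
# not the paper's data or fitted coefficients.
import numpy as np

rng = np.random.default_rng(1)
area = rng.uniform(1, 500, 200)                         # lake surface area (ha), synthetic
volume = 5.0 * area**1.2 * rng.lognormal(0, 0.2, 200)   # lake volume (ha-m), synthetic

# Linear regression in log space: log V = log a + b * log A
b, log_a = np.polyfit(np.log(area), np.log(volume), 1)
a = np.exp(log_a)
print(f"V ~ {a:.2f} * A^{b:.2f}")
```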


Andrew Ng announces a new ML specialisation on Coursera

#artificialintelligence

Andrew Ng's DeepLearning.AI, in partnership with Stanford Online, recently announced a new Machine Learning Specialisation course on Coursera. This beginner-friendly program will teach you the fundamentals of machine learning and how to use these techniques to build real-world AI applications. The 3-course program is a new version of Ng's pioneering machine learning course, taken by over 4.8 million learners since 2012. The program provides a broad introduction to modern machine learning, including supervised learning (multiple linear regression, logistic regression, neural networks, and decision trees), unsupervised learning (clustering, dimensionality reduction, recommender systems), and some of the best practices used in Silicon Valley for artificial intelligence and machine learning innovation. The new Machine Learning Specialization by @DeepLearningAI_ & @StanfordOnline is now available on @Coursera!


The Mystery of ADASYN is Revealed

#artificialintelligence

This research assumes that you are familiar with class imbalance and the ADASYN algorithm. We strongly encourage our readers to review the conference article that launched ADASYN (just type that into Google Scholar or see the References section of this document), and then read any number of articles in Towards Data Science that discuss class imbalance and ADASYN. This is neither a guide nor an overview; it is a voyage into uncharted waters with startling discoveries. The answers are 1) surprising, 2) fascinating, and 3) extraordinary, in that order. All models in this research were built using the RandomForest and LogisticRegression algorithms in the scikit-learn library to gain information about tree-based and linear structures, respectively. All predictive models were 10-fold cross-validated with stratified sampling, using stratify=y in train_test_split and cv=10 in GridSearchCV.
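A minimal version of that setup (using imbalanced-learn's ADASYN and synthetic data rather than the article's dataset or exact pipeline) might look like this:

```python
# Sketch of the described setup: ADASYN oversampling, a random forest,
# a stratified train/test split, and 10-fold grid-search cross-validation.
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV

# Synthetic imbalanced dataset (90% / 10% classes)
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training data
X_res, y_res = ADASYN(random_state=0).fit_resample(X_train, y_train)

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [100, 300]}, cv=10)
grid.fit(X_res, y_res)
print("test accuracy:", grid.score(X_test, y_test))
```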


Pentagon Calls for New Ideas in 'Third Wave' of AI Evolution

#artificialintelligence

A key research and development agency within the Department of Defense is accepting new contract proposals specifically focused on advancing algorithmic processing within Defense's artificial intelligence projects. The Defense Advanced Research Projects Agency is formally soliciting contracts for its new Enabling Confidence program, a subset of its Artificial Intelligence Exploration initiative. The AIE focuses on what DARPA defines as its "third wave" of artificial intelligence research, which includes AI theory and application research that examines the limitations of the rule-based and statistical learning theories underlying AI technologies. "The pace of discovery in AI science and technology is accelerating worldwide," the program announcement says. "AIE will enable DARPA to fund pioneering AI research to discover new areas where R&D programs awarded through this new approach may be able to advance the state of the art."


Linear Regression in Python: Explained with coding examples

#artificialintelligence

Before I go to the implementation of Linear Regression in Python, we'll take a minute to understand what Linear Regression is. Linear Regression fits a straight line that models the relationship between a dependent variable and an independent variable. From this simple definition, you might wonder how a line can predict that relationship. Suppose we have the per capita income of the US by year, as shown in Figure 1 below. In the figure, each year has a corresponding income.
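A minimal version of that example (with made-up income figures rather than the data from Figure 1) could look like this:

```python
# Simple linear regression of per capita income on year, using scikit-learn.
# The income values below are illustrative, not the article's dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.arange(1970, 1980).reshape(-1, 1)   # independent variable
income = np.array([5000, 5300, 5650, 6000, 6200,
                   6550, 6900, 7300, 7800, 8300])  # dependent variable

model = LinearRegression().fit(years, income)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("predicted income in 1985:", model.predict([[1985]])[0])
```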


Special Issue! Foundational Algorithms, Where They Came From, Where They're Going

#artificialintelligence

Years ago, I had to choose between a neural network and a decision tree learning algorithm. It was necessary to pick an efficient one, because we planned to apply the algorithm to a very large set of users on a limited compute budget. I went with a neural network. I hadn't used boosted decision trees in a while, and I thought they required more computation than they actually do, so I made a bad call. Fortunately, my team quickly revised my decision, and the project was successful. This experience was a lesson in the importance of learning, and continually refreshing, foundational knowledge. If I had refreshed my familiarity with boosted trees, I would have made a better decision.