modeling


Test-Driven Machine Learning

#artificialintelligence

First, before I start, I want to say something about what test-driven machine learning is, or at least what I understand by it. So, here is one interpretation. It is about using data, obviously. It has relationships to analytics and data science, and it is, obviously, part of AI in some way. This is my little taxonomy of how I see things linking together. You have computer science, which has subfields like AI and software engineering; machine learning is typically considered a subfield of AI, but a lot of software engineering principles apply in this area, and that is what I want to talk about today. It's heavily used in data science. The difference between AI and data science is somewhat fluid, if you like: data science tries to understand what's in data and to answer questions about data, but then it tries to use this to make decisions, and then we are back at AI, artificial intelligence, where it's mostly about automating decision making. We have a couple of definitions. AI means making machines intelligent, which means they can somehow function appropriately in an environment, with foresight. Machine learning is a field that looks for algorithms that can automatically improve their performance not through explicit programming but by observing relevant data. And yes, I've thrown in data science as well for good measure: the scientific process of turning data into insight for making better decisions. If you have opened any newspaper, you must have seen the discussion around the ethical dimensions of artificial intelligence, machine learning, and data science. Testing touches on that as well, because there are quite a few problems in that space; I'm just listing two here. You use data, obviously, to do machine learning. Where does this data come from, and are you allowed to use it? Do you violate any privacy laws? And are you building models that you use to make decisions about people? If you do, then the General Data Protection Regulation in the EU says you have to be able to explain a decision to an individual if you're making it with an algorithm or a machine and the decision has any kind of significant impact. That means a lot of machine learning models are already out the door, because you can't do that: with particular models, you can't explain why a certain decision comes out.
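To make the talk's theme concrete, here is a minimal sketch of what "test-driven" could mean for a model: one automated test pinning accuracy to a floor so the model can't silently degrade, and one checking that the chosen model exposes the coefficients needed to explain an individual decision. The use of scikit-learn, pytest conventions, the iris dataset, and the 0.9 threshold are illustrative assumptions, not anything specified in the talk.

```python
# A minimal sketch of test-driven ML (assumptions: scikit-learn, pytest).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_model():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, X_test, y_test

def test_accuracy_floor():
    # Regression test: fail loudly if held-out accuracy drops below a floor.
    model, X_test, y_test = train_model()
    assert accuracy_score(y_test, model.predict(X_test)) >= 0.9

def test_decision_is_explainable():
    # GDPR-flavored check: the model exposes per-feature coefficients, so a
    # decision about an individual can be explained; a black box would fail.
    model, _, _ = train_model()
    assert hasattr(model, "coef_")
```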


Machine learning for everyone startup Intersect Labs launches platform for data analysis – TechCrunch

#artificialintelligence

Machine learning is the holy grail of data analysis, but unfortunately, that holy grail oftentimes requires a PhD in Computer Science just to get started. Despite the incredible attention that machine learning and artificial intelligence get from the press, the reality is that there is a massive gap between companies' need to solve business challenges and the availability of talent for building incisive models. YC-backed Intersect Labs is looking to close that gap by making machine learning much more widely accessible to the business analyst community. Through its platform, which is now fully launching to the public, business analysts can upload their data, and Intersect will automatically identify the right machine learning models to apply to the dataset and optimize the parameters of those models. The company was founded by Ankit Gordhandas and Aaron Fried in August of last year.
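The article doesn't describe Intersect's internals, but the core idea, automatically trying several model families on an uploaded dataset and tuning each one's parameters, can be sketched generically. The scikit-learn API, the dataset, and the candidate grids below are illustrative assumptions, not Intersect's actual implementation.

```python
# A hedged sketch of automated model selection plus parameter tuning.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)  # stand-in for an uploaded dataset

candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
]

best_score, best_model = -1.0, None
for model, grid in candidates:
    # Cross-validated grid search "optimizes the parameters" of each family.
    search = GridSearchCV(model, grid, cv=5).fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected {type(best_model).__name__}, CV accuracy {best_score:.3f}")
```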


Unlocking The Value Of Artificial Intelligence For Retailers - Retail TouchPoints

#artificialintelligence

Competition for good workers is tight, and employees' expectations of their jobs have never been higher. They want an inspirational workplace where they feel motivated to be loyal, productive and engaged. Among many things, that means keeping up with technology. Giving retail teams access to leading-edge tech that uses AI and machine learning will provide them -- and you -- insights not previously available, increasing productivity and helping morale. For example, modern workforce management can empower employees with preferred scheduling options and flexible clocking.


Modeling and Interpreting Real-world Human Risk Decision Making with Inverse Reinforcement Learning

arXiv.org Machine Learning

We model human decision-making behaviors in a risk-taking task using inverse reinforcement learning (IRL) for the purpose of understanding real human decision making under risk. To the best of our knowledge, this is the first work applying IRL to reveal the implicit reward function in human risk-taking decision making and to interpret risk-prone and risk-averse decision-making policies. We hypothesize that the state history (e.g. rewards and decisions in previous trials) is related to the human reward function, which leads to risk-averse and risk-prone decisions. We design features that reflect these factors in the reward function of IRL and learn the corresponding weights, which are interpretable as the importance of the features. The results confirm that humans' sub-optimal risk-related decisions are driven by personalized reward functions. In particular, the risk-prone person tends to decide based on the current pump number, while the risk-averse person relies on burst information from the previous trial and the average end status. Our results demonstrate that IRL is an effective tool for modeling human decision-making behavior, as well as for helping interpret the human psychological process in risk decision making.
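A minimal sketch of the abstract's central move: write the implicit reward as a weighted sum of hand-designed state-history features and learn weights that read directly as feature importance. For brevity this reduces the setup to a logistic choice model fit by maximum likelihood on synthetic trials; the feature names echo the abstract, but the data and the reduction are illustrative assumptions rather than the authors' full IRL procedure.

```python
# Learn interpretable reward weights over state-history features (sketch).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trials; columns mirror features named in the abstract:
# [current pump number, burst on previous trial, average end status]
phi = rng.normal(size=(500, 3))
true_w = np.array([1.5, -2.0, 0.5])  # hidden "reward" weights to recover
p_pump = 1 / (1 + np.exp(-phi @ true_w))
decisions = (rng.random(500) < p_pump).astype(float)  # 1 = pump, 0 = stop

# Maximum-likelihood fit by gradient ascent on the Bernoulli log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-phi @ w))
    w += 0.1 * phi.T @ (decisions - p) / len(phi)

print("learned feature weights (importance):", np.round(w, 2))
```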


Alteryx Releases Assisted Modeling Tool

#artificialintelligence

Recognizing the pervasive talent gap that exists between data scientists and data workers in the line of business, Assisted Modeling helps teach data science with a guided walk-through and aims to help all data workers, regardless of technical acumen, advance their skill sets in the process of building machine learning models. "As we continue to deliver innovations for a smarter platform, it is critical to address the human at the center of analytic intelligence. Our approach in building Assisted Modeling is to advance the skills of the data worker, creating next-level citizen data scientists capable of building the machine learning models required to tackle the advanced analytic challenges of the future," said Kramer. "Assisted Modeling provides users the transparency and control needed to build trustworthy machine learning models that drive business outcomes without writing a line of code. I am thrilled to deliver the newest version of our platform today and to invite our customers and partners to be the first to experience Assisted Modeling." As an output of the application, users can access code-free machine learning tools directly within the Alteryx Designer interface. Assisted Modeling allows any data worker to construct machine learning models, understand how and why their models work, and capture modeling decisions, turning raw data into informed business decisions with unprecedented speed and confidence. Alteryx also announced the general availability of the newest version of the Alteryx Platform (2019.2).


15 Best Machine Learning Courses in 2019 - MLAIT

#artificialintelligence

Below are the 15 best machine learning courses to accelerate your ML journey this year. The holy grail of machine learning online courses, Machine Learning by Stanford is considered the best machine learning course by many. The course was prepared and is maintained by Andrew Ng, a pioneering machine learning scientist who has led ML research projects for both Google and the Chinese giant Baidu. Although the course requires a paid subscription, you can ask for financial aid if you're a student. This online machine learning course from DataCamp is the best machine learning course with a primary emphasis on statistics – the de facto requirement for effective data science projects.


Bayesian Tensor Filtering: Smooth, Locally-Adaptive Factorization of Functional Matrices

arXiv.org Machine Learning

We consider the problem of functional matrix factorization, finding low-dimensional structure in a matrix where every entry is a noisy function evaluated at a set of discrete points. Such problems arise frequently in drug discovery, where biological samples form the rows, candidate drugs form the columns, and entries contain the dose-response curve of a sample treated at different concentrations of a drug. We propose Bayesian Tensor Filtering (BTF), a hierarchical Bayesian model of matrices of functions. BTF captures the smoothness in each individual function while also being locally adaptive to sharp discontinuities. The BTF model is agnostic to the likelihood of the underlying observations, making it flexible enough to handle many different kinds of data. We derive efficient Gibbs samplers for three classes of likelihoods: (i) Gaussian, for which updates are fully conjugate; (ii) Binomial and related likelihoods, for which updates are conditionally conjugate through Pólya–Gamma augmentation; and (iii) Black-box likelihoods, for which updates are non-conjugate but admit an analytic truncated elliptical slice sampling routine. We compare BTF against a state-of-the-art method for dynamic Poisson matrix factorization, showing BTF better reconstructs held-out data in synthetic experiments. Finally, we build a dose-response model around BTF and show on real data from a multi-sample, multi-drug cancer study that BTF outperforms the current standard approach in biology. Code for BTF is available at https://github.com/tansey/functionalmf.
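As a sketch of the data-generating structure BTF targets, the snippet below builds a low-rank functional matrix: one embedding per row (biological sample), one embedding per column (drug) that drifts smoothly across a dose grid, and Gaussian noise on top. The random-walk smoothness mechanism, the dimensions, and the noise level are illustrative assumptions; the authors' actual model and Gibbs samplers live at the linked repository.

```python
# Generate a toy functional matrix with low-rank, smooth-in-dose structure.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_drugs, n_doses, rank = 10, 8, 20, 3

U = rng.normal(size=(n_samples, rank))  # row (sample) embeddings

# Drug embeddings evolve as a random walk over the dose grid, so each
# latent curve is smooth; BTF additionally allows sharp local jumps.
V = np.cumsum(0.3 * rng.normal(size=(n_drugs, n_doses, rank)), axis=1)

f = np.einsum("ir,jtr->ijt", U, V)        # latent dose-response curves
Y = f + 0.5 * rng.normal(size=f.shape)    # noisy observed entries
print("functional matrix shape:", Y.shape)  # (samples, drugs, doses)
```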


Top 20 Machine Learning Tools and Frameworks - 21Twelve Interactive

#artificialintelligence

Machine learning is expanding its scope to claim the title of trendiest job market across the globe. Technology experts and various establishments are investing billions into this freshly emerging industry. According to Statista, the chief reason for adopting machine learning technology, cited by 33% of respondents, is its use in business analysis. With such a wealth of opportunities on offer, IT freshers as well as experienced individuals are keen to learn more about the different programming languages and tools needed to establish themselves wholeheartedly in machine learning. Alongside them, there are various non-programmers who have no knowledge of coding and yet want to enter the field of machine learning and remain active in the industry.


Bentley Digs the Digital Twin - Connected World

#artificialintelligence

Sometimes listening to an update call is about more than a revenue report or the recent acquisitions and technology advancements being announced by a company; it's really about the vision of the leader at the helm. And that was my takeaway after the May 13 Bentley Systems Spring Update conference call for press and analysts. If you listened closely enough to Greg Bentley, CEO, Bentley Systems, you could read between the lines and grasp how the company plans to move the construction industry forward: a highly informed construction professional using only the best tech tools to drive infrastructure. According to Bentley, with all the right tools in hand, from machine learning to reality modeling to drone data acquisition, the future is upon us now. And as a result, what role will the "digital integrator" play in making key buying decisions?


Residual Flows for Invertible Generative Modeling

arXiv.org Machine Learning

Flow-based generative models parameterize probability distributions through an invertible transformation and can be trained by maximum likelihood. Invertible residual networks provide a flexible family of transformations where only Lipschitz conditions rather than strict architectural constraints are needed for enforcing invertibility. However, prior work trained invertible residual networks for density estimation by relying on biased log-density estimates whose bias increased with the network's expressiveness. We give a tractable unbiased estimate of the log density, and reduce the memory required during training by a factor of ten. Furthermore, we improve invertible residual blocks by proposing the use of activation functions that avoid gradient saturation and generalizing the Lipschitz condition to induced mixed norms. The resulting approach, called Residual Flows, achieves state-of-the-art performance on density estimation amongst flow-based models, and outperforms networks that use coupling blocks at joint generative and discriminative modeling.
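The log-density correction at the heart of this line of work can be sketched numerically. For x ↦ x + g(x) with Lip(g) < 1, log det(I + J_g) expands as the alternating trace series Σ_{k≥1} (-1)^{k+1} tr(J_g^k)/k, and the traces can be estimated with Hutchinson probes; truncating at a fixed number of terms yields exactly the kind of biased estimate the paper replaces with an unbiased one. The small explicit Jacobian below is an illustrative stand-in for the vector-Jacobian products one would use with a real network.

```python
# Truncated power-series estimate of log det(I + J) with Hutchinson probes.
import numpy as np

rng = np.random.default_rng(0)
d = 5
J = rng.normal(size=(d, d))
J *= 0.5 / np.linalg.norm(J, 2)  # force spectral norm < 1 (Lipschitz cond.)

def logdet_series(J, n_terms=10, n_probes=200):
    d = J.shape[0]
    est = 0.0
    for _ in range(n_probes):
        v = rng.normal(size=d)   # Hutchinson probe: E[v^T A v] = tr(A)
        Jk_v = v
        for k in range(1, n_terms + 1):
            Jk_v = J @ Jk_v      # J^k v by repeated matrix-vector products
            est += (-1) ** (k + 1) / k * (v @ Jk_v)
    return est / n_probes

exact = np.linalg.slogdet(np.eye(d) + J)[1]
print(f"series estimate: {logdet_series(J):.4f}   exact: {exact:.4f}")
```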