Do All Good Actors Look The Same? Exploring News Veracity Detection Across The U.S. and The U.K.

arXiv.org Machine Learning

A major concern with text-based news veracity detection methods is that they may not generalize across countries and cultures. In this short paper, we explicitly test news veracity models across news data from the United States and the United Kingdom, demonstrating there is reason for concern about generalizability. Through a series of testing scenarios, we show that text-based classifiers perform poorly when trained on one country's news data and tested on another's. Furthermore, these same models have trouble classifying unseen, unreliable news sources. In conclusion, we discuss the implications of these results and avenues for future work.
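As a concrete illustration of the testing scenario described above, the following is a minimal sketch of a cross-country evaluation, assuming a scikit-learn-style text classifier (TfidfVectorizer plus LogisticRegression); the paper's actual models, features, and data are not reproduced here, and the toy articles and labels are purely hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Hypothetical stand-ins for the two corpora; labels: 1 = reliable, 0 = unreliable.
us_articles = ["officials confirmed the report on tuesday",
               "shocking secret cure they do not want you to know"]
us_labels = [1, 0]
uk_articles = ["miracle diet trick stuns doctors worldwide",
               "the committee published its findings this morning"]
uk_labels = [0, 1]

# Fit the vocabulary and classifier on one country's news only.
vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(us_articles), us_labels)

# Evaluate on the other country's news: the cross-country transfer scenario.
preds = clf.predict(vectorizer.transform(uk_articles))
print("US -> UK F1:", f1_score(uk_labels, preds))
```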


Explorium secures $19M funding to automate data science and machine learning-driven insights ZDNet

#artificialintelligence

Machine learning is a powerful paradigm many organizations are utilizing to derive insights and add features to their applications, but using it requires skills, data, and effort. Explorium, a startup from Israel, has just announced $19 million of funding to lower the barrier on all of the above. The funding announced today comprises a seed round of $3.6 million led by Emerge with the participation of F2 Capital and a $15.5 million Series A led by Zeev Ventures with the involvement of the seed investors. Explorium was founded by Maor Shlomo, Or Tamir, and Omer Har, three Israeli tech entrepreneurs, who previously led large-scale data mining and optimization platforms for big data-based marketing leaders. "We are doing for machine learning data what search engines did for the web," said Explorium co-founder and CEO Maor Shlomo.


Learning Semantic Definitions of Online Information Sources

Journal of Artificial Intelligence Research

The Internet contains a very large number of information sources providing many types of data, from weather forecasts to travel deals and financial information. These sources can be accessed via Web forms, Web Services, RSS feeds, and so on. In order to make automated use of these sources, we need to model them semantically, but writing semantic descriptions for Web Services is both tedious and error-prone. In this paper we investigate the problem of automatically generating such models. We introduce a framework for learning Datalog definitions of Web sources.
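To make the idea of a Datalog definition of a Web source concrete, here is a small hedged sketch in Python that represents one such learned description as data and renders it as a rule; the predicate and attribute names (weatherSource, centroid, forecast, convertCtoF) are illustrative assumptions, not drawn from the paper.

```python
# A source that maps a zip code to a Fahrenheit temperature could be
# described by the conjunctive Datalog rule built below.
source_definition = {
    "head": ("weatherSource", ["Zip", "TempF"]),
    "body": [
        ("centroid", ["Zip", "Lat", "Long"]),       # zip code -> coordinates
        ("forecast", ["Lat", "Long", "TempC"]),     # coordinates -> Celsius temp
        ("convertCtoF", ["TempC", "TempF"]),        # unit conversion
    ],
}

def to_datalog(rule):
    """Render the structured rule as Datalog text."""
    head = f'{rule["head"][0]}({", ".join(rule["head"][1])})'
    body = ", ".join(f'{pred}({", ".join(args)})' for pred, args in rule["body"])
    return f"{head} :- {body}."

# weatherSource(Zip, TempF) :- centroid(Zip, Lat, Long), forecast(...), ...
print(to_datalog(source_definition))
```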


Pentaho Ends Machine Learning Gridlock

#artificialintelligence

Pentaho helps data scientists and engineers easily prepare and blend traditional sources like ERP and EAM with big data sources like sensors and social media. Pentaho also accelerates the notoriously difficult and costly task of feature engineering by automating data onboarding, data transformation and data validation in an easy-to-use drag-and-drop environment. Model training, tuning and testing: data scientists often apply trial and error to strike the right balance of complexity, performance and accuracy in their models. With integrations for languages like R and Python, and for machine learning packages like Spark MLlib and Weka, Pentaho allows data scientists to seamlessly train, tune, build and test models faster. Model deployment and operationalization: a completely trained, tuned and tested machine learning model still needs to be deployed.
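As a generic stand-in for the train/tune/test loop described above, the sketch below uses scikit-learn's GridSearchCV on synthetic data to show the "trial and error over complexity settings" step; Pentaho's own tooling wraps engines such as Spark MLlib and Weka, so nothing here is its actual API.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic data in place of prepared, blended features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Tune: search over complexity settings to balance accuracy and overfitting.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 10, None]},
    cv=3,
)
grid.fit(X_train, y_train)

# Test: report held-out performance of the best configuration found.
print("best params:", grid.best_params_)
print("held-out accuracy:", grid.score(X_test, y_test))
```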


Multi-Source Neural Variational Inference

arXiv.org Machine Learning

Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of the various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between the individual sources' posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase the robustness of inference in a multi-source setting.
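The following numpy sketch illustrates the underlying idea under stated assumptions: two source-specific encoders output diagonal Gaussian posteriors over shared latents, the KL divergence between them serves as a divergence (conflict) measure, and a precision-weighted product of experts fuses them into one consistent posterior. The product-of-experts rule is one common integration choice, not necessarily the one the paper settles on, and the toy values are hypothetical.

```python
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL(N(mu1, var1) || N(mu2, var2)) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# Posterior beliefs from two source-specific encoders (toy values).
mu_a, var_a = np.array([0.2, -1.0]), np.array([0.5, 0.3])
mu_b, var_b = np.array([0.4, -0.8]), np.array([0.6, 0.4])

# Disagreement between sources: divergence between their posteriors.
print("conflict (KL):", kl_diag_gauss(mu_a, var_a, mu_b, var_b))

# Precision-weighted product-of-experts fusion into one consistent posterior.
prec = 1.0 / var_a + 1.0 / var_b
var_joint = 1.0 / prec
mu_joint = var_joint * (mu_a / var_a + mu_b / var_b)
print("fused mean:", mu_joint, "fused var:", var_joint)
```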