These venture bets on startups that "returned the fund," making firms and careers, were the result of research, strong conviction, and patient follow-through. Here are the stories behind the biggest VC home runs of all time. In venture capital, returns follow the Pareto principle: 80% of the wins come from 20% of the deals. Great venture capitalists invest knowing they will take many losses in order to hit those wins. Chris Dixon of top venture firm Andreessen Horowitz has called this the "Babe Ruth effect," after the legendary 1920s-era baseball player. Babe Ruth struck out a lot, but he also set slugging records. Likewise, VCs swing hard and occasionally hit a home run. Those wins often make up for all the losses and then some; they "return the fund." "If you do the math around our goal of returning the fund with our high impact companies, you will notice that we need these companies to exit at a billion dollars or more," he wrote.
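The "math" behind returning the fund can be sketched with hypothetical numbers (the fund size and ownership stake below are illustrative assumptions, not figures from the article):

```python
# Hypothetical numbers, not from the article: what "returning the fund"
# demands of a single exit.
fund_size = 100_000_000   # a $100M fund (assumed)
ownership_pct = 10        # fund owns 10% of the startup at exit (assumed)

# The exit valuation at which the fund's stake alone pays back the whole fund:
exit_needed = fund_size * 100 // ownership_pct
print(exit_needed)  # 1000000000: a $1B exit returns the entire fund
```

With a typical 10% stake, only a billion-dollar exit returns a $100M fund, which is why VCs tolerate many strikeouts while hunting for that one home run.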
Europe is a hotbed of AI innovation. Here are 25 AI start-ups to watch in 2017 and beyond. There are literally hundreds of promising companies pushing the boundaries of artificial intelligence and machine learning in Europe. We've also included a number of Israel-based start-ups, because they fall within the sphere of influence of European investors. And, judging by recent acquisitions of Israel-based machine vision and AI companies by players like Apple and Intel, they are definitely producing the goods.
Blockchain may have been the obvious pick for the fintech subcategory most likely to gain traction this year, but the rise of a more reliable ledger system was always going to pave the way for companies able to exploit the information gleaned from those ledgers. Whether through newly collected information or data banks that have been sitting idle in the caches of various big corporations, artificial intelligence (AI) has risen and may shape the future of fintech. McKinsey estimates that AI could create between US$3.5 trillion and US$5.8 trillion in value annually across nine business functions in 19 industries. In Asia, Hong Kong's AI sector is getting a slice of a HK$50 billion allocation in the 2019 budget, and the Monetary Authority of Singapore has a standing US$27 million AI grant, AIDA, furthering the island nation's drive into artificial intelligence. With the world waking up to the scene, we have compiled a list of AI companies to watch that have a stake in fintech.
U.S. stocks inched higher Tuesday in another cautious day of trading as investors kept an eye on central banks in the U.S. and Japan. Healthcare and household goods companies led the way, while energy companies slipped. Major market indexes were higher all day but gave back most of those gains by the close of trading, rising just enough to cancel out Monday's small losses. Drug companies helped healthcare stocks make modest gains, while Exxon Mobil fell on reports that it is being investigated by securities regulators.
Methods for learning Bayesian network structure can discover dependency structure between observed variables, and have been shown to be useful in many applications. However, in domains that involve a large number of variables, the space of possible network structures is enormous, making it difficult, for both computational and statistical reasons, to identify a good model. In this paper, we consider a solution to this problem, suitable for domains where many variables have similar behavior. Our method is based on a new class of models, which we call module networks. A module network explicitly represents the notion of a module - a set of variables that have the same parents in the network and share the same conditional probability distribution. We define the semantics of module networks, and describe an algorithm that learns a module network from data. The algorithm learns both the partitioning of the variables into modules and the dependency structure between the variables. We evaluate our algorithm on synthetic data, and on real data in the domains of gene expression and the stock market. Our results show that module networks generalize better than Bayesian networks, and that the learned module network structure reveals regularities that are obscured in learned Bayesian networks.
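The core idea, a module as a set of variables sharing the same parents and the same conditional probability distribution, can be sketched as a minimal data structure. This is an illustrative sketch only (the variable names, gene/regulator labels, and the two-parameter CPD are assumptions, not the paper's actual representation or learning algorithm):

```python
# Minimal sketch of a module network's key object: a module groups variables
# that share one parent set and one conditional probability distribution (CPD).
# All names and numbers here are illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Module:
    variables: list                          # variables assigned to this module
    parents: list                            # parent set shared by all of them
    cpd: dict = field(default_factory=dict)  # shared CPD: parent state -> distribution

# Three co-regulated genes form one module with a single shared parent.
m = Module(
    variables=["gene_A", "gene_B", "gene_C"],
    parents=["regulator_X"],
    cpd={"high": {"up": 0.8, "down": 0.2},
         "low":  {"up": 0.1, "down": 0.9}},
)

# Parameter-sharing intuition: a plain Bayesian network would store a separate
# CPD per variable; the module network stores one CPD for the whole module.
params_bn = len(m.variables) * len(m.cpd) * 2   # per-variable CPD entries
params_mn = len(m.cpd) * 2                      # one shared CPD
print(params_bn, params_mn)  # 12 4
```

The shared CPD is what gives module networks their statistical advantage in high-dimensional domains: fewer free parameters to estimate from the same data, which is consistent with the generalization results reported in the abstract.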