Ng announced Tuesday that he raised money from venture capital firms New Enterprise Associates, Sequoia Capital and Greylock Partners, as well as SoftBank Group Corp. Under Ng, Baidu released a voice-based operating system that users can talk to - much like Amazon's Alexa voice assistant or Apple's Siri - and also started working on self-driving cars and face-recognition technology to open things like transit turnstiles when users approach. "I think it's a more systematic, repeatable process than most people think," said Ng, who also taught artificial intelligence courses at Stanford University. The first company to receive money from the fund will be Landing.ai.
With the emergence and rapid development of social networks, huge numbers of short texts are accumulated and need to be processed. Inferring the latent topics of collected short texts is useful for understanding their hidden structure and predicting new content. Unlike conventional topic models such as latent Dirichlet allocation (LDA), the biterm topic model (BTM) was recently proposed for short texts to overcome the sparseness of document-level word co-occurrences by directly modeling the generation process of word pairs. Stochastic inference algorithms based on collapsed Gibbs sampling (CGS) and collapsed variational inference have been proposed for BTM. However, they either incur high computational complexity or rely on very crude estimation. In this work, we develop a stochastic divergence minimization inference algorithm for BTM to estimate latent topics more accurately and in a scalable way. Experiments demonstrate the superiority of the proposed algorithm over existing inference algorithms.
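To make concrete what "directly modeling the generation process of word pairs" means, here is a minimal sketch of the biterm-extraction step that BTM performs on each short text before inference. The function name and example tokens are illustrative, not from any particular BTM implementation; the point is only that every unordered word pair in a document becomes one biterm, so co-occurrence statistics are pooled corpus-wide instead of estimated per sparse document.

```python
from itertools import combinations

def extract_biterms(doc_tokens):
    """Extract all unordered word pairs (biterms) from one short text.

    BTM treats the corpus as a bag of such biterms; each biterm is
    later assigned a topic, which sidesteps the sparse document-level
    word co-occurrences that hurt LDA on short texts.
    """
    # Sort each pair so (w1, w2) and (w2, w1) count as the same biterm.
    return [tuple(sorted(pair)) for pair in combinations(doc_tokens, 2)]

# Example: a tokenized tweet-length document of 4 words yields 6 biterms.
tokens = ["social", "network", "topic", "model"]
print(extract_biterms(tokens))
```

A document of length n contributes n(n-1)/2 biterms, which is why inference cost grows quickly with document length and why scalable stochastic inference, as pursued in this work, matters.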
As intelligent agents become more autonomous, sophisticated, and prevalent, it becomes increasingly important that humans interact with them effectively. Machine learning is now used regularly to acquire expertise, but common techniques produce opaque models whose behavior is difficult to interpret. Before they will be trusted by humans, autonomous agents must be able to explain their decisions and the reasoning that produced their choices. We will refer to this general ability as explainable agency. This capacity for explaining decisions is not an academic exercise. When a self-driving vehicle takes an unfamiliar turn, its passenger may desire to know its reasons. When a synthetic ally in a computer game blocks a player's path, the player may want to understand its purpose. When an autonomous military robot has abandoned a high-priority goal to pursue another one, its commander may request justification. As robots, vehicles, and synthetic characters become more self-reliant, people will require that they explain their behaviors on demand. The more impressive these agents' abilities, the more essential it is that we be able to understand them.
The next time you pull out your smartphone and ask Siri or Google for advice, or chat with a bot online, take pride in knowing that some of the theoretical foundation for that technology was brought to life here in Canada. Indeed, as far back as the early 1980s, key organizations such as the Canadian Institute for Advanced Research embarked on groundbreaking work in neural networks and machine learning. Academic pioneers such as Geoffrey Hinton (now a professor emeritus at the University of Toronto and an advisor to Google, among others), the University of Montreal's Yoshua Bengio and the University of Alberta's Rich Sutton produced critical research that helped fuel Canada's rise to prominence as a global leader in artificial intelligence (AI). Stephen Piron, co-CEO of Dessa, praises the federal government's efforts at cutting immigration processing timelines for highly skilled foreign workers. Canada now houses three major AI clusters – in Toronto, Montreal and Edmonton – that form the backbone of the country's machine-learning ecosystem and support homegrown AI startups.