A Crowd of Computer Scientists Lined Up for Bill Gates--But it Was Gavin Newsom That Got Them Buzzing

IEEE Spectrum Robotics Channel

Stanford University launched its Institute for Human-Centered AI on Monday. Known as Stanford HAI, the institute is chartered to develop new AI technologies, guide AI's impact on the world, wrestle with ethical questions, and help shape public policy. The institute intends to raise US $1 billion to put toward this effort. The university kicked off Stanford HAI (pronounced "high") with an all-day symposium that laid out some of the issues the institute aims to address while showcasing Stanford's current crop of AI researchers. The most anticipated speaker on the agenda was Microsoft co-founder Bill Gates.

Fei-Fei Li Wants AI to Care More About Humans


Fei-Fei Li heard the crackle of a cat's brain cells a couple of decades ago and has never forgotten it. Researchers had inserted electrodes into the animal's brain and connected them to a loudspeaker, filling a lab at Princeton with the eerie sound of firing neurons. "They played the symphony of a mammalian visual system," she told an audience Monday at Stanford, where she is now a professor. The music of the brain helped convince Li to dedicate herself to studying intelligence--a path that led the physics undergraduate to specialize in artificial intelligence and to help catalyze the recent flourishing of AI technology and applications such as self-driving cars. These days, though, Li is concerned that the technology she helped bring to prominence may not always make the world better.

MIT celebrates 50th anniversary of historic moon landing

MIT News

On Sept. 12, 1962, in a speech given in Houston to pump up support for NASA's Apollo program, President John F. Kennedy shook a stadium crowd with the now-famous quote: "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard." As he delivered these lines, engineers in MIT's Instrumentation Laboratory were already taking up the president's challenge. One year earlier, NASA had awarded MIT the first major contract of the Apollo program, charging the Instrumentation Lab with developing the spacecraft's guidance, navigation, and control systems that would shepherd astronauts Michael Collins, Buzz Aldrin, and Neil Armstrong to the moon and back. On July 20, 1969, the hard work of thousands paid off, as Apollo 11 touched down on the lunar surface, safely delivering Armstrong and Aldrin ScD '63 as the first people to land on the moon. On Wednesday, MIT's Department of Aeronautics and Astronautics (AeroAstro) celebrated the 50th anniversary of this historic event with the daylong symposium "Apollo 50+50," featuring former astronauts, engineers, and NASA administrators who examined the legacy of the Apollo program, and MIT faculty, students, industry leaders, and alumni who envisioned what human space exploration might look like in the next 50 years.

Exploring the Multi-facets of Artificial Intelligence


Artificial intelligence (AI) remains the topic of conversation at conferences and throughout the media in 2019. At the Radiological Society of North America (RSNA) 2018 meeting and the recent Healthcare Information and Management Systems Society (HIMSS) conference in Orlando, all eyes were on AI. Contributing Editor Greg Freiherr wrote extensive pre-, at-, and post-show coverage of both events, looking at how the trending technology is being applied, how implementing it can be made achievable, and the steps needed to get there, or at least get a good start. In Freiherr's podcast "Hear and Now: AI and Imaging, Your Data as Strategic Asset," Esteban Rubens, an IT infrastructure architect and executive at Pure Storage, a California company that develops flash data storage hardware and software, acknowledged that there has been "a lot of hype" around medical AI. But he said the hype is giving way to real progress.

Artificial intelligence is changing the workplace. Finding the right employees is more important than ever.


Whether it is being used to guide financial advisors at investment companies or track employee productivity at restaurant chains, artificial intelligence is becoming increasingly common in the workplace. And human resources departments need to make sure employees are ready for the change. "In the age of digital transformation, organizations need to adjust and innovate to stay competitive," said Uwe Hohgrawe, the faculty director of analytics and enterprise intelligence in Northeastern's College of Professional Studies. "And people need to develop new skills to work with AI." Hohgrawe and Carl Zangerl, who directs Northeastern's human resources management program, have organized a symposium to help companies figure out how to handle the human side of incorporating artificial intelligence into the workplace. The Symposium on the Intersection of AI and Talent Strategy will be held on Tuesday, February 12 in Northeastern's Interdisciplinary Science and Engineering Complex.

Nearly Optimal Dynamic $k$-Means Clustering for High-Dimensional Data

We consider the $k$-means clustering problem in the dynamic streaming setting, where points from a discrete Euclidean space $\{1, 2, \ldots, \Delta\}^d$ can be dynamically inserted to or deleted from the dataset. For this problem, we provide a one-pass coreset construction algorithm using space $\tilde{O}(k\cdot \mathrm{poly}(d, \log\Delta))$, where $k$ is the target number of centers. To our knowledge, this is the first dynamic geometric data stream algorithm for $k$-means using space polynomial in dimension and nearly optimal (linear) in $k$.
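The paper's one-pass dynamic construction is involved, but the flavor of a $k$-means coreset can be illustrated with a much simpler, well-known static technique: the lightweight-coreset sampler, which draws points with probability proportional to a mix of uniform mass and squared distance from the data mean, then reweights them so the weighted cost is an unbiased estimate of the full cost. A minimal sketch of that classic static sampler, not the paper's dynamic streaming algorithm; all names and parameters here are illustrative:

```python
import numpy as np

def lightweight_coreset(X, m, rng):
    """Sample a weighted coreset of m points from X (an n x d array).

    Importance sampling: half the probability mass is uniform, half is
    proportional to squared distance from the data mean. The weights
    1/(m*q) make the weighted clustering cost an unbiased estimator of
    the full k-means cost.
    """
    n = len(X)
    mu = X.mean(axis=0)
    d2 = ((X - mu) ** 2).sum(axis=1)
    q = 0.5 / n + 0.5 * d2 / d2.sum()
    idx = rng.choice(n, size=m, p=q)
    return X[idx], 1.0 / (m * q[idx])

def kmeans_cost(P, centers, w=None):
    # Sum of (weighted) squared distances to each point's nearest center.
    d2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
    return d2.sum() if w is None else (w * d2).sum()

# Usage: the coreset cost tracks the full-data cost for candidate centers.
rng = np.random.default_rng(0)
X = rng.standard_normal((20000, 8))
C, w = lightweight_coreset(X, m=4000, rng=rng)
centers = rng.standard_normal((5, 8))
full, approx = kmeans_cost(X, centers), kmeans_cost(C, centers, w)
```

Note the contrast with the abstract: this sampler needs the whole dataset in memory and cannot handle deletions, which is precisely the gap the dynamic streaming construction addresses.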

Building softer, friendlier robots


Oussama Khatib, a professor of computer science at Stanford University, encountered a pivotal moment during the first outing of his deep-sea robot, Ocean One, off the coast of France. The robot was trapped, far too deep for human retrieval, between the cannons of a sunken ship. Weather was threatening to force the robotics crew to return to shore, but Khatib and his team resisted. "No way, I'm not leaving the robot," Khatib said before moving to the haptic controls, which simulate a sense of touch and allow for remote operation. Able to control the robot's arms, Khatib pushed.

Who's Who in AI Today


Now that the Consumer Electronics Show is over, let's shift gears and move our attention from shiny gadgets to grownup stuff, like Artificial Intelligence technology. Before I headed to Las Vegas with my sidekick Brian Santo, editor-in-chief of EDN, both of us fully expected to hear lots of AI talk and see loads of AI-integrated devices at CES. Surprisingly, we encountered less AI buzz than we'd anticipated. Mostly, it was what we already knew, such as AI for voice. Evidently, in initial commercial deployment, AI is focused on a convenient, easy-to-use voice UI for consumer products. Consequently, voice AI supported by Amazon, Google, and Microsoft is popping up everywhere as another product gimmick for CE vendors.



The symposium will take place on 21-25 October 2019 at the Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico. The conference is organized by the Instituto de Ciencias Nucleares, the Coordinación de la Investigación Científica (UNAM), and CERN openlab. The policy aspects will be organized by the OECD. Recent progress in artificial intelligence and machine learning has provided new ways to process large data sets. The new techniques are particularly powerful when dealing with unstructured data or data with complex, non-linear relationships, which are hard to model and analyze with traditional statistical tools.

Iterative Refinement for $\ell_p$-norm Regression

We give improved algorithms for the $\ell_{p}$-regression problem, $\min_{x} \|x\|_{p}$ such that $A x=b,$ for all $p \in (1,2) \cup (2,\infty).$ Our algorithms obtain a high accuracy solution in $\tilde{O}_{p}(m^{\frac{|p-2|}{2p + |p-2|}}) \le \tilde{O}_{p}(m^{\frac{1}{3}})$ iterations, where each iteration requires solving an $m \times m$ linear system, $m$ being the dimension of the ambient space. By maintaining an approximate inverse of the linear systems that we solve in each iteration, we give algorithms for solving $\ell_{p}$-regression to $1 / \text{poly}(n)$ accuracy that run in time $\tilde{O}_p(m^{\max\{\omega, 7/3\}}),$ where $\omega$ is the matrix multiplication constant. For the current best value of $\omega > 2.37$, we can thus solve $\ell_{p}$-regression as fast as $\ell_{2}$-regression, for all constant $p$ bounded away from $1$. Our algorithms can be combined with fast graph Laplacian linear equation solvers to give minimum $\ell_{p}$-norm flow / voltage solutions to $1 / \text{poly}(n)$ accuracy on an undirected graph with $m$ edges in $\tilde{O}_{p}(m^{1 + \frac{|p-2|}{2p + |p-2|}}) \le \tilde{O}_{p}(m^{\frac{4}{3}})$ time. For sparse graphs and for matrices with similar dimensions, our iteration counts and running times improve on the $p$-norm regression algorithm by [Bubeck-Cohen-Lee-Li STOC '18] and general-purpose convex optimization algorithms. At the core of our algorithms is an iterative refinement scheme for $\ell_{p}$-norms, using the smoothed $\ell_{p}$-norms introduced in the work of Bubeck et al. Given an initial solution, we construct a problem that seeks to minimize a quadratically-smoothed $\ell_{p}$ norm over a subspace, such that a crude solution to this problem allows us to improve the initial solution by a constant factor, leading to algorithms with fast convergence.
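The paper's speedups come from its specialized iterative refinement scheme and smoothed $\ell_p$-norms, but the underlying problem $\min_x \|x\|_p$ s.t. $Ax=b$ can also be approached with the classic iteratively reweighted least squares (IRLS) heuristic: repeatedly solve a weighted $\ell_2$ projection with weights derived from the current iterate. A minimal sketch of that classic method (not the paper's algorithm; it lacks the paper's convergence guarantees, and the iteration count and smoothing constant here are illustrative):

```python
import numpy as np

def lp_min_irls(A, b, p, iters=200, eps=1e-10):
    """Approximately minimize ||x||_p subject to A x = b (A is m x n, m < n).

    Each IRLS step solves min x^T W x s.t. Ax = b, where
    W = diag((x_i^2 + eps)^{(p-2)/2}) is the standard quadratic surrogate
    for the l_p norm; its KKT solution is x = W^{-1} A^T (A W^{-1} A^T)^{-1} b.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # min l2-norm start
    for _ in range(iters):
        winv = (x ** 2 + eps) ** ((2 - p) / 2)      # diagonal of W^{-1}
        lam = np.linalg.solve((A * winv) @ A.T, b)  # (A W^{-1} A^T) lam = b
        x = winv * (A.T @ lam)                      # feasible by construction
    return x

# Usage: for p = 1.5 the result stays feasible and has l_p norm no worse
# than the minimum-l2-norm solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 20))
b = A @ rng.standard_normal(20)
x15 = lp_min_irls(A, b, p=1.5)
x2 = np.linalg.lstsq(A, b, rcond=None)[0]
```

IRLS is reliable in practice for $p$ between 1 and 2 but can need damping for large $p$; the abstract's refinement scheme is exactly the kind of machinery that turns this idea into provably fast convergence.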