London Underground


Tomography of the London Underground: a Scalable Model for Origin-Destination Data

Neural Information Processing Systems

The paper addresses the classical network tomography problem of inferring local traffic given origin-destination observations. Focussing on large, complex public transportation systems, we build a scalable model that exploits input-output information to estimate the unobserved link/station loads and the users' path preferences. Based on the reconstruction of the users' travel time distribution, the model is flexible enough to capture different possible path-choice strategies and correlations between users travelling on similar paths at similar times. The corresponding likelihood function is intractable for medium- or large-scale networks, and we propose two distinct strategies: the exact maximum-likelihood inference of an approximate but tractable model, and the variational inference of the original intractable model. As an application of our approach, we consider the emblematic case of the London Underground network, where a tap-in/tap-out system tracks the start/exit time and location of all journeys in a day. A set of synthetic simulations and real data provided by Transport for London are used to validate and test the model on the predictions of observable and unobservable quantities.
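As a toy illustration of the kind of generative process such a model inverts (the station names, routes, and delay rates below are invented for the sketch and are not the authors' model), origin-destination travel times can be simulated as sums of per-link delays along a randomly chosen path:

```python
import random

# Toy illustration (not the authors' model): an observed origin-destination
# travel time is the sum of per-link delays along one of several routes.
# All names, rates, and the route preference are assumptions for the sketch.

random.seed(0)

# Candidate routes from origin "A" to destination "D", each a list of links.
paths = {
    ("A", "D"): [
        ["A-B", "B-D"],   # route 1
        ["A-C", "C-D"],   # route 2
    ],
}
mean_delay = {"A-B": 3.0, "B-D": 4.0, "A-C": 2.0, "C-D": 6.0}  # minutes

def simulate_journey(origin, dest, route_pref=0.5):
    """Pick route 1 with probability route_pref, then sum independent
    exponential per-link delays to get the observed travel time."""
    route = paths[(origin, dest)][0 if random.random() < route_pref else 1]
    return sum(random.expovariate(1.0 / mean_delay[link]) for link in route)

times = [simulate_journey("A", "D") for _ in range(10000)]
print(round(sum(times) / len(times), 2))  # sample mean travel time A -> D
```

The tomography problem runs this in reverse: given only the observed `times`, infer the per-link loads and the route preference.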


Reviews: Tomography of the London Underground: a Scalable Model for Origin-Destination Data

Neural Information Processing Systems

I thank the authors for the clarification in their rebuttal. It is now even clearer that the authors should better contrast their work with aggregate approaches such as Dan Sheldon's collective graphical models (e.g., Sheldon and Dietterich (2011), Kumar et al. (2013), Bernstein and Sheldon (2016)). Part of the confusion came from some of the modeling choices: in equation (1), the travel time added by one station is Poisson distributed?! Poisson is often used for link loads (how many people there are in a given station), not to model time. Is the quantization of time too coarse for a continuous-time model? Wouldn't a phase-type distribution (e.g., Erlang) be a better choice for time? Such modeling choices must be explained.
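The reviewer's distinction can be made concrete with a small sketch (the mean delay and Erlang shape below are arbitrary illustrative values, not taken from the paper): a Poisson draw produces integer counts, whereas an Erlang draw, i.e., a gamma distribution with integer shape, yields a continuous travel time with the same mean.

```python
import math
import random

# Illustrative sketch of the reviewer's point: Poisson gives integer
# "ticks", while an Erlang (gamma with integer shape k) gives a
# continuous time with the same mean. Values below are assumptions.
random.seed(1)

mean_time = 4.0   # assumed mean travel time, in minutes
k = 4             # Erlang shape: number of exponential phases

def poisson(lam):
    """Knuth's algorithm: multiply uniforms until the product drops
    below exp(-lam); the count of extra factors is Poisson(lam)."""
    threshold, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= random.random()
        if p <= threshold:
            return n
        n += 1

poisson_draws = [poisson(mean_time) for _ in range(5000)]
erlang_draws = [random.gammavariate(k, mean_time / k) for _ in range(5000)]

print(all(isinstance(x, int) for x in poisson_draws))   # True: integer times
print(round(sum(erlang_draws) / len(erlang_draws), 1))  # close to mean_time
```

The Erlang family is the simplest phase-type distribution: a sum of `k` exponential stages, which keeps continuous time while remaining analytically tractable.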


London Underground Is Testing Real-Time AI Surveillance Tools to Spot Crime

WIRED

Thousands of people using the London Underground had their movements, behavior, and body language watched by AI surveillance software designed to see if they were committing crimes or were in unsafe situations, new documents obtained by WIRED reveal. The machine learning software was combined with live CCTV footage to detect aggressive behavior and brandished guns or knives, and to spot people falling onto Tube tracks or dodging fares. From October 2022 until the end of September 2023, Transport for London (TfL), which operates the city's Tube and bus network, tested 11 algorithms to monitor people passing through Willesden Green Tube station, in the northwest of the city. The proof-of-concept trial is the first time the transport body has combined AI and live video footage to generate alerts sent to frontline staff. More than 44,000 alerts were issued during the test, with 19,000 delivered to station staff in real time.


Tomography of the London Underground: a Scalable Model for Origin-Destination Data

Colombo, Nicolò, Silva, Ricardo, Kang, Soong Moon

Neural Information Processing Systems



Mobile phones could soon work properly on London Underground

The Independent - Tech

Tube users could soon get phone signal underground, it's been reported. TfL is said to be in talks with telecommunications groups, and could offer full mobile coverage on London Underground after next week's general election. Commuters can already access the internet on sections of the tube network by connecting to Virgin Media's Wi-Fi service, but it isn't especially practical.


Machine learning versus AI: what's the difference?

#artificialintelligence

Thanks to the likes of Google, Amazon, and Facebook, the terms artificial intelligence (AI) and machine learning have become much more widespread than ever before. They are often used interchangeably and promise all sorts, from smarter home appliances to robots taking our jobs. But while AI and machine learning are very much related, they are not quite the same thing. AI is a branch of computer science attempting to build machines capable of intelligent behaviour, while Stanford University defines machine learning as "the science of getting computers to act without being explicitly programmed". You need AI researchers to build the smart machines, but you need machine learning experts to make them truly intelligent.


Artificial Intelligence: Google's DeepMind Creates Neural Network That Can 'Logically Reason' Its Way Around London Underground

International Business Times

Conventional neural networks struggle to store knowledge and reason with it over time, a problem for scientists working toward the creation of Artificial Intelligence (AI) systems capable of performing complex tasks with minimal human supervision. In a step toward overcoming this hurdle, researchers at Google's DeepMind -- the company that developed the Go-playing computer program AlphaGo -- announced earlier this week the creation of a neural network that can not only learn, but can also use data stored in its memory to "logically reason" and make inferences to answer questions. DeepMind's new system -- called a Differentiable Neural Computer (DNC) -- combines deep learning, wherein it can learn from examples and make sense of complex input it has never received before, with an external memory, which, as the DeepMind researchers Alexander Graves and Greg Wayne explain in a blog post, allows it to "store knowledge quickly and reason about it flexibly." In order to achieve this, the researchers first trained the neural network using randomly generated map-like structures -- a process that allowed the DNC to learn how to store connections between various parts in its external memory. After this, when it was confronted with a new map, the DNC was able to provide answers that were not explicitly stated in the data set.


Google's new artificial intelligence maps the London underground

#artificialintelligence

Scientists at Google have created an artificial intelligence program that can compute problems requiring strategic reasoning, The Guardian reports. The algorithm, part of an emerging field called deep learning, is able to master tasks independently using external memory, similar to the way humans work through a new recipe, according to the study published in Nature. In this case, it was able to figure out on its own the quickest route between stops on the London Underground and reassess if the destination was overshot. This could pave the way to more efficient virtual assistant applications, which might be bad news for Apple's sassy sidekick.


Google's AI can now learn from its own memory independently

#artificialintelligence

The DeepMind artificial intelligence (AI) being developed by Google's parent company, Alphabet, can now intelligently build on what's already inside its memory, the system's programmers have announced. Their new hybrid system -- called a Differentiable Neural Computer (DNC) -- pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank. What the DNC is doing is effectively combining external memory (like the external hard drive where all your photos get stored) with the neural network approach of AI, where a massive number of interconnected nodes work dynamically to simulate a brain. "These models... can learn from examples like neural networks, but they can also store complex data like computers," write DeepMind researchers Alexander Graves and Greg Wayne in a blog post. At the heart of the DNC is a controller that constantly optimises its responses, comparing its results with the desired and correct ones.


Google's DeepMind gives an AI human-like memory to solve tough problems

#artificialintelligence

With the advances of modern data storage technology, chips the size of your fingernail are capable of storing an entire library's worth of knowledge, so one thing you might think computers do better than people is remember things. But according to Google Inc.'s DeepMind team, the artificial intelligence research group that developed AlphaGo, that is not entirely true. In a new paper published in the journal Nature, DeepMind has outlined a process where it trained a neural network to have human-like memory, giving it not only the ability to store data, but also to recall that information and use it to solve novel problems. "Neural networks excel at pattern recognition and quick, reactive decision-making, but we are only just beginning to build neural networks that can think slowly – that is, deliberate or reason using knowledge," the DeepMind team wrote in a recent blog post. "For example, how could a neural network store memories for facts like the connections in a transport network and then logically reason about its pieces of knowledge to answer questions?" DeepMind calls its new method differentiable neural computers, and the team demonstrated its capabilities using the London Underground, one of the largest public transit systems in the world.
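For contrast with the learned approach, the route-finding task the DNC was demonstrated on is classically solved by graph search. A minimal sketch using breadth-first search on a toy station graph (the links below are an invented, simplified subset and do not reflect the real network):

```python
from collections import deque

# Hedged sketch: the fewest-stops route task, solved here by ordinary
# breadth-first search. The adjacency list is a made-up toy subset of
# London Underground stations, not the real connection data.
links = {
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Green Park": ["Oxford Circus", "Victoria", "Westminster"],
    "Baker Street": ["Bond Street"],
    "Holborn": ["Tottenham Court Road", "Bank"],
    "Victoria": ["Green Park"],
    "Westminster": ["Green Park", "Bank"],
    "Bank": ["Holborn", "Westminster"],
}

def shortest_route(start, goal):
    """Return the fewest-stops route from start to goal via BFS,
    as a list of station names, or None if unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route("Victoria", "Bank"))
```

The point of the DNC work is that nothing like this search procedure is hand-coded: the network learns to store the connections in its external memory and answer route queries from examples alone.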