Beijing – From quoting the national anthem to referencing Hollywood blockbusters and George Orwell's dystopian novel "1984," Chinese web users are using creative methods to dodge censorship and voice discontent over COVID-19 measures. China maintains a tight grip over the internet, with legions of censors scrubbing out posts that cast the Communist Party's policies in a negative light. The censorship machine is now in overdrive to defend Beijing's stringent "COVID zero" policy as the business hub of Shanghai endures weeks of lockdown to tackle an outbreak. Stuck at home, many of the city's 25 million residents have taken to social media to vent fury over food shortages and spartan quarantine conditions. Charlie Smith, co-founder of censorship monitoring website GreatFire.org, said the Shanghai lockdown had become "too big of an issue to be able to completely censor."
The city of Shanghai allowed 4 million more people out of their homes on Wednesday as anti-virus controls that shut down China's biggest city – home to 25 million – eased. Nearly 12 million are permitted to go outdoors, according to health official Wu Ganyu. Wu told reporters that the virus was "under effective control" for the first time in some parts of the city.
Recently, I was asked to be the General Co-Chair for the IEEE International Conference on Connected Vehicles (ICCVE). Founded a decade ago with academic roots, the 2022 edition extended beyond the academic model with a significant industry and regulatory emphasis, and concurrent physical locations in Shanghai, Pune, and Munich. The conference's decade-long history, together with the worldwide activity it assembled, offered an opportunity to reflect on progress beyond the noise of the tactical day-to-day headlines. Where do we really stand with AI technology? How did we get here?
The impressive generative capacity of large-scale pretrained language models (PLMs) has inspired machine learning researchers to explore methods for generating model training examples via PLMs and data augmentation procedures, i.e., dataset generation. A novel contribution in this research direction is proposed in the new paper ZeroGen: Efficient Zero-shot Learning via Dataset Generation, from researchers at the University of Hong Kong, Shanghai AI Lab, Huawei Noah's Ark Lab and the University of Washington. The team describes their proposed ZEROGEN as an "extreme instance" of dataset generation via PLMs for zero-shot learning. ZEROGEN is a framework for prompt-based zero-shot learning (PROMPTING). Unlike existing approaches that rely on gigantic PLMs during inference, ZEROGEN introduces a more flexible and efficient approach for conducting zero-shot learning with PLMs.
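The core idea — prompting a PLM with each class label and collecting its completions as synthetic training pairs — can be sketched in a few lines. This is a minimal illustration only: the `fake_plm_sample` function below is a hypothetical stand-in for a real PLM `generate()` call, and the prompt template and label names are assumptions, not the paper's actual setup.

```python
import random

def fake_plm_sample(prompt, rng):
    # Hypothetical stand-in for sampling a completion from a PLM;
    # a real pipeline would call a language model's generate() here.
    templates = {
        "positive": ["a heartfelt and moving film", "an absolute delight"],
        "negative": ["a dull, lifeless mess", "two wasted hours"],
    }
    label = prompt.split('"')[1]  # recover the label word embedded in the prompt
    return rng.choice(templates[label])

def zero_shot_dataset(labels, per_label, seed=0):
    """ZeroGen-style idea: condition the PLM on each class label via a
    prompt and treat its completions as synthetic (text, label) pairs,
    which can then train a small task model with no human-labeled data."""
    rng = random.Random(seed)
    data = []
    for label in labels:
        prompt = f'The movie review in "{label}" sentiment is:'
        for _ in range(per_label):
            data.append((fake_plm_sample(prompt, rng), label))
    return data

dataset = zero_shot_dataset(["positive", "negative"], per_label=3)
for text, label in dataset:
    print(label, "->", text)
```

The efficiency claim in the abstract follows from this split: the gigantic PLM is only needed once, at dataset-generation time, while inference runs on whatever small model is trained on the synthetic pairs.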
Wen Li, a Shanghai marketer in the hospitality industry, first suspected that an algorithm was messing with her when she and a friend used the same ride-hailing app one evening. Wen's friend, who less frequently ordered rides in luxury cars, saw a lower price for the same ride. Wen blamed the company's algorithms, saying they wanted to squeeze more money from her. Chinese ride-hailing companies say prices vary because of fluctuations in traffic. But some studies and news reports claim the apps may offer different prices based on factors including ride history and the phone a person is using. "I mean, come on, just admit you are an internet company and this is what you do to make extra profit," Wen says.
AutoX expands RoboTaxi empire to San Francisco. If the race for autonomous vehicles is measured in absolute numbers, a company that's been surprisingly successful navigating real-world rollouts in both China and the U.S. is winning. AutoX now counts more than 1,000 Level 4 autonomous RoboTaxis in operation in China, and it's been a surprise front runner in U.S. L4 testbeds as well. The thousand-vehicle milestone comes as AutoX is riding a wave of recent announcements. In July 2021, AutoX's newest Gen5 system-equipped RoboTaxis started rolling off the production line. More recently, in January 2022, AutoX shared a video giving an inside look at its dedicated end-of-line production facility for Level 4 fully driverless RoboTaxis, located near Shanghai, China.
Car-following refers to a control process in which the following vehicle (FV) tries to keep a safe distance between itself and the lead vehicle (LV) by adjusting its acceleration in response to the actions of the vehicle ahead. The corresponding car-following models, which describe how one vehicle follows another vehicle in the traffic flow, form the cornerstone for microscopic traffic simulation and intelligent vehicle development. One major motivation of car-following models is to replicate human drivers' longitudinal driving trajectories. To model the long-term dependency of future actions on historical driving situations, we developed a long-sequence car-following trajectory prediction model based on the attention-based Transformer model. The model follows a general format of encoder-decoder architecture. The encoder takes historical speed and spacing data as inputs and forms a mixed representation of historical driving context using multi-head self-attention. The decoder takes the future LV speed profile as input and outputs the predicted future FV speed profile in a generative way (instead of an auto-regressive way, avoiding compounding errors). Through cross-attention between encoder and decoder, the decoder learns to build a connection between historical driving and future LV speed, based on which a prediction of future FV speed can be obtained. We train and test our model with 112,597 real-world car-following events extracted from the Shanghai Naturalistic Driving Study (SH-NDS). Results show that the model outperforms the traditional intelligent driver model (IDM), a fully connected neural network model, and a long short-term memory (LSTM) based model in terms of long-sequence trajectory prediction accuracy. We also visualized the self-attention and cross-attention heatmaps to explain how the model derives its predictions.
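The cross-attention step described above — each future LV speed step querying the encoded driving history — reduces to scaled dot-product attention, softmax(QKᵀ/√d)V. Below is a minimal NumPy sketch of that one operation; the sequence lengths, model width, and random inputs are illustrative assumptions, and the real model adds learned projections, multiple heads, and positional encodings.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation behind the self- and cross-attention in the
    Transformer: softmax(Q K^T / sqrt(d)) V, with a numerically
    stable row-wise softmax."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
d_model = 16  # assumed model width for illustration
history = rng.normal(size=(50, d_model))    # encoder output: 50 historical speed/spacing steps
future_lv = rng.normal(size=(30, d_model))  # decoder input: 30 future LV speed steps

# Cross-attention: every future step attends over the encoded history,
# yielding the context from which the FV speed prediction is read out.
context, attn = scaled_dot_product_attention(future_lv, history, history)
print(context.shape)  # (30, 16)
print(attn.shape)     # (30, 50)
```

Because the decoder consumes the whole future LV profile at once and emits the whole FV profile generatively, no predicted step is fed back as input, which is what avoids the compounding errors of auto-regressive decoding.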
There is increasing evidence suggesting neural networks' sensitivity to distribution shifts, so that research on out-of-distribution (OOD) generalization comes into the spotlight. Nonetheless, current endeavors mostly focus on Euclidean data, and its formulation for graph-structured data is not clear and remains under-explored, given the two-fold fundamental challenges: 1) the inter-connection among nodes in one graph, which induces non-IID generation of data points even under the same environment, and 2) the structural information in the input graph, which is also informative for prediction. In this paper, we formulate the OOD problem for node-level prediction on graphs and develop a new domain-invariant learning approach, named Explore-to-Extrapolate Risk Minimization, that facilitates GNNs to leverage invariant graph features for prediction. The key difference to existing invariant models is that we design multiple context explorers (specified as graph editors in our case) that are adversarially trained to maximize the variance of risks from multiple virtual environments. Such a design enables the model to extrapolate from a single observed environment, which is the common case for node-level prediction. We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution, and further demonstrate its power on various real-world datasets for handling distribution shifts from artificial spurious features, cross-domain transfers and dynamic graph evolution. As the demand for handling in-the-wild unseen instances draws increasing concern, out-of-distribution (OOD) generalization (Mansour et al., 2009; Blanchard et al., 2011; Muandet et al., 2013; Gong et al., 2016) occupies a central role in the ML community. Yet, recent evidence suggests that deep neural networks can be sensitive to distribution shifts, exhibiting unsatisfactory performance within new environments, e.g., Beery et al. (2018); Su et al. (2019); Recht et al. 
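The adversarial interplay described above centers on one quantity: the variance of risks across the K virtual environments that the graph editors produce. A tiny sketch of that variance-regularized objective follows; the `beta` weighting and the sample risk values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def eerm_objective(env_risks, beta=1.0):
    """Sketch of a variance-regularized risk in the spirit of
    Explore-to-Extrapolate Risk Minimization: the predictor minimizes
    the mean risk plus the variance of risks across K virtual
    environments, while the graph editors are adversarially trained to
    *maximize* that variance. `beta` is an assumed trade-off weight."""
    risks = np.asarray(env_risks, dtype=float)
    return risks.mean() + beta * risks.var()

# Risks from K=4 hypothetical virtual environments produced by graph editors.
flat = eerm_objective([0.30, 0.32, 0.31, 0.29])   # low variance across environments
spread = eerm_objective([0.10, 0.60, 0.05, 0.47])  # same mean risk, high variance
print(flat, spread)
```

The adversarial explorers push toward risk profiles like the second one, exposing environment-sensitive (spurious) features; the predictor, penalized by the variance term, is driven back toward features whose risk is invariant across environments.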
(2019); Mancini et al. (2020). A more concerning example is that a model for COVID-19 detection exploits undesired 'shortcuts' from data sources (e.g., hospitals) to boost training accuracy (DeGrave et al., 2020). Recent studies of the OOD generalization problem, like Rojas-Carulla et al. (2018); Bühlmann (2018); Gong et al. (2016); Arjovsky et al. (2019), treat the cause of distribution shifts between training and testing data as a potential unknown environmental variable e. Such a problem is hard to solve since the observations in training data cannot cover all the environments in practice. Recent research opens a new possibility via learning domain-invariant models (Arjovsky et al., 2019) under a cornerstone data-generating assumption: there exists a portion of information in x that is invariant for prediction on y across different environments. (This work was done during the authors' internship at AWS Shanghai AI Lab.)
In this work, we explore character-level neural syntactic parsing for Chinese with two typical syntactic formalisms: the constituent formalism and a dependency formalism based on a newly released character-level dependency treebank. Prior works in Chinese parsing have struggled with whether to define words when modeling character interactions. We choose to integrate full character-level syntactic dependency relationships using neural representations from character embeddings and richer linguistic syntactic information from human-annotated character-level Parts-Of-Speech and dependency labels. This has the potential to better understand the deeper structure of Chinese sentences and provides a better structural formalism for avoiding unnecessary structural ambiguities. Specifically, we first compare two different character-level syntax annotation styles: constituency and dependency. Then, we discuss two key problems for character-level parsing: (1) how to combine constituent and dependency syntactic structure in full character-level trees and (2) how to convert from character-level to word-level for both constituent and dependency trees. In addition, we also explore several other key parsing aspects, including different character-level dependency annotations and joint learning of Parts-Of-Speech and syntactic parsing. Finally, we evaluate our models on the Chinese Penn Treebank (CTB) and our published Shanghai Jiao Tong University Chinese Character Dependency Treebank (SCDT). The results show the effectiveness of our model on both constituent and dependency parsing. We further provide empirical analysis and suggest several directions for future study.
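The character-to-word conversion problem for dependency trees can be made concrete with a small sketch. One common recipe (an assumption here, not necessarily the paper's exact rules): drop arcs internal to a word, and give each word the head of its internal root character, i.e. the character whose head lies outside the word. The function and data below are illustrative.

```python
def chars_to_word_deps(char_heads, word_spans):
    """Collapse character-level dependency arcs to word-level ones.
    `char_heads[i]` is the 1-based head of character i+1 (0 = root);
    `word_spans` lists each word as an inclusive (start, end) span of
    1-based character positions. Returns 1-based word heads (0 = root)."""
    # Map each character position to the index of the word containing it.
    char2word = {}
    for w, (start, end) in enumerate(word_spans, 1):
        for c in range(start, end + 1):
            char2word[c] = w
    word_heads = []
    for w, (start, end) in enumerate(word_spans, 1):
        head = 0
        for c in range(start, end + 1):
            h = char_heads[c - 1]
            if h == 0 or char2word[h] != w:  # arc leaves the word: c is the word's root char
                head = 0 if h == 0 else char2word[h]
        word_heads.append(head)
    return word_heads

# Toy sentence of 4 characters segmented into two 2-character words:
# char1 -> char2 (intra-word), char2 -> char4, char3 -> char4 (intra-word), char4 -> root.
print(chars_to_word_deps([2, 4, 4, 0], [(1, 2), (3, 4)]))  # -> [2, 0]
```

The reverse direction (word-level to character-level) needs extra annotation — intra-word structure and labels — which is precisely what a character-level treebank such as SCDT supplies.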
It was 1968, two years into the Cultural Revolution. Shanghai was in the middle of an unseasonal heat wave, and its people cursed the "autumn tiger." Zhi Bingyi had more to worry about than the heat. He had been branded a "reactionary academic authority," one of the many damning allegations that sent millions of people to their deaths or to labor camps during the Cultural Revolution. Was it still appropriate for Zhi to think of himself as one of the people?