
Deepfaking Orson Welles's Mangled Masterpiece

The New Yorker

[Photo caption] A.I. re-creations of the "Magnificent Ambersons" stars Joseph Cotten, Agnes Moorehead, Dolores Costello, and Tim Holt.

Edward Saatchi first saw "The Magnificent Ambersons," Orson Welles's mangled masterpiece from 1942, when he was twelve years old, in the private screening room of his family's crenellated mansion, in West Sussex. Saatchi's parents had already shown him and his brother "Citizen Kane." But "Ambersons," Welles's follow-up film, about a wealthy Midwestern clan brought low, came with a bewitching backstory: R.K.O. had ripped the movie from the director's hands, slashed forty-three minutes, tacked on a happy ending, and destroyed the excised footage in order to free up vault space, leaving decades' worth of cinephiles to obsess over what might have been. Part of this outcome was the result of studio treachery, but Welles, owing to some combination of hubris and distraction, had let his film slip from his grasp. Saatchi recalled, "Around the family dinner table, that was always such a big topic: How much was Welles responsible for this? Mum was always quite tough on him."

Saatchi's father, Maurice, a baron also known as Lord Saatchi, is one of two Iraqi British brothers who founded the advertising firm Saatchi & Saatchi, in 1970, which led their family to become one of the richest in the U.K. Edward's mother, Josephine Hart, who died in 2011, was an Irish writer best known for her erotic thriller "Damage," which was adapted into a film by Louis Malle. Edward, born in 1985, grew up in London and at the sprawling country estate, surrounded by palatial gardens and classical statuary. He described his parents as "movie mad." The actor and Welles biographer Simon Callow, a Saatchi family friend, recalled, "They had a cinema of their own inside the house, and it was a ritual of theirs every week to watch a film together." 
Aside from old movies, Edward was obsessed with "Star Trek"--especially the Holodeck, a device that conjured simulated 3-D worlds populated by characters who could interact with the members of the Starship Enterprise. That kind of wizardry didn't exist in the real world, at least not yet. But the young prince of the Saatchi castle had faith that someday it would, and that it could bring the original "Ambersons" back from oblivion. "To me, this is the lost holy grail of cinema," Saatchi told me recently, like Charles Foster Kane murmuring about Rosebud. "It just seemed intuitively that there would be some way to undo what had happened."



Tesla's 'Robotaxi' brand might be too generic to trademark

Engadget

The US Patent and Trademark Office has refused one of Tesla's initial attempts to trademark the term "Robotaxi" because it believes the name is generic and already in use by other companies, according to a filing spotted by TechCrunch. Tesla was hoping to trademark the term in connection with its planned self-driving car service, but now it'll have to reply with more evidence to change the office's mind. The main issue outlined in the USPTO decision is that "Robotaxi" is "merely descriptive," meaning it's an already commonly used term. A robotaxi typically refers to the self-driving cars used in services like Waymo. For as long as Silicon Valley has believed money could be made selling autonomous vehicles (and the rides you can take in them), the term has been in use.


Obtaining Example-Based Explanations from Deep Neural Networks

Dong, Genghua, Boström, Henrik, Vazirgiannis, Michalis, Bresson, Roman

arXiv.org Artificial Intelligence

Most techniques for explainable machine learning focus on feature attribution, i.e., values are assigned to the features such that their sum equals the prediction. Example attribution is another form of explanation that assigns weights to the training examples, such that their scalar product with the labels equals the prediction. The latter may provide valuable complementary information to feature attribution, in particular in cases where the features are not easily interpretable. Current example-based explanation techniques have targeted a few model types only, such as k-nearest neighbors and random forests. In this work, a technique for obtaining example-based explanations from deep neural networks (EBE-DNN) is proposed. The basic idea is to use the deep neural network to obtain an embedding, which is employed by a k-nearest neighbor classifier to form a prediction; the example attribution can hence straightforwardly be derived from the latter. Results from an empirical investigation show that EBE-DNN can provide highly concentrated example attributions, i.e., the predictions can be explained with few training examples, without reducing accuracy compared to the original deep neural network. Another important finding from the empirical investigation is that the choice of layer to use for the embeddings may have a large impact on the resulting accuracy.
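The mechanism the abstract describes can be sketched in a few lines: embed the training set with the network, fit a k-nearest-neighbor index on the embeddings, and read the attribution straight off the retrieved neighbors. The sketch below uses a fixed ReLU projection as a stand-in for a trained DNN's chosen layer; all names, shapes, and the random-projection embedding are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Stand-in for a trained DNN's chosen layer: a fixed ReLU projection.
W = rng.normal(size=(10, 4))
def embed(X):
    return np.maximum(X @ W, 0.0)

X_train = rng.normal(size=(100, 10))
y_train = rng.integers(0, 3, size=100)      # three classes

k = 5
index = NearestNeighbors(n_neighbors=k).fit(embed(X_train))

def predict_with_attribution(x):
    """kNN prediction in embedding space, plus per-example attribution."""
    _, idx = index.kneighbors(embed(x[None, :]))
    idx = idx[0]
    weights = np.zeros(len(X_train))        # attribution over the training set
    weights[idx] = 1.0 / k                  # nonzero only on the k neighbors
    pred = np.bincount(y_train[idx], minlength=3).argmax()
    return pred, weights

pred, weights = predict_with_attribution(rng.normal(size=10))
```

By construction, only k training examples carry nonzero attribution and the weights sum to one, which mirrors the paper's point about highly concentrated example attributions; in EBE-DNN the embedding comes from a chosen layer of the trained network, and the abstract notes that this layer choice can strongly affect accuracy.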


TomTom and Microsoft team up to bring generative AI to automobiles

Engadget

TomTom just announced a "fully integrated, AI-powered conversational automotive assistant," which should start popping up in dashboard infotainment platforms in the near-ish future. The company has issued some bold claims for the AI, saying it'll offer "more sophisticated voice interaction" and allow users to converse naturally to navigate, find stops along a route, control onboard systems, open windows and do just about anything else you find yourself doing while driving. The company, best known for its GPS platforms, partnered with Microsoft to develop the assistant, drawing on Azure offerings such as Cosmos DB and Cognitive Services. Cosmos DB is a multi-model database and Cognitive Services is a set of APIs for use in AI applications, so this should be a capable assistant that draws from the latest advancements. TomTom promises that the voice assistant will integrate into a variety of interfaces offered by major automobile manufacturers, stating that each auto company will retain ownership of its branding.


An Investigation of Representation and Allocation Harms in Contrastive Learning

Maity, Subha, Agarwal, Mayank, Yurochkin, Mikhail, Sun, Yuekai

arXiv.org Machine Learning

The effect of underrepresentation on the performance of minority groups is known to be a serious problem in supervised learning settings; however, it has been underexplored so far in the context of self-supervised learning (SSL). In this paper, we demonstrate that contrastive learning (CL), a popular variant of SSL, tends to collapse representations of minority groups with certain majority groups. We refer to this phenomenon as representation harm and demonstrate it on image and text datasets using the corresponding popular CL methods. Furthermore, our causal mediation analysis of allocation harm on a downstream classification task reveals that representation harm is partly responsible for it, thus emphasizing the importance of studying and mitigating representation harm. Finally, we provide a theoretical explanation for representation harm using a stochastic block model that leads to a representational neural collapse in a contrastive learning setting.
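For context on the mechanism being studied: popular contrastive methods optimize variants of the InfoNCE objective, which pulls two views of the same example together and pushes in-batch negatives apart. A minimal NumPy sketch of that objective (a generic illustration, not the paper's specific setup or datasets):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss over a batch: view i of z1 should match view i of z2."""
    logits = z1 @ z2.T / tau                     # (n, n) similarity matrix
    diag = np.diag(logits)                       # positive-pair similarities
    logZ = np.log(np.exp(logits).sum(axis=1))    # log-partition per row
    return float(np.mean(logZ - diag))           # mean cross-entropy

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 8))
z /= np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize embeddings

aligned = info_nce(z, z)                         # two identical "views"
shuffled = info_nce(z, z[rng.permutation(32)])   # mismatched positives
```

Nothing in this objective references group membership, which is why group-level effects such as the representation collapse the paper documents have to be measured separately.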


Everyone Has 'Car Brain'

The Atlantic - Technology

This article was featured in One Story to Read Today, a newsletter in which our editors recommend a single must-read from The Atlantic, Monday through Friday. Francis Curzon, born in 1884 and later named the fifth Earl Howe, loved a souped-up Bugatti. And he loved to drive fast. He was famous for his "great skill and daring" on the racetrack, and also, eventually, for crashing into pedestrians--knocking down a boy in Belfast, Northern Ireland; slamming into a horse-drawn cart and killing a peasant in Pesaro, Italy. These incidents (and 10 more) were recounted in a 1947 polemic by J. S. Dean, chair of the Pedestrians' Association in England.


Boosting Tail Neural Network for Realtime Custom Keyword Spotting

Xue, Sihao, Shen, Qianyao, Li, Guoqing

arXiv.org Artificial Intelligence

In this paper, we propose a Boosting Tail Neural Network (BTNN) for improving the performance of Realtime Custom Keyword Spotting (RCKS), which remains an industrial challenge because it demands powerful classification ability under limited computation resources. The approach is inspired by neuroscience, where a brain is only partly activated by a given nerve stimulus, and by the many machine learning algorithms that combine a batch of weak classifiers to solve hard problems, often to proven effect. We show that this method is helpful for the RCKS problem: the proposed approach achieves better performance in terms of wake-up rate and false alarms, with an 18% relative improvement in our experiments over traditional algorithms that use only one strong classifier. We also point out that this approach may be promising for future ASR exploration.
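The weak-classifier ensembling the abstract appeals to is the classic boosting idea. The sketch below uses scikit-learn's AdaBoost, whose default weak learner is a depth-1 decision stump, on synthetic features; it is a generic stand-in for the idea, since the abstract does not specify BTNN's actual classifiers or training scheme.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in features for "keyword vs. background" frames.
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Boosting combines 25 weak classifiers (depth-1 stumps by default)
# into a single weighted vote that is much stronger than any one stump.
clf = AdaBoostClassifier(n_estimators=25, random_state=0)
clf.fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
```

No single axis-aligned stump can separate this data well on its own; the boosted vote can, which is the "batch of weak classifiers" effect the paper builds on.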


Scalar Invariant Networks with Zero Bias

Geng, Chuqin, Xu, Xiaojie, Ye, Haolin, Si, Xujie

arXiv.org Artificial Intelligence

Just like weights, bias terms are learnable parameters of many popular machine learning models, including neural networks. Biases are thought to enhance the representational power of neural networks, enabling them to solve a variety of tasks in computer vision. However, we argue that biases can be disregarded for some image-related tasks such as image classification, by considering the intrinsic distribution of images in the input space and desired model properties from first principles. Our findings suggest that zero-bias neural networks can perform comparably to biased networks for practical image classification tasks. We demonstrate that zero-bias neural networks possess a valuable property called scalar (multiplication) invariance: the prediction of the network remains unchanged when the contrast of the input image is altered. We extend scalar invariance to more general cases, enabling formal verification of certain convex regions of the input space. Additionally, we prove that zero-bias neural networks are fair in predicting the zero image: unlike state-of-the-art models that may exhibit bias toward certain labels, zero-bias networks have uniform belief in all labels. We believe dropping bias terms can be considered a geometric prior in designing neural network architectures for image classification, in the same spirit as adopting convolutions as a translational-invariance prior. The robustness and fairness advantages of zero-bias neural networks may also indicate a promising path toward trustworthy and ethical AI.
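The invariance property is easy to check numerically: with no bias terms, a ReLU network is positively homogeneous, so scaling the input by any c > 0 scales the logits by c and leaves the argmax untouched. A minimal NumPy sketch with a generic two-layer network (not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
# A two-layer ReLU network with zero biases: f(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(3, 16))

def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.normal(size=8)
c = 7.5  # any positive scalar, e.g., a contrast change
# Positive homogeneity: relu(c*z) = c*relu(z) for c > 0, so f(c*x) = c*f(x)
assert np.allclose(f(c * x), c * f(x))
# Hence the predicted class (argmax of the logits) is scalar invariant
assert f(c * x).argmax() == f(x).argmax()
```

Adding any nonzero bias vector breaks this identity, which is exactly why the paper treats dropping biases as a geometric prior.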


A New Era of Mobility: Exploring Digital Twin Applications in Autonomous Vehicular Systems

Hossain, S M Mostaq, Saha, Sohag Kumar, Banik, Shampa, Banik, Trapa

arXiv.org Artificial Intelligence

Digital Twins (DTs) are virtual representations of physical objects or processes that collect information from the real environment to represent, validate, and replicate the physical twin's present and future behavior. DTs are becoming increasingly prevalent in a variety of fields, including manufacturing, automobiles, medicine, smart cities, and other related areas. In this paper, we present a systematic review of DTs in the autonomous vehicular industry. We introduce DTs and their essential characteristics, emphasizing accurate data collection, real-time analytics, and efficient simulation capabilities, and highlight their role in enhancing performance and reliability. Next, we explore the technical challenges and central technologies of DTs, and present a comparative analysis of the different methodologies that have been used for autonomous vehicles in smart cities. Finally, we address the application challenges and limitations of DTs in the autonomous vehicular industry.