Collaborating Authors

 ballard


13 World War II shipwrecks captured in stunning detail

Popular Science

Judging by newly released photos and video, the crew aboard Ocean Exploration Trust's Nautilus research vessel had an extremely productive summer trip to the South Pacific. Over 22 days, the team completed detailed archaeological surveys of more than a dozen shipwrecks sunk amid the Solomon Islands campaign during World War II. In addition to imaging four of them for the first time, experts guided remotely operated vehicles (ROVs) to the rediscovery of two long-lost vessels: the separated bow of the USS New Orleans as well as the Imperial Japanese Navy destroyer Teruzuki. Although researchers originally spotted some of these shipwrecks more than 34 years ago, Ocean Exploration Trust president Robert Ballard explained that the most recent trip to Iron Bottom Sound provided opportunities to document their finds using a new generation of technology, including high-definition survey cameras, underwater vehicles, and imaging tools aboard the E/V Nautilus.


This Clever New Book About the Apocalypse Will Cheer You Up (Really!)

Slate

"So long as we can say 'This is the worst,'" go the lines from King Lear quoted in Emily St. John Mandel's 2014 novel Station Eleven. Any stories we tell about the end of the world will have to be fictional, since once the real thing occurs, no one will be around to describe it. As the British journalist Dorian Lynskey relates in his erudite, delightfully witty, and strangely cheering new book, Everything Must Go: The Stories We Tell About the End of the World, the fact that we can only ever speculate on the subject makes us speculate all the more frantically. "There is simply no end of ends," Lynskey writes of the books, movies, TV shows, pop songs, and video games we've created to depict the apocalypse--or its near misses and the aftermaths thereof. Station Eleven is often described as "postapocalyptic," but as Lynskey points out, the more accurate term would be "postcatastrophic." That's a better label for stories in which "the world has not ended, but a world has, creating a blank ...


Contextualization Distillation from Large Language Model for Knowledge Graph Completion

Li, Dawei, Tan, Zhen, Chen, Tianlong, Liu, Huan

arXiv.org Artificial Intelligence

While textual information significantly enhances the performance of pre-trained language models (PLMs) in knowledge graph completion (KGC), the static and noisy nature of existing corpora collected from Wikipedia articles or synset definitions often limits the potential of PLM-based KGC models. To surmount these challenges, we introduce the Contextualization Distillation strategy, a versatile plug-and-play approach compatible with both discriminative and generative KGC frameworks. Our method begins by instructing large language models (LLMs) to transform compact, structural triplets into context-rich segments. Subsequently, we introduce two tailored auxiliary tasks, reconstruction and contextualization, allowing smaller KGC models to assimilate insights from these enriched triplets. Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach, revealing consistent performance enhancements irrespective of underlying pipelines or architectures. Moreover, our analysis makes our method more explainable and provides insight into generation path selection, as well as the choice of suitable distillation tasks. All the code and data in this work will be released at https://github.com/David-Li0406/Contextulization-Distillation
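The first step of the pipeline described above can be sketched in a few lines: a triplet is expanded into an instruction for an LLM, and a reconstruction target is derived by masking the triplet's entities in the generated passage. The function names and prompt wording below are illustrative assumptions, not the paper's exact templates.

```python
def triplet_to_prompt(head, relation, tail):
    """Build an instruction asking an LLM to expand a compact triplet
    into a context-rich descriptive passage (hypothetical template)."""
    return (f"Describe, in a short paragraph, the fact that '{head}' "
            f"has the relation '{relation}' to '{tail}'.")

def reconstruction_target(passage, head, tail, mask="[MASK]"):
    """Mask the triplet entities in the generated passage so a smaller
    KGC model can be trained to reconstruct them (auxiliary task sketch)."""
    return passage.replace(head, mask).replace(tail, mask)

# Example: the LLM's output for this prompt would then be masked for training.
prompt = triplet_to_prompt("Paris", "capital_of", "France")
masked = reconstruction_target("Paris is the capital of France.", "Paris", "France")
```

The contextualization task would, symmetrically, train the smaller model to generate the enriched passage from the bare triplet.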


Johnny Cash's 'Blank Space' Is Why AI Can't Have Nice Things

WIRED

When Texas-based copywriter Dustin Ballard released a cover of Aqua's 1997 Europop hit "Barbie Girl" this summer using an AI-generated version of Johnny Cash's voice, he was surprised by its reception. "I actually expected more of a backlash," he says. Earlier this fall, when he followed up with AI Johnny Cash singing Taylor Swift's "Blank Space," the feedback was unexpectedly positive once again. "This is hauntingly beautiful," the top comment reads. "It absolutely slaps," Futurism wrote.


Better Algorithms through Faster Math

Communications of the ACM

Developing faster algorithms is an important but elusive goal for data scientists. The ability to accelerate complex computing tasks and reduce latency has far-reaching ramifications in areas such as natural language processing, video streaming, autonomous robotics, gaming, and extended reality. Yet for all the hype surrounding computer algorithms and the increasingly sophisticated ways they operate, a basic fact stands out: these algorithms are typically built atop matrix multiplication, a fundamental operation in linear algebra. The underlying mathematical framework has not changed a great deal since the inception of computing--and finding more efficient formulas has proved difficult. It is an issue attracting growing attention--particularly as machine learning (ML), deep learning (DL), artificial intelligence (AI), and machine automation advance into the mainstream.
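The baseline everyone is trying to beat is the textbook algorithm, which multiplies two n-by-n matrices in O(n^3) scalar operations; faster methods such as Strassen's algorithm reduce the exponent below 3. A minimal pure-Python sketch of that baseline:

```python
def matmul(A, B):
    """Textbook O(n^3) matrix multiplication over nested lists: the
    baseline that asymptotically faster algorithms (Strassen and its
    successors) improve on."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = A[i][k]           # hoisted so the inner loop streams row-wise
            for j in range(p):
                C[i][j] += aik * B[k][j]
    return C

# matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) yields [[19, 22], [43, 50]]
```

Even this loop order (i-k-j rather than i-j-k) matters in practice: it walks both C and B row by row, which is friendlier to caches, illustrating how much performance hinges on this one kernel.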


Face Recognition Software Led to His Arrest. It Was Dead Wrong

WIRED

Carronne Sawyer took the week off work to get her husband Alonzo out of jail. She knew he was asleep on the couch with her at the time police alleged he assaulted a bus driver near Baltimore and stole their smartphone. But an intelligence analyst using face recognition software had labeled him a possible match with the suspect seen on CCTV footage from the bus, police records show, and an officer had confirmed it. At a police station and in a meeting with her husband's former parole officer, the person who had confirmed the software's suggested match, Carronne drew attention to details in photos on her phone taken recently by her daughter. Her husband is taller than the suspect in the video, she explained, and has facial hair and gaps between his teeth.


Deep sea robots will let us find millions of shipwrecks, says man who discovered Titanic

The Guardian

He is the celebrated deep-sea explorer who discovered the Titanic, as well as the German battleship Bismarck and other historic sunken vessels around the world. Now Dr Robert Ballard is pioneering cutting-edge technology – autonomous underwater vehicles that will "revolutionise" the search for more than three million shipwrecks that lie scattered across ocean floors, according to a Unesco estimate. Many will offer new insights into life on board at the time of sinking, hundreds or even thousands of years ago. "We're going to be finding them like crazy," Ballard told the Observer. "It's going to be rapid discovery because of this technology. New chapters of human history are to be read." "All the work I've done in the past in archaeology used vehicles that were connected to a ship."


In Science Fiction, We Are Never Home - Issue 95: Escape

Nautilus

This essay first appeared in our "Home" issue way back in 2013, but it somehow feels timely today. Halfway through director Alfonso Cuarón's Gravity, Sandra Bullock suffers the most cosmic case of homesick blues since Keir Dullea was hurled toward the infinite in 2001: A Space Odyssey nearly half a century ago. For Bullock, home is (as it was for Dullea) the Earth, looming below so huge it would seem she couldn't miss it, if she could somehow just fall from her shattered spacecraft. She cares about nothing more than getting back to where she came from, even as 2001's Dullea is in flight, accepting his exile and even embracing it.


Efficiently Guiding Imitation Learning Algorithms with Human Gaze

Saran, Akanksha, Zhang, Ruohan, Short, Elaine Schaertl, Niekum, Scott

arXiv.org Artificial Intelligence

Human gaze is known to be an intention-revealing signal in human demonstrations of tasks. In this work, we use gaze cues from human demonstrators to enhance the performance of state-of-the-art inverse reinforcement learning (IRL) and behavior cloning (BC) algorithms. We propose a novel approach for utilizing gaze data in a computationally efficient manner: encoding the human's attention as part of an auxiliary loss function, without adding any additional learnable parameters to those models and without requiring gaze data at test time. The auxiliary loss encourages a network to have convolutional activations in regions where the human's gaze fixated. We show how to augment any existing convolutional architecture with our auxiliary gaze loss (coverage-based gaze loss, or CGL), which can guide learning toward a better reward function or policy. We show that our proposed approach consistently improves the performance of both BC and IRL methods on a variety of Atari games. We also compare against two baseline methods for utilizing gaze data with imitation learning methods. Our approach outperforms a baseline method, called gaze-modulated dropout (GMD), and is comparable to another method (AGIL), which uses gaze as input to the network and thus increases the number of learnable parameters.
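The idea of a coverage-style auxiliary loss can be sketched in a few lines of pure Python: penalize the network when its spatial activation map puts little mass where the human looked. This is a minimal illustration (a KL divergence from the normalized gaze map to the normalized activation map), not the paper's exact CGL formulation; the function names and the normalization scheme are assumptions.

```python
import math

def _normalize(weights):
    """Turn a flat list of non-negative weights into a probability distribution."""
    total = sum(weights)
    if total == 0:
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in weights]

def coverage_gaze_loss(activations, gaze, eps=1e-8):
    """KL(gaze || activations) over flattened 2D maps: the loss is high when
    convolutional activations fail to 'cover' regions the human fixated on,
    and near zero when the two maps agree."""
    a = _normalize([v for row in activations for v in row])
    g = _normalize([v for row in gaze for v in row])
    return sum(gi * math.log((gi + eps) / (ai + eps)) for gi, ai in zip(g, a))

# Perfect agreement gives (near-)zero loss; a network attending to the wrong
# corner while the demonstrator looked elsewhere gives a large positive loss.
aligned = coverage_gaze_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
misaligned = coverage_gaze_loss([[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]])
```

Because the term is a pure function of existing activations, adding it to the training objective introduces no new learnable parameters, which matches the abstract's design constraint.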


Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset

Zhang, Ruohan, Liu, Zhuode, Guan, Lin, Zhang, Luxin, Hayhoe, Mary M, Ballard, Dana H

arXiv.org Machine Learning

... and eye movements while playing Atari video games. The dataset currently has 44 hours of gameplay data from 16 games and a total of 2.97 million demonstrated actions. Human subjects played games in a frame-by-frame manner to allow enough decision time in order to obtain near-optimal decisions. This dataset could potentially be used for research in imitation learning, reinforcement learning, and ... Additionally, previous research has shown that, given a task context, human visual attention is modulated by reward [5, 9, 17]. In performing a familiar task, objects with high potential reward or penalty attract human attention; hence gaze indicates the momentary attentional priorities over multiple objects. Therefore gaze could be a potentially useful intermediate learning signal for imitation learning.