
108-year-old submarine wreck seen in stunning detail in new footage

Popular Science

In 1917, two US submarines collided off the coast of San Diego, and the USS F-1 sank to the bottom of the Pacific Ocean with 19 crew members aboard. The accident, whose wreckage was located in 1975, represents the US Naval Submarine Force's first wartime submarine loss. Now, researchers from Woods Hole Oceanographic Institution have captured new footage of the underwater archaeological site, which lies 1,300 feet deep. "They were technical dives requiring specialized expertise and equipment," Anna Michel, a co-lead of the expedition and chief scientist at the National Deep Submergence Facility, said in a statement. "We were careful and methodical in surveying these historical sites so that we could share these stunning images, while also maintaining the reverence these sites deserve."


ALVIN: Active Learning Via INterpolation

Korakakis, Michalis, Vlachos, Andreas, Weller, Adrian

arXiv.org Artificial Intelligence

Active Learning aims to minimize annotation effort by selecting the most useful instances from a pool of unlabeled data. However, typical active learning methods overlook the presence of distinct example groups within a class, whose prevalence may vary, e.g., in occupation classification datasets certain demographics are disproportionately represented in specific classes. This oversight causes models to rely on shortcuts for predictions, i.e., spurious correlations between input attributes and labels occurring in well-represented groups. To address this issue, we propose Active Learning Via INterpolation (ALVIN), which conducts intra-class interpolations between examples from under-represented and well-represented groups to create anchors, i.e., artificial points situated between the example groups in the representation space. By selecting instances close to the anchors for annotation, ALVIN identifies informative examples exposing the model to regions of the representation space that counteract the influence of shortcuts. Crucially, since the model considers these examples to be of high certainty, they are likely to be ignored by typical active learning methods. Experimental results on six datasets encompassing sentiment analysis, natural language inference, and paraphrase detection demonstrate that ALVIN outperforms state-of-the-art active learning methods in both in-distribution and out-of-distribution generalization.
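The core idea of the abstract, interpolating between under- and well-represented examples of the same class to form anchors and then selecting the unlabeled instances nearest those anchors, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function names, the fixed interpolation coefficients, and the Euclidean nearest-anchor selection are assumptions for clarity.

```python
import numpy as np

def make_anchors(under_reps, well_reps, alphas=(0.25, 0.5, 0.75)):
    """Create anchors by intra-class interpolation between representations
    of under-represented (under_reps) and well-represented (well_reps)
    examples. Illustrative: ALVIN's pair/coefficient sampling may differ."""
    anchors = []
    for u in under_reps:
        for w in well_reps:
            for a in alphas:
                anchors.append(a * u + (1 - a) * w)
    return np.stack(anchors)

def select_for_annotation(pool_reps, anchors, k):
    """Pick the k unlabeled pool instances closest (Euclidean) to any anchor."""
    # distance of each pool point to its nearest anchor
    dists = np.linalg.norm(
        pool_reps[:, None, :] - anchors[None, :, :], axis=-1
    ).min(axis=1)
    return np.argsort(dists)[:k]

# toy usage: one under-represented and one well-represented example in 2-D
under = np.array([[0.0, 0.0]])
well = np.array([[1.0, 1.0]])
anchors = make_anchors(under, well, alphas=(0.5,))      # anchor at (0.5, 0.5)
pool = np.array([[0.5, 0.5], [2.0, 2.0], [3.0, 3.0]])
chosen = select_for_annotation(pool, anchors, k=1)       # picks index 0
```

The selection step is the crux: points near the anchors sit between the example groups in representation space, which is exactly where uncertainty-based acquisition tends not to look, since the model is confidently (and spuriously) certain there.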


The Oldest Crewed Deep Sea Submarine Just Got a Big Makeover

WIRED

In early March, a gleaming white submarine called Alvin surfaced off the Atlantic coast of North Carolina after spending the afternoon thousands of feet below the surface. The submarine's pilot and two marine scientists had just returned from collecting samples around a methane seep, an oasis for carbon-munching microbes and the larger species of bottom dwellers that feed on them. It was the final dive of a month-long expedition that had taken the crew from the Gulf of Mexico up the East Coast, with stops along the way to explore a massive deep sea coral reef that had recently been discovered off the coast of South Carolina. For Bruce Strickrott, Alvin's chief pilot and the leader of the expedition, these sorts of missions to the bottom of the world are a regular part of life. Since he first started working on Alvin as an engineer nearly 25 years ago, Strickrott has logged more than 2,000 hours in the deep ocean, where he learned to expertly navigate the seabed's alien landscape and probe for samples with the submarine's spindly robotic arms.


Alvin on Twitter

#artificialintelligence

Also, it bears saying that due to humans being deductive and AI being inductive, there is great potential to combine the insights of each to maximize sensitivity and specificity of any diagnostic process, as the different interpretive lenses enhance rule-out and rule-in.


Import AI: Issue 46: Facebook's ImageNet-in-an-hour GPU system, diagnosing networks with attention functions, and the open access paper debate

#artificialintelligence

Attention & interpretability: modern neural networks are hard to interpret because we haven't built tools that make it easy to analyze their decision-making processes. Part of the reason is that it's not obvious how to get a big stack of perceptual math machinery to tell you what it is thinking in a way that is remotely useful to the untrained eye. The best approach we've come up with, for certain vision and language tasks, is attention, where we visualize which parts of a neural network – sometimes down to an individual cell or 'neuron' within it – activate in response to a given input. This can help us diagnose why an AI tool is responding the way it is. The researchers' attention component is general, working across different neural network architectures (a first, they claim), and only requires the person to fiddle with it at its input or output points.
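The visualization described above boils down to inspecting rows of an attention-weight matrix: each row says how strongly one position attends to every other. A minimal sketch of the standard scaled dot-product weights (not the specific component from the paper the newsletter discusses; the token strings and random projections here are illustrative assumptions):

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax over keys.
    Row i shows how much query position i attends to each key position;
    plotting these rows as a heatmap is the usual interpretability view."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

# toy example: three tokens with random 4-d query/key projections
tokens = ["the", "cat", "sat"]
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
W = attention_weights(Q, K)
for tok, row in zip(tokens, W):
    print(f"{tok:>4}:", " ".join(f"{w:.2f}" for w in row))
```

Each printed row sums to 1, so a large entry flags the token that position is "looking at" – the simplest form of the diagnosis the newsletter describes.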