
Collaborating Authors: larson


Cats became our companions way later than you think

BBC News

In true feline style, cats took their time in deciding when and where to forge bonds with humans. According to new scientific evidence, the shift from wild hunter to pampered pet happened much more recently than previously thought - and in a different place. A study of bones found at archaeological sites suggests cats began their close relationship with humans only a few thousand years ago, and in northern Africa, not the Levant. "They are ubiquitous, we make TV programmes about them, and they dominate the internet," said Prof Greger Larson of the University of Oxford. "That relationship we have with cats now only gets started about 3,500 or 4,000 years ago, rather than 10,000 years ago."


It Used to Be One of the Main Ways Men Talked to Each Other. Then Everyone Went Silent.

Slate

In 2005 I received a copy of World of Warcraft for my birthday. The game clocked in at 3 gigabytes--a behemoth by the standards of the early 2000s, so big that it had to be distributed across four different CDs. I installed those discs onto our creaking, overworked family PC and, hours later, created my first avatar: a humble dwarf paladin named Pumaras, who set off to explore a realm he would soon call home. World of Warcraft was a singular experience, and completely unlike the lonesome corridors of Halo or Call of Duty. Millions of living, breathing human beings logged on to the game at the same time.


Omega's AI Will Map How Olympic Athletes Win

WIRED

On August 27, 1960, at the Olympics in Rome, one of the most controversial gold medals was awarded. In the men's 100-meter freestyle swimming event, Australian swimmer John Devitt and American Lance Larson both recorded the same finish time of 55.2 seconds. Only Devitt walked away with the gold medal. Swimming was then timed by three officials per lane, each with a stopwatch, and the average of their readings was taken. In the rare event of a tie, a head judge, in this case Hans Runströmer of Sweden, was on hand to adjudicate.


Adversarial Image Color Transformations in Explicit Color Filter Space

Zhao, Zhengyu, Liu, Zhuoran, Larson, Martha

arXiv.org Artificial Intelligence

Deep Neural Networks have been shown to be vulnerable to adversarial images. Conventional attacks strive for indistinguishable adversarial images with strictly restricted perturbations. Recently, researchers have moved to explore distinguishable yet non-suspicious adversarial images and demonstrated that color transformation attacks are effective. In this work, we propose Adversarial Color Filter (AdvCF), a novel color transformation attack that is optimized with gradient information in the parameter space of a simple color filter. In particular, our color filter space is explicitly specified, so that we are able to provide a systematic analysis of model robustness against adversarial color transformations, from both the attack and defense perspectives. In contrast, existing color transformation attacks do not offer the opportunity for systematic analysis due to the lack of such an explicit space. We further demonstrate the effectiveness of our AdvCF in fooling image classifiers and also compare it with other color transformation attacks regarding their robustness to defenses and image acceptability through an extensive user study. We also highlight the human-interpretability of AdvCF and show its superiority over the state-of-the-art human-interpretable color transformation attack on both image acceptability and efficiency. Additional results provide interesting new insights into model robustness against AdvCF in three additional visual tasks.
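The core idea of the abstract - descending on a classifier's score in the low-dimensional parameter space of an explicit color filter, rather than in pixel space - can be sketched in miniature. Everything below is a stand-in assumption for illustration: the "classifier" is a fixed linear scorer, the filter is a simple per-channel gain, and the step size and bounds are arbitrary; AdvCF itself optimizes richer filter parameters against a deep network via backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8, 3))           # toy RGB image with values in [0, 1]
w = rng.standard_normal((8, 8, 3))      # weights of a toy linear class scorer

def color_filter(img, theta):
    """Explicit filter space: one gain parameter per color channel."""
    return img * theta                  # theta has shape (3,)

def class_score(img):
    """Score of the 'true' class; the attack drives this down."""
    return float(np.sum(w * img))

# Gradient of the score w.r.t. theta is available in closed form here;
# a real attack would obtain it by backpropagation through the network.
grad = np.sum(w * image, axis=(0, 1))   # shape (3,)

theta = np.ones(3)                      # identity filter to start
for _ in range(25):
    theta -= 0.1 * grad                 # gradient descent on the class score
    theta = np.clip(theta, 0.5, 1.5)    # keep the color shift non-suspicious

before = class_score(image)
after = class_score(color_filter(image, theta))
print(before, after)
```

Because the search happens in a three-parameter filter space rather than per-pixel, the resulting change is a global, plausible color shift instead of high-frequency noise - which is exactly what makes such a space amenable to the systematic robustness analysis the abstract describes.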


Not Just Plain Text! Fuel Document-Level Relation Extraction with Explicit Syntax Refinement and Subsentence Modeling

Duan, Zhichao, Li, Xiuxing, Li, Zhenyu, Wang, Zhuo, Wang, Jianyong

arXiv.org Artificial Intelligence

Document-level relation extraction (DocRE) aims to identify semantic labels among entities within a single document. One major challenge of DocRE is digging out the decisive details regarding a specific entity pair from long text. However, in many cases, only a fraction of the text carries the required information, even in the manually labeled supporting evidence. To better capture and exploit this instructive information, we propose a novel expLicit syntAx Refinement and Subsentence mOdeliNg based framework (LARSON). By introducing extra syntactic information, LARSON can model subsentences of arbitrary granularity and efficiently screen instructive ones. Moreover, we incorporate the refined syntax into text representations, which further improves the performance of LARSON. Experimental results on three benchmark datasets (DocRED, CDR, and GDA) demonstrate that LARSON significantly outperforms existing methods.
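The "subsentences of arbitrary granularity" idea can be made concrete with a toy sketch. Everything here is a hypothetical stand-in: the hand-written constituency parse, the tuple encoding, and the screening rule (keep subtrees mentioning both entities of a candidate pair) are illustrative only - the paper's framework uses learned representations over real parser output, not this heuristic.

```python
# A parse tree is a nested tuple ("LABEL", child, child, ...);
# a plain string is a leaf token.
parse = ("S",
         ("NP", "Aspirin"),
         ("VP", "reduces",
          ("NP", ("NP", "fever"), "and", ("NP", "headache"))))

def leaves(tree):
    """Collect the tokens under a node, left to right."""
    if isinstance(tree, str):
        return [tree]
    out = []
    for child in tree[1:]:
        out.extend(leaves(child))
    return out

def spans(tree):
    """Yield the token span of every subtree - subsentences of any granularity."""
    if isinstance(tree, str):
        return
    yield leaves(tree)
    for child in tree[1:]:
        yield from spans(child)

def instructive(tree, head, tail):
    """Screen for subsentences that mention both entities of the pair."""
    return [s for s in spans(tree) if head in s and tail in s]

print(instructive(parse, "Aspirin", "fever"))
```

For the pair ("Aspirin", "fever") only the full sentence qualifies, while for ("fever", "headache") the small coordinate phrase also survives - illustrating why syntax-derived subsentences let a model focus on the fraction of text that actually carries the relation.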


What Will Artificial Intelligence Do To Us?

Current Affairs

Will artificial intelligence soon outsmart human beings, and if so, what will become of us? The great computer scientist Alan Turing argued in the early 1950s that we were probably going to see our intellectual capacities surpassed by computers sooner or later. He thought it was probable "that at the end of the [20th] century it will be possible to program a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine." "Machines can be constructed," he said, "which will simulate the behavior of the human mind very closely" because "if it is accepted that real brains, as found in animals, and particularly in men, are a sort of machine it will follow that our digital computer, suitably programmed, will behave like a brain." Other early AI pioneers anticipated even more rapid developments. Herbert Simon thought in 1965 that "machines will be capable, within twenty years, of doing any work a man can do," and Marvin Minsky said two years later that it would only take a "generation" to "solve" the problem of artificial intelligence. Things have taken a bit longer than that, and theorists in the field of AI have become somewhat notorious for making promises that we might call "Friedmanesque." But there are still those who think we have reason to fear that AI will surpass human intelligence in the near future, and, in fact, that an AI-driven cataclysm may be coming. Elon Musk--who, it should be noted, does not have a good track record when it comes to predicting the future--has warned that "robots will be able to do everything better than us," and "if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it." He is not alone in spinning apocalyptic stories about a coming "superintelligence" that could literally exterminate the entire human race.


The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, by Erik J. Larson (ISBN 9780674983519) - Amazon.com

#artificialintelligence

"If you want to know about AI, read this book… It shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence." ―Peter Thiel

A cutting-edge AI researcher and tech entrepreneur debunks the fantasy that superintelligence is just a few clicks away―and argues that this myth is not just wrong, it's actively blocking innovation and distorting our ability to make the crucial next leap. Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? In fact, we are not really on a path to superintelligence; we don't even know where that path might be. A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to show how far we are from superintelligence, and what it would take to get there.


The unseen scars of those who kill via remote control

The Japan Times

Kevin Larson crouched behind a boulder and watched the forest through his breath, waiting for the police he knew would come. It was Jan. 19, 2020. He was clinging to an assault rifle with 30 rounds and a conviction that, after all he had been through, there was no way he was going to prison. Larson was a drone pilot -- one of the best. He flew the heavily armed MQ-9 Reaper, and in 650 combat missions between 2013 and 2018, he had launched at least 188 airstrikes, earned 20 medals for achievement and killed a top man on the U.S.' most-wanted terrorist list. The 32-year-old pilot kept a handwritten thank-you note on his refrigerator from the director of the CIA.


Abductive inference: The blind spot of artificial intelligence

#artificialintelligence

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence. Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain. But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, discusses how widely publicized misconceptions about intelligence and inference have led AI research down narrow paths that are limiting innovation and scientific discoveries.


Book Review: The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do

#artificialintelligence

The book "The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do" was written by Erik J. Larson and published on April 30, 2021. A cutting-edge AI researcher and tech entrepreneur debunks the idea that superintelligence is just a few clicks away, arguing that it is not only false but also actively stifling innovation and distorting our capacity to make the critical next step. AI, according to futurists, will soon surpass the capabilities of even the most brilliant human intellect. Against superintelligent machines, what hope do humans have? However, we are not yet on the road to creating intelligent machines.