
Collaborating Authors

McKeown


Zero-Shot Stance Detection using Contextual Data Generation with LLMs

Mahmoudi, Ghazaleh, Behkamkia, Babak, Eetemadi, Sauleh

arXiv.org Artificial Intelligence

Stance detection, the classification of attitudes expressed in a text towards a specific topic, is vital for applications like fake news detection and opinion mining. However, the scarcity of labeled data remains a challenge for this task. To address this problem, we propose Dynamic Model Adaptation with Contextual Data Generation (DyMoAdapt), which combines Few-Shot Learning and Large Language Models. In this approach, we aim to fine-tune an existing model at test time by generating new topic-specific data using GPT-3. This method could enhance performance by allowing the model to adapt to new topics. However, the results did not improve as expected. Furthermore, we introduce the Multi-Generated Topic VAST (MGT-VAST) dataset, which extends VAST using GPT-3. In this dataset, each context is associated with multiple topics, allowing the model to understand the relationship between contexts and various potential topics.
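The test-time adaptation idea can be illustrated with a minimal sketch. Everything here is invented for illustration: `generate_topic_examples` stands in for a GPT-3 call, and `ToyStanceModel` is a trivial nearest-neighbour classifier, not the model used in the paper.

```python
def generate_topic_examples(topic, n=3):
    """Stand-in for an LLM call that writes n stance-labelled texts about `topic`."""
    templates = [
        (f"I strongly support {topic}.", "pro"),
        (f"{topic} is a terrible idea.", "con"),
        (f"This article mentions {topic} without taking a side.", "neutral"),
    ]
    return templates[:n]


class ToyStanceModel:
    """Toy classifier: predicts the label of the training example with the
    largest word overlap with the input text."""

    def __init__(self):
        self.examples = []

    def fine_tune(self, labelled_texts):
        # A real model would take a few gradient steps here.
        self.examples.extend(labelled_texts)

    def predict(self, text, topic):
        words = set(text.lower().split())
        best = max(self.examples,
                   key=lambda ex: len(words & set(ex[0].lower().split())))
        return best[1]


def adapt_and_predict(model, text, topic):
    """Adapt the model to a previously unseen topic at test time, then classify."""
    model.fine_tune(generate_topic_examples(topic))
    return model.predict(text, topic)
```

For example, `adapt_and_predict(ToyStanceModel(), "I strongly support gun control.", "gun control")` returns `"pro"`, because the model was adapted to the topic only at prediction time.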


Textual Summarisation of Large Sets: Towards a General Approach

Kuptavanich, Kittipitch, Reiter, Ehud, Van Deemter, Kees, Siddharthan, Advaith

arXiv.org Artificial Intelligence

Shneiderman's mantra, "Overview first, zoom and filter, then details-on-demand", highlights the importance of giving readers a high-level overview before offering detail. We apply this idea to generate an overview of sets of objects, hypothesising that an overview will be beneficial to readers who want to understand the set. Previously we investigated the domain of consumer products, focusing on descriptions of products (such as TVs) which are intended to help readers decide which specific products to buy. Now we aim to generalise the techniques we have developed, by looking at a very different type of domain, namely bibliographical references in academic papers.


With 5G, AI at the edge promises a compute-everywhere future

MIT Technology Review

Luxury automaker Audi is driving full-throttle toward Industry 4.0, using AI inference and computer vision on the factory floor with autonomous robot welders that can react in real time and fix issues that may arise when welding the frame of a car. That's just one example of how the company is moving toward realizing its ultimate vision of creating smart factories with a scalable and flexible platform that will enable data analytics, communications, and processing at the edge, powered by 5G. In the past, welding required a lot of manual intervention and inspection to ensure sufficient quality, says Nick McKeown, senior vice president and general manager of the network and edge group at Intel, which is working with Audi. Now, with cameras reviewing the quality of the weld, the need for human intervention has greatly decreased. "Edge computing is taking the technology resources we've been developing over many years for the computing industry and using them to analyze and process data at the edge," McKeown says.


Building the future with software-based 5G networking

MIT Technology Review

Next-generation solutions and products are hitting a wall with Wi-Fi: it's not fast enough, and latency and connectivity issues mean it's not reliable enough. What's an innovator to do? Focus on what's next: 5G and software-defined networking. Nick McKeown, senior vice president and general manager of the network and edge group at Intel Corporation, says this technical leap is what will make future innovation possible: "Once you've got a software platform where you can change its behavior, you can start introducing previously absurd-sounding ideas," including, he continues, "fanciful ideas of automatic, real-time, closed-loop control of an entire network." While nascent, these technological advancements are already showing promise in practical applications. For example, in industrial settings where there's more analysis happening at the edge, greater observability into the network is allowing for fine-timescale responses to mechanical errors and broken equipment. "Corrective action could be something as mundane as a broken link, a broken piece of equipment, but it could actually be a functional incorrectness in the software that is controlling it," says McKeown. Grad students and programmers are taking advantage of the advancements in network technology to try out new ideas through academic projects. "One of the key ideas," says McKeown, "is to verify in real time that the network is operating according to a specification, formally checking against that specification in real time, as packets fly around in the network. This has never been done before." And although this idea remains in the realm of research projects, McKeown believes it exemplifies the promise of a software-based 5G networking future. Software-defined 5G networking promises applications that we can't yet even imagine, says McKeown.
"New IoT apps combined with both public and private 5G is going to create a 'Cambrian explosion' of new ideas that will manifest in ways that if we were to try to predict, we would get it wrong." Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.


Your Guide to the AWS Machine Learning Summit

#artificialintelligence

We're about a week away from the AWS Machine Learning Summit and if you haven't registered yet, you better get on it! On June 2, 2021 (Americas) and June 3, 2021 (Asia-Pacific, Japan, Europe, Middle East, and Africa), don't miss the opportunity to hear from some of the brightest minds in machine learning (ML) at the free virtual AWS Machine Learning Summit. This Summit, which is open to all, brings together industry luminaries, AWS customers, and leading ML experts to share the latest in ML. You'll learn about science breakthroughs in ML, how ML is impacting business, best practices in building ML, and how to get started now without prior ML expertise. This post is your guide to navigating the Summit.


What Is Artificial Intelligence?

#artificialintelligence

When most people think of artificial intelligence (AI) they think of HAL 9000 from "2001: A Space Odyssey," Data from "Star Trek," or more recently, the android Ava from "Ex Machina." But to a computer scientist that isn't what AI necessarily is, and the question "what is AI?" can be a complicated one. One of the standard textbooks in the field, by University of California computer scientists Stuart Russell and Google's director of research, Peter Norvig, puts artificial intelligence into four broad categories. The differences between them can be subtle, notes Ernest Davis, a professor of computer science at New York University. AlphaGo, the computer program that beat a world champion at Go, acts rationally when it plays the game (it plays to win). But it doesn't necessarily think the way a human being does, though it engages in some of the same pattern-recognition tasks.





Inferring Strategies for Sentence Ordering in Multidocument News Summarization

Barzilay, R., Elhadad, N.

arXiv.org Artificial Intelligence

The problem of organizing information for multidocument summarization so that the generated summary is coherent has received relatively little attention. While sentence ordering for single document summarization can be determined from the ordering of sentences in the input article, this is not the case for multidocument summarization where summary sentences may be drawn from different input articles. In this paper, we propose a methodology for studying the properties of ordering information in the news genre and describe experiments done on a corpus of multiple acceptable orderings we developed for the task. Based on these experiments, we implemented a strategy for ordering information that combines constraints from chronological order of events and topical relatedness. Evaluation of our augmented algorithm shows a significant improvement of the ordering over two baseline strategies.
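The described strategy, combining chronological constraints with topical relatedness, can be sketched with a toy ordering function. This is an invented illustration, not the authors' algorithm: sentences are sorted by event date, but sentences on the same topic are kept adjacent by keying on the date of the topic's first mention.

```python
from datetime import date

def order_sentences(sents):
    """Order (text, event_date, topic) triples chronologically while keeping
    topically related sentences adjacent."""
    # Record when each topic is first mentioned, in chronological order.
    first_seen = {}
    for text, d, topic in sorted(sents, key=lambda s: s[1]):
        first_seen.setdefault(topic, d)
    # Sort by the topic's first-mention date, then topic, then the event date,
    # so each topic's sentences form one chronologically ordered block.
    ordered = sorted(sents, key=lambda s: (first_seen[s[2]], s[2], s[1]))
    return [text for text, _, _ in ordered]
```

With invented input such as a quake report (Jan 1), aid arriving (Jan 3), aftershocks (Jan 4), and more aid pledged (Jan 5), the function keeps both quake sentences together before both aid sentences, rather than strictly interleaving by date.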


Inferring Strategies for Sentence Ordering in Multidocument News Summarization

Barzilay, R., Elhadad, N.

Journal of Artificial Intelligence Research

The problem of organizing information for multidocument summarization so that the generated summary is coherent has received relatively little attention. While sentence ordering for single document summarization can be determined from the ordering of sentences in the input article, this is not the case for multidocument summarization where summary sentences may be drawn from different input articles. In this paper, we propose a methodology for studying the properties of ordering information in the news genre and describe experiments done on a corpus of multiple acceptable orderings we developed for the task. Based on these experiments, we implemented a strategy for ordering information that combines constraints from chronological order of events and topical relatedness. Evaluation of our augmented algorithm shows a significant improvement of the ordering over two baseline strategies.