 sunset


The U.S. tried permanent daylight saving time--and hated it

Popular Science

In 1974, America set its clocks forward for good in the name of energy savings: between January and September of that year, President Richard Nixon made daylight saving time permanent, though only for a brief period. As fall approaches, so too does the end of daylight saving time (DST). On November 2nd, the hour between 1 a.m. and 2 a.m. will happen twice.
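The repeated hour is directly observable in Python's standard library: during the fall-back window, one wall-clock time maps to two different UTC offsets, disambiguated by the `fold` attribute (PEP 495). A minimal sketch, assuming the `America/New_York` zone and a year (2025) in which November 2 is the changeover Sunday:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

tz = ZoneInfo("America/New_York")

# On the first Sunday of November (Nov 2 in 2025), clocks fall back at
# 2:00 a.m., so wall-clock 1:30 a.m. occurs twice.
first = datetime(2025, 11, 2, 1, 30, tzinfo=tz)           # fold=0: still EDT
second = datetime(2025, 11, 2, 1, 30, fold=1, tzinfo=tz)  # fold=1: now EST

print(first.utcoffset())   # UTC-4 (EDT)
print(second.utcoffset())  # UTC-5 (EST)
```

The same naive local time is one hour apart in real terms, which is why timestamps logged during that hour are ambiguous unless the offset (or `fold`) is recorded.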



Who's important? -- SUnSET: Synergistic Understanding of Stakeholder, Events and Time for Timeline Generation

Sim, Tiviatis, Yang, Kaiwen, Xin, Shen, Kawaguchi, Kenji

arXiv.org Artificial Intelligence

As news reporting becomes increasingly global and decentralized online, tracking related events across multiple sources presents significant challenges. Existing news summarization methods typically apply Large Language Models and graph-based methods to article-level summaries. However, this is not effective, since it considers only the textual content of similarly dated articles to understand the gist of an event. To counteract the lack of analysis of the parties involved, a novel framework is needed to gauge the importance of stakeholders and connect related events through the relevant entities involved. We therefore present SUnSET: Synergistic Understanding of Stakeholder, Events and Time for the task of Timeline Summarization (TLS). We leverage powerful Large Language Models (LLMs) to build SET triplets and introduce stakeholder-based ranking to construct a $Relevancy$ metric, which can be extended to general settings. Our method outperforms all prior baselines and establishes a new state of the art, highlighting the impact of stakeholder information in news articles.
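The abstract does not spell out the triplet schema or the $Relevancy$ computation, but the idea of stakeholder-centred ranking can be sketched as follows. Both the `SETTriplet` class and the frequency-based score below are illustrative assumptions, not the paper's actual definitions:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class SETTriplet:
    """Illustrative (Stakeholder, Event, Time) triplet; the paper's
    exact schema may differ."""
    stakeholder: str
    event: str
    time: str  # e.g. an ISO date extracted from the article

def stakeholder_relevancy(triplets: list[SETTriplet]) -> dict[str, float]:
    """Toy relevancy score: rank stakeholders by the share of extracted
    events they participate in. A stand-in for the paper's metric."""
    counts = Counter(t.stakeholder for t in triplets)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

triplets = [
    SETTriplet("Nixon", "signs emergency DST act", "1974-01-04"),
    SETTriplet("Congress", "repeals year-round DST", "1974-10-05"),
    SETTriplet("Nixon", "resigns", "1974-08-09"),
]
scores = stakeholder_relevancy(triplets)
print(scores)  # Nixon appears in 2 of 3 events, Congress in 1 of 3
```

A stakeholder that recurs across many dated events would rank highly and anchor the generated timeline; the actual system derives such signals with LLM extraction rather than simple counting.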


Unstructured Evidence Attribution for Long Context Query Focused Summarization

Wright, Dustin, Mujahid, Zain Muhammad, Wang, Lu, Augenstein, Isabelle, Jurgens, David

arXiv.org Artificial Intelligence

Large language models (LLMs) are capable of generating coherent summaries from very long contexts given a user query. Extracting and properly citing evidence spans could help improve the transparency and reliability of these summaries. At the same time, LLMs suffer from positional biases in terms of which information they understand and attend to, which could affect evidence citation. Whereas previous work has focused on evidence citation with predefined levels of granularity (e.g. sentence, paragraph, document, etc.), we propose the task of long-context query focused summarization with unstructured evidence citation. We show how existing systems struggle to generate and properly cite unstructured evidence from their context, and that evidence tends to be "lost-in-the-middle". To help mitigate this, we create the Summaries with Unstructured Evidence Text dataset (SUnsET), a synthetic dataset generated using a novel domain-agnostic pipeline which can be used as supervision to adapt LLMs to this task. We demonstrate across 5 LLMs of different sizes and 4 datasets with varying document types and lengths that LLMs adapted with SUnsET data generate more relevant and factually consistent evidence than their base models, extract evidence from more diverse locations in their context, and can generate more relevant and consistent summaries.


Google Photos has new AI-powered features to clean up your library

Engadget

A set of features rolling out to Google Photos today will make it much easier to declutter your photo library, the company announced in a blog post. Google Photos will now automatically identify similar photos that you took in rapid succession – helpful for those times when you clicked 50 shots of that gorgeous sunset to get the one perfect frame you will never look at again – and group them in a single "stack" to clean up your library. The service will select a top pick that best represents the moment, but you can manually choose an image you want too. If you prefer to have multiple sunsets littering your library, you can turn off stacking. Photos will also automatically organize your pictures, separating IDs, receipts, and tickets into different albums, a feature that seems like it should have been there ages ago given how good Google Photos is at recognizing what's in your images.


Legal Challenges to Generative AI, Part II

Communications of the ACM

DALL-E, Midjourney, and Stable Diffusion are among the generative AI technologies widely used to produce images in response to user prompts. The output images are, for the most part, indistinguishable from images humans might have created. Generative AI systems are capable of producing human-creator-like images because of the extremely large quantities of images, paired with textual descriptions of the images' contents, on which the systems' image models were trained. A text prompt to compose a picture of a dog playing with a ball on a beach at sunset will generate a responsive image drawing upon embedded representations of how dogs, balls, beaches, and sunsets are typically depicted and arranged in images of this sort.


'Under Alien Skies' Will Fuel the Next Generation of Sci-Fi

WIRED

Phil Plait, creator of the popular astronomy blog Bad Astronomy, credits his interest in outer space partly to his childhood love of science fiction movies such as Angry Red Planet and Robinson Crusoe on Mars. "I'm a huge science fiction dork," Plait says in Episode 541 of the Geek's Guide to the Galaxy podcast. "I've watched every TV show, just about, and movies and everything, read tons of books." In his new book, Under Alien Skies, Plait explores what various cosmic vistas would look like for a person who was physically present, studying them with ordinary human eyesight. "I open each chapter with a short vignette, basically a fictional tale," he says. "So I say, 'You are at this planet,' 'You are standing on the bridge of your starship,' 'You are standing there watching a dust storm approach you on Mars.'" Plait hopes that the book will serve as a valuable resource for filmmakers and science fiction authors looking to inject an extra dose of reality into their speculative visions. "I've actually done some consulting for movies and TV shows, and even a couple of video games," he says. "So I kind of know that process of advising writers, or other folks who are involved in the entertainment business, of what the real science is." As much as Plait enjoys seeing science fiction that incorporates real science, he recognizes that the ultimate aim of any book or movie is to tell a good story. "Even if they don't get the science correct, it's OK, because you're still inspiring people," he says. "And if they get the science right?


Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints

Lu, Albert, Zhang, Hongxin, Zhang, Yanzhe, Wang, Xuezhi, Yang, Diyi

arXiv.org Artificial Intelligence

The limits of open-ended generative models are unclear, yet increasingly important. What causes them to succeed and what causes them to fail? In this paper, we take a prompt-centric approach to analyzing and bounding the abilities of open-ended generative models. We present a generic methodology of analysis with two challenging prompt constraint types: structural and stylistic. These constraint types are categorized into a set of well-defined constraints that are analyzable by a single prompt. We then systematically create a diverse set of simple, natural, and useful prompts to robustly analyze each individual constraint. Using the GPT-3 text-davinci-002 model as a case study, we generate outputs from our collection of prompts and analyze the model's generative failures. We also show the generalizability of our proposed method on other large models like BLOOM and OPT. Our results and our in-context mitigation strategies reveal open challenges for future research. We have publicly released our code at https://github.com/SALT-NLP/Bound-Cap-LLM.
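As a concrete illustration of what a single analyzable structural constraint looks like, here is a toy checker for a prompt such as "write exactly N sentences." This is illustrative only, using a naive terminator-based split; it is not the evaluation code released in the SALT-NLP repository:

```python
import re

def meets_sentence_constraint(text: str, n_sentences: int) -> bool:
    """Check a toy structural constraint: the model output must contain
    exactly n_sentences sentences (split naively on ., !, ?)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return len(sentences) == n_sentences

print(meets_sentence_constraint("One. Two. Three.", 3))   # True
print(meets_sentence_constraint("Only one sentence.", 3)) # False
```

Framing each constraint as an automatically checkable predicate over the generated text is what lets a study like this score many model outputs systematically rather than by hand.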


How AI sees the world -- in ways that are predictable, yet way off

#artificialintelligence

The interwebs, as of late, have been filled with images created by artificial intelligence rendering bots such as DALL-E and Midjourney -- and the humans (I think they're humans) using them as tools. Brooklyn-based artist Zach Katz has used it to reimagine the urban design of cities. A reporter at SFGATE has undertaken a similar project, asking DALL-E 2 to retool some of the city's architecture and infrastructure. In July, the Guardian rounded up four artists to come up with unlikely prompts -- such as "biotech harpy in field at sunset" -- for DALL-E Mini (the free, public version of DALL-E). Naturally, the advent of bots that can create an image out of a simple text command is drawing the scrutiny of illustrators.


How the spirit of ancient Stonehenge was captured with a 21st-century drone

National Geographic

Reuben Wu, a British photographer and visual artist based in Chicago, was first introduced to National Geographic as most people are: When he was a child, he enjoyed looking at the magazines his father subscribed to for decades. He dreamed of seeing his photographs in the same magazine--and even on the cover. So when National Geographic asked him to photograph an iconic monument he knows well, he was ready to work. Last summer, Wu experienced a stark contrast of modern and prehistoric, as he used drones and artificial light to photograph Stonehenge, one of the best-known prehistoric monuments, while hearing honking cars passing by. The site in Wiltshire, England, is bisected by the A303--a major road that may soon be in a tunnel should a 2020 proposal become reality--which means motorists may have seen Wu's photo shoot and lit-up drones.