freezer
Ancient underground freezer unearthed at South Korean castle
The 1,400-year-old 'bingo' is the oldest known facility of its kind. Archaeologists have discovered South Korea's earliest known ice storage chamber at the site of one of the nation's most historically significant royal castles. At over 1,400 years old, the underground facility offers an unprecedented look into the architectural complexity and advancement of feudal Korean culture. Researchers uncovered the ice storage bunker during the seventeenth excavation survey of Busosanseong Fortress, located about 90 miles south of Seoul in South Chungcheong Province.
DNA forensics helps identify remains found in Colorado freezer as teenager missing for nearly 20 years
A human head and set of hands found in January inside a freezer at a western Colorado home, which had recently been sold, have been identified as those of a 16-year-old girl who went missing almost 20 years ago. On Jan. 12, people were cleaning out a Grand Junction, Colorado, home, located nearly 200 miles west of Denver, when they discovered a human head and hands inside a freezer. On Friday, the Mesa County Coroner's Office announced that, through DNA testing, the victim was identified as Amanda Leariel Overstreet. The Mesa County Sheriff's Office said Overstreet is believed to have been about 16 years old when she disappeared, adding that she had not been seen or heard from since April 2005.
- North America > United States > Colorado > Mesa County > Grand Junction (0.26)
- North America > United States > Virginia (0.06)
- North America > United States > Massachusetts (0.06)
Incremental Learning of Humanoid Robot Behavior from Natural Interaction and Large Language Models
Bärmann, Leonard, Kartmann, Rainer, Peller-Konrad, Fabian, Waibel, Alex, Asfour, Tamim
Natural-language dialog is key for intuitive human-robot interaction. It can be used not only to express humans' intents, but also to communicate instructions for improvement if a robot does not understand a command correctly. It is therefore important to endow robots with the ability to learn incrementally from such interaction experience, allowing them to improve their behavior and avoid mistakes in the future. In this paper, we propose a system to achieve incremental learning of complex behavior from natural interaction, and demonstrate its implementation on a humanoid robot. Building on recent advances, we present a system that deploys Large Language Models (LLMs) for high-level orchestration of the robot's behavior, based on the idea of enabling the LLM to generate Python statements in an interactive console to invoke both robot perception and action. The interaction loop is closed by feeding back human instructions, environment observations, and execution results to the LLM, thus informing the generation of the next statement. Specifically, we introduce incremental prompt learning, which enables the system to interactively learn from its mistakes. For that purpose, the LLM can call another LLM responsible for code-level improvements of the current interaction based on human feedback. The improved interaction is then saved in the robot's memory, and thus retrieved for similar requests. We integrate the system in the robot cognitive architecture of the humanoid robot ARMAR-6 and evaluate our methods both quantitatively (in simulation) and qualitatively (in simulation and real-world) by demonstrating generalized incrementally-learned knowledge.
- Europe > Germany > Baden-Württemberg > Karlsruhe Region > Karlsruhe (0.04)
- Asia > Japan > Shikoku > Kagawa Prefecture > Takamatsu (0.04)
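The interaction loop the abstract describes — an LLM emitting Python statements into an interactive console that exposes robot perception and action, with results fed back — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `llm_next_statement`, `perceive`, and `grasp` are hypothetical stand-ins for the real LLM call and the ARMAR-6 robot APIs.

```python
def perceive():
    """Stub perception: report objects the robot currently sees."""
    return ["cup", "table"]

def grasp(obj):
    """Stub action: pretend to grasp an object and report the result."""
    return f"grasped {obj}"

def llm_next_statement(history):
    """Stand-in for the LLM: pick the next Python statement given the
    dialog/observation history. A real system would prompt an LLM here."""
    if not any("grasped" in h for h in history):
        return "result = grasp(perceive()[0])"
    return "done = True"

def interaction_loop(max_steps=5):
    # The "interactive console": a namespace exposing the robot's API.
    env = {"perceive": perceive, "grasp": grasp, "done": False}
    history = ["human: please hand me the cup"]
    for _ in range(max_steps):
        stmt = llm_next_statement(history)
        exec(stmt, env)  # run the generated statement
        # Close the loop: feed the execution result back into the history.
        history.append(f"executed: {stmt} -> {env.get('result')}")
        if env.get("done"):
            break
    return history
```

Incremental prompt learning would then operate on top of such a loop, rewriting a failed interaction's code based on human feedback and storing the improved version in memory.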
ByteSized32: A Corpus and Challenge Task for Generating Task-Specific World Models Expressed as Text Games
Wang, Ruoyao, Todd, Graham, Yuan, Eric, Xiao, Ziang, Côté, Marc-Alexandre, Jansen, Peter
In this work, we investigate the capacity of language models to generate explicit, interpretable, and interactive world models of scientific and common-sense reasoning tasks. We operationalize this as a task of generating text games, expressed as hundreds of lines of Python code. To facilitate this task, we introduce ByteSized32 (Code: github.com/cognitiveailab/BYTESIZED32), a corpus of 32 reasoning-focused text games totaling 20k lines of Python code. We empirically demonstrate that GPT-4 can use these games as templates for single-shot in-context learning, successfully producing runnable games on unseen topics in 28% of cases. When allowed to self-reflect on program errors, game runnability substantially increases to 57%. While evaluating simulation fidelity is labor-intensive, we introduce a suite of automated metrics to assess game fidelity, technical validity, adherence to task specifications, and winnability, showing a high degree of agreement with expert human ratings. We pose this as a challenge task to spur further development at the juncture of world modeling and code generation.
- North America > United States > Arizona (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > New York (0.04)
- (5 more...)
- Health & Medicine (1.00)
- Leisure & Entertainment > Games > Computer Games (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.53)
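The self-reflection step that lifts runnability from 28% to 57% amounts to a generate-run-repair loop: execute the generated game, and on a crash feed the traceback back to the model for another attempt. The sketch below is illustrative only; `generate_game` is a stand-in that deliberately returns broken code first and a repaired version on reflection, so the loop runs without an actual GPT-4 call.

```python
import traceback

ATTEMPTS = [
    "print(score",               # broken: unbalanced parenthesis
    "score = 10\nprint(score)",  # "repaired" version
]

def generate_game(prompt, error=None):
    """Stand-in for the model: broken code first, fixed code on reflection."""
    return ATTEMPTS[0] if error is None else ATTEMPTS[1]

def reflect_until_runnable(prompt, max_tries=3):
    error = None
    for _ in range(max_tries):
        code = generate_game(prompt, error=error)
        try:
            exec(compile(code, "<game>", "exec"), {})
            return code, True  # game ran without crashing
        except Exception:
            error = traceback.format_exc()  # feed the error back
    return code, False
```

Note that "runnable" here only means the code executes; the paper's automated fidelity, validity, and winnability metrics address the harder question of whether the game is any good.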
Arizona woman arrested for keeping dozens of dogs in squalor, others dead in freezer
An estimated 55 dogs were rescued from an Arizona woman's home for special-needs dogs after they were found living in filthy conditions; more dogs were reportedly found dead in a freezer. Police in Chandler responded to April Mclaughlin's home on Friday and found dozens of dogs living in squalor with no water. Mclaughlin had been running a shelter for special-needs dogs, but conditions had deteriorated so badly that firefighters had to wear breathing apparatus to remain inside the home, according to AZ Family. Officials began investigating on Sept. 8 after a veterinarian alerted police that some of Mclaughlin's dogs appeared to be in poor health.
- North America > United States > Arizona > Maricopa County > Chandler (0.09)
- North America > Puerto Rico (0.06)
ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models
Jentzsch, Sophie, Kersting, Kristian
Humor is a central aspect of human communication that artificial agents have so far not mastered. Large language models (LLMs) are increasingly able to capture implicit and contextual information, and OpenAI's ChatGPT in particular recently gained immense public attention. The GPT3-based model almost seems to communicate on a human level and can even tell jokes. But is ChatGPT really funny? We put ChatGPT's sense of humor to the test. In a series of exploratory experiments around jokes, i.e., generation, explanation, and detection, we seek to understand ChatGPT's capability to grasp and reproduce human humor. Since the model itself is not accessible, we applied prompt-based experiments. Our empirical evidence indicates that jokes are not hard-coded, but mostly also not newly generated by the model: over 90% of 1008 generated jokes were the same 25 jokes. The system accurately explains valid jokes but also comes up with fictional explanations for invalid jokes. Joke-typical characteristics can mislead ChatGPT in the classification of jokes. ChatGPT has not solved computational humor yet, but it can be a big leap toward "funny" machines.
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Cologne (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.34)
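The "over 90% of 1008 jokes were the same 25 jokes" finding boils down to a duplicate count: sample many jokes, then measure what fraction of all samples the top-k distinct jokes cover. A minimal sketch, using a tiny hypothetical sample in place of actual ChatGPT outputs:

```python
from collections import Counter

def top_k_coverage(jokes, k=25):
    """Fraction of all samples accounted for by the k most frequent jokes."""
    counts = Counter(jokes)
    top = counts.most_common(k)
    return sum(n for _, n in top) / len(jokes)

# Hypothetical sample: one joke repeated 9 times plus one unique joke,
# so the single most frequent joke covers 90% of the samples.
sample = ["Why did the chicken cross the road?"] * 9 + ["a one-off joke"]
coverage = top_k_coverage(sample, k=1)  # 0.9
```

In the paper's setting, `jokes` would hold the 1008 generated samples (normalized for trivial wording differences) and `k` would be 25.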
A Song of Ice and Fire: Analyzing Textual Autotelic Agents in ScienceWorld
Teodorescu, Laetitia, Yuan, Xingdi, Côté, Marc-Alexandre, Oudeyer, Pierre-Yves
Building open-ended agents that can autonomously discover a diversity of behaviours is one of the long-standing goals of artificial intelligence. This challenge can be studied in the framework of autotelic RL agents, i.e. agents that learn by selecting and pursuing their own goals, self-organizing a learning curriculum. Recent work identified language as a key dimension of autotelic learning, in particular because it enables abstract goal sampling and guidance from social peers for hindsight relabelling. Within this perspective, we study the following open scientific questions: What is the impact of hindsight feedback from a social peer (e.g. selective vs. exhaustive)? How can the agent learn from very rare language goal examples in its experience replay? How can multiple forms of exploration be combined, and take advantage of easier goals as stepping stones to reach harder ones? To address these questions, we use ScienceWorld, a textual environment with rich abstract and combinatorial physics. We show the importance of selectivity from the social peer's feedback; that experience replay needs to over-sample examples of rare goals; and that following self-generated goal sequences where the agent's competence is intermediate leads to significant improvements in final performance.
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
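One of the abstract's findings — that experience replay must over-sample examples of rare goals — can be sketched as a replay buffer whose sampling weight is inversely proportional to each goal's frequency, so transitions for rare language goals are drawn far more often than their raw count would suggest. The names and structure below are illustrative assumptions, not the authors' implementation:

```python
import random
from collections import Counter

def sample_batch(buffer, batch_size, rng=random):
    """buffer: list of (goal, transition) pairs.

    Each entry is weighted by 1 / count(goal), so every goal contributes
    roughly equal probability mass regardless of how rare it is.
    """
    goal_counts = Counter(goal for goal, _ in buffer)
    weights = [1.0 / goal_counts[goal] for goal, _ in buffer]
    return rng.choices(buffer, weights=weights, k=batch_size)
```

With 99 transitions for a common goal and 1 for a rare goal, uniform sampling would show the rare goal about 1% of the time; this scheme raises it to roughly 50%, which is the over-sampling effect the paper argues is needed.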
An Introduction to Accelerator and Parallel Programming
Computers are a means to an end. They allow us to have faster solutions to complex problems, provide the ability to store and retrieve information across the globe, provide the backbone for remarkable technologies like robotics and self-driving cars (sort of) and AI, and hopefully uplift the lives of everyone on the planet. As the problems to solve have become more complex, computer architecture, programming languages, and programming models have continued to evolve. This has led to the growth of hardware accelerators and domain-specific programming models. Professor David Patterson from UC Berkeley (the author of all the computer architecture books I had in college) has talked extensively about domain-specific architectures and accelerators.
Undersea Permafrost Is a Huge Wild Card for the Climate
Scientists used torpedo-shaped robots to map the Arctic seafloor with sonar, revealing massive sinkholes of thawed permafrost. This story was originally published by Wired and is reproduced here as part of the Climate Desk collaboration. Around 20,000 years ago, the world was so frigid that massive glaciers sucked up enough water to lower sea levels by 400 feet. As the sea pulled back, newly exposed land froze to form permafrost, a mixture of earth and ice that today sprawls across the far north. But as the world warmed into the climate we enjoy today (for the time being), sea levels rose again, submerging the coastal edges of that permafrost, which remain frozen below the water. It's a huge, hidden climate variable that scientists are racing to understand.
- Government (0.35)
- Energy (0.31)
La veille de la cybersécurité ("The cybersecurity watch")
When I talk to retailers about artificial intelligence, their eyes glaze over, as if I'm speaking a foreign language, and very few want to talk about it. Yet AI is going to pervade almost every aspect of retail, big and small. Here's a case in point: the EPA estimates that a 50,000-square-foot supermarket (a large store, but not excessively so) uses about $200,000 worth of electricity and natural gas in the course of a year. According to the EPA, about half of that cost goes to refrigeration and lighting. Most such large stores have freezer cases that consumers open to pick out their frozen food.
- Retail (0.62)
- Consumer Products & Services > Food, Beverage, Tobacco & Cannabis (0.62)
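The EPA figures quoted above imply a simple breakdown, which a quick back-of-the-envelope calculation makes concrete. Only the numbers stated in the passage are used; the passage does not say how the refrigeration/lighting half splits between the two, so only their combined share is computed.

```python
# Figures from the passage: a ~50,000 sq ft supermarket, ~$200,000/year
# in electricity and natural gas, "about half" of it in refrigeration
# and lighting.
annual_energy_cost = 200_000                              # $/year
refrigeration_and_lighting = annual_energy_cost * 0.5     # $100,000/year
cost_per_sqft = annual_energy_cost / 50_000               # $4 per sq ft/year
```

That $100,000-a-year slice is why refrigeration is an obvious first target for AI-driven energy optimization in a store of this size.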