Scientists identify the perfect hula hoop 'body type'
Hula hooping has remained a staple of modern US culture since the 1950s, but people around the world have participated in similar activities for thousands of years. The physics behind maintaining a perfect spin, however, has remained a mystery. Is it something that can be achieved by anyone with enough time and effort, or are there natural hula hoopers among us? Researchers recently investigated these dynamics using a specially designed gyrating robot, and their findings provide first-of-its-kind insight into the perfect spin. "Seemingly simple toys and games often involve surprisingly subtle physics and mathematics," a team from NYU's Applied Mathematics Laboratory wrote in their study, published in the Proceedings of the National Academy of Sciences on Monday.
A Pendulum-Driven Legless Rolling Jumping Robot
Buzhardt, Jake, Chivkula, Prashanth, Tallapragada, Phanindra
In this paper, we present a novel rolling, jumping robot. The robot consists of a driven pendulum mounted to a wheel in a compact, lightweight, 3D printed design. We show that by driving the pendulum to shift the robot's weight distribution, the robot can attain significant rolling speed, achieve jumps of up to 2.5 body lengths vertically, and clear horizontal distances of over 6 body lengths. The robot's dynamic model is derived, and simulation results are consistent with the rolling and jumping motions observed on the physical robot. The ability to both roll and jump effectively using a minimalistic design makes this robot unique and could inspire the use of similar mechanisms on robots intended for applications in which agile locomotion on unstructured terrain is necessary, such as disaster response or planetary exploration.
- North America > United States (0.04)
- Asia > Japan > Shikoku > Ehime Prefecture > Matsuyama (0.04)
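The abstract above attributes the robot's locomotion to a torque-driven pendulum that shifts its weight. As a hedged illustration (not the authors' derived model), here is a minimal planar simulation of a wheel rolling without slipping with a driven pendulum at its hub; all parameter values and the sinusoidal drive are assumptions for demonstration.

```python
# Minimal planar model: wheel (rolls without slipping) + driven hub pendulum.
# Generalized coordinates: x (wheel center) and theta (pendulum from vertical).
# Illustrative sketch only, not the model derived in the paper.
import numpy as np
from scipy.integrate import solve_ivp

M, m = 0.5, 0.2        # wheel and pendulum masses (kg, assumed)
R, l = 0.05, 0.04      # wheel radius and pendulum length (m, assumed)
I_w = 0.5 * M * R**2   # wheel inertia (solid disc)
g = 9.81

def drive_torque(t):
    """Assumed sinusoidal motor torque between wheel and pendulum."""
    return 0.02 * np.sin(2 * np.pi * 3.0 * t)

def rhs(t, s):
    x, theta, xd, thd = s
    tau = drive_torque(t)
    A = M + m + I_w / R**2
    # Mass matrix and force vector from the Lagrangian of the coupled system;
    # the motor torque acts on the pendulum with reaction -tau/R on the wheel.
    Mmat = np.array([[A, m * l * np.cos(theta)],
                     [m * l * np.cos(theta), m * l**2]])
    f = np.array([m * l * np.sin(theta) * thd**2 - tau / R,
                  tau - m * g * l * np.sin(theta)])
    xdd, thdd = np.linalg.solve(Mmat, f)
    return [xd, thd, xdd, thdd]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
print(f"distance rolled after 5 s: {sol.y[0, -1]:.3f} m")
```

Pumping the pendulum builds rolling speed through exactly the weight-shifting mechanism the abstract describes; the jump would additionally require a ground-contact model, which this sketch omits.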
Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images
Bitton-Guetta, Nitzan, Bitton, Yonatan, Hessel, Jack, Schmidt, Ludwig, Elovici, Yuval, Stanovsky, Gabriel, Schwartz, Roy
Weird, unusual, and uncanny images pique the curiosity of observers because they challenge common sense. For example, an image released during the 2022 World Cup depicts the famous soccer stars Lionel Messi and Cristiano Ronaldo playing chess, which playfully violates our expectation that their competition should occur on the football field. Humans can easily recognize and interpret these unconventional images, but can AI models do the same? We introduce WHOOPS!, a new dataset and benchmark for visual commonsense. The dataset comprises purposefully commonsense-defying images created by designers using publicly available image generation tools like Midjourney. We consider several tasks posed over the dataset. In addition to image captioning, cross-modal matching, and visual question answering, we introduce a difficult explanation generation task, where models must identify and explain why a given image is unusual. Our results show that state-of-the-art models such as GPT3 and BLIP2 still lag behind human performance on WHOOPS!. We hope our dataset will inspire the development of AI models with stronger visual commonsense reasoning abilities. Data, models and code are available at the project website: whoops-benchmark.github.io
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Oceania > New Zealand (0.04)
- Oceania > Australia (0.04)
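The WHOOPS! abstract above centers on the explanation-generation task. As a hedged illustration, the sketch below prompts an off-the-shelf BLIP-2 model through Hugging Face transformers to explain why an image is unusual; the checkpoint name, prompt wording, and image path are assumptions, and this is not the paper's evaluation code.

```python
# Illustrative sketch of the explanation-generation task: ask a
# vision-language model why an image is unusual. Not the paper's code;
# the checkpoint, prompt, and image path are assumptions.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

checkpoint = "Salesforce/blip2-opt-2.7b"  # assumed public checkpoint
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16, device_map="auto")

image = Image.open("weird_image.jpg")  # e.g. a WHOOPS!-style image
prompt = "Question: What is unusual about this image? Answer:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16)
out = model.generate(**inputs, max_new_tokens=60)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```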
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models
Xu, Peng, Shao, Wenqi, Zhang, Kaipeng, Gao, Peng, Liu, Shuo, Lei, Meng, Meng, Fanqing, Huang, Siyuan, Qiao, Yu, Luo, Ping
Large Vision-Language Models (LVLMs) have recently played a dominant role in multimodal vision-language learning. Despite their great success, the field lacks a holistic evaluation of their efficacy. This paper presents a comprehensive evaluation of publicly available large multimodal models by building an LVLM evaluation Hub (LVLM-eHub). Our LVLM-eHub consists of 8 representative LVLMs such as InstructBLIP and MiniGPT-4, which are thoroughly evaluated by a quantitative capability evaluation and an online arena platform. The former evaluates 6 categories of multimodal capabilities of LVLMs, such as visual question answering and embodied artificial intelligence, on 47 standard text-related visual benchmarks, while the latter provides user-level evaluation of LVLMs in an open-world question-answering scenario. The study reveals several innovative findings. First, instruction-tuned LVLMs with massive in-domain data, such as InstructBLIP, heavily overfit many existing tasks and generalize poorly in the open-world scenario. Second, instruction-tuned LVLMs with moderate instruction-following data may exhibit object hallucination issues (i.e., generating objects in their descriptions that are inconsistent with the target images). This either renders current evaluation metrics such as CIDEr for image captioning ineffective or produces wrong answers. Third, employing a multi-turn reasoning evaluation framework can mitigate object hallucination, shedding light on developing an effective pipeline for LVLM evaluation. These findings provide a foundational framework for the conception and assessment of innovative strategies aimed at enhancing zero-shot multimodal techniques. Our LVLM-eHub will be available at https://github.com/OpenGVLab/Multi-Modality-Arena
- Leisure & Entertainment > Sports (0.69)
- Leisure & Entertainment > Games (0.47)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
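To make the object-hallucination finding above concrete, here is a minimal sketch in the spirit of CHAIR-style metrics (my framing, not LVLM-eHub's code): score a generated caption by the fraction of mentioned objects that are absent from the image's ground-truth object set.

```python
# Minimal CHAIR-style hallucination check (illustrative; not LVLM-eHub code).
# An object mentioned in the caption but absent from the image's annotated
# object set counts as hallucinated.

def hallucination_rate(caption: str, image_objects: set,
                       vocabulary: set) -> float:
    """Fraction of vocabulary objects mentioned in the caption that are
    not actually present in the image."""
    words = set(caption.lower().replace(",", " ").replace(".", " ").split())
    mentioned = words & vocabulary          # objects the caption claims
    if not mentioned:
        return 0.0
    hallucinated = mentioned - image_objects
    return len(hallucinated) / len(mentioned)

# Toy example with an assumed object vocabulary and annotation.
vocab = {"dog", "cat", "frisbee", "car", "tree"}
truth = {"dog", "frisbee"}
caption = "A dog catches a frisbee next to a cat."
print(hallucination_rate(caption, truth, vocab))  # 1 of 3 mentions -> 0.33
```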
AI strips out city noise to improve earthquake monitoring systems
A deep learning algorithm can remove city noise from earthquake monitoring tools, potentially making it easier to pinpoint when and where a tremor occurs. "Earthquake monitoring in urban settings is important because it helps us understand the fault systems that underlie vulnerable cities," says Gregory Beroza at Stanford University in California. "By seeing where the faults go, we can better anticipate earthquake events." However, the sounds of cities, from cars, aircraft, helicopters and general hustle and bustle, add noise that makes it difficult to discern the underground signals that indicate an earthquake is happening. To try to improve our ability to identify and locate earthquakes, Beroza and his colleagues trained a deep neural network to distinguish between earthquake signals and other noise sources.
- North America > United States > Texas > Harris County > Houston (0.18)
- North America > United States > California > Los Angeles County > Long Beach (0.06)
- Asia > Japan (0.06)
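The article gives no implementation details, but a minimal sketch of the general idea, a small 1D convolutional network trained to label seismogram windows as earthquake or urban noise, might look like the following; the architecture, window length, and stand-in training data are all assumptions, not the Stanford group's model.

```python
# Illustrative sketch only: a small 1D CNN that labels seismogram windows
# as "earthquake" vs "urban noise". Architecture and data are assumed.
import torch
import torch.nn as nn

class SeismicClassifier(nn.Module):
    def __init__(self, window_len: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 2),  # logits: [urban noise, earthquake]
        )

    def forward(self, x):      # x: (batch, 1, window_len)
        return self.net(x)

model = SeismicClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data: random waveforms and labels, purely to show the loop.
waves = torch.randn(64, 1, 1024)
labels = torch.randint(0, 2, (64,))

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(waves), labels)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```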
Everything we know about 'Fall Guys' Season 2 and the roadmap beyond
Other levels include a spin on Hoopsie Daisy, a mini-game that involves jumping through hoops to score points faster than other teams. In this new level, you move ramps and platforms to reach the hoops, and there are moving drawbridges as well. The third is an obstacle course with rotating spiked logs, swinging axes and more. Mediatonic is keeping the remaining level under wraps for now, but it did recently announce Big Yeetus, a randomly appearing swinging hammer that brings more chaos to rounds.
- Media > Television (0.40)
- Leisure & Entertainment > Games > Computer Games (0.40)
AI Helps Seismologists Predict Earthquakes
In May of last year, after a 13-month slumber, the ground beneath Washington's Puget Sound rumbled to life. The quake began more than 20 miles below the Olympic Mountains and, over the course of a few weeks, drifted northwest, reaching Canada's Vancouver Island. It then briefly reversed course, migrating back across the US border before going silent again. All told, the monthlong earthquake likely released enough energy to register as a magnitude 6. By the time it was done, the southern tip of Vancouver Island had been thrust a centimeter or so closer to the Pacific Ocean.
- North America > United States (0.36)
- North America > Canada (0.25)
Quadrotor Safety System Stops Propellers Before You Lose a Finger
Quadrotors have a reputation for being both fun and expensive, but it's not usually obvious how dangerous they can be. While it's pretty clear from the get-go that it's in everyone's best interest to avoid the spinny bits whenever possible, quadrotor safety primarily involves doing little more than trying your level best not to run into people. That's generally good advice, but problems tend to happen when, for whatever reason, the drone escapes your control. Maybe it's your fault, maybe it's the drone's fault, but either way, those spinny bits can cause serious damage. Safety-conscious quadrotor pilots have few options for making their drones safer, and none of them are all that great, due either to mediocre effectiveness or to significant cost and performance tradeoffs.
How Deep Learning Machines Program Themselves
In my last post, I discussed the state of confusion around deep learning and its abilities, and how even software programmers have a hard time understanding how deep learning enables machines to program themselves. In this post, I will try to explain what is probably the hardest deep learning concept to understand: how deep learning machines program themselves without any human intervention. Since the advent of software programming, humans have written code to program the behavior of machines. In other words, the behavior of a machine only changes when a human reprograms it with new lines of code.
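Deep learning inverts that relationship: the "program" lives in numeric weights, and a training loop rewrites those weights from data with no human editing any code. As a minimal sketch (a toy linear model with made-up data, standing in for a deep network), the loop below is the whole mechanism:

```python
# Toy illustration: the machine's "program" is the pair (w, b), and
# gradient descent rewrites it from data. No human edits code during training.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 0.5 + rng.normal(0, 0.05, size=100)  # hidden rule: y = 3x + 0.5

w, b = 0.0, 0.0            # the initial "program"
lr = 0.1                   # learning rate

for step in range(500):
    pred = w * X + b
    err = pred - y
    # Gradients of mean squared error with respect to w and b.
    w -= lr * 2 * np.mean(err * X)
    b -= lr * 2 * np.mean(err)

print(f"learned program: y = {w:.2f} * x + {b:.2f}")  # approx. y = 3.00x + 0.50
```

The human wrote only the learning rule; the values of w and b that determine the machine's behavior were written by the data.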