This odd vine contradicts long-standing evolutionary theory
'They don't follow the classic ideas of how we would have imagined the species evolved.' A tiny tropical flower is challenging a longstanding model of plant evolution. According to researchers at the Field Museum in Chicago, an oddball member of the lipstick vine family evolved to attract new pollinators after spreading to other parts of the world, and not the other way around. "It was really exciting to get these results, because they don't follow the classic ideas of how we would have imagined the species evolved," explained Jing-Yi Lu, a botanist and coauthor of a study published today in the journal . Most lipstick vines look like their name implies: lengthy plants featuring vibrantly red, tubular flowers.
- North America > United States > Illinois > Cook County > Chicago (0.25)
- Asia > Taiwan (0.07)
- North America > United States > Idaho (0.05)
- (5 more...)
Most bugs can't see red--but these beetles can
Most insects have evolved to see blue, green, and even ultraviolet light. But most insects have trouble parsing one hue in particular: red. Even bees and other pollinators that visit traditionally vibrant poppies aren't attracted by the visible coloration, but by the UV light reflected from their petals. Now, an international zoology team has discovered that some insect species can manage to see what their relatives cannot.
Plants can hear tiny wing flaps of pollinators
Our planet runs on pollinators. Without bees, moths, weevils, and more zooming around and spreading plants' reproductive cells, plants and important crops would not grow. Without plants we would not breathe or eat. When these crucial pollinating species visit flowers and other plants, they produce a number of characteristic sounds, such as wing flapping when hovering, landing, and taking off.
FloPE: Flower Pose Estimation for Precision Pollination
Shrestha, Rashik, Rijal, Madhav, Smith, Trevor, Gu, Yu
This study presents Flower Pose Estimation (FloPE), a real-time flower pose estimation framework for computationally constrained robotic pollination systems. Robotic pollination has been proposed to supplement natural pollination and ensure global food security as natural pollinator populations decline. However, flower pose estimation for pollination is challenging due to natural variability, flower clusters, and the high accuracy demanded by the flowers' fragility during pollination. This method leverages 3D Gaussian Splatting to generate photorealistic synthetic datasets with precise pose annotations, enabling effective knowledge distillation from a high-capacity teacher model to a lightweight student model for efficient inference. The approach was evaluated on both single- and multi-arm robotic platforms, achieving a mean pose estimation error of 0.6 cm and 19.14 degrees at low computational cost. Our experiments validate the effectiveness of FloPE, achieving up to a 78.75% pollination success rate and outperforming prior robotic pollination techniques.
- North America > United States (0.94)
- Asia (0.14)
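The teacher-student distillation that FloPE's abstract describes can be caricatured with a toy linear stand-in: a large "teacher" pose regressor is compressed into a lower-capacity "student" that mimics its outputs. Everything here (shapes, the rank-k compression, the 3-D pose target) is illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a linear "teacher" that maps image features to a
# 3-D flower position, distilled into a smaller (low-rank) "student".
n_samples, n_features = 200, 16
X = rng.normal(size=(n_samples, n_features))
W_teacher = rng.normal(size=(n_features, 3))   # predicts (x, y, z)
teacher_poses = X @ W_teacher                  # teacher's pose estimates

# Student: a rank-2 approximation of the teacher's weights, i.e. a model
# with deliberately reduced capacity fit to reproduce the teacher.
k = 2
U, S, Vt = np.linalg.svd(W_teacher, full_matrices=False)
W_student = (U[:, :k] * S[:k]) @ Vt[:k]        # best rank-k approximation
student_poses = X @ W_student

# Distillation error: how closely the student tracks the teacher.
err = np.mean(np.linalg.norm(student_poses - teacher_poses, axis=1))
print(f"mean pose mismatch: {err:.4f}")
```

In the real system the student would be a lightweight neural network trained on the synthetic, pose-annotated renders; the point of the sketch is only the capacity gap between the two models and the imitation objective.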
New Bird Buddy smart garden products let you peek into the secret lives of pollinators
The makers of the camera-equipped bird feeder, Bird Buddy, introduced two new products at CES 2025 under a new brand called Wonder that let you spy on nature and help pollinators thrive. Petal, a solar-powered camera with changeable lenses and Nature Intelligence (aka AI), can be mounted with a clip, a flexible arm or a stem, so it can be set up pretty much wherever you want outdoors. It'll analyze everything it sees to let you know what birds, insects and other critters stopped by. The second product, Wonder Blocks, is a modular system that's kind of like an apartment building for bugs and birds. The Petal camera comes in soft, bright colors like orange, blue and yellow, so it would look right at home in a flower pot or wrapped around the thin branch of a tree.
Rare bees kill Meta's nuclear-powered AI data center plans
Environmental regulators reportedly quashed Mark Zuckerberg's nuclear plant partnership meant to help power Meta's ongoing artificial intelligence projects. Details remain scarce, but the main reason for pausing plans allegedly comes down to one issue--rare bees. The tech company's setback, first reported on November 4th by the Financial Times, came after surveyors discovered the currently unspecified pollinators while reviewing land meant for a new AI data center. The selected site offered easy access to the nearby, unspecified nuclear plant. Zuckerberg, however, confirmed the project's cancellation during a Meta all-hands meeting last week, according to the FT.
- North America > United States > Pennsylvania (0.06)
- North America > United States > California > San Luis Obispo County (0.06)
- Information Technology > Services (1.00)
- Energy > Power Industry > Utilities > Nuclear (1.00)
A Comprehensive Review of Current Robot-Based Pollinators in Greenhouse Farming
Singh, Rajmeet, Seneviratne, Lakmal, Hussain, Irfan
The decline of bee- and wind-based pollination in greenhouses, due to controlled environments and limited access, has boosted the importance of finding alternative pollination methods. Robot-based pollination systems have emerged as a promising solution, ensuring adequate crop yield even in challenging pollination scenarios. This paper presents a comprehensive review of the robot-based pollinators currently employed in greenhouses. The review categorizes pollinator technologies into major categories such as air-jet, water-jet, linear-actuator, ultrasonic-wave, and air-liquid-spray systems, each suited to specific crop pollination requirements. However, these technologies are often tailored to particular crops, limiting their versatility. Advances in science and technology have led to the integration of automated pollination technology, encompassing information technology, automatic perception, detection, control, and operation. This integration not only reduces labor costs but also fosters the ongoing progress of modern agriculture by refining technology, enhancing automation, and promoting intelligence in agricultural practices. Finally, the challenges encountered in the design of pollinators are addressed, and a forward-looking perspective is offered on future developments, aiming to contribute to the sustainable advancement of this technology.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- South America > Brazil (0.14)
- Asia > Japan (0.04)
- (8 more...)
- Overview (1.00)
- Research Report > Promising Solution (0.48)
MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs
Liu, Ziyu, Chu, Tao, Zang, Yuhang, Wei, Xilin, Dong, Xiaoyi, Zhang, Pan, Liang, Zijian, Xiong, Yuanjun, Qiao, Yu, Lin, Dahua, Wang, Jiaqi
Generating natural and meaningful responses to communicate with multi-modal human inputs is a fundamental capability of Large Vision-Language Models (LVLMs). While current open-source LVLMs demonstrate promising performance in simplified scenarios such as single-turn single-image input, they fall short in real-world conversation scenarios such as following instructions in a long context history with multiple turns and multiple images. Existing LVLM benchmarks primarily focus on single-choice questions or short-form responses, which do not adequately assess the capabilities of LVLMs in real-world human-AI interaction applications. Therefore, we introduce MMDU, a comprehensive benchmark, and MMDU-45k, a large-scale instruction tuning dataset, designed to evaluate and improve LVLMs' abilities in multi-turn and multi-image conversations. We employ a clustering algorithm to find the relevant images and textual descriptions from open-source Wikipedia and construct the question-answer pairs by human annotators with the assistance of the GPT-4o model. MMDU has a maximum of 18k image+text tokens, 20 images, and 27 turns, which is at least 5x longer than previous benchmarks and poses challenges to current LVLMs. Our in-depth analysis of 15 representative LVLMs using MMDU reveals that open-source LVLMs lag behind closed-source counterparts due to limited conversational instruction tuning data. We demonstrate that fine-tuning open-source LVLMs on MMDU-45k significantly addresses this gap, generating longer and more accurate conversations, and improving scores on MMDU and existing benchmarks (MMStar: +1.1%, MathVista: +1.5%, ChartQA: +1.2%). Our contributions pave the way for bridging the gap between current LVLM models and real-world application demands. This project is available at https://github.com/Liuziyu77/MMDU.
- Transportation > Passenger (1.00)
- Government > Military (1.00)
- Transportation > Marine (0.93)
- (4 more...)
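The MMDU abstract's construction step (clustering Wikipedia images and descriptions so that related items can be bundled into one multi-image dialogue) can be sketched with a plain k-means over toy embedding vectors. The embeddings, cluster count, and deterministic initialization are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "embeddings" of images/descriptions: two well-separated topic groups.
emb = np.vstack([rng.normal(0.0, 0.3, size=(10, 8)),
                 rng.normal(3.0, 0.3, size=(10, 8))])

def kmeans(x, k, iters=20):
    """Plain Lloyd's algorithm with deterministic (spread-out) init."""
    centroids = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = x[labels == j].mean(axis=0)
    return labels

labels = kmeans(emb, k=2)
# Items sharing a label would then be bundled into one multi-image,
# multi-turn dialogue and handed to annotators for QA writing.
```

In practice one would use real image/text embeddings (e.g. from a pretrained encoder) rather than synthetic vectors; the clustering step itself is the same shape.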
Motion-based video compression for resource-constrained camera traps
Ratnayake, Malika Nisal, Gallon, Lex, Toosi, Adel N., Dorin, Alan
Field-captured video allows for detailed studies of spatiotemporal aspects of animal locomotion, decision-making, and environmental interactions. However, despite the affordability of data capture with mass-produced hardware, storage, processing, and transmission overheads pose a significant hurdle to acquiring high-resolution video from field-deployed camera traps. Therefore, efficient compression algorithms are crucial for monitoring with camera traps that have limited access to power, storage, and bandwidth. In this article, we introduce a new motion analysis-based video compression algorithm designed to run on camera trap devices. We implemented and tested this algorithm using a case study of insect-pollinator motion tracking. The algorithm identifies and stores only image regions depicting motion relevant to pollination monitoring, reducing the overall data size by an average of 84% across a diverse set of test datasets while retaining the information necessary for relevant behavioural analysis. The methods outlined in this paper facilitate the broader application of computer vision-enabled, low-powered camera trap devices for remote, in-situ video-based animal motion monitoring.
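The core idea in the abstract above (store only image regions depicting motion) can be sketched with simple frame differencing: subtract consecutive frames, threshold the difference, and keep only the bounding box around changed pixels. This is a minimal illustration of motion-region extraction, not the paper's actual algorithm, and the threshold and synthetic frames are assumptions.

```python
import numpy as np

def motion_roi(prev, curr, thresh=25):
    """Return (y0, y1, x0, x1) of the changed region, or None if static."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)
    if ys.size == 0:
        return None
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

# Synthetic 100x100 grayscale frames: a bright "insect" patch appears.
prev = np.zeros((100, 100), dtype=np.uint8)
curr = prev.copy()
curr[40:45, 60:66] = 200                  # new bright patch = motion

y0, y1, x0, x1 = motion_roi(prev, curr)
patch = curr[y0:y1, x0:x1]                # only this crop would be stored
saved = 1 - patch.size / curr.size        # fraction of pixels not stored
print(f"kept {patch.size} of {curr.size} pixels ({saved:.1%} saved)")
```

A deployed camera trap would add noise suppression and track regions across frames before deciding what to keep, but the storage saving comes from exactly this kind of crop-instead-of-frame decision.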
Fusion Intelligence: Confluence of Natural and Artificial Intelligence for Enhanced Problem-Solving Efficiency
Kalavakonda, Rohan Reddy, Huan, Junjun, Dehghanzadeh, Peyman, Jaiswal, Archit, Mandal, Soumyajit, Bhunia, Swarup
This paper introduces Fusion Intelligence (FI), a bio-inspired intelligent system in which the innate sensing, intelligence, and unique actuation abilities of biological organisms such as bees and ants are integrated with the computational power of Artificial Intelligence (AI). This interdisciplinary field seeks to create systems that are not only smart but also adaptive and responsive in ways that mimic nature. As FI evolves, it holds the promise of revolutionizing the way we approach complex problems, leveraging the best of both the biological and digital worlds to create solutions that are more effective, sustainable, and harmonious with the environment. We demonstrate FI's potential to enhance agricultural IoT system performance through a simulated case study on improving insect pollination efficacy (entomophily).
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- Europe > United Kingdom > England > Hertfordshire (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- Health & Medicine (0.68)
- Food & Agriculture > Agriculture (0.46)
- Information Technology > Smart Houses & Appliances (0.34)