Disentangling Polysemantic Channels in Convolutional Neural Networks

Hesse, Robin, Fischer, Jonas, Schaub-Meyer, Simone, Roth, Stefan

arXiv.org Artificial Intelligence

Mechanistic interpretability is concerned with analyzing individual components in a (convolutional) neural network (CNN) and how they form larger circuits representing decision mechanisms. These investigations are challenging since CNNs frequently learn polysemantic channels that encode distinct concepts, making them hard to interpret. To address this, we propose an algorithm to disentangle a specific kind of polysemantic channel into multiple channels, each responding to a single concept. Our approach restructures weights in a CNN, exploiting the fact that different concepts within the same channel exhibit distinct activation patterns in the previous layer. By disentangling these polysemantic features, we enhance the interpretability of CNNs, ultimately improving explanatory techniques such as feature visualizations.
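The core intuition — that distinct concepts driving one channel leave distinct activation patterns in the preceding layer — can be illustrated with a toy sketch. This is not the authors' algorithm; it simply clusters previous-layer activation patterns (here with a crude k-means) and splits the channel's incoming weights into one masked copy per discovered cluster:

```python
import numpy as np

def disentangle_channel(prev_acts, weights, k=2, iters=50):
    """Illustrative sketch: split one polysemantic channel into k channels
    by clustering the previous-layer activation patterns that drive it.

    prev_acts: (n_samples, n_prev) activations of the preceding layer
    weights:   (n_prev,) incoming weights of the polysemantic channel
    Returns a list of k weight vectors, one per discovered concept.
    """
    # crude deterministic init: k evenly spaced samples as cluster centers
    idx = np.linspace(0, len(prev_acts) - 1, k).astype(int)
    centers = prev_acts[idx].astype(float).copy()
    for _ in range(iters):
        # assign each activation pattern to its nearest center
        d = ((prev_acts[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = prev_acts[labels == j].mean(0)
    # keep each weight only where its cluster's mean activation is strong
    new_weights = []
    for j in range(k):
        mask = centers[j] > centers.mean(0)
        new_weights.append(np.where(mask, weights, 0.0))
    return new_weights
```

On inputs where two concepts activate disjoint sets of previous-layer units, the two returned weight vectors carve the original channel's weights into disjoint, per-concept supports.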


Introducing GenCeption for Multimodal LLM Benchmarking: You May Bypass Annotations

Cao, Lele, Buchner, Valentin, Senane, Zineb, Yang, Fangkai

arXiv.org Artificial Intelligence

Multimodal Large Language Models (MLLMs) are commonly evaluated using costly annotated multimodal benchmarks. However, these benchmarks often struggle to keep pace with the rapidly advancing requirements of MLLM evaluation. We propose GenCeption, a novel annotation-free MLLM evaluation framework that requires only unimodal data to assess inter-modality semantic coherence and inversely reflects the models' inclination to hallucinate. Analogous to the popular DrawCeption game, GenCeption initiates with a non-textual sample and undergoes a series of iterative description and generation steps. Semantic drift across iterations is quantified using the GC@T metric. Our empirical findings validate GenCeption's efficacy, showing strong correlations with popular MLLM benchmarking results. GenCeption may be extended to mitigate training data contamination by utilizing ubiquitous, previously unseen unimodal data.
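The iterative loop is easy to sketch. In the sketch below, `describe` and `generate` are hypothetical stand-ins for the MLLM and the image generator, samples are represented by embedding vectors, and the simple mean of per-iteration similarities is a stand-in for the paper's GC@T aggregation (the exact formula is defined in the paper):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def genception_score(seed_embed, describe, generate, T=5):
    """Run T description/generation iterations from a seed sample and track
    semantic drift relative to that seed. `describe` and `generate` are
    placeholders for the MLLM and generator; embeddings stand in for the
    actual samples. Returns per-iteration similarities and their mean
    (a simple proxy for the paper's GC@T aggregation)."""
    sims, current = [], seed_embed
    for _ in range(T):
        current = generate(describe(current))  # one description/generation step
        sims.append(cosine(seed_embed, current))
    return sims, float(np.mean(sims))
```

A model prone to hallucination drifts away from the seed quickly, so its similarities decay fast and its aggregate score is low; a faithful model keeps the similarities near 1 across iterations.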


GrowliFlower: An image time series dataset for GROWth analysis of cauLIFLOWER

#artificialintelligence

This article presents GrowliFlower, a georeferenced, image-based UAV time series dataset of two monitored cauliflower fields of size 0.39 and 0.60 ha acquired in 2020 and 2021. The dataset contains RGB and multispectral orthophotos from which about 14,000 individual plant coordinates are derived and provided. The coordinates enable dataset users to extract complete and incomplete time series of image patches showing individual plants. The dataset contains collected phenotypic traits of 740 plants, including the developmental stage as well as plant and cauliflower size. As the harvestable product is completely covered by leaves, plant IDs and coordinates are provided to extract image pairs of plants pre and post defoliation, to facilitate estimations of cauliflower head size. Moreover, the dataset contains pixel-accurate leaf and plant instance segmentations, as well as stem annotations, to address computer vision tasks such as classification, detection, segmentation, and instance segmentation.


Temporal Prediction and Evaluation of Brassica Growth in the Field using Conditional Generative Adversarial Networks

Drees, Lukas, Junker-Frohn, Laura Verena, Kierdorf, Jana, Roscher, Ribana

arXiv.org Artificial Intelligence

Farmers frequently assess plant growth and performance as a basis for deciding when to take action in the field, such as fertilization, weed control, or harvesting. The prediction of plant growth is a major challenge, as it is affected by numerous and highly variable environmental factors. This paper proposes a novel monitoring approach that comprises high-throughput imaging sensor measurements and their automatic analysis to predict future plant growth. Our approach's core is a novel machine learning-based generative growth model based on conditional generative adversarial networks, which is able to predict the future appearance of individual plants. In experiments with RGB time-series images of laboratory-grown Arabidopsis thaliana and field-grown cauliflower plants, we show that our approach produces realistic, reliable, and reasonable images of future growth stages. The automatic interpretation of the generated images through neural network-based instance segmentation allows the derivation of various phenotypic traits that describe plant growth.
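The conditioning idea at the heart of such a generative growth model can be sketched in a few lines. The toy generator below is purely illustrative (the paper uses a full conditional GAN trained on RGB time series): it predicts a future image from the current image (the condition) concatenated with a noise vector, passed through a tiny two-layer net.

```python
import numpy as np

def generator_forward(current_img, z, W1, W2):
    """Minimal sketch of a conditional generator forward pass. The future
    appearance is predicted from the current image (the condition)
    concatenated with noise z. Illustrative only; not the paper's model.

    current_img: (H, W) current growth-stage image
    z:           (d_z,) noise vector
    W1, W2:      weight matrices of a tiny two-layer net
    """
    x = np.concatenate([current_img.ravel(), z])       # condition on current stage
    h = np.maximum(0.0, W1 @ x)                        # ReLU hidden layer
    return np.tanh(W2 @ h).reshape(current_img.shape)  # predicted future image
```

In a real conditional GAN, the discriminator would also see the conditioning image, so the generator is penalized for predicting futures that are implausible given the observed growth stage.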


Robots gear up to march to the fields and harvest cauliflowers

#artificialintelligence

The job of harvesting cauliflowers could one day be in the mechanical hands of robots thanks to a collaboration between scientists and the French canned vegetable producer Bonduelle. Fieldwork Robotics, the team behind the world's first raspberry-picking robot, is designing a machine in a three-year collaboration launched on Monday. An early prototype already exists, developed by Fieldwork's co-founder Dr Martin Stoelen, lecturer in robotics at the University of Plymouth and associate professor at the Western Norway University of Applied Sciences. It has a gripper and a cutter that can neatly slice off a cauliflower head. "It works in a lab environment, where we put a lot of cauliflower heads in a row," said Rui Andrês, Fieldwork's chief executive.


Raspberry-picking MACHINES will replace dwindling numbers of migrant farm workers

Daily Mail - Science & tech

Hours spent toiling away under the beating sun to harvest berries and fruit may soon be a thing of the past as robots look set to replace humans in the field. A £700,000 machine built by the University of Plymouth has succeeded in plucking a raspberry from a plant and carefully placing it in a punnet. The painstaking process takes a whole minute per berry because it requires a combination of soft robotics, clever AI and 'deep learning'. It stands around six feet tall (1.8 metres) and will combat a continued drop in the number of migrant farm workers available for the arduous harvests. Fieldwork Robotics, a spin-off from the university dedicated to agricultural robots, built the machine and says it will be able to pick 25,000 fruits a day in the future.


Cauliflower-picking robots are set to replace migrant workers

Daily Mail - Science & tech

The vegetables you eat with your Sunday roast may soon be picked by a robot. Farmers in Cornwall are testing a machine, invented using European funding, that picks cauliflowers from the field without bruising them. It works in a similar way to the human hand, squeezing each cauliflower before deciding whether it is ready to be harvested. The GummiArm robot is believed to be an answer to any migrant staff shortages that may arise when the UK leaves the EU. The cauliflower-picking robot can tell when the vegetable is ready for harvest and pull it out of the ground without damaging it.