Goto

Collaborating Authors



ImageScope: Unifying Language-Guided Image Retrieval via Large Multimodal Model Collective Reasoning

Luo, Pengfei, Zhou, Jingbo, Xu, Tong, Xia, Yuan, Xu, Linli, Chen, Enhong

arXiv.org Artificial Intelligence

With the proliferation of images in online content, language-guided image retrieval (LGIR) has emerged as a research hotspot over the past decade, encompassing a variety of subtasks with diverse input forms. While the development of large multimodal models (LMMs) has significantly facilitated these tasks, existing approaches often address them in isolation, requiring the construction of separate systems for each task. This not only increases system complexity and maintenance costs, but also exacerbates challenges stemming from language ambiguity and complex image content, making it difficult for retrieval systems to provide accurate and reliable results. To this end, we propose ImageScope, a training-free, three-stage framework that leverages collective reasoning to unify LGIR tasks. The key insight behind the unification lies in the compositional nature of language, which transforms diverse LGIR tasks into a generalized text-to-image retrieval process, along with the reasoning of LMMs serving as a universal verification to refine the results. To be specific, in the first stage, we improve the robustness of the framework by synthesizing search intents across varying levels of semantic granularity using chain-of-thought (CoT) reasoning. In the second and third stages, we then reflect on retrieval results by verifying predicate propositions locally, and performing pairwise evaluations globally. Experiments conducted on six LGIR datasets demonstrate that ImageScope outperforms competitive baselines. Comprehensive evaluations and ablation studies further confirm the effectiveness of our design.
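The generalized text-to-image retrieval step the abstract describes can be sketched, under assumptions, as nearest-neighbor search over a shared text-image embedding space. The toy embeddings and the `retrieve_top_k` helper below are purely illustrative and are not the authors' implementation:

```python
import numpy as np

def retrieve_top_k(text_embedding, image_embeddings, k=3):
    """Rank gallery images by cosine similarity to a text query embedding."""
    text = text_embedding / np.linalg.norm(text_embedding)
    imgs = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    scores = imgs @ text  # cosine similarity of each image to the query
    return np.argsort(-scores)[:k], np.sort(scores)[::-1][:k]

# Toy 3-dimensional embeddings for a gallery of 4 images.
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0]])
query = np.array([0.1, 0.0, 1.0])  # a query embedding closest to image 2
indices, scores = retrieve_top_k(query, gallery, k=2)
```

In a real system the embeddings would come from a pretrained vision-language encoder; the verification and pairwise-evaluation stages described in the abstract would then refine this ranked list.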


Inventory Demand Forecasting using Machine Learning - Python - GeeksforGeeks

#artificialintelligence

Vendors selling everyday items need to keep their stock up to date so that no customer leaves their shop empty-handed. In this article, we will try to implement a machine learning model that can predict the stock amount for the different products sold in different stores. Python libraries make it easy for us to handle the data and perform typical and complex tasks with a single line of code. Now let's load the dataset into a pandas DataFrame and print its first five rows. Then let's check whether the dataset size matches what we calculated.
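The loading-and-inspection step described above can be sketched as follows. The in-memory DataFrame stands in for the article's CSV file, and its column names are an assumption here; a real workflow would call `pd.read_csv` on the actual dataset:

```python
import pandas as pd

# Hypothetical sales records standing in for pd.read_csv("train.csv").
data = pd.DataFrame({
    "store": [1, 1, 2, 2, 3, 3],
    "item": [10, 11, 10, 11, 10, 11],
    "sales": [52, 48, 37, 41, 60, 55],
})

print(data.head())   # first five rows of the dataset
print(data.shape)    # (rows, columns) -- quick sanity check of dataset size
```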


Why Do We Need a Validation Set in Addition to Training and Test Sets?

#artificialintelligence

You may already be familiar with training and test sets. A separate test set is needed to evaluate the model on unseen data and thereby gauge its ability to generalize. We do not test a model on the same data used for training; if we did, the model could simply memorize that data and fail to generalize to new, unseen examples. The validation set is also drawn from the original dataset.
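A minimal sketch of carving one dataset into training, validation, and test partitions; the split fractions and the helper name are illustrative choices, not prescribed by the article:

```python
import numpy as np

def train_val_test_split(n_samples, val_frac=0.2, test_frac=0.2, seed=42):
    """Shuffle sample indices and carve out validation and test partitions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = int(n_samples * test_frac)
    n_val = int(n_samples * val_frac)
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]
    return train_idx, val_idx, test_idx

train, val, test = train_val_test_split(100)  # 60 / 20 / 20 split
```

The model is fit on the training indices, tuned against the validation indices, and scored once on the held-out test indices.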


@Radiology_AI

#artificialintelligence

To assess how well a brain MRI lesion segmentation algorithm trained at one institution performed at another institution, and to assess the effect of multi-institutional training datasets for mitigating performance loss. In this retrospective study, a three-dimensional U-Net for brain MRI abnormality segmentation was trained on data from 293 patients from one institution (IN1) (median age, 54 years; 165 women; patients treated between 2008 and 2018) and tested on data from 51 patients from a second institution (IN2) (median age, 46 years; 27 women; patients treated between 2003 and 2019). The model was then trained on additional data from various sources: (a) 285 multi-institution brain tumor segmentations, (b) 198 IN2 brain tumor segmentations, and (c) 34 IN2 lesion segmentations from various brain pathologic conditions. All trained models were tested on IN1 and external IN2 test datasets, assessing segmentation performance using Dice coefficients. Performance was lower when tested at an external institution (median Dice score, 0.70 [IN2] vs 0.76 [IN1]).
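The Dice coefficient used to score segmentation overlap in this study is twice the intersection of the two masks divided by the sum of their sizes. The binary masks below are toy examples, not study data:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
# intersection = 2, |pred| = 3, |truth| = 3  ->  Dice = 4/6
```

A score of 1.0 means perfect overlap; the study's drop from 0.76 to 0.70 reflects the external-institution performance loss.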


How to use machine learning and AI in cyber security

#artificialintelligence

Cyber criminals are constantly seeking new ways to perpetrate a breach but thanks to artificial intelligence (AI) and its subset machine learning, it's becoming possible to fight off these attacks automatically. The secret is in machine learning's ability to monitor network traffic and learn what's normal within a system, using this information to flag up any suspicious activity. As the technology's name suggests, it's able to use the vast amounts of security data collected by businesses every day to become more effective over time. At the moment, when the machine spots an anomaly, it sends an alert to a human – usually a security analyst – to decide if an action needs to be taken. But some machine learning systems are already able to respond themselves, by restricting access for certain users, for example.
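A toy illustration of the anomaly-flagging idea described above, using a simple z-score rule on traffic volumes. Real systems learn far richer models of "normal" behavior; the threshold and data here are illustrative assumptions:

```python
import numpy as np

def flag_anomalies(traffic, threshold=2.5):
    """Return indices of observations more than `threshold` standard
    deviations from the mean -- candidates for a security alert."""
    mean, std = traffic.mean(), traffic.std()
    z = np.abs(traffic - mean) / std
    return np.where(z > threshold)[0]

# Mostly normal traffic volumes with one spike at index 5.
traffic = np.array([100.0, 102.0, 98.0, 101.0, 99.0, 500.0, 103.0, 97.0])
alerts = flag_anomalies(traffic)
```

Each flagged index would be surfaced to a security analyst, mirroring the alert-then-decide workflow the article describes.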


AI and Deep Learning in 2017 – A Year in Review

#artificialintelligence

The year is coming to an end. I did not write nearly as much as I had planned to. But I'm hoping to change that next year, with more tutorials around Reinforcement Learning, Evolution, and Bayesian Methods coming to WildML! And what better way to start than with a summary of all the amazing things that happened in 2017? Looking back through my Twitter history and the WildML newsletter, the following topics repeatedly came up.



AI Magazine

The welcome was given by University of Pittsburgh President Wesley Posvar. The conference cochairmen, Stellan Ohlsson and Jeff Bonar, also gave brief welcomes to the participants. The relatively small size of the conference, about 425 participants, was undoubtedly in part responsible for the congenial ambiance of the meeting. In addition to the opportunity to reunite with old friends, it was easy to establish new relationships with nearly everyone at the conference. With so many attendees from abroad (The Netherlands, Japan, Canada, West Germany, England, Sweden, France, and Hong Kong were all represented by speakers), the international flavor of the conference was well established.



AI Magazine

This book is a collection of many of the seminal papers from the first decade of research in artificial intelligence in medicine (AIM). The editors state that the need for such a collection became evident when a two-day AIM tutorial was held at Stanford in 1980, following the annual national AIM research workshop. The 19 papers included in the book are each introduced by a short section written by the editors. Typically one page in length, these introductory sections are designed to place the paper into context in the field. In addition, the editors have included introductory and concluding chapters of their own.


AAAI News

AI Magazine

The Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) and the Twenty-Eighth Conference on Innovative Applications of Artificial Intelligence (IAAI-16) will be held February 12-17 at the Phoenix Convention Center, Phoenix, Arizona, USA. Phoenix is the gateway to the Grand Canyon, and its history is a testament to the spirit of Puebloans, ranchers, miners, and visionaries. Nearby Tempe is the site of Arizona State University, home of a leading AI research community. For local information, please visit the Phoenix Visitors site at www.visitphoenix.com. AAAI-16 Registration is now available at aaai.org/aaai16, and online registration can be completed at regonline.


AAAI News

AI Magazine

AAAI President David L. Waltz presented the three AAAI Awards at AAAI-2000 in Austin, Texas. Each award winner received a certificate and a check for $2500. The 2000 AAAI Classic Paper Award was given to the author of the most influential paper(s) from the Second National Conference on Artificial Intelligence, held in 1982 at Carnegie Mellon University and the University of Pittsburgh in Pittsburgh, Pennsylvania. The Awards Committee selected Judea Pearl of the University of California at Los Angeles to receive this award. Pearl is being honored for revolutionizing uncertain reasoning through the introduction of efficient Bayesian inference methods.