picnic


COLT: Towards Completeness-Oriented Tool Retrieval for Large Language Models

Qu, Changle, Dai, Sunhao, Wei, Xiaochi, Cai, Hengyi, Wang, Shuaiqiang, Yin, Dawei, Xu, Jun, Wen, Ji-Rong

arXiv.org Artificial Intelligence

Recently, the integration of external tools with Large Language Models (LLMs) has emerged as a promising approach to overcome the inherent constraints of their pre-training data. However, real-world applications often involve a diverse range of tools, making it infeasible to incorporate all tools directly into LLMs due to constraints on input length and response time. Therefore, to fully exploit the potential of tool-augmented LLMs, it is crucial to develop an effective tool retrieval system. Existing tool retrieval techniques mainly rely on semantic matching between user queries and tool descriptions, which often results in the selection of redundant tools. As a result, these methods fail to provide a complete set of diverse tools necessary for addressing the multifaceted problems encountered by LLMs. In this paper, we propose a novel model-agnostic COllaborative Learning-based Tool Retrieval approach, COLT, which captures not only the semantic similarities between user queries and tool descriptions but also takes into account the collaborative information of tools. Specifically, we first fine-tune the PLM-based retrieval models to capture the semantic relationships between queries and tools in the semantic learning stage. Subsequently, we construct three bipartite graphs among queries, scenes, and tools and introduce a dual-view graph collaborative learning framework to capture the intricate collaborative relationships among tools during the collaborative learning stage. Extensive experiments on both the open benchmark and the newly introduced ToolLens dataset show that COLT achieves superior performance. Notably, the performance of BERT-mini (11M) with our proposed framework outperforms BERT-large (340M), which has 30 times more parameters. Additionally, we plan to publicly release the ToolLens dataset to support further research in tool retrieval.
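The semantic-matching baseline that the abstract critiques can be sketched in a few lines: embed the user query and each tool description, then rank tools by cosine similarity. The sketch below is an illustration only, not COLT itself — it substitutes a toy bag-of-words vector for the fine-tuned PLM encoder the paper uses, and the tool names and descriptions are invented for the example.

```python
import numpy as np

def bow_embed(text, vocab):
    """L2-normalized bag-of-words vector over a fixed vocabulary."""
    vec = np.zeros(len(vocab))
    for token in text.lower().split():
        if token in vocab:
            vec[vocab[token]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve_tools(query, tool_descriptions, k=2):
    """Rank tools by cosine similarity between query and description embeddings."""
    texts = [query] + list(tool_descriptions.values())
    vocab = {tok: i for i, tok in enumerate(
        sorted({t for s in texts for t in s.lower().split()}))}
    q = bow_embed(query, vocab)
    scores = [(name, float(q @ bow_embed(desc, vocab)))
              for name, desc in tool_descriptions.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

# Hypothetical tool registry for illustration.
tools = {
    "weather_api": "get current weather forecast for a city",
    "currency_api": "convert an amount between currencies",
    "maps_api": "get driving directions between two locations",
}
print(retrieve_tools("what is the weather forecast in Paris", tools, k=1))
```

Because ranking here depends purely on surface overlap between query and description, two near-duplicate tools would both score highly while a complementary tool with different wording would be missed — precisely the redundancy/completeness gap that COLT's graph-based collaborative learning stage is designed to close.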


The AI Ethics War Will Make the Content Moderation Debate Look Like a Picnic

#artificialintelligence

Now that AI programs speak with us in natural language, turn our thoughts into illustrations, and embody our voices, a major conflict over their ethics is en route. And if you thought the content moderation fight was intense, just wait for this one. At stake is how chatbots address political issues, how AI illustrators portray the world, and whether some applications like voice emulators should even exist. Given the scale and power of this blossoming technology, the activists won't be subtle. They've had their practice fighting over human speech online, and they'll bring that experience to this war.


Robots in New Jersey Make Great Pizza (Sorry, Humans)

#artificialintelligence

When I first saw the Picnic automated pizza-making station demoed at the CES tech show in early 2020, I was skeptical. How could a machine dropping toppings be better than human hands? And does it taste any good? Two and a half years later, we now have a pizza restaurant in New Jersey that built its entire business around the assembly line-styled machines. PizzaHQ, located in Woodland Park, uses the Picnic system to pump out 300 pies an hour for big school or party orders -- and it also takes orders throughout the day for the family that just needs a quick $10 cheese pie.


What is computer vision?

#artificialintelligence

If I asked you to name the objects in the picture below, you would probably come up with a list of words such as "tablecloth, basket, grass, boy, girl, man, woman, orange juice bottle, tomatoes, lettuce, disposable plates…" without thinking twice. Now, if I told you to describe the picture below, you would probably say, "It's the picture of a family picnic," again without giving it a second thought. Those are two very easy tasks that any person with below-average intelligence and above the age of six or seven could accomplish. However, in the background, a very complicated process takes place. Human vision is a very intricate piece of organic technology that involves our eyes and visual cortex, but it also draws on our mental models of objects, our abstract understanding of concepts, and our personal experiences through billions and trillions of interactions we've made with the world in our lives. Digital equipment can capture images at resolutions and with detail that far surpass the human vision system.


Pizza-making robot that can assemble and cook 300 pizzas every hour

Daily Mail - Science & tech

Not even your local pizza joint is safe from the forward progress of automation. At CES, Seattle-based startup Picnic showcased its automated pizza-making system that can swiftly assemble and cook pies with minimal human interaction. The system, which consists of three compact modular panels that assemble to form a conveyor belt, is capable of taking a pre-made pizza crust, adorning it with toppings, and cooking the pie to pre-specified doneness. What's even more compelling than the fact the pizza is made with little to no human input, however, is the speed at which Picnic's bot operates. According to CEO Clayton Wood, the bot can churn out an impressive 300 12-inch pizzas every hour when at max capacity.


Pizza robots. Pet robots. Sex tech. CES 2020 will feature them all, and more

#artificialintelligence

As 2020 grinds into gear, CNET will be kick-starting a new decade with a trip to the Nevada desert for the annual tech bonanza CES. When we arrive in Las Vegas, we expect to be greeted by a bunch of new TVs, scores of eccentric gadgets and a whole gaggle of robots. We're still some years away from robots outnumbering humans at the show, but every year it does seem as though more bots are present on the show floor. In the past decade we've seen robots become more complex, more affordable and more diverse. The number of contexts in which they play a role in our lives -- from the home to the workplace and beyond -- has expanded to provide us with a vision of how humans and robots will coexist and collaborate in the future.


LG to unveil a 65-inch OLED TV screen that unrolls from the ceiling at CES 2020

Daily Mail - Science & tech

LG will reveal an OLED TV that unfurls from the ceiling and another that 'hangs like wallpaper' at the Consumer Electronics Show in Las Vegas next week. The 65-inch UHD Roll-Down TV can be stored in the ceiling and pulled down when desired or rolled up when not in use. Also on show will be a 77-inch UHD Film Cinematic Sound & Wallpaper OLED display that can be hung like wallpaper. The larger display has a wafer-thin screen and sound system that's embedded into the display. OLED video walls, made of 55-inch OLED displays installed on the wall of a plane, enable passengers to 'feel more openness' in the narrow space of an enclosed cabin. The devices point to 'the future of home interior design', according to LG Display.


Picnic's pizza robot to crank out up to 300 pies per hour at CES

#artificialintelligence

Visitors to this year's Consumer Electronics Show (CES) at the Las Vegas Convention Center will have the option of chowing down on robot-made pizzas. Live event hospitality supplier Centerplate has selected Seattle-based food technology company Picnic to provide its automated food assembly system that will create up to 300 12-inch bespoke pizzas an hour on the CES show floor. Originally developed for the high-volume production of bespoke pizzas, Picnic's automated food assembly system was trialed at the T-Mobile Park stadium in Seattle in October this year. The system is modular, freestanding, doesn't take up much space and uses deep-learning to adapt to its tasks and settings. According to Picnic, it requires little training to use and in addition to pizzas, it is also designed to prepare other types of food, including bun, bowl, tortilla, and plate formats.


Dungeon crawling or lucid dreaming?

#artificialintelligence

I've done several experiments with a text-generating neural network called GPT-2. Trained at great expense by OpenAI (to the tune of tens of thousands of dollars worth of computing power), GPT-2 learned to imitate all kinds of text from the internet. I've interacted with the basic model, discovering its abilities to generate fan fiction, British snacks, or tea. I've also used a tool called gpt-2-simple that Max Woolf developed to make it easy to fine-tune GPT-2 on more specialized datasets - I've tried it on datasets like recipes or crochet. One of my favorite applications of GPT-2 and other text-generating neural nets is Dungeons and Dragons spells, creatures, character names, and character bios.


Super-Compressible Material Developed Through AI

#artificialintelligence

The Seattle-based food tech company Picnic is in the process of utilizing AI to create pizzas for their customers. The deep learning algorithms used by Picnic are capable of running a pizza production line with very little oversight, analyzing the pizza at different stages with a computer-vision system. Picnic, formerly known as Vivid Robotics, has created what it dubs the first-ever all-purpose automated system designed for the creation of food in the hospitality and foodservice sectors. According to TechXplore, the system is integrated with an app that customers can download and order pizzas with, customizing their toppings. The orders are given directly to the system, and the AI can oversee the creation of up to 300 12-inch or 180 18-inch pizzas every hour.