A Chance Discovery on Vacation Changed My Whole Perspective on Wine
During a recent stay in London, my wife and I took an early evening stroll down Lamb's Conduit Street, in the West End district of Bloomsbury. It is a mere 15-minute walk from the din of Piccadilly Circus but a world away: peaceful, elegant, a medley of quaint shops and stately houses, among which is tucked a restaurant called Noble Rot. The atmosphere was convivial, the food delicious, the wine exquisitely paired. On the bar counter were copies of a small but thick magazine, also called Noble Rot, adorned with a bright hipster-cartoon cover. Out of curiosity, I bought one, and, as sometimes happens in random moments, my take on a whole slice of life began to change. In this case, what changed was my attitude toward wine and its seriously playful possibilities.
Perceptions to Beliefs: Exploring Precursory Inferences for Theory of Mind in Large Language Models
Jung, Chani, Kim, Dongkwan, Jin, Jiho, Kim, Jiseon, Seonwoo, Yeon, Choi, Yejin, Oh, Alice, Kim, Hyunwoo
While humans naturally develop theory of mind (ToM), the capability to understand other people's mental states and beliefs, state-of-the-art large language models (LLMs) underperform on simple ToM benchmarks. We posit that we can extend our understanding of LLMs' ToM abilities by evaluating key human ToM precursors -- perception inference and perception-to-belief inference -- in LLMs. We introduce two datasets, Percept-ToMi and Percept-FANToM, to evaluate these precursory inferences for ToM in LLMs by annotating characters' perceptions on ToMi and FANToM, respectively. Our evaluation of eight state-of-the-art LLMs reveals that the models generally perform well in perception inference while exhibiting limited capability in perception-to-belief inference (e.g., lack of inhibitory control). Based on these results, we present PercepToM, a novel ToM method leveraging LLMs' strong perception inference capability while supplementing their limited perception-to-belief inference. Experimental results demonstrate that PercepToM significantly enhances LLMs' performance, especially in false belief scenarios.
What Makes a Champagne Vintage Great? Ask a Deep Learning Model
In early 2021, Bollinger's winemakers were able to get their first taste of La Grande Année 2014, a prestige fizz that had been aging in the champagne house's cellars since it was blended. La Grande Année, Bollinger's flagship vintage champagne, is produced only in years when the broad quality is deemed sufficiently high, and enjoys seven years of aging under cork before it's launched. Ahead of opening up the 2014 vintage, questions lingered over just how strong a year it really was, given a roller-coaster growing season that saw record-breaking heat in June followed by a cold, wet summer that slowed grape maturation. Moreover, for a champagne house known for its forthright pinot noir character, it was a vintage that distinctly favored chardonnay. But for Denis Bunner, Bollinger's deputy head winemaker (or chef de cave), the answer was clear-cut even before the bottles were opened.
The Biggest Technology Trends In Wine And Winemaking
It is not often that I am able to combine two of my life's passions: future tech and wine. When we think about the wine business, the images that come to mind might be more of vineyards stretching across the French countryside than of robots and digital transformation. But the fact is that the industry has always been driven by science, technology, and innovation. Today, things are no different. The latest wave of technology-driven change is focused on artificial intelligence (AI), the internet of things, augmented reality, and blockchain.