For the First Time, AI Analyzes Language as Well as a Human Expert
If language is what makes us human, what does it mean now that large language models have gained "metalinguistic" abilities? Among the myriad abilities that humans possess, which ones are uniquely human? Language has been a top candidate at least since Aristotle, who wrote that humanity was "the animal that has language." Even as large language models such as ChatGPT superficially replicate ordinary speech, researchers want to know if there are specific aspects of human language that simply have no parallels in the communication systems of other animals or artificially intelligent devices. In particular, researchers have been exploring the extent to which language models can reason about language itself.
Transformer-Enabled Diachronic Analysis of Vedic Sanskrit: Neural Methods for Quantifying Types of Language Change
Hariharan, Ananth, Mortensen, David
This study challenges the naive assumption that linguistic change is simplification by quantitatively analyzing over 2,000 years of Sanskrit, demonstrating how weakly-supervised hybrid neural-symbolic methods can yield significant new insights into the evolution of a morphologically rich, low-resource language. Our approach addresses data scarcity through weak supervision, using 100+ high-precision regex patterns to generate pseudo-labels for fine-tuning a multilingual BERT model. We then fuse symbolic and neural outputs via a novel confidence-weighted ensemble, creating a system that is both scalable and interpretable. Applying this framework to a 1.47-million-word diachronic corpus, our ensemble achieves a 52.4% overall feature detection rate. Our findings reveal that Sanskrit's overall morphological complexity does not decrease but is instead dynamically redistributed: while earlier verbal features show cyclical patterns of decline, complexity shifts to other domains, evidenced by a dramatic expansion in compounding and the emergence of new philosophical terminology. Critically, our system produces well-calibrated uncertainty estimates, with confidence strongly correlating with accuracy (Pearson r = 0.92) and low overall calibration error (ECE = 0.043), bolstering the reliability of these findings for computational philology.
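The pipeline the abstract describes can be sketched in miniature. This is an illustrative toy only: the paper's actual regex patterns, neural model, and fusion weights are not given here, so the pattern, the stand-in "neural" scorer, and the weights below are all invented for demonstration.

```python
import re

# Hypothetical high-precision regex pseudo-labeler: flag a (made-up)
# morphological feature, here words ending in the suffix "-tva".
TVA_SUFFIX = re.compile(r"tva$")

def symbolic_score(word: str) -> float:
    """Symbolic detector: 1.0 if the regex fires, else 0.0."""
    return 1.0 if TVA_SUFFIX.search(word) else 0.0

def neural_score(word: str) -> float:
    """Stand-in for a fine-tuned BERT probability (a toy heuristic here)."""
    return 0.9 if "tv" in word else 0.1

def ensemble(word: str, w_sym: float = 0.6, w_neu: float = 0.4) -> float:
    """Confidence-weighted fusion of the symbolic and neural outputs."""
    return w_sym * symbolic_score(word) + w_neu * neural_score(word)

print(ensemble("sattva"))  # both detectors agree -> high confidence
print(ensemble("agni"))    # neither fires -> low confidence
```

In the real system, the regex output would supply pseudo-labels for fine-tuning, and the fusion weights would be chosen so that the ensemble's confidence tracks its accuracy.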
Can Large Language Models Code Like a Linguist?: A Case Study in Low Resource Sound Law Induction
Naik, Atharva, Zhang, Kexun, Robinson, Nathaniel, Mysore, Aravind, Marr, Clayton, Byrnes, Hong Sng Rebecca, Cai, Anna, Chang, Kalvin, Mortensen, David
Historical linguists have long written a kind of incompletely formalized "program" that converts reconstructed words in an ancestor language (protoforms) into words in one of its attested descendants (reflexes): a series of ordered string rewrite functions called sound laws. They do this by observing pairs of protoforms and reflexes and constructing a program that transforms the former into the latter. However, writing these programs is error-prone and time-consuming. Prior work has successfully scaffolded this process computationally, but fewer researchers have tackled Sound Law Induction (SLI), which we approach in this paper by casting it as Programming by Examples. We propose a language-agnostic solution that leverages the programming ability of Large Language Models (LLMs) by generating Python sound law programs from sound change examples. We evaluate the effectiveness of our approach across various LLMs, propose effective methods for generating additional language-agnostic synthetic data to fine-tune LLMs for SLI, and compare our method with existing automated SLI methods, showing that while LLMs lag behind them, they can complement some of their weaknesses.
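The "program" the abstract describes can be pictured concretely as ordered string rewrites. The rules below are invented for illustration (loosely Grimm's-law-flavored), not real sound laws from the paper:

```python
import re

# Illustrative toy "sound law program": a series of ordered string rewrite
# rules applied to a protoform to derive its reflex. Order matters: each
# rule sees the output of the previous one.
SOUND_LAWS = [
    (re.compile(r"p"), "f"),   # hypothetical law: *p > f everywhere
    (re.compile(r"t$"), "d"),  # hypothetical law: word-final *t > d
]

def apply_sound_laws(protoform: str) -> str:
    """Apply each rewrite rule in order to derive the reflex."""
    form = protoform
    for pattern, replacement in SOUND_LAWS:
        form = pattern.sub(replacement, form)
    return form

print(apply_sound_laws("pat"))  # *p > f, then final *t > d -> "fad"
```

Under the Programming-by-Examples framing, the LLM's task is the inverse: given protoform/reflex pairs like ("pat", "fad"), synthesize a rule list of this shape.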
Why hasn't AI changed the world yet?
When Kursat Ceylan, who is blind, was trying to find his way to a hotel, he used an app on his phone for directions but also had to hold his cane and pull his luggage. He ended up walking into a pole and cutting his forehead. This inspired him to develop, along with a partner, WeWalk: a cane equipped with artificial intelligence (AI) that detects objects above chest level and pairs with apps including Google Maps and Amazon's Alexa, so the user can ask questions. Jean Marc Feghali, who helped develop the product, also has an eye condition; his vision is severely impaired in poor light. While the smart cane itself integrates only basic AI functions right now, the aim is for WeWalk to use information gathered from the gyroscope, accelerometer, and compass installed inside the cane.
Meet Your New Colleague: Artificial Intelligence - Workforce
Artificial intelligence is increasingly people's interviewer, colleague and competition. As it burrows its way further into the workplace and different job functions, it holds abilities to take over certain tasks, learn over time and even have conversations. Many of us may not even be aware that the entity we're talking to isn't a "who" but a "what." In 2017, 61 percent of businesses said they implemented AI, compared to 38 percent in 2016, according to the "Outlook on Artificial Intelligence in the Enterprise 2018" report from Narrative Science, an artificial intelligence company, in collaboration with the National Business Research Institute. In the communication arena, 43 percent of these businesses said they send AI-powered communications to employees.
Artificial Intelligence Has Got Some Explaining to Do
During last Wednesday's congressional hearing about Twitter transparency, Twitter CEO Jack Dorsey was forced to take accountability for the damaging cultural and political effects of his company. Soft-spoken and contrite, Dorsey provided a stark contrast to Facebook's Mark Zuckerberg, who seemed more confident when he appeared before Congress in April. In the months since, collective faith in the fabric of the internet has been anything but restored; instead, consumers, politicians, and the tech companies themselves continue to grapple with the aftermath of what social platforms hath wrought. During the hearing, Representative Debbie Dingell asked Dorsey if Twitter's algorithms are able to learn from the decisions they make--like who they suggest users follow, which tweets rise to the top, and in some cases what gets flagged for violating the platform's terms of service or even who gets banned--and also if Dorsey could explain how all of this works. "Great question," Dorsey responded, seemingly excited at a line of questioning that piqued his intellectual curiosity.
Your AI pet project is only as smart as its garbage training set
Train a neural network on flawed data and you'll have one that makes lots of mistakes. Most neural networks learn to distinguish between things by sampling different groups. This is supervised learning, and it only works if someone labels the data first so that the network knows what it's looking at. But how can you find the "right" data to train your AI, and confirm its quality? Well, what you feed your machine might surprise you.
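The effect of flawed labels can be seen with even the simplest supervised learner. The sketch below uses a toy nearest-centroid classifier on made-up 1-D data (not any system from the article): trained on clean labels it classifies a small test set perfectly, but when two training examples are mislabeled, the learned class centroids shift and test mistakes appear.

```python
# Toy demonstration: supervised learning with clean vs. mislabeled data.
# The classifier and numbers are illustrative, not from any real system.

def train_centroids(data):
    """Supervised learning step: average the points of each labeled class."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, label in data:
        sums[label] += x
        counts[label] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

def accuracy(centroids, data):
    return sum(predict(centroids, x) == y for x, y in data) / len(data)

# 1-D points: class 0 clusters near 0.0, class 1 clusters near 1.0.
clean_train = [(-0.1, 0), (0.1, 0), (0.2, 0), (0.9, 1), (1.0, 1), (1.1, 1)]
# Same points, but two class-0 examples were wrongly labeled as class 1.
noisy_train = [(-0.1, 0), (0.1, 1), (0.2, 1), (0.9, 1), (1.0, 1), (1.1, 1)]
test = [(0.0, 0), (0.3, 0), (0.7, 1), (1.2, 1)]

print(accuracy(train_centroids(clean_train), test))  # 1.0
print(accuracy(train_centroids(noisy_train), test))  # 0.75
```

The mislabeled examples drag the class-1 centroid toward the class-0 cluster, so a borderline class-0 test point gets misclassified: the model is only as good as its labels.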
AI and the Future of Work
While no one knows what artificial intelligence's effect on work will be, we can all agree on one thing: it's disruptive. So far, many have cast that disruption in a negative light and projected a future in which robots take jobs from human workers. That's one way to look at it. Another is that automation may create more jobs than it displaces. By offering new tools for entrepreneurs, it may also create new lines of business that we can't imagine now.
X.ai brings its scheduling bot to Slack
X.ai's meeting assistant bot is now available on Slack. Plans to bring the bot that automates meeting scheduling to communication platforms were first shared with VentureBeat last August following the closure of a $10 million funding round. The bot, which can be named Amy or Andrew, will be able to do the same thing on Slack as it does in email: Set up meetings for you using natural language understanding, stepping in to eliminate the back-and-forth haggling over when and where to meet. Amy and Andrew can schedule meetings with one or multiple people, record reminders, and deliver meeting summaries. X.ai started four years ago with an email client, since many meeting requests begin there, but expanded to Slack today because team members did not like switching to email just to schedule appointments with each other.