Venkatasubramanian
What's in an embedding? Would a rose by any embedding smell as sweet?
Large Language Models (LLMs) are often criticized for lacking true "understanding" and the ability to "reason" with their knowledge, being seen merely as autocomplete systems. We believe that this assessment might be missing a nuanced insight. We suggest that LLMs do develop a kind of empirical "understanding" that is "geometry"-like, which seems adequate for a range of applications in NLP, computer vision, coding assistance, etc. However, this "geometric" understanding, built from incomplete and noisy data, makes them unreliable and difficult to generalize, and leaves them lacking inference capabilities and explanations, similar to the challenges faced by heuristics-based expert systems decades ago. To overcome these limitations, we suggest that LLMs be integrated with an "algebraic" representation of knowledge that includes the symbolic AI elements used in expert systems. This integration aims to create large knowledge models (LKMs) that not only possess "deep" knowledge grounded in first principles, but also have the ability to reason and explain, mimicking human expert capabilities. To harness the full potential of generative AI safely and effectively, a paradigm shift is needed from LLMs to more comprehensive LKMs.
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.35)
Quo Vadis ChatGPT? From Large Language Models to Large Knowledge Models
Venkatasubramanian, Venkat, Chakraborty, Arijit
The startling success of ChatGPT and other large language models (LLMs) using transformer-based generative neural network architecture in applications such as natural language processing and image synthesis has many researchers excited about potential opportunities in process systems engineering (PSE). The almost human-like performance of LLMs in these areas is indeed very impressive, surprising, and a major breakthrough. Their capabilities are very useful in certain tasks, such as writing first drafts of documents, code-writing assistance, text summarization, etc. However, their success is limited in highly scientific domains as they cannot yet reason, plan, or explain due to their lack of in-depth domain knowledge. This is a problem in domains such as chemical engineering, which are governed by fundamental laws of physics and chemistry (and biology), constitutive relations, and highly technical knowledge about materials, processes, and systems. Although purely data-driven machine learning has its immediate uses, the long-term success of AI in scientific and engineering domains would depend on developing hybrid AI systems that use first principles and technical knowledge effectively. We call these hybrid AI systems Large Knowledge Models (LKMs), as they will not be limited to only NLP-based techniques or NLP-like applications. In this paper, we discuss the challenges and opportunities in developing such systems in chemical engineering.
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts (0.04)
- (3 more...)
- Research Report > Promising Solution (0.34)
- Research Report > New Finding (0.34)
- Materials > Chemicals (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
Density and Affinity Dependent Social Segregation and Arbitrage Equilibrium in a Multi-class Schelling Game
Venkatasubramanian, Venkat, Shi, Jessica, Goldman, Leo, M., Arun Sankar E., Sivaram, Abhishek
Contrary to the widely believed hypothesis that larger, denser cities promote socioeconomic mixing, a recent study (Nilforoshan et al. 2023) reports the opposite behavior, i.e. more segregation. Here, we present a game-theoretic model that predicts such a density-dependent segregation outcome in both one- and two-class systems. The model provides key insights into the analytical conditions that lead to such behavior. Furthermore, the arbitrage equilibrium outcome implies the equality of effective utilities among all agents. This could be interpreted as all agents being equally "happy" in their respective environments in our ideal society. We believe that our model contributes towards a deeper mathematical understanding of social dynamics and behavior, which is important as we strive to develop more harmonious societies.
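The relocation dynamic behind Schelling-style segregation can be illustrated with a minimal agent-based sketch. Note this is a generic two-class Schelling simulation for intuition only: the grid layout, the like-neighbor threshold, and the random-relocation rule are illustrative assumptions, not the game-theoretic, arbitrage-equilibrium model of the paper.

```python
import random

def make_grid(n, fill=0.8, seed=0):
    """Build an n x n grid: 0 = empty cell, 1 or 2 = agent of that class."""
    rng = random.Random(seed)
    cells = [rng.choice((1, 2)) if rng.random() < fill else 0 for _ in range(n * n)]
    return [cells[i * n:(i + 1) * n] for i in range(n)]

def neighbors(grid, r, c):
    """Yield the 8 neighboring cell values (toroidal wraparound)."""
    n = len(grid)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                yield grid[(r + dr) % n][(c + dc) % n]

def unhappy(grid, r, c, threshold):
    """An agent is unhappy if its like-class share of occupied neighbors is below threshold."""
    me = grid[r][c]
    occ = [v for v in neighbors(grid, r, c) if v]
    return bool(occ) and sum(v == me for v in occ) / len(occ) < threshold

def step(grid, threshold, rng):
    """Move each unhappy agent to a randomly chosen empty cell."""
    n = len(grid)
    empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] == 0]
    for r in range(n):
        for c in range(n):
            if grid[r][c] and empties and unhappy(grid, r, c, threshold):
                i = rng.randrange(len(empties))
                er, ec = empties[i]
                grid[er][ec], grid[r][c] = grid[r][c], 0
                empties[i] = (r, c)  # the vacated cell becomes empty

def segregation_index(grid):
    """Mean like-class share among occupied neighbors, in [0, 1]; higher = more segregated."""
    n, shares = len(grid), []
    for r in range(n):
        for c in range(n):
            if grid[r][c]:
                occ = [v for v in neighbors(grid, r, c) if v]
                if occ:
                    shares.append(sum(v == grid[r][c] for v in occ) / len(occ))
    return sum(shares) / len(shares)
```

Running `step` repeatedly and tracking `segregation_index` shows how purely local preferences produce global sorting; varying `fill` (a crude proxy for density) is the knob that the density-dependence question turns on.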
As AI's influence grows, lawmakers struggle to keep up
While artificial intelligence made headlines with ChatGPT, behind the scenes, the technology has quietly pervaded everyday life -- screening job resumes, rental apartment applications, and even determining medical care in some cases. While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there's scant government oversight. Lawmakers in at least seven states are taking big legislative swings to regulate bias in artificial intelligence, filling a void left by Congress' inaction. These proposals are some of the first steps in a decades-long discussion over balancing the benefits of this nebulous new technology with the widely documented risks. "AI does in fact affect every part of your life whether you know it or not," said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House's Blueprint for an AI Bill of Rights.
- North America > United States > California (0.07)
- North America > United States > Connecticut (0.06)
- North America > United States > Washington (0.05)
- (7 more...)
- Government > Regional Government > North America Government > United States Government (0.50)
- Law > Litigation (0.31)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.72)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.58)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.58)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.49)
Data-Driven Target Localization: Benchmarking Gradient Descent Using the Cramér-Rao Bound
Venkatasubramanian, Shyam, Gogineni, Sandeep, Kang, Bosung, Rangaswamy, Muralidhar
In modern radar systems, precise target localization using azimuth and velocity estimation is paramount. Traditional unbiased estimation methods have utilized gradient descent algorithms to reach the theoretical limits of the Cramér-Rao Bound (CRB) for the error of the parameter estimates. As an extension, we demonstrate in a realistic simulated scenario that our earlier presented data-driven neural network model outperforms these traditional methods, yielding improved accuracies in target azimuth and velocity estimation. We emphasize, however, that this improvement does not imply that the neural network outperforms the CRB itself. Rather, the enhanced performance is attributed to the biased nature of the neural network approach. Our findings underscore the potential of employing deep learning methods in radar systems to achieve more accurate localization in cluttered and dynamic environments.
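The abstract's key caveat, that a biased estimator can show error below the CRB because the bound only constrains unbiased estimators, can be seen in a toy problem. This sketch uses Gaussian mean estimation rather than the paper's radar model, and the shrinkage factor 0.5 is an arbitrary illustrative choice:

```python
import random

def crb_gaussian_mean(sigma, n):
    """CRB for any unbiased estimator of the mean of N(mu, sigma^2)
    from n i.i.d. samples: sigma^2 / n."""
    return sigma ** 2 / n

def mse(estimator, mu, sigma, n, trials, seed=0):
    """Monte Carlo mean squared error of an estimator of mu."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        total += (estimator(xs) - mu) ** 2
    return total / trials

sample_mean = lambda xs: sum(xs) / len(xs)         # unbiased; variance attains the CRB
shrunk_mean = lambda xs: 0.5 * sum(xs) / len(xs)   # biased shrinkage estimator

mu, sigma, n = 0.1, 1.0, 10
bound = crb_gaussian_mean(sigma, n)  # 0.1
mse_unbiased = mse(sample_mean, mu, sigma, n, trials=20000)
mse_biased = mse(shrunk_mean, mu, sigma, n, trials=20000)
# For mu near zero, the shrinkage estimator's MSE (bias^2 + var
# = 0.0025 + 0.025) falls well below the unbiased-only bound of 0.1:
# it trades a small bias (0.5 * mu) for a 4x variance reduction.
```

This is exactly the bias-variance trade a learned (biased) estimator exploits; "beating" the CRB here says nothing about the bound being wrong, only that the estimator is not in the class the bound covers.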
- North America > United States > Ohio > Montgomery County > Dayton (0.04)
- North America > United States > North Carolina > Durham County > Durham (0.04)
- North America > United States > California (0.04)
Before a Bot Steals Your Job, It Will Steal Your Name
In May, Tessa went rogue. The National Eating Disorder Association's chatbot had recently replaced a phone hotline and the handful of staffers who ran it. But although it was designed to deliver a set of approved responses to people who might be at risk of an eating disorder, Tessa instead recommended that they lose weight. "Every single thing that Tessa suggested were things that led to the development of my eating disorder," one woman who reviewed the chatbot wrote on Instagram. "It was not our intention to suggest that Tessa could provide the same type of human connection that the Helpline offered," the nonprofit's CEO, Liz Thompson, told NPR.
- North America > United States > California (0.05)
- Asia > Middle East > Saudi Arabia (0.05)
Experts call for AI regulation during Senate hearing
As businesses, consumers and government agencies look for ways to take advantage of artificial intelligence tools, experts this week called on Congress to craft AI regulations addressing challenges facing the technology. AI concerns run the gamut from bias in algorithms that could affect decisions such as who is selected for housing and employment opportunities, to the use of deepfake AI that can artificially generate images and sounds imitating real human beings' appearances and voices. Yet AI has also led to the development of lifesaving drugs, advanced manufacturing and self-driving cars. Indeed, the increased adoption of artificial intelligence has led to the rapid growth of advanced technology in "virtually every sector," said Sen. Gary Peters (D-Mich.), chairman of the U.S. Senate Committee on Homeland Security and Governmental Affairs. Peters spoke during a committee hearing on AI risks and opportunities Wednesday.
- Asia > China (0.22)
- North America > United States > Michigan (0.05)
- Europe > Russia (0.05)
- (2 more...)
- Law > Statutes (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
How cloud helps make enterprises agile, innovative
Cloud, says Neetan Chopra, is one of the foundational enablers of any enterprise transformation or disruption. Chopra is the chief digital and information officer at IndiGo, India's largest passenger airline. Prior to joining IndiGo earlier this year, he held similar technology and digital roles at Dubai Holding and Emirates Airlines. A transformation, he says, means speed and velocity. It means business model experimentation.
- Asia > India (0.29)
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.26)
- Transportation > Passenger (0.57)
- Transportation > Air (0.37)
Computer scientist aims to protect people in age of artificial intelligence
As data-driven technologies transform the world and artificial intelligence raises questions about bias, privacy and transparency, Suresh Venkatasubramanian is offering his expertise to help create guardrails to ensure that technologies are developed and deployed responsibly. "We need to protect the American people and make sure that technology is used in ways that reinforce our highest values," said Venkatasubramanian, a professor of computer science and data science at Brown University. On the heels of a recently concluded 15-month appointment as an advisor to the White House Office of Science and Technology Policy, Venkatasubramanian returned to Washington, D.C., on Tuesday, Oct. 4, for the unveiling of "A Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People," during a ceremony at the White House. Venkatasubramanian said the blueprint represents the culmination of 14 months of research and collaboration led by the Office of Science and Technology Policy with partners across the federal government, academia, civil society, the private sector and communities around the country. That collaboration informed the development of the first-ever national guidance focused on the use and deployment of automated technologies that have the potential to impact people's rights, opportunities and access to services.
White House unveils its 'blueprint' for an AI Bill of Rights
Amazon exploiting tech to wring every last ounce of productivity from its workforce, Clearview AI harvesting our facial features from social media and public surveillance footage, school proctoring software invading our children's rooms, Facebook's whole "accused of contributing to genocide" thing -- the same machine learning/AI and automation technologies that have brought us the wonders of the modern world have also wrought upon us the horrors of the modern world. And, by golly, the Biden Administration isn't going to stand for it. On Tuesday, the White House Office of Science and Technology Policy (OSTP) released its long-awaited Blueprint for an AI Bill of Rights (BoR). The document will, "help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public," per a White House press release. As such, the BoR will advocate for five principles: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback.