The Accidental Winners of the War on Higher Ed
Go to a small liberal-arts college if you can.

In the waning heat of last summer, freshly back in my office at a major research university, I found myself considering the higher-education hellscape that had lately descended upon the nation. I'd spent months reporting on the Trump administration's attacks on universities, speaking with dozens of administrators, faculty, and students about the billions of dollars in cuts to public funding for research and the resulting collapse of "college life." Initially, I surveyed the situation from the safe distance of a journalist who happens to also be a career professor and university administrator. I saw myself as an envoy between America's college campuses and its citizens, telling the stories of the people whose lives had been shattered by these transformations. By the summer, though, that safe distance had collapsed back on me.
The Knowledge-Behaviour Disconnect in LLM-based Chatbots
Large language model-based artificial conversational agents (like ChatGPT) give answers to all kinds of questions, and often enough these answers are correct. On the basis of that capacity alone, we may attribute knowledge to them. But do these models use this knowledge as a basis for their own conversational behaviour? I argue this is not the case, and I refer to this failure as a 'disconnect'. I further argue that this disconnect is fundamental, in the sense that it will not disappear with more data and more training of the LLM on which a conversational chatbot is based. The reason, I claim, is that the core technique used to train LLMs does not allow for the establishment of the connection we are after. The disconnect reflects a fundamental limitation on the capacities of LLMs and explains the source of hallucinations. I furthermore consider the ethical version of the disconnect (ethical conversational knowledge not being aligned with ethical conversational behaviour), since in this domain researchers have come up with several additional techniques to influence a chatbot's behaviour. I discuss how these techniques do nothing to solve the disconnect and can even make it worse.
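The "core technique" at issue is next-token prediction trained with cross-entropy. A minimal sketch (the model outputs and vocabulary below are invented for illustration, not from the paper) makes the point concrete: the loss scores only how well the predicted distribution matches the observed text, and contains no term that ties a fact the model can state to how it behaves later in a conversation.

```python
import math

def cross_entropy_loss(predicted_probs, target_tokens):
    """Average negative log-likelihood of the observed next tokens."""
    total = 0.0
    for probs, target in zip(predicted_probs, target_tokens):
        total += -math.log(probs[target])
    return total / len(target_tokens)

# Hypothetical model output: a distribution over a 3-token vocabulary at
# each of two positions, plus the tokens that actually occurred in training.
predicted = [
    {0: 0.7, 1: 0.2, 2: 0.1},
    {0: 0.1, 1: 0.8, 2: 0.1},
]
targets = [0, 1]

# The objective is purely distributional fit; "truthfulness" or
# "consistency with earlier answers" never appears in the loss.
print(round(cross_entropy_loss(predicted, targets), 4))
```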
Do All Problems Have Technical Fixes?
Tech solutionism, as identified by Moss and Metcalf [7], is the notion that all problems have tractable technical fixes. We see variants in the naming and definition of this phenomenon: the technology imperative [8], "the underlying technocratic philosophy of inevitability" [4], or even old-fashioned technocracy itself. All versions designate a confident deployment of technology to solve a non-technical problem, with costs and other drawbacks reduced to secondary considerations. A certain Tech Leader promotes a new startup, Sunshine, thus: "… by applying AI … you can both solve valuable problems and you can give people back time. You can also build their confidence in AI." [6]
Studying Socially Unacceptable Discourse (SUD) classification through different eyes: "Are we on the same page?"
Carneiro, Bruno Machado, Linardi, Michele, Longhi, Julien
We study Socially Unacceptable Discourse (SUD) characterization and detection in online text. We first build and present a novel corpus that contains a large variety of manually annotated texts from the different online sources used so far in state-of-the-art machine learning (ML) SUD detection solutions. This global context allows us to test the generalization ability of SUD classifiers that acquire knowledge of the same SUD categories, but from different contexts. From this perspective, we can analyze how (possibly) different annotation modalities influence SUD learning, and we discuss open challenges and research directions. We also provide several data insights that can support domain experts in the annotation task.
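The cross-context evaluation described above can be sketched as training on texts annotated in one online source and testing on texts from another. The toy keyword-count classifier and the example texts below are hypothetical stand-ins, not the authors' corpus or models; the point is only the train-on-A, test-on-B setup.

```python
from collections import Counter

def train_keyword_scores(examples):
    """Count word frequencies per label; a crude stand-in for a real ML model."""
    scores = {"sud": Counter(), "ok": Counter()}
    for text, label in examples:
        scores[label].update(text.lower().split())
    return scores

def predict(scores, text):
    words = text.lower().split()
    sud = sum(scores["sud"][w] for w in words)
    ok = sum(scores["ok"][w] for w in words)
    return "sud" if sud > ok else "ok"

# Corpus A: the training source; Corpus B: a different source whose
# annotators labeled the same SUD categories, possibly differently.
corpus_a = [("you are an idiot", "sud"), ("have a nice day", "ok"),
            ("idiot troll", "sud"), ("nice weather today", "ok")]
corpus_b = [("what an idiot", "sud"), ("nice to meet you", "ok")]

model = train_keyword_scores(corpus_a)
accuracy = sum(predict(model, t) == y for t, y in corpus_b) / len(corpus_b)
print(accuracy)  # cross-source accuracy measures generalization
```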
Biden may regulate AI for 'disinformation,' 'discriminatory outcomes'
Republican Rep. Lance Gooden is concerned that AI could eventually replace human decision-making in government and other critical areas of society. The Biden administration is pursuing regulations for artificial intelligence systems that would require government audits to ensure they produce trustworthy outputs, which could include assessments of whether AI is promoting "misinformation" and "disinformation." Alan Davidson, assistant secretary of communications and information at the Commerce Department's National Telecommunications and Information Administration (NTIA), said in a speech at the University of Pittsburgh this week that government audits of AI systems are one way to build trust in this emerging technology. "Much as financial audits create trust in the accuracy of financial statements, accountability mechanisms for AI can help assure that an AI system is trustworthy," he said in his prepared remarks. "Policy was necessary to make that happen in the finance sector, and it may be necessary for AI." President Biden's administration is considering regulations that would require audits of AI systems to make sure the output they deliver contains no "misinformation" or "disinformation."
US looks to establish rules for artificial intelligence
The US government is taking its first tentative steps toward establishing rules for artificial intelligence tools, as the frenzy over generative AI and chatbots reaches a fever pitch. The US commerce department on Tuesday announced it is officially requesting public comment on how to create accountability measures for AI, seeking help on how to advise US policymakers to approach the technology. "In the same way that financial audits created trust in the accuracy of financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy," said Alan Davidson, the head of the National Telecommunications and Information Administration (NTIA), at a press conference at the University of Pittsburgh. Davidson said that the NTIA is seeking feedback from the public, including from researchers, industry groups, and privacy and digital rights organizations, on the development of audits and assessments of AI tools created by private industry. He also said that the NTIA is looking to establish guardrails that would allow the government to determine whether AI systems perform the way companies claim they do, whether they are safe and effective, whether they have discriminatory outcomes or "reflect unacceptable levels of bias", whether they spread or perpetuate misinformation, and whether they respect individuals' privacy.
Biden administration asks public for help regulating AI systems like ChatGPT
Artificial intelligence poses both risks and rewards, but developers should be wary of technologies that could threaten "scary" outcomes, an AI technologist says. Federal regulators are asking the public for input on policies that would hold artificial intelligence (AI) systems accountable and help manage risks from the rapidly growing and powerful technology. As programs like ChatGPT gain popularity for their astounding ability to answer written questions with human-like responses, policymakers and tech experts are increasingly concerned with their potential for misuse, including how artificially generated news reports can rapidly spread fabricated and false information. Now that ChatGPT has more than 100 million monthly active users, the government is beginning to study how these programs should be regulated. The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, solicited public feedback Tuesday as it works to develop policies to "ensure artificial intelligence (AI) systems work as claimed – and without causing harm."
Asking Bing Chat to be more creative will decrease its accuracy
Microsoft's Bing Chat is beginning to roll out options that let users make the chat's responses creative, balanced, or more precise. Just be careful: adopting the "creative" option will initially make the Bing AI chatbot less accurate, in the name of more entertaining responses. Microsoft began rolling out the new Bing Chat response options at the end of last week, saying: "We've been hard at work tweaking dials so you can chat with the new Bing however you'd like. Starting today, some users will see the ability to choose a style that is more Precise, Balanced, or Creative."
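Microsoft has not disclosed how the three styles are implemented, but the standard knob for trading precision against creativity in a language model is the sampling temperature applied to the model's logits. A minimal sketch (the logit values are invented for illustration): higher temperature flattens the distribution over candidate tokens, giving less likely (and more often wrong) tokens a real chance of being picked, while lower temperature concentrates probability on the top candidate.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens

precise = softmax_with_temperature(logits, 0.5)   # sharply favors token 0
creative = softmax_with_temperature(logits, 2.0)  # spreads probability out
print([round(p, 3) for p in precise])
print([round(p, 3) for p in creative])
```

The "creative" distribution assigns noticeably more probability to the lower-scoring tokens, which is one plausible mechanism behind the accuracy trade-off the article describes.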
A Bibliographic View on Constrained Clustering
Kuncheva, Ludmila, Williams, Francis, Hennessey, Samuel
A keyword search on constrained clustering on Web-of-Science returned just under 3,000 documents. We ran automatic analyses of those, and compiled our own bibliography of 183 papers, which we analysed in more detail based on their topic and experimental study, if any. This paper presents general trends of the area and its sub-topics by Pareto analysis, using citation count and year of publication. We list available software and analyse the experimental sections of our reference collection. We found a notable lack of large comparison experiments. Among the topics we reviewed, application studies were most abundant recently, alongside deep learning, active learning and ensemble learning.
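The Pareto analysis over citation count and year of publication can be sketched as finding the papers not dominated on both criteria: a paper stays on the Pareto front unless some other paper is at least as new and at least as cited, and strictly better on one. The paper list below is hypothetical, not from the authors' bibliography.

```python
def pareto_front(papers):
    """papers: list of (title, year, citations); returns the non-dominated titles."""
    front = []
    for title, year, cites in papers:
        dominated = any(
            y >= year and c >= cites and (y > year or c > cites)
            for _, y, c in papers
        )
        if not dominated:
            front.append(title)
    return front

papers = [
    ("A", 2010, 500),  # old but highly cited
    ("B", 2020, 300),
    ("C", 2015, 200),  # dominated by B (newer and more cited)
    ("D", 2023, 50),   # recent, few citations yet
]
print(pareto_front(papers))  # C is the only dominated paper
```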