
Government regulation


House DOGE Caucus eyes federal employees, government regulations in new goal-setting memo

FOX News

Fox News' senior national correspondent William La Jeunesse joins 'America's Newsroom' to discuss Congress' history of killing pushes for cost-cutting. FIRST ON FOX: The Congressional Department of Government Efficiency (DOGE) Caucus is holding its second-ever meeting on Wednesday, where its leaders are expected to unveil a set of "principles" to guide the group in its mission to cut government waste. They outlined eight goals, some practical and others more symbolic, in a bid to ensure the caucus is in sync with the DOGE advisory panel set up by President-elect Donald Trump. "The federal government must serve the interests of taxpayers, and taxpayers are best served by a lean, efficient, transparent, and accountable bureaucracy," the first principle read, according to a draft memo obtained by Fox News Digital. The document also suggested both lofty and smaller-scale goals.


'Godfather of AI' shortens odds of the technology wiping out humanity over next 30 years

The Guardian

The British-Canadian computer scientist often touted as a "godfather" of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is "much faster" than expected. Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a "10% to 20%" chance that AI would lead to human extinction within the next three decades. Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity. Asked on BBC Radio 4's Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: "Not really, 10% to 20%." Hinton's estimate prompted Today's guest editor, the former chancellor Sajid Javid, to say "you're going up", to which Hinton replied: "If anything. You see, we've never had to deal with things more intelligent than ourselves before."


'Disinformation on steroids': is the US prepared for AI's influence on the election?

The Guardian

The AI election is here. Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The "deepfake" calls were linked to two Texas companies, Life Corporation and Lingo Telecom. It's not clear if the deepfake calls actually prevented voters from turning out, but that doesn't really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that's been pushing for federal and state regulation of AI's use in politics. "I don't think we need to wait to see how many people got deceived to understand that that was the point," Gilbert said.


Sen. Richard Blumenthal Defends His Controversial Bill Regulating Social Media for Kids

Slate

For a while now, Washington has been wrestling with how to regulate two big forces shaping technology: social media and artificial intelligence. Who should do it, and how? Currently, Congress is considering a bill that would regulate how social media companies treat minors: the Kids Online Safety Act. Although it has bipartisan support, KOSA is not without controversy. Several critics have called it "government censorship." One group, the Electronic Frontier Foundation, says it is "one of the most dangerous bills in years."


Fears of AI hitting black market stir concerns of criminals evading government regulations: Expert

FOX News

Dr. Harvey Castro said he is less concerned about AI being developed by big corporations, because there are safeguards, but warned that AI can be created without safeguards and sold. Artificial intelligence – specifically large language models like ChatGPT – can theoretically give criminals the information needed to cover their tracks before and after a crime, then erase that evidence, an expert warns. Large language models, or LLMs, are a segment of AI technology that uses algorithms to recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. ChatGPT is the best-known LLM, and its rapid, successful development has created unease among some experts and prompted a Senate hearing at which Sam Altman, the CEO of ChatGPT maker OpenAI, pushed for oversight. Corporations like Google and Microsoft are developing AI at a fast pace. But when it comes to crime, that is not what scares Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence who created his own LLM called "Sherlock."
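The description above (models that predict and generate text from knowledge gained over data) can be sketched with a toy counting model. This is only an illustration of the next-token idea: the corpus and functions below are invented, and production LLMs instead use neural networks trained on massive datasets.

```python
# Toy sketch of the core LLM idea: predict the next token from
# statistics learned over text. Real LLMs use neural networks trained
# on massive datasets; this uses bigram counts over one short string.
from collections import Counter, defaultdict

corpus = (
    "the model reads text . the model predicts the next word . "
    "the next word follows the context ."
).split()

# Count how often each token follows each other token (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the token most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def generate(start: str, length: int) -> list[str]:
    """Greedily extend a sequence one predicted token at a time."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return out

print(" ".join(generate("the", 5)))
```

The same loop (predict, append, repeat) is how LLMs produce long passages; the difference lies in how the next-token distribution is learned.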


Is it too late to regulate AI to keep it from outsmarting the human race?

FOX News

Scammers are texting victims and stealing their information by posing as legitimate businesses or agencies. CyberGuy explains how to stay safe. Remember the good ol' days when our biggest worry was accidentally pocket-dialing someone? Well, times have changed, and so has technology. We now have these nifty AI systems that can do everything from making restaurant reservations to driving our cars.


Exploration of the effects of epidemics on the regional socio-economics: a modelling approach

Snellman, Jan E., Barrio, Rafael A., Kaski, Kimmo K., Korpi--Lagg, Maarit J.

arXiv.org Artificial Intelligence

Pandemics, in addition to affecting the health of populations, can have huge impacts on their social and economic behavior. These factors, in turn, can feed back into and influence the disease spreading. It is important to study these interrelations systematically, to determine which ones have significant effects and whether those effects are adverse or beneficial. Our recently developed epidemic model with agent-based and geographical elements is used in this study for that purpose. We perform an extensive parameter-space exploration of the socio-economic part of the model, including factors such as the attitudes (called values) of the agents towards the disease spreading, health, the economic situation, and regulations by government agents. We search for prominent patterns in the resulting simulated data using basic classification tools, namely self-organizing maps and principal component analysis. We seek to isolate the value parameters of the population and government agents that most influence the speed and patterns of disease spreading, and we monitor different quantities of the model output, such as infection rates, the propagation speed of the epidemic, economic activity, government regulations, and the compliance of the population. Of these, the quantities describing the epidemic spreading produced the most distinctive clustering of the data, and they were selected as the basis of the remaining analysis. We relate the clusters found to three distinct types of disease spreading: wave-like, chaotic, and transitional spreading patterns. The value parameter contributing most to transitions between these phases was found to be the compliance of the population agents with the government regulations.
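The feedback loop the abstract describes (government regulations whose effect depends on population compliance) can be illustrated with a deliberately simple toy. The sketch below is a three-compartment SIR model with invented parameters, not the authors' agent-based, geographical model; it only shows the qualitative effect of a compliance parameter on epidemic peaks.

```python
# Toy illustration (not the paper's model): discrete-time SIR dynamics
# where a government agent imposes restrictions once the infected
# fraction passes a threshold, and population "compliance" scales how
# much those restrictions actually cut transmission.
def run_sir(compliance: float, beta: float = 0.4, gamma: float = 0.1,
            threshold: float = 0.05, steps: int = 300) -> float:
    """Return the peak infected fraction over the simulation."""
    s, i, r = 0.999, 0.001, 0.0
    peak = 0.0
    for _ in range(steps):
        # Restrictions are active while infections exceed the threshold;
        # their effect on transmission is weighted by compliance.
        effective_beta = beta * (1 - 0.8 * compliance) if i > threshold else beta
        new_inf = effective_beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

print(f"peak infected, low compliance:  {run_sir(compliance=0.1):.3f}")
print(f"peak infected, high compliance: {run_sir(compliance=0.9):.3f}")
```

With these invented parameters, higher compliance yields a markedly lower epidemic peak, mirroring the abstract's finding that compliance is the key parameter shaping the spreading dynamics.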


Saha

AAAI Conferences

Government regulations are critical to understanding how to do business with a government entity and receive other benefits. However, government regulations are also notoriously long and organized in ways that can be confusing for novice users. Developing cognitive-assistance tools that remove some of this burden from human users would benefit a variety of users. The volume of data found in United States federal government regulation suggests a multiple-step approach: process the data into machine-readable text, create an automated legal knowledge base capturing various facts and rules, and eventually build a legal question-and-answer system that acquires understanding from various regulations and provisions. The work discussed in this paper represents our initial efforts to build a framework for the Federal Acquisition Regulations System (Title 48, Code of Federal Regulations) in order to create an efficient legal knowledge base representing relationships between various legal elements, semantically similar terminologies, deontic expressions, and cross-referenced legal facts and rules.
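One small early step such a pipeline needs is tagging deontic expressions (obligations, permissions, prohibitions) in regulation text. The sketch below uses simple keyword patterns; the sample clause is invented for illustration, not quoted from 48 CFR, and a real system would need far more robust parsing.

```python
# Hedged sketch: tag deontic expressions in a sentence of regulation
# text using keyword patterns (shall/must = obligation, may = permission,
# "shall not"/"must not"/"may not" = prohibition).
import re

def tag_deontic(sentence: str) -> list[str]:
    """Return the deontic categories detected in a sentence."""
    s = sentence.lower()
    found = []
    # Check prohibitions first so "may not" is not read as a permission.
    if re.search(r"\b(shall not|must not|may not)\b", s):
        found.append("prohibition")
        s = re.sub(r"\b(shall not|must not|may not)\b", "", s)
    if re.search(r"\b(shall|must)\b", s):
        found.append("obligation")
    if re.search(r"\bmay\b", s):
        found.append("permission")
    return found

clause = ("The contracting officer shall document the rationale "
          "and may not waive this requirement.")
print(tag_deontic(clause))  # ['prohibition', 'obligation']
```

Tags like these could populate the fact-and-rule layer of the knowledge base the paper describes, with one entry per clause and category.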


Organizations Struggle with AI Bias

#artificialintelligence

As organizations roll out machine learning and AI models into production, they are increasingly cognizant of the presence of bias in their systems. Not only can this bias lead to poorer decisions on the part of the AI systems, but it can put the organizations running them in legal jeopardy. However, getting on top of the problem is turning out to be tougher than expected for many organizations. For example, Harvard University and Accenture demonstrated how algorithmic bias can creep into the hiring processes of human resources departments in a report issued last year. In their 2021 joint report "Hidden Workers: Untapped Talent," the two organizations show how the combination of outdated job descriptions and automated hiring systems that lean heavily on algorithmic processes for posting ads for open jobs and evaluating resumes can keep otherwise qualified individuals from landing jobs.
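A common first screening check for the kind of hiring bias described above is the "four-fifths rule," which compares each group's selection rate to the highest group's rate. The sketch below is generic and its applicant counts are invented; it is not drawn from the Harvard/Accenture report.

```python
# Hedged sketch of the "four-fifths rule" check: a group whose selection
# rate falls below 80% of the best group's rate is flagged for review.
# All counts here are invented for illustration.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (hired, applicants)."""
    return {g: hired / total for g, (hired, total) in outcomes.items()}

def four_fifths_violations(outcomes: dict[str, tuple[int, int]],
                           threshold: float = 0.8) -> dict[str, float]:
    """Return groups whose rate ratio to the best group is under threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

applicants = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_violations(applicants))
```

A flag from a check like this is a prompt for investigation, not proof of bias; it is one of the simpler audits organizations can run before production rollout.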


AI Weekly: An outline for government regulation of AI

#artificialintelligence

Governments face a range of policy challenges around AI technologies, many of which are exacerbated by a lack of sufficiently detailed information. A whitepaper published this week by AI ethicist Jess Whittlestone and former OpenAI policy director Jack Clark outlines a potential solution: investing in governments' capacity to monitor the capabilities of AI systems. As the paper points out, the AI industry routinely produces a range of data and measures; if these were synthesized, the resulting insights could improve governments' ability to understand the technologies while helping to create tools to intervene. "Governments should play a central role in establishing measurement and monitoring initiatives themselves while subcontracting out other aspects to third parties, such as through grantmaking, or partnering with research institutions," Whittlestone and Clark wrote.