Security and Emerging Technology
China experimenting with brain-computer interfaces in global race for AI dominance: report
China is reportedly working to cognitively merge humans with machines as part of its ongoing efforts to compete in the artificial intelligence race. The communist country is using brain-computer interface (BCI) technology -- systems that allow for communication between the brain and an external device -- to "augment human cognition and human-machine teaming," The Washington Times reported, citing a presentation from Georgetown experts delivered to U.S. officials. These systems include invasive, minimally invasive and non-invasive BCIs, according to The Washington Times. Invasive BCIs involve surgery to implant electrodes into the brain, while non-invasive BCIs use sensors on the scalp to monitor brain activity. Meanwhile, minimally invasive BCIs involve implanted devices that do not penetrate brain tissue, according to a report in the National Library of Medicine.
- North America > United States > Hawaii (0.06)
- Asia > Taiwan (0.06)
- Asia > China > Fujian Province (0.06)
Why Biden's AI Executive Order Only Goes So Far
President Biden this week signed a sweeping Executive Order on artificial intelligence that seeks to tackle threats posed by the technology, but some experts say the regulation has left questions unanswered about how it could work in practice. The order tasks agencies with rethinking their approach to AI and aims to address threats relating to national security, competition and consumer privacy, while promoting innovation, competition, and the use of AI for public services. One of the most significant elements of the order is the requirement for companies developing the most powerful AI models to disclose the results of safety tests. On Tuesday, Secretary of Commerce Gina Raimondo told CNBC that under the Executive Order "the President directs the Commerce Department to require companies to tell us: what are the safety precautions they're putting in place and to allow us to judge whether that's enough. And we plan to hold these companies accountable."
Perspectives on the Social Impacts of Reinforcement Learning with Human Feedback
Is it possible for machines to think like humans? And if it is, how should we go about teaching them to do so? As early as 1950, Alan Turing suggested that we ought to teach machines the way one teaches a child. Reinforcement learning with human feedback (RLHF) has emerged as a strong candidate for allowing agents to learn from human feedback in a naturalistic manner. RLHF is distinct from traditional reinforcement learning in that a human teacher provides feedback in addition to a reward signal. It has been catapulted into public view by multiple high-profile AI applications, including OpenAI's ChatGPT, DeepMind's Sparrow, and Anthropic's Claude. These highly capable chatbots are already overturning our understanding of how AI interacts with humanity. The wide applicability and burgeoning success of RLHF strongly motivate the need to evaluate its social impacts. In light of recent developments, this paper considers an important question: can RLHF be developed and used without negatively affecting human societies? Our objectives are threefold: to provide a systematic study of the social effects of RLHF; to identify key social and ethical issues of RLHF; and to discuss social impacts for stakeholders. Although text-based applications of RLHF have received much attention, evaluating its social implications requires considering the diverse range of areas in which it may be deployed. We describe seven primary ways in which RLHF-based technologies will affect society by positively transforming human experiences with AI. This paper ultimately proposes that RLHF has the potential for a net positive impact in the areas of misinformation, AI value alignment, bias, AI access, cross-cultural dialogue, industry, and workforce. Because RLHF raises concerns that echo those of existing AI technologies, it will be important for all stakeholders to be aware and intentional in the adoption of RLHF.
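The mechanism the abstract describes -- learning a reward signal from human preference judgments rather than from a hand-written reward function -- can be sketched in a few lines. Everything below (the linear reward model, the toy feature vectors, the learning-rate and step-count values) is an illustrative assumption, not anything taken from the paper:

```python
import math
import random

# Toy "responses" are small feature vectors; the reward model is a
# linear scorer over those features. Real RLHF systems use a neural
# reward model over text, but the preference-learning loop is the same.

def reward(w, x):
    """Scalar reward: dot product of model weights and response features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def preference_loss(w, preferred, rejected):
    """Bradley-Terry pairwise loss: -log sigmoid(r(preferred) - r(rejected))."""
    margin = reward(w, preferred) - reward(w, rejected)
    return math.log(1.0 + math.exp(-margin))

def train_reward_model(pairs, dim, lr=0.1, steps=200):
    """Fit weights so that human-preferred responses score higher."""
    w = [0.0] * dim
    for _ in range(steps):
        a, b = random.choice(pairs)              # (preferred, rejected)
        margin = reward(w, a) - reward(w, b)
        g = -1.0 / (1.0 + math.exp(margin))      # d(loss)/d(margin)
        for i in range(dim):
            w[i] -= lr * g * (a[i] - b[i])       # gradient step
    return w

# A hypothetical human labeler consistently preferred responses whose
# first feature is larger; the learned reward recovers that preference.
pairs = [((1.0, 0.0), (0.0, 1.0)), ((0.9, 0.2), (0.1, 0.8))]
w = train_reward_model(pairs, dim=2)
```

The learned reward model would then be used as the reward signal for a standard reinforcement-learning step on the agent's policy.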
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Oceania > Australia > Queensland (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Social Sector (1.00)
- Government (1.00)
- Education (1.00)
Forecasting Potential Misuses of Language Models for Disinformation Campaigns--and How to Reduce Risk
OpenAI researchers collaborated with Georgetown University's Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models might be misused for disinformation purposes. The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts, and culminated in a co-authored report building on more than a year of research. This report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns and introduces a framework for analyzing potential mitigations. As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education and science. But, as with any new technology, it is worth considering how they can be misused.
- Media > News (1.00)
- Government (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.74)
Research Fellow- Center for Security and Emerging Technology (Multiple Opportunities)
CSET Research Fellows apply their varied experience and expertise to challenging policy questions at the intersection of national security and emerging technology. Comfortable working with empirical data and evidence to support policy recommendations, Research Fellows are expected to lead research projects, brief policymakers, participate in public events, and manage and mentor CSET's Research Analysts, Research Assistants, and student affiliates. They are also expected to work closely with CSET's data scientists to conduct empirical analyses. Generally, Research Fellows are fewer than 10 years out of graduate (MA, PhD, or JD) programs and are encouraged to enter public service at the conclusion of their fellowship. Please note that each position has its own detailed position description, requirements, and application instructions.
Community colleges can become America's AI incubators
Millions of students attend community colleges every year, with almost 1,300 schools located in every corner of the United States. With their large student bodies, community colleges are a massive source of potential for expanding the artificial intelligence (AI) workforce, but employers and policymakers alike sorely underestimate their potential. If the United States aims to maintain its global lead and competitive advantage in AI, it must recognize that community colleges hold a special spot in our education system and are too important to be overlooked any longer. As detailed in a recent study I co-authored as part of Georgetown University's Center for Security and Emerging Technology (CSET), community colleges have the potential to support the country in its mission for superiority in AI. Community colleges could create pathways to good-paying jobs across the United States and become tools for training a new generation of AI-literate workers.
- Government (0.95)
- Education > Educational Setting (0.33)
Accelerating AI
The success of machine learning for a wide range of applications has come with serious costs. The largest deep neural networks can have hundreds of billions of parameters that need to be tuned to mammoth datasets. This computationally intensive training process can cost millions of dollars, as well as large amounts of energy and associated carbon. Inference, the subsequent application of a trained model to new data, is less demanding for each use, but for widely used applications, the cumulative energy use can be even greater. "Typically there will be more energy spent on inference than there is on training," said David Patterson, Professor Emeritus at the University of California, Berkeley, and a Distinguished Engineer at Google, who in 2017 shared ACM's A.M. Turing Award.
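Patterson's point -- that a one-time training cost can be overtaken by the cumulative cost of serving a popular model -- can be illustrated with a back-of-envelope calculation. The energy figures and traffic volume below are hypothetical values chosen only to show the shape of the trade-off, not measurements from the article:

```python
# Hypothetical figures for illustration only.
TRAINING_ENERGY_KWH = 1_300_000   # one-time cost to train a large model
ENERGY_PER_QUERY_KWH = 0.003      # cost of a single inference request
QUERIES_PER_DAY = 10_000_000      # traffic for a widely used service

def days_until_inference_dominates(training_kwh, per_query_kwh, queries_per_day):
    """Days of serving after which cumulative inference energy
    exceeds the one-time training energy."""
    daily_inference_kwh = per_query_kwh * queries_per_day
    return training_kwh / daily_inference_kwh

days = days_until_inference_dominates(
    TRAINING_ENERGY_KWH, ENERGY_PER_QUERY_KWH, QUERIES_PER_DAY)
print(f"Inference energy overtakes training after ~{days:.0f} days")
```

Under these assumed numbers, inference becomes the dominant energy cost within weeks of deployment, which is why per-query efficiency matters so much at scale.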
- North America > United States > California > Alameda County > Berkeley (0.25)
- Oceania > Australia (0.05)
- North America > United States > Massachusetts > Suffolk County > Boston (0.05)
- North America > Canada > Quebec (0.05)
To Get Better at AI, Get Better at Finding AI Talent
The Defense Department's recent efforts to raise its artificial intelligence game have revealed a few obstacles. There are no cohesive goals across the military branches, and there is no way of knowing whether each service has enough people with the right skills. DOD should work with the services to establish AI-specific goals for cultivating technical talent, make it easier for all personnel to learn about AI and put it to use, and enable AI "rock stars" to succeed. It is currently impossible for the DOD to assess its AI posture, let alone assert leadership in AI. That's because posture assessment requires measurement.
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.81)
The US can compete with China in AI education -- here's how
The artificial intelligence (AI) "strategic competition" with China is more intense than ever. To many, the stakes have never been higher -- whoever leads in AI will lead globally. At first glance, China appears to be well-positioned to take the lead when it comes to AI talent. China is actively integrating AI into every level of its education system, while the United States has yet to embrace AI education as a strategic priority. To maintain its competitive edge, the United States must adopt AI education and workforce policies that are targeted and coordinated.
- North America > United States > Rhode Island (0.05)
- Asia > China > Shandong Province (0.05)
- Asia > China > Beijing > Beijing (0.05)
- Education > Curriculum > Subject-Specific Education (0.49)
- Education > Educational Setting > K-12 Education (0.34)
China's 'New Generation' AI-Brain Project – Analysis
China is pursuing what its leaders call a "first-mover advantage" in artificial intelligence (AI), facilitated by a state-backed plan to achieve breakthroughs by modeling human cognition. While not unique to China, the research warrants concern since it raises the bar on AI safety, leverages ongoing U.S. research, and exposes U.S. deficiencies in tracking foreign technological threats. The article begins with a review of the statutory basis for China's AI-brain program, examines related scholarship, and analyzes the supporting science. China's advantages are discussed along with the implications of this brain-inspired research. Recommendations to address our concerns are offered in conclusion. All claims are based on primary Chinese data.[1] Analysts familiar with China's technical development programs understand that in China things happen by plan, and that China is not reticent about announcing these plans. On July 8, 2017, China's State Council released its "New Generation AI Development Plan"[2] to advance Chinese artificial intelligence in three stages, at the end of which, in 2030, China would lead the world in AI theory, technology, and applications.[3]
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Law (0.69)
- Government > Regional Government > North America Government > United States Government (0.68)