white paper
Migrants will need A-level standard English to work in UK
Some migrants coming to the UK will need to speak English to an A-level standard under tougher new rules set to be introduced by the government. Applicants will be tested in person on their speaking, listening, reading and writing at Home Office-approved providers, with their results checked as part of the visa process. The changes, which come into force from 8 January 2026, form part of wider plans to cut levels of immigration to the UK outlined in a white paper in May. Home Secretary Shabana Mahmood said: "If you come to this country, you must learn our language and play your part." Those applying for skilled worker, scale-up and high potential individual (HPI) visas will be required to reach B2 level - a step up from the current B1 standard, which is equivalent to GCSE.
- South America (0.15)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (15 more...)
Charter school is replacing teachers with AI
An Austin-based national charter school network offers K-12 students an AI-guided education. Operating under a model called "2 Hour Learning," a company of the same name advertises accelerated-pace, app-based classes designed to teach students at "2X" the speed of a traditional classroom, whatever that means. Parents are promised that the system works for 80-90 percent of children, and that students consistently rank in the NWEA's 90th percentile. Apart from producing top-ranking national standardized test takers, however, one of 2 Hour Learning's other explicit goals is the removal of teachers from classrooms. "Imagine starting a school and declaring, 'We won't have any academic teachers.' We did exactly that!" reads a portion of the company's white paper.
- North America > United States > Texas (0.06)
- North America > United States > Arizona (0.06)
Shifting the Human-AI Relationship: Toward a Dynamic Relational Learning-Partner Model
As artificial intelligence (AI) continues to evolve, the current paradigm of treating AI as a passive tool no longer suffices. As a human-AI team, we together advocate for a shift toward viewing AI as a learning partner, akin to a student who learns from interactions with humans. Drawing from interdisciplinary concepts such as ecorithms, order from chaos, and cooperation, we explore how AI can evolve and adapt in unpredictable environments. Arising from these brief explorations, we present two key recommendations: (1) foster ethical, cooperative treatment of AI to benefit both humans and AI, and (2) leverage the inherent heterogeneity between human and AI minds to create a synergistic hybrid intelligence. By reframing AI as a dynamic partner, a model emerges in which AI systems develop alongside humans, learning from human interactions and feedback loops, including reflections on team conversations. Drawing from a transpersonal and interdependent approach to consciousness, we suggest that a "third mind" emerges through collaborative human-AI relationships. Through design interventions such as interactive learning and conversational debriefing, and foundational interventions allowing AI to model multiple types of minds, we hope to provide a path toward more adaptive, ethical, and emotionally healthy human-AI relationships. We believe this dynamic relational learning-partner (DRLP) model for human-AI teaming, if enacted carefully, will improve our capacity to develop powerful solutions to seemingly intractable problems.
- North America > United States > Michigan (0.05)
- North America > United States > California > San Diego County > San Diego (0.05)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Education (0.49)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.34)
Japanese companies lag in AI adoption, white paper says
Companies in Japan lag behind those in the United States and Europe in the use of generative artificial intelligence, a government white paper showed Friday. The white paper said that 46.8% of companies in Japan use generative AI in their operations, compared with 84.7% in the U.S. and 72.7% in Germany. Japanese companies' use of generative AI has been limited to taking meeting minutes and creating emails and documents. In contrast, firms in the U.S. and Europe use the technology in a wider range of operations, including for customer services, the white paper said. Including companies that use generative AI on a trial basis, the proportion of those using the technology stands at about 70% in Japan, lower than over 90% in both the U.S. and Germany.
- Asia > Japan (1.00)
- North America > United States (0.88)
- Europe > Germany (0.55)
TechScape: Why is the UK so slow to regulate AI?
Britain wants to lead the world in AI regulation. But AI regulation is a rapidly evolving, contested policy space in which there's little agreement over what a good outcome would look like, let alone the best methods to get there. And being the third most important hub of AI research in the world doesn't give you an awful lot of power when the first two are the US and China. How to slice through this Gordian knot? Simple: move swiftly and decisively to do … absolutely nothing.
- Europe > United Kingdom (1.00)
- Asia > China (0.25)
- North America > United States (0.16)
- North America > Canada (0.05)
- Law > Statutes (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.30)
The New AI Panic
For decades, the Department of Commerce has maintained a little-known list of technologies that, on grounds of national security, are prohibited from being sold freely to foreign countries. Any company that wants to sell such a technology overseas must apply for permission, giving the department oversight and control over what is being exported and to whom. These export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China's development of artificial intelligence: The department last year limited China's access to the computer chips needed to power AI and is in discussions now to expand them. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.
- Asia > China > Beijing > Beijing (0.05)
- North America > United States > California (0.04)
- Government > Commerce (0.57)
- Government > Regional Government (0.49)
White paper on Selected Environmental Parameters affecting Autonomous Vehicle (AV) Sensors
Shung, James Lee Wei, Piazzoni, Andrea, Vijay, Roshan, Kin, Lincoln Ang Hon, de Boer, Niels
Autonomous Vehicles (AVs) under development today rely on various sensor technologies to sense and perceive the world around them. The sensor outputs are subsequently used by the Automated Driving System (ADS) onboard the vehicle to make decisions that affect its trajectory and how it interacts with the physical world. The main sensor technologies used for sensing and perception (S&P) are LiDAR (Light Detection and Ranging), camera, RADAR (Radio Detection and Ranging), and ultrasound. Different environmental parameters can have different effects on the performance of each sensor, thereby affecting the S&P and decision-making (DM) of an AV. In this publication, we explore the effects of different environmental parameters on LiDARs and cameras, leading us to conduct a study to better understand the impact of several of these parameters on LiDAR performance. From the experiments undertaken, the goal is to identify some of the weaknesses and challenges that a LiDAR may face when an AV is using it. This informs AV regulators in Singapore of the effects of different environmental parameters on AV sensors so that they can determine testing standards and specifications which will more robustly assess the adequacy of LiDAR systems installed for local AV operations. Our approach adopts the LiDAR test methodology first developed in the Urban Mobility Grand Challenge (UMGC-L010) White Paper on LiDAR performance against selected Automotive Paints.
Britain must become a leader in AI regulation, say MPs
The UK should introduce new legislation to control artificial intelligence or risk falling behind the EU and the US in setting the pace for regulating the technology, MPs have said. Rishi Sunak's government was urged to act as it prepares to host a global AI safety summit at Bletchley Park, home of the Enigma codebreakers, in November. The science, innovation and technology committee said on Thursday the regulatory approach outlined in a recent government white paper risked falling behind others. "The AI white paper should be welcomed as an initial effort to engage with this complex task, but its proposed approach is already risking falling behind the pace of development of AI," the committee said in an interim report on AI governance. "This threat is made more acute by the efforts of other jurisdictions, principally the European Union and the United States, to set international standards." The EU, a trendsetter in tech regulation, is pushing ahead with the AI Act, while in the US the White House has published a blueprint for an AI bill of rights and the US senate majority leader, Chuck Schumer, has published a framework for developing AI regulations.
- North America > United States (1.00)
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.27)
- Asia > China (0.07)
- Law > Statutes (0.95)
- Government > Regional Government > North America Government > United States Government (0.91)
6G Network Business Support System
Ouyang, Ye, Zhang, Yaqin, Wang, Peng, Liu, Yunxin, Qiao, Wen, Zhu, Jun, Liu, Yang, Zhang, Feng, Wang, Shuling, Wang, Xidong
6G is the next-generation intelligent and integrated digital information infrastructure, characterized by ubiquitous interconnection, native intelligence, multi-dimensional perception, global coverage, green and low-carbon operation, native network security, etc. 6G will realize the transition from serving people and people-things communication to supporting the efficient connection of intelligent agents, comprehensively leading the digital, intelligent and green transformation of the economy and society. As the core support system for mobile communication networks, the 6G BSS needs to integrate with new business models brought about by the development of the next-generation Internet and IT, upgrading from "network-centric" to "business- and service-centric" and "customer-centric". 6G OSS and BSS systems need to strengthen their integration to improve operational efficiency and customer benefits by connecting the digital intelligence support capabilities on both the supply and demand sides. This paper provides a detailed introduction to the overall vision, potential key technologies, and functional architecture of 6G BSS systems. It also presents an evolutionary roadmap and technological prospects for the BSS systems from 5G to 6G.
- North America > United States (0.04)
- Europe > Finland > Northern Ostrobothnia > Oulu (0.04)
- Asia > South Korea > Busan > Busan (0.04)
- (2 more...)
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Networks (1.00)
- (4 more...)
Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation
The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation. But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world--the E.U.'s AI Act--to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI's engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests. In several cases, OpenAI proposed amendments that were later made to the final text of the E.U. law--which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January. In 2022, OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general purpose AI systems--including GPT-3, the precursor to ChatGPT, and the image generator Dall-E 2--to be "high risk," a designation that would subject them to stringent legal requirements including transparency, traceability, and human oversight. That argument brought OpenAI in line with Microsoft, which has invested $13 billion into the AI lab, and Google, both of which have previously lobbied E.U. officials in favor of loosening the Act's regulatory burden on large AI providers.
- Law > Statutes (1.00)
- Government > Regional Government > Europe Government (0.72)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)