The Guardian view on AI and jobs: the tech revolution should be for the many not the few Editorial
'AI already appears to be squeezing the number of entry-level jobs in white-collar occupations.' In The Making of the English Working Class, the leftwing historian EP Thompson made a point of challenging the condescension of history towards luddism, the original anti-tech movement. The early 19th-century croppers and weavers who rebelled against new technologies should not be written off as "blindly resisting machinery", wrote Thompson in his classic history. They were opposing a laissez-faire logic that dismissed its disastrous impact on their lives. Photographers, coders and writers, for example, would sympathise with the powerlessness felt by working people who saw customary protections swept away in a search for enhanced productivity and profit.
Coding Triangle: How Does Large Language Model Understand Code?
Zhang, Taolin, Ma, Zihan, Cao, Maosong, Liu, Junnan, Zhang, Songyang, Chen, Kai
Large language models (LLMs) have achieved remarkable progress in code generation, yet their true programming competence remains underexplored. We introduce the Code Triangle framework, which systematically evaluates LLMs across three fundamental dimensions: editorial analysis, code implementation, and test case generation. Through extensive experiments on competitive programming benchmarks, we reveal that while LLMs can form a self-consistent system across these dimensions, their solutions often lack the diversity and robustness of human programmers. We identify a significant distribution shift between model cognition and human expertise, with model errors tending to cluster due to training data biases and limited reasoning transfer. Our study demonstrates that incorporating human-generated editorials, solutions, and diverse test cases, as well as leveraging model mixtures, can substantially enhance both the performance and robustness of LLMs. Furthermore, we reveal both the consistency and inconsistency in the cognition of LLMs that may facilitate self-reflection and self-improvement, providing a potential direction for developing more powerful coding models.
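The self-consistency idea in the abstract can be made concrete: generate an editorial (analysis), a solution, and test cases for the same problem, then score how many of the model's own tests its own code passes. The sketch below stubs the LLM calls with canned strings; the function names and the scoring scheme are illustrative assumptions, not the paper's actual framework.

```python
# A minimal sketch of a three-dimension consistency check, loosely inspired
# by the Code Triangle setup. All llm_* functions are hypothetical stubs;
# in practice they would call a real model API.

def llm_editorial(problem: str) -> str:
    # Stub: a model-written analysis of the problem.
    return "Sum the two inputs."

def llm_solution(problem: str) -> str:
    # Stub: model-generated code implementing the problem.
    return "def solve(a, b):\n    return a + b"

def llm_tests(problem: str) -> list:
    # Stub: model-generated (input, expected_output) test cases.
    return [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]

def self_consistency(problem: str) -> float:
    """Fraction of the model's own test cases that its own code passes."""
    namespace = {}
    exec(llm_solution(problem), namespace)  # compile the generated solution
    solve = namespace["solve"]
    tests = llm_tests(problem)
    passed = sum(1 for args, expected in tests if solve(*args) == expected)
    return passed / len(tests)
```

With the stubs above, `self_consistency("Add two integers")` returns 1.0; a real evaluation would also run human-written tests against the same code to expose the distribution shift the authors describe.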
The Guardian view on video games: computer generated worlds are influencing real ones Editorial
"It's possible to play [video] games with no ulterior motive, but I do think they provide a place where we can actually be vulnerable and more open to the full spectrum of human emotions," the author Gabrielle Zevin told the Guardian ahead of the launch of her 2023 bestseller. Zevin's absorbing novel Tomorrow, and Tomorrow, and Tomorrow examines how video games can ease suffering, challenge assumptions and forge human connections through alternate realities, eschewing the common misconceptions of them as childish or violent. Gaming allows players to immerse themselves in experiences that they have not had or would not have otherwise. While those virtual experiences have their limits in conveying the reality that they are simulating, gaming – being more social than ever before – has developed a more participatory, even empathic culture, as Zevin understands. This should be better understood as video games increasingly influence our reality.
Big Bang, Low Bar -- Risk Assessment in the Public Arena
Always keep an eye on ways that things could go badly wrong, even if they seem unlikely. The more disastrous a potential failure, the more improbable it needs to be before we can safely ignore it. This principle may seem obvious, but it is easily overlooked in public discourse about risk - even, as we'll see, by well-qualified commentators, who should certainly know better. The present piece is prompted by neglect of the principle in recent discussions about the potential risks of artificial intelligence (AI). I don't think the failing is peculiar to this case, but recent debates in this area provide particularly stark examples of how easily the principle can be overlooked. Part of the problem, in my view, is that there isn't a catchy formulation of this safety principle already on the tip of educated tongues. By contrast, consider the slogan 'Correlation is not causation.' All scientists, science journalists, and policymakers know this phrase.
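The principle can be read in expected-loss terms: a risk is safely ignorable only when probability times potential loss falls below some tolerance, so as the loss grows the acceptable probability shrinks in proportion. A minimal sketch (the function name and the fixed tolerance budget are illustrative assumptions, not the author's formulation):

```python
def safe_to_ignore(probability: float, loss: float, budget: float = 1.0) -> bool:
    """Expected-loss reading of the principle: a risk can be set aside
    only when probability * loss stays under a tolerable budget.
    The budget value here is an arbitrary placeholder."""
    return probability * loss < budget

# The same small probability is ignorable for a modest loss...
safe_to_ignore(0.001, 10.0)      # True: expected loss 0.01
# ...but not when the potential loss is catastrophic.
safe_to_ignore(0.001, 10_000.0)  # False: expected loss 10.0
```

This is exactly why "it's very unlikely" is not, on its own, a reason to dismiss a worst-case scenario: the bar an improbability must clear rises with the stakes.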
Artificial Intelligence Is Infiltrating Medicine -- But Is It Ethical?
Artificial intelligence (AI) is being embraced by hospitals and other healthcare organizations, which are using the technology to do everything from interpreting CT scans to predicting which patients are most likely to suffer debilitating falls while being treated. Electronic medical records are scoured and run through algorithms designed to help doctors pick the best cancer treatments based on the mutations in patients' tumors, for example, or to predict their likelihood to respond well to a treatment regimen based on past experiences of similar patients. But do algorithms, robots and machine learning cross ethical boundaries in healthcare? A group of physicians out of Stanford University contend that AI does raise ethical challenges that healthcare leaders must anticipate and deal with before they embrace this technology. "Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes," they wrote in an editorial published this week in the New England Journal of Medicine.
Editorial: Even robots need ethics training
The fear that robots are taking over the world might not be that far-fetched. While artificial intelligence is praised for its responsiveness to expansive amounts of information, AI seems to be absorbing biases, too. An experiment published in June demonstrated that robots trained with artificial intelligence exhibited racism and sexism in their decision-making. While sorting through billions of images, robots in this experiment routinely categorized Black men as "criminals." Similarly, the label "homemaker" was given to women more regularly than it was given to men.
Editorial: Computers getting smarter – are we prepared?
Don't throw away that smartphone! Just because a Google software engineer whose conclusions have been questioned says a computer program is sentient, meaning it can think and has feelings, doesn't mean an attack of the cyborgs through your devices is imminent. However, Blake Lemoine's analysis should make us consider how little we have planned for a future where advances in robotics will increasingly change how we live. Already, automation has put thousands of Americans who lack higher-level skills out of a job. But let's get back to Lemoine, who was put on leave by Google for violating its confidentiality policy.
Five Years as Editor-in-Chief of Communications
This is my last editorial as Editor-in-Chief of Communications, so it is a moment to share learnings and, of course, to reflect on accomplishments. First, we launched the Regional Special Sections (RSS) in November 2018 with a spotlight on computing in the China Region. With 40 pages of articles, spanning tech idols to gaming to computing culture to fintech and "superAI," the first RSS created an excitement that inspired and challenged co-hosts of the Europe, India, East Asia and Oceania, Latin America, and Arabia Regions. In just three years, we have circumnavigated the globe, and with the second Europe Region Section (April 2022) and India Region Section (November 2022), a new circuit is well under way! The RSS are an exciting read for the ACM community (great job by the co-hosts and authors), delivering news, insights and perspectives into how computing is shaping and being shaped around the world.
Editorial: Artificial intelligence is not educational taboo
The importance of these subjects has been evident in the rapid growth and development of new industries -- particularly computer-related ones -- since the 1970s. However, that importance has only been codified for schools under the acronym STEM since 2001. That was when the National Science Foundation put a new emphasis on how critical education in those fields was. It has led to a hard push for schools to up the opportunities for kids to explore, learn and grow both exposure to and foundational knowledge of these areas. Almost every school looks to wedge STEM into a lesson any way possible, in addition to extracurricular activities that shore up how fun science and invention can be.