A dangerous tipping point? AI hacking claims divide cybersecurity experts
AI startup Anthropic's recent announcement that it detected the world's first artificial intelligence-led hacking campaign has prompted a multitude of responses from cybersecurity experts. In a report on Friday, Anthropic said its assistant Claude Code was manipulated to carry out 80-90 percent of a "large-scale" and "highly sophisticated" cyberattack, with human intervention required "only sporadically". Anthropic, the creator of the popular Claude chatbot, said the attack aimed to infiltrate government agencies, financial institutions, tech firms and chemical manufacturing companies, though the operation was only successful in a small number of cases. The San Francisco-based company, which attributed the attack to Chinese state-sponsored hackers, did not specify how it had uncovered the operation, nor identify the "roughly" 30 entities that it said had been targeted. Roman V Yampolskiy, an AI and cybersecurity expert at the University of Louisville, said there was no doubt that AI-assisted hacking posed a serious threat, though it was difficult to verify the precise details of Anthropic's account.
Joe Rogan flips the God debate on its head with shocking theory that 'we created him'
Joe Rogan has come to a mind-bending conclusion about life, fearing that humanity has misinterpreted what reality is and that we are actually in the process of creating God. While interviewing computer scientist Roman Yampolskiy on the Joe Rogan Experience podcast, the two debated the possibility that reality is a giant simulation and that humans are building a God-like supercomputer using artificial intelligence (AI). According to Rogan's theory, humanity has misinterpreted ancient prophecies regarding the second coming of Jesus Christ and Judgement Day, and the creation of this AI superintelligence is the final chapter before our reality resets. 'Maybe we just completely misinterpreted these ancient scrolls and texts, and what it really means is that we are going to give birth to this,' Rogan explained. Yampolskiy, an author and researcher in AI safety, added to Rogan's theory, suggesting that reality is an ongoing cycle of Big Bangs - the explosion that kickstarted the universe - starting and restarting life over and over again.
'The outcome could be extinction': Elon Musk-backed researcher warns there is NO proof AI can be controlled - and says tech should be shelved NOW
A researcher backed by Elon Musk is re-sounding the alarm about AI's threat to humanity after finding no proof the tech can be controlled. Dr Roman V Yampolskiy, an AI safety expert, has received funding from the billionaire to study advanced intelligent systems, which are the focus of his upcoming book 'AI: Unexplainable, Unpredictable, Uncontrollable'. The book examines how AI has the potential to dramatically reshape society, not always to our advantage, and has the 'potential to cause an existential catastrophe.' Yampolskiy, who is a professor at the University of Louisville, conducted an 'examination of the scientific literature on AI' and concluded there is no proof that the tech could be stopped from going rogue. To fully control AI, he suggested that it needs to be modifiable with 'undo' options, limitable, transparent, and easy to understand in human language.
On a Functional Definition of Intelligence
Sritriratanarak, Warisa, Garcia, Paulo
Without an agreed-upon definition of intelligence, asking "is this system intelligent?" is an untestable question. This lack of consensus hinders research on, and public perception of, Artificial Intelligence (AI), particularly since the rise of generative and large language models. Most work on precisely capturing what we mean by "intelligence" has come from the fields of philosophy, psychology, and cognitive science. Because these perspectives are intrinsically linked to intelligence as demonstrated by natural creatures, we argue such fields cannot, and will not, provide a sufficiently rigorous definition that can be applied to artificial means. Thus, we present an argument for a purely functional, black-box definition of intelligence, distinct from how that intelligence is actually achieved; focusing on the "what", rather than the "how". To achieve this, we first distinguish other related concepts (sentience, sensation, agency, etc.) from the notion of intelligence, particularly identifying how these concepts pertain to artificial intelligent systems. As a result, we arrive at a formal definition of intelligence that is conceptually testable from external observation alone and that suggests intelligence is a continuous variable. We conclude by identifying challenges that still remain towards quantifiable measurement. This work provides a useful perspective both for the development of AI and for public perception of the capabilities and risks of AI.
Artificial Superintelligence: Coordination & Strategy: Yampolskiy, Roman V, Duettmann, Allison: 9783039218547: Amazon.com: Books
Attention in the AI safety community has increasingly started to include strategic considerations of coordination between relevant actors in the field of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several reasons: Multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field regarding AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is important to most other known existential risks (e.g., biotechnology risks), and future, human-made existential risks.
Artificial Superintelligence: A Futuristic Approach: Yampolskiy, Roman V.: 9781482234435: Amazon.com: Books
Roman V. Yampolskiy holds a PhD degree from the Department of Computer Science and Engineering at the University at Buffalo. There he was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a combined BS/MS degree (High Honors) in Computer Science from Rochester Institute of Technology, NY, USA. After completing his PhD dissertation, Dr. Yampolskiy held a position as an Affiliate Academic at the Centre for Advanced Spatial Analysis, University College London. In 2008 Dr. Yampolskiy accepted an assistant professor position at the Speed School of Engineering, University of Louisville, KY.
California legislation targets Amazon's AI warehouse bosses
A new California law designed to prevent the warehouse industry from overworking employees doesn't name a specific company. But the legislation's target is clear: Amazon, which has given machines unparalleled control over workers and is accused of using the technology to impose unreasonable demands on them. Authored by Assemblywoman Lorena Gonzalez, the bill prohibits the use of monitoring systems that thwart basic worker rights such as rest periods, bathroom breaks and safety. The legislation will help determine whether governments can regulate human resources software that's expected to play an increasing role in deciding who gets hired and fired, how much workers are paid and how hard they work. "This is just the beginning of our work to regulate Amazon & its algorithms that put profits over workers' safety," Gonzalez, a San Diego Democrat, tweeted earlier this year.
Impossibility Results in AI: A Survey
Brcic, Mario, Yampolskiy, Roman V.
An impossibility theorem demonstrates that a particular problem, or set of problems, cannot be solved as described in the claim. Such theorems put limits on what is possible to do with artificial intelligence, especially superintelligent AI. As such, these results serve as guidelines, reminders, and warnings for AI safety, AI policy, and governance researchers. They might enable solutions to some long-standing questions by formalizing theories in the framework of constraint satisfaction without committing to one option. In this paper, we categorize impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability. We found that certain theorems are too specific or carry implicit assumptions that limit their application. We also add a new result (theorem) on the unfairness of explainability, the first explainability-related result in the induction category. We conclude that deductive impossibilities deny 100% guarantees for security. Finally, we offer some ideas that hold potential in explainability, controllability, value alignment, ethics, and group decision-making, which can be deepened by further investigation.
How An Artificial Superintelligence Might Actually Destroy Humanity
I'm confident that machine intelligence will be our final undoing. Its potential to wipe out humanity is something I've been thinking and writing about for the better part of 20 years. I take a lot of flak for this, but the prospect of human civilisation getting extinguished by its own tools is not to be ignored. There is one surprisingly common objection to the idea that an artificial superintelligence might destroy our species, an objection I find ridiculous. It's not that superintelligence itself is impossible.
A Cyber Science Based Ontology for Artificial General Intelligence Containment
Pittman, Jason M., Crosby, Courtney
The development of artificial general intelligence is considered by many to be inevitable. What such an intelligence does after becoming aware is far less certain. Research suggests that the likelihood of artificial general intelligence becoming hostile to humans is significant enough to warrant inquiry into methods of limiting that potential. Thus, containment of artificial general intelligence is a timely and meaningful research topic. While there is limited research exploring possible containment strategies, such work is bounded by the underlying field each strategy draws upon. Accordingly, we set out to construct an ontology describing the elements necessary in any future containment technology. Using existing academic literature, we developed a single-domain ontology containing five levels, 32 codes, and 32 associated descriptors. Further, we constructed ontology diagrams to demonstrate the intended relationships. We then identified humans, AGI, and the cyber world as novel agent objects necessary for future containment activities.