


Detecting Synthetic Phenomenology in a Contained Artificial General Intelligence

Pittman, Jason M., Hanks, Ashlyn

arXiv.org Artificial Intelligence

Human-like intelligence in a machine is a contentious subject. Whether mankind should or should not pursue the creation of artificial general intelligence is hotly debated. Researchers have likewise split into opposing factions over whether mankind can create it. For our purposes, we assume mankind can and will do so. Thus, it becomes necessary to contemplate how to do so in a safe and trusted manner; enter the idea of boxing, or containment. As part of such thinking, we wonder how a phenomenology might be detected given the operational constraints imposed by any potential containment system. Accordingly, this work provides an analysis of existing measures of phenomenology through qualia and extends those ideas into the context of a contained artificial general intelligence.


The AGI Containment Problem

Babcock, James, Kramar, Janos, Yampolskiy, Roman

arXiv.org Artificial Intelligence

There is considerable uncertainty about what properties, capabilities and motivations future AGIs will have. In some plausible scenarios, AGIs may pose security risks arising from accidents and defects. In order to mitigate these risks, prudent early AGI research teams will perform significant testing on their creations before use. Unfortunately, if an AGI has human-level or greater intelligence, testing itself may not be safe; some natural AGI goal systems create emergent incentives for AGIs to tamper with their test environments, make copies of themselves on the internet, or convince developers and operators to do dangerous things. In this paper, we survey the AGI containment problem: the question of how to build a container in which tests can be conducted safely and reliably, even on AGIs with unknown motivations and capabilities that could be dangerous. We identify requirements for AGI containers, available mechanisms, and weaknesses that need to be addressed.
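The abstract's core idea, running an untrusted system inside a container that enforces hard limits regardless of the system's own behavior, can be illustrated in miniature. The sketch below is not a mechanism from the paper; it is a hypothetical toy that enforces just two container properties (a CPU-time cap and a memory cap) on a child process via kernel resource limits, so a runaway workload is killed by the container rather than exhausting the host. The function name `run_contained` and all parameters are illustrative; a real AGI container would need far stronger isolation (no network, restricted filesystem, side-channel defenses, and more).

```python
import resource
import subprocess
import sys

def run_contained(code: str, cpu_seconds: int = 2,
                  mem_bytes: int = 256 * 1024 ** 2) -> subprocess.CompletedProcess:
    """Run untrusted Python code in a child process under hard resource limits.

    Illustrative only: this enforces two container properties (CPU time and
    address-space size) that the kernel upholds even if the child misbehaves.
    POSIX-only, because it relies on preexec_fn and setrlimit.
    """
    def limit():
        # Applied in the child just before exec; these are hard caps.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, no env hooks
        preexec_fn=limit,
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 5,             # wall-clock backstop
    )

# A well-behaved test completes normally inside the container...
ok = run_contained("print(2 + 2)")
# ...while a runaway loop is terminated by the CPU limit, not by the host.
runaway = run_contained("while True: pass")
```

The design point mirrors the paper's framing: the container's guarantees must not depend on the tested system's cooperation, which is why the limits are installed by the parent before the untrusted code ever runs.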