The Download: clean energy progress, and OpenAI's trilemma
"We were very much impressed. At the same time, we were afraid." Inside the quest to map the universe with mysterious bursts of radio energy When our universe was less than half as old as it is today, a burst of energy that could cook a sun's worth of popcorn shot out from somewhere amid a compact group of galaxies. Some 8 billion years later, radio waves from that burst reached Earth and were captured by a sophisticated low-frequency radio telescope in the Australian outback. The signal, which arrived in June 2022, and lasted for under half a millisecond, is one of a growing class of mysterious radio signals called fast radio bursts. In the last 10 years, astronomers have picked up nearly 5,000 of them.
- Leisure & Entertainment (0.46)
- Media (0.43)
- Energy > Renewable (0.40)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.40)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.40)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.40)
A Formal Rebuttal of "The Blockchain Trilemma: A Formal Proof of the Inherent Trade-Offs Among Decentralization, Security, and Scalability"
This paper presents a comprehensive refutation of the so-called "blockchain trilemma," a widely cited but formally ungrounded claim asserting an inherent trade-off between decentralisation, security, and scalability in blockchain protocols. Through formal analysis, empirical evidence, and detailed critique of both methodology and terminology, we demonstrate that the trilemma rests on semantic equivocation, misuse of distributed systems theory, and a failure to define operational metrics. Particular focus is placed on the conflation of topological network analogies with protocol-level architecture, the mischaracterisation of Bitcoin's design--including the role of miners, SPV clients, and header-based verification--and the failure to ground claims in complexity-theoretic or adversarial models. By reconstructing Bitcoin as a deterministic, stateless distribution protocol governed by evidentiary trust, we show that scalability is not a trade-off but an engineering outcome. The paper concludes by identifying systemic issues in academic discourse and peer review that have allowed such fallacies to persist, and offers formal criteria for evaluating future claims in blockchain research.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Trading (1.00)
- Information Technology > Services > e-Commerce Services (0.45)
- Information Technology > e-Commerce > Financial Technology (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Logic & Formal Reasoning (0.92)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (0.68)
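The abstract's point about SPV clients and header-based verification can be made concrete: an SPV client keeps only block headers and confirms that a transaction is in a block by folding the transaction's hash up a supplied Merkle branch and comparing the result against the header's Merkle root. A minimal sketch follows (function names are mine; the combine rule, double SHA-256 over left-child-then-right-child concatenation, matches Bitcoin's convention):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(tx_hash: bytes, branch: list, index: int,
                         merkle_root: bytes) -> bool:
    """Fold a transaction hash up a Merkle branch. At each level, the low
    bit of `index` says whether our node is the right (1) or left (0)
    child, which determines concatenation order before hashing."""
    h = tx_hash
    for sibling in branch:
        if index & 1:                  # we are the right child
            h = dsha256(sibling + h)
        else:                          # we are the left child
            h = dsha256(h + sibling)
        index >>= 1
    return h == merkle_root
```

A client running this check never needs the block body, only the ~80-byte header chain plus a branch whose length grows logarithmically in the number of transactions, which is the scalability property the paper leans on.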
HoneyGPT: Breaking the Trilemma in Terminal Honeypots with Large Language Model
Wang, Ziyang, You, Jianzhou, Wang, Haining, Yuan, Tianwei, Lv, Shichao, Wang, Yang, Sun, Limin
Honeypots, as a strategic cyber-deception mechanism designed to emulate authentic interactions and bait unauthorized entities, continue to struggle with balancing flexibility, interaction depth, and deceptive capability despite their evolution over decades. They also often lack the capability to adapt proactively to an attacker's evolving tactics, which restricts the depth of engagement and subsequent information gathering. In this context, the emergent capabilities of large language models, in tandem with pioneering prompt-based engineering techniques, offer a transformative shift in the design and deployment of honeypot technologies. In this paper, we introduce HoneyGPT, a pioneering honeypot architecture based on ChatGPT, heralding a new era of intelligent honeypot solutions characterized by their cost-effectiveness, high adaptability, and enhanced interactivity, coupled with a predisposition for proactive attacker engagement. Furthermore, we present a structured prompt engineering framework that augments long-term interaction memory and robust security analytics. This framework, integrating chain-of-thought tactics attuned to honeypot contexts, enhances interactivity and deception, deepens security analytics, and ensures sustained engagement. The evaluation of HoneyGPT includes two parts: a baseline comparison based on a collected dataset and a field evaluation in real scenarios for four weeks. The baseline comparison demonstrates HoneyGPT's remarkable ability to strike a balance among flexibility, interaction depth, and deceptive capability. The field evaluation further validates HoneyGPT's efficacy, showing its marked superiority in enticing attackers into more profound interactive engagements and capturing a wider array of novel attack vectors in comparison to existing honeypot technologies.
- Asia > China > Beijing > Beijing (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
- North America > United States > Virginia (0.04)
Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning
Liu, Zheyuan, Dou, Guangyao, Tian, Yijun, Zhang, Chunhui, Chien, Eli, Zhu, Ziwei
Machine Unlearning (MU) algorithms have become increasingly critical due to the imperative of adhering to data privacy regulations. The primary objective of MU is to erase the influence of specific data samples on a given model without the need to retrain it from scratch. Accordingly, existing methods focus on maximizing user privacy protection. However, there are different degrees of privacy regulations for each real-world web-based application. Exploring the full spectrum of trade-offs between privacy, model utility, and runtime efficiency is critical for practical unlearning scenarios. Furthermore, designing an MU algorithm with simple control of the aforementioned trade-off is desirable but challenging due to the inherent complex interaction. To address these challenges, we present Controllable Machine Unlearning (ConMU), a novel framework designed to facilitate the calibration of MU. The ConMU framework contains three integral modules: an important data selection module that reconciles the runtime efficiency and model generalization, a progressive Gaussian mechanism module that balances privacy and model generalization, and an unlearning proxy that controls the trade-offs between privacy and runtime efficiency. Comprehensive experiments on various benchmark datasets have demonstrated the robust adaptability of our control mechanism and its superiority over established unlearning methods. ConMU explores the full spectrum of the Privacy-Utility-Efficiency trade-off and allows practitioners to account for different real-world regulations. Source code available at: https://github.com/guangyaodou/ConMU.
- North America > United States > Indiana > Saint Joseph County > South Bend (0.04)
- North America > United States > California (0.04)
- North America > United States > Virginia > Fairfax County > Fairfax (0.04)
- (6 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
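The flavor of a noise-based unlearning knob like ConMU's progressive Gaussian mechanism can be shown in a toy form: repeatedly push parameters away from the forget-set gradient and blur them with Gaussian noise whose scale is the single privacy/utility dial. This is a schematic of the idea under my own parameter names and schedule, not the paper's actual algorithm (see the linked repository for that):

```python
import numpy as np

def progressive_gaussian_unlearn(params, forget_grad, sigma, steps=5, lr=0.1):
    """Toy sketch of a progressive Gaussian unlearning step. Each iteration
    ascends the forget-set loss (erasing that data's influence) and adds
    Gaussian noise; `sigma` is the privacy/utility knob: larger values mean
    stronger obfuscation but lower retained utility."""
    rng = np.random.default_rng(0)
    w = np.asarray(params, dtype=float).copy()
    for _ in range(steps):
        w += lr * forget_grad                       # push away from forget data
        w += rng.normal(0.0, sigma, size=w.shape)   # progressive Gaussian noise
    return w
```

With `sigma=0` the update is a pure gradient-ascent erasure; raising `sigma` trades utility for privacy, which is exactly the axis a "controllable" framework exposes to practitioners.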
GAN are the days for NVIDIA
NVIDIA's model works better than the rest when it comes to customised prompts, due to its expert denoising system, which trains denoisers to maintain fidelity to the textual prompt even in the later stages of the generation process. But this is not the first time NVIDIA has stepped into the waters of text-to-image modelling. Before coming up with eDiffi, NVIDIA used deep learning models to create versions of the GauGAN model. The second version of the model, released in November 2021, was trained on 10 million high-quality landscape images. The application demo allowed users to produce images based on any text input they provided. The GauGAN model is based on generative adversarial networks (GANs), unlike eDiffi, which uses diffusion modelling to generate images. So why did NVIDIA move away from GANs for its text-to-image feature?
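The architectural contrast underlying that question can be shown schematically: a GAN generator maps noise to an image in a single forward pass, while a diffusion model starts from noise and applies many small denoising steps. The toy below (pure NumPy, no resemblance to GauGAN or eDiffi internals, and the "predicted noise" is cheated from a known target rather than learned) illustrates only the control-flow difference:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.full((4, 4), 0.5)        # stand-in for "an image"

# GAN-style: one forward pass of a (toy) generator maps noise -> image.
def generator(z):
    return np.tanh(z) * 0.5 + 0.5    # single shot, output in (0, 1)

gan_image = generator(rng.normal(size=(4, 4)))

# Diffusion-style: start from pure noise, iteratively denoise toward data.
x = rng.normal(size=(4, 4))
for t in range(50):                  # many small denoising steps
    predicted_noise = x - target     # a real model would *learn* this
    x = x - 0.1 * predicted_noise
diffusion_image = x
```

The iterative loop is where diffusion models earn their prompt fidelity: each step is a fresh chance to steer the sample, which is also what makes an "expert denoiser" schedule, different denoisers for early versus late steps, possible at all.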
Lamboozling Attackers
Deception is a powerful resilience tactic that provides observability into attack operations, deflects impact from production systems, and advises resilient system design. A lucid understanding of the goals, constraints, and design trade-offs of deception systems could give leaders and engineers in software development, architecture, and operations a new tactic for building more resilient systems--and for bamboozling attackers. Unfortunately, innovation in deception has languished for nearly a decade because of its exclusive ownership by information security specialists. Mimicry of individual system components remains the status-quo deception mechanism despite growing stale and unconvincing to attackers, who thrive on interconnections between components and expect to encounter systems. Consequently, attackers remain unchallenged and undeterred. This wasted potential motivated our design of a new generation of deception systems, called deception environments. These are isolated replica environments containing complete, active systems that exist to attract, mislead, and observe attackers. By harnessing modern infrastructure and systems design expertise, software engineering teams can use deception tactics that are largely inaccessible to security specialists. To help software engineers and architects evaluate deception systems through the lens of systems design, we developed a set of design principles summarized as a pragmatic framework. This framework, called the FIC trilemma, captures the most important dimensions of designing deception systems: fidelity, isolation, and cost. The goal of this article is to educate software leaders, engineers, and architects on the potential of deception for systems resilience and the practical considerations for building deception environments. 
By examining the inadequacy and stagnancy of historical deception efforts by the information security community, the article also demonstrates why engineering teams are now poised--with support from advancements in computing--to become significantly more successful owners of deception systems.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Web (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.68)
- Information Technology > Communications > Networks (0.68)
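The FIC trilemma is offered as a design framework rather than an algorithm, but its use in practice can be sketched as a simple scoring sheet: rate each candidate deception design on fidelity, isolation, and cost, then compare under weights that reflect your threat model. The scores and weights below are entirely illustrative, not from the article:

```python
from dataclasses import dataclass

@dataclass
class FicScore:
    """Hypothetical FIC-trilemma scoring sheet: rate a candidate deception
    design on each dimension (0-1, higher is better; cost is expressed as
    affordability so all axes point the same way)."""
    name: str
    fidelity: float       # how convincing the environment is to attackers
    isolation: float      # blast-radius containment from production
    affordability: float  # inverse of build-and-operate cost

    def weighted(self, wf=0.4, wi=0.4, wc=0.2):
        return wf * self.fidelity + wi * self.isolation + wc * self.affordability

candidates = [
    FicScore("single-service mimic", fidelity=0.3, isolation=0.9, affordability=0.9),
    FicScore("full deception environment", fidelity=0.9, isolation=0.7, affordability=0.4),
]
best = max(candidates, key=FicScore.weighted)
```

The point of making the trade-off explicit is the article's own: a lone mimicked service scores high on isolation and cost but fails on fidelity, which is why attackers who expect interconnected systems see through it.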
Do We Live in a Simulation? Chances Are about 50–50
It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show's host Neil deGrasse Tyson had just explained the simulation argument--the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time--much like a video game optimized to render only the parts of a scene visible to a player. "Maybe that's why we can't travel faster than the speed of light, because if we could, we'd be able to get to another galaxy," said Nice, the show's co-host, prompting Tyson to gleefully interrupt.
- North America > United States > Maryland > Prince George's County > College Park (0.04)
- North America > United States > California (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)