secrecy
What's the Deal with U.F.O.s?
When I was growing up, I watched a lot of sci-fi movies about aliens that come to Earth. The extraterrestrials in popular culture, however, always looked so familiar that I found them far-fetched. What are the chances that E.T., the Predator, or ALF would develop arms and legs, a humanlike face, and opposable thumbs? Perhaps as a result, I associated alien life more with fantasy than with science, and I never gave much thought to what a visit would really look like. But my attitude started to change in 2020, when I read Liu Cixin's "The Three-Body Problem" and its two sequels.
- North America > United States > Montana > Yellowstone County > Billings (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- North America > United States > California (0.05)
- Government > Regional Government > North America Government > United States Government (0.30)
- Government > Military (0.30)
The Tech That Safeguards the Conclave's Secrecy
In 2005, cell phones were banned for the first time during the conclave, the process by which the Catholic Church elects its new pope. Twenty years later, after the death of Pope Francis, the election process is underway again. Authorities have two priorities: to protect the safety of those attending the meeting, and to ensure that it proceeds in strict secrecy (under penalty of excommunication and imprisonment) until the final decision is made. In 2025, the Gendarmerie corps guarding Vatican City faces technological challenges unprecedented in any previous conclave. Among them are artificial intelligence systems, drones, military satellites, microscopic microphones, a misinformation epidemic, and a world permanently connected and informed through social media.
- Government > Voting & Elections (0.57)
- Media > News (0.37)
Why Won't OpenAI Say What the Q* Algorithm Is?
Last week, it seemed that OpenAI--the secretive firm behind ChatGPT--had been broken open. The company's board had suddenly fired CEO Sam Altman, hundreds of employees revolted in protest, Altman was reinstated, and the media dissected the story from every possible angle. Yet the reporting belied the fact that our view into the most crucial part of the company is still so fundamentally limited: We don't really know how OpenAI develops its technology, nor do we understand exactly how Altman has directed work on future, more powerful generations. This was made acutely apparent last Wednesday, when Reuters and The Information reported that, prior to Altman's firing, several staff researchers had raised concerns about a supposedly dangerous breakthrough. At issue was an algorithm called Q* (pronounced "Q-star"), which has allegedly been shown to solve certain grade-school-level math problems that it hasn't seen before.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
SHAPE: A Framework for Evaluating the Ethicality of Influence
Bezou-Vrakatseli, Elfia, Brückner, Benedikt, Thorburn, Luke
Agents often exert influence when interacting with humans and non-human agents. However, the ethical status of such influence is often unclear. In this paper, we present the SHAPE framework, which lists reasons why influence may be unethical. We draw on literature from descriptive and moral philosophy and connect it to machine learning to help guide ethical considerations when developing algorithms with potential influence. Lastly, we explore mechanisms for governing algorithmic systems that influence people, inspired by mechanisms used in journalism, human subject research, and advertising.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (7 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government (1.00)
- (3 more...)
We Don't Actually Know If AI Is Taking Over Everything
Since the release of ChatGPT last year, I've heard some version of the same thing over and over again: What is going on? The rush of chatbots and endless "AI-powered" apps has made starkly clear that this technology is poised to upend everything--or, at least, something. Yet even the AI experts are struggling with a dizzying feeling that for all the talk of its transformative potential, so much about this technology is veiled in secrecy. More and more of this technology, once developed through open research, has become almost completely hidden within corporations that are opaque about what their AI models are capable of and how they are made. Transparency isn't legally required, and the secrecy is causing problems: Earlier this year, The Atlantic revealed that Meta and others had used nearly 200,000 books to train their AI models without the compensation or consent of the authors.
Attributing Image Generative Models using Latent Fingerprints
Nie, Guangyu, Kim, Changhoon, Yang, Yezhou, Ren, Yi
Generative models have enabled the creation of content that is indistinguishable from that taken from nature. The open-source development of such models has raised concerns about the risks of their misuse for malicious purposes. One potential risk-mitigation strategy is to attribute generative models via fingerprinting. Current fingerprinting methods exhibit a significant tradeoff between robust attribution accuracy and generation quality, and lack design principles for improving this tradeoff. This paper investigates the use of latent semantic dimensions as fingerprints, from which we can analyze the effects of design variables, including the choice of fingerprinting dimensions, strength, and capacity, on the accuracy-quality tradeoff. Compared with the previous SOTA, our method requires minimal computation and is more applicable to large-scale models. We use StyleGAN2 and the latent diffusion model to demonstrate the efficacy of our method.
- North America > United States > Arizona (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- Europe > United Kingdom (0.04)
- Africa > Gabon (0.04)
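The abstract above does not spell out the method's details, but the general idea of fingerprinting via latent dimensions can be illustrated with a toy sketch: shift a model's latent vector along a few fixed orthonormal directions to encode a key, and attribute a sample by the sign of its projection onto those directions. Everything here (`embed`, `decode`, the parameter values) is invented for illustration and is not the paper's actual algorithm; it only demonstrates the strength-versus-quality tradeoff the abstract names.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 512, 32      # latent dimensionality; fingerprint capacity in bits
strength = 3.0      # larger -> more robust attribution, more quality distortion

# Fixed random orthonormal fingerprinting directions (columns of Q).
Q, _ = np.linalg.qr(rng.standard_normal((d, k)))

def embed(z, key_bits, strength=strength):
    """Shift latent z by +/- strength along each fingerprint direction."""
    signs = 2.0 * np.asarray(key_bits) - 1.0   # map {0,1} -> {-1,+1}
    return z + Q @ (strength * signs)

def decode(z):
    """Recover key bits from the sign of z's projection onto Q."""
    return (Q.T @ z > 0).astype(int)

key = rng.integers(0, 2, size=k)       # the model owner's binary key
z = rng.standard_normal(d)             # an ordinary latent sample
z_marked = embed(z, key)               # fingerprinted latent
recovered = decode(z_marked)           # attribution step
match_rate = (recovered == key).mean()
```

With a small `strength`, the latent's own randomness flips some decoded bits (better sample quality, worse attribution accuracy); a large `strength` makes decoding reliable at the cost of perturbing the latent more, which is the accuracy-quality tradeoff in miniature.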
OpenAI's GPT-4 Is Closed Source and Shrouded in Secrecy
Stable Diffusion was trained on LAION-5B, an open-source dataset, which resulted in the public being able to see if their own images were included in the dataset. GPT-4's release is the latest volley from OpenAI in an AI arms race. Big tech companies like Google, Microsoft, and Meta are racing to create new AI technologies as fast as possible, often sidestepping or shrugging off ethical concerns along the way. Google announced on Wednesday that its language model PaLM would be launching an API for businesses and developers to use. Meanwhile, Microsoft cut an entire ethics and society team within its AI department, as part of its recent layoffs, leaving the company without a dedicated responsible AI team, while it continues to adopt OpenAI products as part of its business.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.96)
Błażej Kuźniacki on why we need transparency around AI in tax
Over the course of several years, thousands of parents were falsely accused of fraud by the Dutch tax authorities due to discriminatory algorithms. The consequences for families were devastating. But the fact that the scandal was eventually brought to light might prove the Netherlands is ahead of other countries, says Assistant Professor Błażej Kuźniacki. He urges more transparency about the use of artificial intelligence (AI) in tax-related tasks. AI is of great importance when it comes to tax: humans are not capable of going through massive amounts of data as quickly and accurately as algorithms.
- Government > Tax (0.68)
- Law (0.55)
Kabul drone attack: US advocates decry 'impunity, secrecy'
Washington, DC – The United States is sending a "dangerous and misleading message" by failing to hold any US military personnel responsible for a Kabul drone attack that killed 10 civilians, including seven children, human rights advocates have said. Calls for accountability for the deadly bombing on August 29 grew on Tuesday, a day after US media outlets first reported that US Defense Secretary Lloyd Austin had accepted a recommendation from top commanders not to punish any members of the military. Rights groups also urged President Joe Biden's administration to do more to help the survivors of the attack in the Afghan capital to relocate to the US. The bombing targeted the car of Zemari Ahmadi, who worked for US-based aid organisation Nutrition and Education International (NEI), killing him and nine of his family members. "I've been beseeching the US government to evacuate directly-impacted family members and NEI employees for months because their security situation is so dire," Steven Kwon, founder and president of NEI, said in a statement.
- Asia > Afghanistan > Kabul Province > Kabul (0.64)
- North America > United States > District of Columbia > Washington (0.26)
- Asia > Pakistan (0.05)
- (4 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Nuclear Espionage and AI Governance - LessWrong
Using both primary and secondary sources, I discuss the role of espionage in early nuclear history. Nuclear weapons are analogous to AI in many ways, so this period may hold lessons for AI governance. Nuclear spies successfully transferred information about the plutonium implosion bomb design and the enrichment of fissile material. Spies were mostly ideologically motivated. Counterintelligence was hampered by its fragmentation across multiple agencies and its inability to be choosy about talent used on the most important military research program in the largest war in human history. Nuclear espionage most likely sped up Soviet nuclear weapons development, but the Soviet Union would have been capable of developing nuclear weapons within a few years without spying. The slight gain in speed due to spying may nevertheless have been strategically significant. Acknowledgements: I am grateful to Matthew Gentzel for supervising this project and Michael Aird, Christina Barta, Daniel Filan, Aaron Gertler, Sidney Hough, Nat Kozak, Jeffery Ohl, and Waqar Zaidi for providing comments. This research was supported by a fellowship from the Stanford Existential Risks Initiative. This post is a short version of the report, x-posted from EA Forum. The full version, with additional sections, an appendix, and a bibliography, is available here. The early history of nuclear weapons is in many ways similar to hypothesized future strategic situations involving advanced artificial intelligence (Zaidi and Dafoe 2021, 4). And, in addition to the objective similarity of the situations, the situations may be made more similar by deliberate imitation of the Manhattan Project experience (see this report to the US House Armed Services Committee).
- Asia > Russia (0.50)
- Asia > South Korea (0.14)
- Asia > North Korea (0.14)
- (9 more...)
- Law (1.00)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)