Responses to Jack Clark's AI Policy Tweetstorm

#artificialintelligence

Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I've ever read. After a brief introductory tweet on August 6, Clark went on to post an additional 79 tweets in the thread. It was a real tour de force. Because I'm currently finishing up a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML). Clark is a leading figure in the field of AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So I take seriously what he has to say on AI governance matters, and I learned a lot from his tweetstorm. But I also want to push back on a few things. Specifically, several of the issues that Clark raises about AI governance are not unique to AI per se; they are broadly applicable to many other emerging technology sectors, and even some traditional ones. Below, I will refer to this as my "general critique" of Clark's tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and which really do complicate the governance of computational systems.


One year after Afghanistan, spy agencies pivot toward China

FOX News

In a recent closed-door meeting with leaders of the agency's counterterrorism center, the CIA's No. 2 official made clear that fighting al-Qaida and other extremist groups would remain a priority -- but that the agency's money and resources would be increasingly shifted to focusing on China. The CIA drone attack that killed al-Qaida's leader showed that fighting terrorism is hardly an afterthought. But it didn't change the message the agency's deputy director, David Cohen, delivered at that meeting weeks earlier: While the U.S. will continue to go after terrorists, the top priority is trying to better understand and counter Beijing.


Who is Ayman Al Zawahiri? Al Qaeda leader killed in Afghanistan

FOX News

Ayman Al Zawahiri, the terrorist killed in a U.S. drone strike in Afghanistan Monday, was a top deputy to al Qaeda leader Usama bin Laden before taking the helm of the organization after his predecessor's death in 2011. A drone strike on a Kabul home took him out over the weekend, Fox News reported earlier. Taliban spokesman Zabihullah Mujahid confirmed and condemned the attack on Twitter, calling it "a clear violation of international principles," according to a translation of the thread. However, the 2020 Doha Agreement, which preceded the Biden administration's highly criticized withdrawal of U.S. troops from Afghanistan last year, called for the Taliban to combat terrorism within the country. Al Zawahiri was also a doctor and founder of the Egyptian Islamic Jihad terror group, which later merged with al-Qaeda, according to authorities.


Synthetic Media: How deepfakes could soon change our world

#artificialintelligence

You may never have heard the term "synthetic media"--more commonly known as "deepfakes"--but our military, law enforcement and intelligence agencies certainly have. They are hyper-realistic video and audio recordings that use artificial intelligence and "deep" learning to create "fake" content or "deepfakes." The U.S. government has grown increasingly concerned about their potential to be used to spread disinformation and commit crimes. That's because the creators of deepfakes have the power to make people say or do anything, at least on our screens. As we first reported in October, most Americans have no idea how far the technology has come in just the last five years or the danger, disruption and opportunities that come with it.


Program Manager- AI/ML (telework options)

#artificialintelligence

Riverside Research is an independent National Security Nonprofit dedicated to research and development in the national interest. With revenues of $125M and a staff of more than 630, Riverside Research provides high-end technical services, research and development, and prototype solutions to some of the country's most challenging technical problems. Riverside Research also supports advanced technical education and collaborates widely with university researchers. The company was formed from a respected research laboratory at Columbia University and has a current focus on technical areas including radar systems, optics and photonics, electromagnetics, plasma physics, GEOINT, MASINT, systems engineering, and modeling & simulation. Riverside Research's open innovation R&D model encourages both internal and external collaboration to accelerate innovation, advance science, and expand market opportunities.


Chilling moment robot dog with a submachine gun strapped to its back opens fire

Daily Mail - Science & tech

A chilling video of a robot dog opening fire with a submachine gun strapped to its back, reminiscent of Black Mirror and uploaded by the Russian founder of a hoverbike company, is a preview of future warfare. Alexander Atamanov, the founder of a Russian hoverbike company, uploaded the viral video, which shows a Unitree Yushu dogbot that retails online for about $3,000 shooting at snow-covered hills; it appears he was simply creating something to play around with. At a time when autonomous drones are being used to target terrorists and the US Army has its own sniper rifle-armed robot dog, the video is a terrifying reminder that this type of weapon is already a reality. The robot dog, called a 'technology dog' by its manufacturer, appears to be carrying a Russian gun known as a PP-19 Vityaz, a type of submachine gun based on the AK-47 design, according to Vice. The robot also has strips of Velcro on its sides, and a Russian flag is visible on its left flank.


The Coming AI Hackers

#artificialintelligence

Artificial intelligence--AI--is an information technology. And it is already deeply embedded into our social fabric, both in ways we understand and in ways we don't. It will hack our society to a degree and effect unlike anything that's come before. I mean this in two very different ways. One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage. Okay, maybe it's a bit of hyperbole, but none of this requires far-future science-fiction technology. I'm not postulating any "singularity," where the AI-learning feedback loop becomes so fast that it outstrips human understanding. My scenarios don't require evil intent on the part of anyone. We don't need malicious AI systems like Skynet (Terminator) or the Agents (Matrix). Some of the hacks I will discuss don't even require major research breakthroughs. They'll improve as AI techniques get more sophisticated, but we can see hints of them in operation today. This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving. In this essay, I will talk about the implications of AI hackers. First, I will generalize "hacking" to include economic, social, and political systems--and also our brains. Next, I will describe how AI systems will be used to hack us. Then, I will explain how AIs will hack the economic, social, and political systems that comprise society. Finally, I will discuss the implications of a world of AI hackers, and point towards possible defenses. It's not all as bleak as it might sound. Caper movies are filled with hacks. Hacks are clever, but not the same as innovations. Systems tend to be optimized for specific outcomes. Hacking is the pursuit of another outcome, often at the expense of the original optimization. Systems tend to be rigid. Systems limit what we can do and, invariably, some of us want to do something else. So we hack. Not all of us, but enough of us. Hacking is normally thought of as something you can do to computers, but hacks can be perpetrated on any system of rules--including the tax code. The tax code isn't computer software, but you can still think of it as "code" in the computer sense of the term. It's a series of algorithms that takes an input--financial information for the year--and produces an output: the amount of tax owed. It's deterministic, or at least it's supposed to be.
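
To make the essay's analogy concrete, here is a minimal sketch of the tax code as a deterministic algorithm: a year's financial information goes in, the amount of tax owed comes out. Everything below (the brackets, the rates, the function name) is hypothetical and invented for illustration; in this framing, a "hack" is an input crafted so the rules produce an outcome their drafters never intended.

    # A toy, deterministic "tax code": the same inputs always yield the same
    # output. All brackets and rates are made up for illustration.
    def tax_owed(income: float, deductions: float) -> float:
        taxable = max(income - deductions, 0.0)
        brackets = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]
        owed, lower = 0.0, 0.0
        for upper, rate in brackets:
            if taxable > lower:
                owed += (min(taxable, upper) - lower) * rate
                lower = upper
        return owed

    # Following the rules as written: $60,000 income, $5,000 in deductions.
    print(tax_owed(60_000, 5_000))   # 11500.0
    # A "hack" follows the same rules: find a deduction the drafters never
    # anticipated, and the algorithm dutifully computes a much smaller bill.
    print(tax_owed(60_000, 55_000))  # 500.0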


Watching the Watchers: Democratizing AI To Audit The State

#artificialintelligence

Socially disadvantaged communities have often raised legitimate concerns about being over-policed and under-protected. Now, the rise of AI algorithms driving a myriad of "predictive policing" attempts has threatened to exacerbate the problem. The use of automated algorithms in policing does not do away with inequity; biases might be introduced through how such machines are trained. The black-box nature of state-of-the-art AI algorithms that do not consider the underlying social mechanics of crime fosters little confidence that such schemes can ultimately thwart crime in any meaningful manner. To make things worse, AI algorithms are demonstrably an effective force-multiplier for the state, manifesting an ever more intrusive control and surveillance apparatus to monitor all aspects of our lives.
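
The training-bias point can be made concrete with a toy simulation (a hypothetical sketch written for this summary, not any real predictive-policing system). Two districts have identical true crime rates, but one starts with more patrols; because crime is only recorded where officers are present, and the "predictor" sends extra patrols wherever recorded crime is highest, the initial imbalance feeds on itself.

    import random

    # Hypothetical feedback loop: equal true crime rates, unequal patrols.
    random.seed(0)
    true_rate = [0.3, 0.3]  # identical underlying crime rates
    patrols = [5, 1]        # district 0 starts out over-policed
    recorded = [0, 0]

    for day in range(1000):
        for d in (0, 1):
            # Crime enters the data only if a patrol is there to observe it.
            if random.random() < true_rate[d] * patrols[d] / 10:
                recorded[d] += 1
        # "Prediction": send tomorrow's extra patrol to the hotter district.
        hot = 0 if recorded[0] >= recorded[1] else 1
        patrols[hot] = min(patrols[hot] + 1, 10)

    print(recorded)  # district 0 dominates, despite equal true rates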


The Download: a military AI boom, and China's industrial espionage

MIT Technology Review

Exactly two weeks after Russia invaded Ukraine in February, Alexander Karp, the CEO of data analytics company Palantir, made his pitch to European leaders. With war on their doorstep, Europeans ought to modernize their arsenals with Silicon Valley's help, he argued in an open letter. Militaries are responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing "priority" technologies, while the UK has launched a new AI strategy specifically for defense, and Germany has earmarked just under half a billion dollars for research and AI. The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield. Those with the most to gain are startups such as Palantir, which are hoping to cash in as militaries race to update their arsenals with the latest technologies.


Artificial intelligence

#artificialintelligence

Deep learning[133] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces.[134] Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, image classification[135] and others. Deep learning often uses convolutional neural networks for many or all of its layers.
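
As a rough illustration of that layered structure, the sketch below stacks two convolutional layers in PyTorch (an assumed library choice; the layer sizes and the digit-classification framing are illustrative, not taken from the text). The first filters can only respond to local, low-level patterns such as edges; the second layer sees pooled combinations of those responses; and a final linear layer maps the result to high-level class scores.

    import torch
    import torch.nn as nn

    # Two stacked convolutional stages followed by a classifier head.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low-level: edges, blobs
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1),  # mid-level: strokes, corners
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 7 * 7, 10),                   # high-level: 10 class scores
    )

    x = torch.randn(1, 1, 28, 28)  # one grayscale 28x28 image
    print(model(x).shape)          # torch.Size([1, 10])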