AI and Cyber Security Battlefield

#artificialintelligence

Artificial intelligence (AI) is truly a revolutionary feat of computer science, set to become a core component of all modern software over the coming years and decades. This presents a threat but also an opportunity. AI will be deployed to augment both defensive and offensive cyber operations. Additionally, new means of cyber attack will be invented to take advantage of the particular weaknesses of AI technology. Finally, the importance of data will be amplified by AI's appetite for large amounts of training data, redefining how we must think about data protection. Prudent governance at the global level will be essential to ensure that this era-defining technology will bring about broadly shared safety and prosperity.


'Sentient' Artificial Intelligence: Have We Reached Peak AI Hype? - AI Summary

#artificialintelligence

Lemoine, who worked for Google's Responsible AI organization until he was placed on paid leave last Monday, and who "became ordained as a mystic Christian priest, and served in the Army before studying the occult," had begun testing LaMDA to see if it used discriminatory or hate speech. The Post article continued: "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said Emily M. Bender, a linguistics professor at the University of Washington. Bender shared more thoughts on Twitter, criticizing organizations such as OpenAI for the impact of their claims that LLMs were making progress towards artificial general intelligence (AGI). Just last week, The Economist published a piece by cognitive scientist Douglas Hofstadter, who coined the term "Eliza Effect" in 1995, in which he said that while the "achievements of today's artificial neural networks are astonishing … I am at present very skeptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat." "I think corporations are going to be woefully on their back feet reacting, because they just don't get it – they have a false sense of security," said AI attorney Bradford Newman, partner at Baker McKenzie, in a VentureBeat story last week.


'Sentient' artificial intelligence: Have we reached peak AI hype?

#artificialintelligence

Thousands of artificial intelligence experts and machine learning researchers probably thought they were going to have a restful weekend. Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google's conversational AI for generating chatbots based on large language models (LLMs), was sentient. Lemoine, who worked for Google's Responsible AI organization until he was placed on paid leave last Monday, and who "became ordained as a mystic Christian priest, and served in the Army before studying the occult," had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began "teaching" LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts and explained in a Medium response to the Post story: "It's a good article for what it is but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person."


GCN - AI Summary

#artificialintelligence

DOE investing in machine learning tools for data analysis: The Department of Energy is dedicating $29 million to develop new tools and advanced algorithms that will benefit multiple scientific fields and inform cutting-edge solutions for a variety of complex problems.
How an advanced architecture can dramatically mitigate massive data breaches: A labeled gateway running on a trustworthy operating system enforces mandatory access control policies to protect the entire system from modification and prevents unauthorized data flows, such as massive data breaches.
NGA taps 4 states for cybersecurity policy academy: Kansas, Missouri, Montana and Washington will participate in the National Governors Association's 2021 Policy Academy, where they will refine and share best practices in cybersecurity governance, workforce development, critical infrastructure security, and local engagement and partnership.
Dems push modular UI tech for state modernizations: Congressional Democrats are asking the Labor Department to develop and maintain a set of modular functions states can use to modernize their unemployment compensation programs.
Data, AI to power medical support on the battlefield: The Army wants to give warfighters access to an artificial intelligence-enhanced medical database they can use to care for fellow service members incapacitated by injury or disease in the field.


Accelerating The Pace Of Machine Learning - AI Summary

#artificialintelligence

But some of them make their mark: testing, hardening, and ultimately reshaping the landscape according to inherent patterns and fluctuations that emerge over time. In the paper "Distributed Learning With Sparsified Gradient Differences," published in a special ML-focused issue of the IEEE Journal of Selected Topics in Signal Processing, Blum and collaborators propose the "Gradient Descent method with Sparsification and Error Correction," or GD-SEC, to improve the communications efficiency of machine learning conducted in a "worker-server" wireless architecture. "Various distributed optimization algorithms have been developed to solve this problem," he continues, "and one primary method is to employ classical GD in a worker-server architecture." "Current methods create a situation where each worker has expensive computational cost; GD-SEC is relatively cheap where only one GD step is needed at each round," says Blum. Professor Blum's collaborators on this project include his former student Yicheng Chen '19G '21PhD, now a software engineer with LinkedIn; Martin Takác, an associate professor at the Mohamed bin Zayed University of Artificial Intelligence; and Brian M. Sadler, a Life Fellow of the IEEE, U.S. Army Senior Scientist for Intelligent Systems, and Fellow of the Army Research Laboratory.
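The general pattern the summary describes lends itself to a short illustration: each worker transmits only the largest entries of the change in its gradient and carries what it dropped forward as an error-correction term, so the server can still take ordinary gradient descent steps on an approximate average. The Python sketch below is a minimal, self-contained version of that idea on a synthetic least-squares problem; the model, data, thresholds, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of sparsified gradient differences with error correction.
# Workers send only the largest entries of the *change* in their gradients;
# what is not sent accumulates in a residual and is sent once it grows large.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_samples, dim = 4, 200, 20
lr, rounds, keep_fraction = 0.1, 100, 0.2  # illustrative hyperparameters

# Synthetic least-squares problem split across workers.
X = rng.normal(size=(n_workers, n_samples, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.01 * rng.normal(size=(n_workers, n_samples))

w = np.zeros(dim)                       # model held at the server
last_sent = np.zeros((n_workers, dim))  # last gradient each worker transmitted
residual = np.zeros((n_workers, dim))   # per-worker error-correction memory

def local_gradient(k, w):
    """Full least-squares gradient on worker k's data shard."""
    return X[k].T @ (X[k] @ w - y[k]) / n_samples

for t in range(rounds):
    recovered = np.zeros((n_workers, dim))
    for k in range(n_workers):
        g = local_gradient(k, w)
        # Difference from what the server already knows, plus leftover error.
        diff = g - last_sent[k] + residual[k]
        # Sparsify: keep only the largest-magnitude entries.
        n_keep = max(1, int(keep_fraction * dim))
        top = np.argsort(np.abs(diff))[-n_keep:]
        sparse_diff = np.zeros(dim)
        sparse_diff[top] = diff[top]
        residual[k] = diff - sparse_diff    # remember what was dropped
        last_sent[k] = last_sent[k] + sparse_diff
        recovered[k] = last_sent[k]         # server's reconstruction of g
    # Server averages the reconstructed gradients and takes one GD step.
    w -= lr * recovered.mean(axis=0)

print("distance to w_true:", np.linalg.norm(w - w_true))
```

The residual term is what keeps the dropped coordinates from being lost: they accumulate round over round and are eventually transmitted once they grow large enough to pass the sparsification cut.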


How Facial Recognition Tech Made Its Way to the Battlefield in Ukraine

Slate

When the Russian warship Moskva sank in the Black Sea south of Ukraine, some 500 crew members were reportedly on board. The Russian state held a big ceremony for the surviving sailors and officers who were on the ship. But, considering Russia's history of being not exactly truthful when it comes to events like this, many people wondered whether these were actual sailors from the Moskva. Aric Toler, director of research and training for Bellingcat, the group that specializes in open-source and social media investigations, used facial recognition software to identify the men in the video through images on Russian social media, and found that most of the men were indeed sailors from Sevastopol, the town the ship was operating out of.
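Bellingcat has not published the exact tooling behind that identification, but the basic matching step it describes, comparing a face cropped from a video frame against a gallery of social media photos, can be sketched in a few lines. The example below uses the open-source face_recognition library purely as an illustration; the file names, gallery, and threshold are hypothetical, and this is not Bellingcat's pipeline.

```python
# Illustrative sketch of face matching: compare a face from a video frame
# against a small gallery of candidate photos. File names and the 0.6
# threshold are hypothetical stand-ins.
import face_recognition

# Face taken from a frame of the ceremony video (hypothetical file).
query_image = face_recognition.load_image_file("ceremony_frame.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

# Candidate photos gathered from public social media profiles (hypothetical files).
gallery_files = ["profile_photo_1.jpg", "profile_photo_2.jpg", "profile_photo_3.jpg"]
gallery = []
for path in gallery_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                      # skip photos where no face was detected
        gallery.append((path, encodings[0]))

# Lower distance means more similar; 0.6 is the library's conventional cutoff.
for path, encoding in gallery:
    distance = face_recognition.face_distance([encoding], query_encoding)[0]
    print(f"{path}: distance={distance:.3f}, match={distance < 0.6}")
```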


7 Key Military Applications of Machine Learning

#artificialintelligence

Machine learning is now a critical component in modern warfare systems. Let's explore 7 key military applications of artificial intelligence today. Machine learning has become a critical part of modern warfare, and a major point of interest for me, both as an Army veteran and a data scientist. Compared with conventional systems, military systems equipped with ML/DL are capable of handling tremendously larger volumes of data more efficiently. Additionally, AI improves the self-control, self-regulation, and self-actuation of combat systems thanks to its inherent computing and decision-making capabilities, a critical consideration given the nature of combat.


As Russia Plots Its Next Move, an AI Listens to the Chatter

WIRED

A radio transmission between several Russian soldiers in Ukraine in early March, captured from an unencrypted channel, reveals panicked and confused comrades retreating after coming under artillery fire. "Vostok, I am Sneg 02. On the highway we have to turn left, fuck," one of the soldiers says in Russian, using code names meaning "East" and "Snow 02." A second soldier replies that there is no need to move further. Later, a third soldier tries to make contact with another codenamed "South 95": "Yug 95, do you have contact with a senior?" The third Russian soldier continues, becoming increasingly agitated: "Get on the radio."


Machine learning fine-tunes graphene synthesis

AIHub

Rice University chemists are employing machine learning to fine-tune their flash Joule heating process for making graphene, streamlining the synthesis of graphene from waste. A flash signifies the creation of graphene from waste. The flash Joule process has expanded beyond making graphene from various carbon sources to extracting other materials, such as metals, from urban waste. The technique is the same in every case: blasting a jolt of high energy through the source material to eliminate all but the desired product.
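The summary does not say which machine learning method the Rice group uses, so the sketch below is only a generic illustration of ML-guided process tuning: fit a regression model to past flash experiments and their yields, then rank new candidate settings by predicted yield. The features, synthetic data, and choice of a random forest are assumptions for illustration, not details from the study.

```python
# Hypothetical sketch of ML-guided parameter tuning for a flash process:
# learn yield as a function of settings, then pick promising settings to try.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Pretend log of past experiments: [voltage (V), flash duration (ms), sample mass (mg)].
past_settings = rng.uniform([60, 10, 50], [120, 500, 200], size=(80, 3))
# Synthetic "graphene yield" with an arbitrary sweet spot, standing in for lab data.
past_yield = np.exp(-((past_settings[:, 0] - 100) / 20) ** 2
                    - ((past_settings[:, 1] - 150) / 120) ** 2) + 0.05 * rng.normal(size=80)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(past_settings, past_yield)

# Score a grid of candidate settings and pick the most promising one to try next.
candidates = rng.uniform([60, 10, 50], [120, 500, 200], size=(1000, 3))
best = candidates[np.argmax(model.predict(candidates))]
print("next flash settings to try (V, ms, mg):", np.round(best, 1))
```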


Ethics, Rules of Engagement, and AI: Neural Narrative Mapping Using Large Transformer Language Models

arXiv.org Artificial Intelligence

The problem of determining if a military unit has correctly understood an order and is properly executing on it is one that has bedeviled military planners throughout history. The advent of advanced language models such as OpenAI's GPT-series offers new possibilities for addressing this problem. This paper presents a mechanism to harness the narrative output of large language models and produce diagrams or "maps" of the relationships that are latent in the weights of such models as the GPT-3. The resulting "Neural Narrative Maps" (NNMs) are intended to provide insight into the organization of information, opinion, and belief in the model, which in turn provides a means to understand intent and response in the context of physical distance. This paper discusses the problem of mapping information spaces in general, and then presents a concrete implementation of this concept in the context of OpenAI's GPT-3 language model for determining if a subordinate is following a commander's intent in a high-risk situation. The subordinate's locations within the NNM allow a novel capability to evaluate the intent of the subordinate with respect to the commander. We show that it is possible not only to determine if they are nearby in narrative space, but also how they are oriented, and what "trajectory" they are on. Our results show that our method is able to produce high-quality maps, and demonstrate new ways of evaluating intent more generally.
In the 1979 motion picture Apocalypse Now, Captain Willard (played by Martin Sheen) is sent on a mission to assassinate Colonel Kurtz (played by Marlon Brando), a highly decorated officer who, in the words of the general authorizing the mission, has gone from "one of the most outstanding officers this country has ever produced" to someone "out there operating without any decent restraint, totally beyond the pale of any acceptable human conduct." The movie explores the paradoxes in war, where some illegal acts are embraced by the command structure, some tolerated, and some are to be terminated, "with extreme prejudice." Willard has to navigate these conflicts as he moves towards Kurtz's compound deep in Cambodia. Apocalypse Now provides an example of the difficulty that any intent-aware system must face in a military context [1]. Not only does the system need to determine if an order is being followed, it should also determine if the order itself is valid, so that the warriors implementing the order are not placed in ethical dilemmas. This is the goal that we attempt to address in this paper, with the concept of Neural Narrative Mapping (NNM). By placing narrative elements at coordinates in a virtual space, we can determine sophisticated relationships between concepts that go well beyond textual comparison.
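The geometric intuition behind that last point, distance, orientation, and trajectory in a narrative space, can be sketched compactly. In the toy example below, TF-IDF vectors stand in for the GPT-3-derived embeddings the paper actually builds its maps from, and the intent statement and reports are invented; only the distance and heading bookkeeping reflects the idea described in the abstract.

```python
# Toy "narrative map": embed a commander's intent and a sequence of subordinate
# reports, project them to 2D, and check how far and in which direction the
# subordinate is moving relative to the intent. TfidfVectorizer is a stand-in
# for the paper's language-model embeddings; all texts are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

commander_intent = "Hold the bridge and do not engage unless fired upon."
subordinate_reports = [
    "Holding position at the bridge, no contact with the enemy.",
    "Enemy scouts spotted, we are holding fire and observing.",
    "We are advancing past the bridge to pursue the scouts.",
]

texts = [commander_intent] + subordinate_reports
vectors = TfidfVectorizer().fit_transform(texts).toarray()  # stand-in embeddings
coords = PCA(n_components=2).fit_transform(vectors)          # 2D narrative map

intent_point, report_points = coords[0], coords[1:]
for i, point in enumerate(report_points):
    distance = np.linalg.norm(point - intent_point)
    print(f"report {i + 1}: distance from intent = {distance:.3f}")

# "Trajectory": is the latest report moving toward or away from the intent?
step = report_points[-1] - report_points[-2]
toward_intent = intent_point - report_points[-2]
heading = np.dot(step, toward_intent)  # positive means closing, negative means drifting
print("drifting away from intent" if heading < 0 else "moving toward intent")
```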