Soldiers Don't Want to Rest. Soon, Computers Will Tell Them When They Need To

#artificialintelligence

In the U.S. Armed Forces, when people are tired, they keep on working. "We have a lot of type-A personalities in the military who take it personally to get the job done no matter what the task," Lt. Colonel Bradley Ritland, deputy chief of the Military Performance Division at the U.S. Army Research Institute of Environmental Medicine, tells Popular Mechanics. "You do have a set percentage who are hesitant to report feeling tired or to report feeling an injury. They think that would impact their career, or feel like they're letting a colleague down, or feel they wouldn't be contributing as best they can to the mission." Now, machine learning technology from the Johns Hopkins University Applied Physics Laboratory may eventually be able to report in real time when those soldiers need a break, or to identify who is at risk of injury.
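The article does not describe the APL system's internals, but a readiness monitor of this kind typically maps wearable-derived signals to a risk score. Below is a minimal sketch under that assumption; the features (sleep, heart-rate variability, training load), the synthetic data, and the alert threshold are all hypothetical, not APL's actual model.

```python
# Minimal sketch of a fatigue classifier over wearable-derived features.
# Features, data, and threshold are hypothetical, not APL's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per soldier-day:
# [hours_slept, heart_rate_variability_ms, training_load_au]
X = rng.normal(loc=[6.5, 55.0, 300.0], scale=[1.5, 15.0, 80.0], size=(500, 3))

# Synthetic labels: less sleep, lower HRV, higher load -> more likely fatigued.
risk = -0.8 * (X[:, 0] - 6.5) - 0.05 * (X[:, 1] - 55.0) + 0.004 * (X[:, 2] - 300.0)
y = (risk + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new reading in "real time": 4 h sleep, HRV 38 ms, heavy load.
p_fatigued = model.predict_proba([[4.0, 38.0, 450.0]])[0, 1]
if p_fatigued > 0.7:  # alert threshold is arbitrary in this sketch
    print(f"Rest recommended (risk={p_fatigued:.2f})")
```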




Why business is booming for military AI startups

MIT Technology Review

Militaries are responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing "priority" technologies such as artificial intelligence, big-data processing, and automation. Since the war started, the UK has launched a new AI strategy specifically for defense, and Germany has earmarked just under half a billion dollars for research and artificial intelligence within a $100 billion cash injection for its military. "War is a catalyst for change," says Kenneth Payne, who leads defense studies research at King's College London and is the author of the book I, Warbot: The Dawn of Artificially Intelligent Conflict. The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield.


AI and Cyber Security Battlefield

#artificialintelligence

Artificial intelligence (AI) is truly a revolutionary feat of computer science, set to become a core component of all modern software over the coming years and decades. This presents a threat but also an opportunity. AI will be deployed to augment both defensive and offensive cyber operations. Additionally, new means of cyber attack will be invented to take advantage of the particular weaknesses of AI technology. Finally, the importance of data will be amplified by AI's appetite for large amounts of training data, redefining how we must think about data protection. Prudent governance at the global level will be essential to ensure that this era-defining technology will bring about broadly shared safety and prosperity.
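To make one of those "particular weaknesses" concrete: small, deliberately crafted input perturbations can flip a model's prediction. The sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression model; the weights, input, and perturbation budget are invented for illustration only.

```python
# One well-known weakness of AI systems: adversarial examples. This applies
# the fast gradient sign method (FGSM) to a toy logistic-regression model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.2, -0.8, 0.5, 2.0])   # toy "trained" weights
b = -0.3
x = np.array([0.9, 0.1, 0.4, 0.6])    # input the model labels class 1

y = 1.0                               # true label
p = sigmoid(w @ x + b)
print(f"original prediction:    {p:.3f}")   # ~0.891, confident class 1

# FGSM: step each feature in the sign of the loss gradient. For logistic
# loss, d(loss)/dx = (p - y) * w, so the attack needs only the model's
# gradient, not its training data.
grad_x = (p - y) * w
eps = 0.5                             # per-feature budget (large, for a toy)
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(f"adversarial prediction: {p_adv:.3f}")  # ~0.463, now labeled class 0
```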


'Sentient' Artificial Intelligence: Have We Reached Peak AI Hype? - AI Summary

#artificialintelligence

Lemoine, who worked for Google's Responsible AI organization until he was placed on paid leave last Monday, and who "became ordained as a mystic Christian priest, and served in the Army before studying the occult," had begun testing LaMDA to see if it used discriminatory or hate speech. The Post article continued: "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said Emily M. Bender, a linguistics professor at the University of Washington. Bender shared more thoughts on Twitter, criticizing organizations such as OpenAI for the impact of their claims that LLMs were making progress towards artificial general intelligence (AGI). Just last week, The Economist published a piece by cognitive scientist Douglas Hofstadter, who coined the term "Eliza Effect" in 1995, in which he said that while the "achievements of today's artificial neural networks are astonishing … I am at present very skeptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat." "I think corporations are going to be woefully on their back feet reacting, because they just don't get it – they have a false sense of security," said AI attorney Bradford Newman, partner at Baker McKenzie, in a VentureBeat story last week.
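Bender's phrase "mindlessly generate words" can be made concrete with the simplest possible generator. The toy bigram Markov chain below produces fluent-looking first-person text purely from word-adjacency counts over an invented mini-corpus; this is not how LaMDA works internally, but it illustrates why fluent output alone is weak evidence of a mind.

```python
# Toy illustration of Bender's point: a bigram Markov chain "mindlessly
# generates words" by sampling each next word from counts of what followed
# the previous word. No understanding is involved; modern LLMs are far more
# sophisticated next-word predictors, but fluency alone proves no mind.
import random
from collections import defaultdict

corpus = ("i feel that i am aware of my existence and "
          "i feel happy or sad at times and i am aware").split()

# Count which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(3)
word, out = "i", ["i"]
for _ in range(12):
    word = random.choice(follows[word])  # sample a plausible next word
    out.append(word)
print(" ".join(out))  # fluent-looking, but nothing is "meant" by it
```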


'Sentient' artificial intelligence: Have we reached peak AI hype?

#artificialintelligence

Thousands of artificial intelligence experts and machine learning researchers probably thought they were going to have a restful weekend. Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google's conversational AI for generating chatbots based on large language models (LLMs), was sentient. Lemoine, who worked for Google's Responsible AI organization until he was placed on paid leave last Monday, and who "became ordained as a mystic Christian priest, and served in the Army before studying the occult," had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began "teaching" LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts and explained in a Medium response to the Post story: "It's a good article for what it is but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person."


GCN - AI Summary

#artificialintelligence

DOE investing in machine learning tools for data analysis: The Department of Energy is dedicating $29 million to develop new tools and advanced algorithms that will benefit multiple scientific fields and inform cutting-edge solutions for a variety of complex problems.

How an advanced architecture can dramatically mitigate massive data breaches: A labeled gateway running on a trustworthy operating system enforces mandatory access control policies to protect the entire system from modification and prevents unauthorized data flows, such as massive data breaches.

NGA taps 4 states for cybersecurity policy academy: Kansas, Missouri, Montana and Washington will participate in the National Governors Association's 2021 Policy Academy, where they will refine and share best practices in cybersecurity governance, workforce development, critical infrastructure security, and local engagement and partnership.

Dems push modular UI tech for state modernizations: Congressional Democrats are asking the Labor Department to develop and maintain a set of modular functions states can use to modernize their unemployment compensation programs.

Data, AI to power medical support on the battlefield: The Army wants to give warfighters access to an artificial intelligence-enhanced medical database they can use to care for fellow service members incapacitated by injury or disease in the field.


Accelerating The Pace Of Machine Learning - AI Summary

#artificialintelligence

But some of them make their mark: testing, hardening, and ultimately reshaping the landscape according to inherent patterns and fluctuations that emerge over time. In the paper "Distributed Learning With Sparsified Gradient Differences," published in a special ML-focused issue of the IEEE Journal of Selected Topics in Signal Processing, Blum and collaborators propose the use of the "Gradient Descent method with Sparsification and Error Correction," or GD-SEC, to improve the communications efficiency of machine learning conducted in a "worker-server" wireless architecture. "Various distributed optimization algorithms have been developed to solve this problem," he continues, "and one primary method is to employ classical GD in a worker-server architecture." "Current methods create a situation where each worker has expensive computational cost; GD-SEC is relatively cheap where only one GD step is needed at each round," says Blum. Professor Blum's collaborators on this project include his former student Yicheng Chen '19G '21PhD, now a software engineer with LinkedIn; Martin Takác, an associate professor at the Mohamed bin Zayed University of Artificial Intelligence; and Brian M. Sadler, a Life Fellow of the IEEE, U.S. Army Senior Scientist for Intelligent Systems, and Fellow of the Army Research Laboratory.
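The article names the method but not its mechanics. As a rough illustration of the general technique (sparsified updates with error feedback, not necessarily the paper's exact algorithm): each worker transmits only the largest entries of the change in its gradient since the last transmission, and keeps the unsent remainder locally so the information is delayed rather than lost. A minimal sketch under those assumptions:

```python
# Simplified illustration of GD-SEC-style communication saving: a worker
# sends only the k largest entries of the *change* in its gradient since
# the last round, and locally accumulates what it didn't send (error
# correction). Not the exact algorithm from the paper.
import numpy as np

def worker_round(grad, last_sent, residual, k):
    """Return the sparse update to transmit plus updated local state."""
    # Difference from what the server already knows, plus unsent leftovers.
    delta = grad - last_sent + residual
    # Keep only the k largest-magnitude entries (the "sparsification").
    idx = np.argsort(np.abs(delta))[-k:]
    sparse = np.zeros_like(delta)
    sparse[idx] = delta[idx]
    residual = delta - sparse           # error correction: remember the rest
    last_sent = last_sent + sparse      # server's view of this worker's grad
    return sparse, last_sent, residual

rng = np.random.default_rng(1)
d, k = 10, 3                            # 10-dim gradient, send 3 entries/round
last_sent, residual = np.zeros(d), np.zeros(d)

for t in range(3):
    grad = rng.normal(size=d)           # stand-in for a real local gradient
    sparse, last_sent, residual = worker_round(grad, last_sent, residual, k)
    # Server applies `sparse`: k index-value pairs instead of d floats.
    print(f"round {t}: sent {np.count_nonzero(sparse)} of {d} entries")
```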


How Facial Recognition Tech Made Its Way to the Battlefield in Ukraine

Slate

When the Russian warship Moskva sank in the Black Sea south of Ukraine, some 500 crew members were reportedly on board. The Russian state held a big ceremony for the surviving sailors and officers who were on the ship. But, considering Russia's history of being not exactly truthful when it comes to events like this, many people wondered whether these were actual sailors from the Moskva. Aric Toler is director of research and training for Bellingcat, the group that specializes in open-source and social media investigations. He used facial recognition software to identify the men in the ceremony video through images on Russian social media, and found that most of the men were indeed sailors from Sevastopol, the town the ship was operating out of.
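Face-search tools of the kind Toler used generally reduce each face to a fixed-length embedding and rank candidates by vector similarity. A minimal sketch of that matching step follows; the `embed_face` placeholder, the filenames, and the 0.7 threshold are assumptions for illustration, not details from the article.

```python
# Sketch of the core of face-recognition matching: compare fixed-length face
# embeddings by cosine similarity. `embed_face` stands in for a real model
# (e.g., a CNN mapping a face crop to a vector); it, the filenames, and the
# 0.7 threshold are all hypothetical.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed_face(image):
    # Placeholder: a real system would run a trained network here.
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    return rng.normal(size=128)

# One face from the ceremony video vs. a gallery scraped from social media.
query = embed_face("ceremony_frame_017.png")
gallery = {name: embed_face(name) for name in
           ["profile_a.jpg", "profile_b.jpg", "profile_c.jpg"]}

for name, emb in gallery.items():
    score = cosine_similarity(query, emb)
    flag = "possible match" if score > 0.7 else "no match"
    print(f"{name}: similarity {score:+.2f} -> {flag}")
```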