To determine how close humans and robots can become, we need a clear understanding of what, exactly, friendship is, and that isn't easy to define. Friendships are made, maintained, and repaired all the time, and most of us have friends and believe, deep in our hearts, that the one ship that won't sink is friendship. Yet although friendship plays such a profound role in our lives that research links it to both emotional and physical well-being, people disagree about what makes it special and how far its bonds can stretch. Can we be friends with people who do things we find unconscionable?
In brief: The US government should avoid hastily banning AI-powered autonomous weapons and instead step up its efforts to develop such systems to keep pace with foreign adversaries, according to the National Security Commission on AI. The independent group, headed by ex-Google CEO Eric Schmidt and funded by the Department of Defense, has published its final report advising the White House on how best to advance AI and machine learning to stay ahead of America's competitors. Stretching over 750 pages, the report covers a lot of ground, including retaining talent, the future of warfare, protecting IP, and US semiconductor supply chains. The most controversial point raised by Schmidt and the other advisors was that America should not turn its back on autonomous AI weapons; the US government, they argued, should actually be building its own systems to deter other countries from wreaking havoc.
This blog is a continuation of the Building AI Leadership Brain Trust Blog Series, which targets board directors and CEOs to accelerate their duty of care: developing stronger skills and competencies in AI to ensure their AI programs achieve sustained results. In this blog series, I have identified forty skill domains in an AI Leadership Brain Trust Framework to guide board directors and CEOs as they develop and accelerate their investments in successful AI initiatives. You can see the full roster of the forty leadership Brain Trust skills in my first blog. Each blog in this series either explores a group of skills or takes a deeper dive into one skill area. I have come to the conclusion that, to unlock the last mile of AI value realization, board directors and CEOs must accelerate building a unified brain trust (a unified set of leadership skills hardwired with relevant digital and AI skills) to modernize their organizations more rapidly.
The US federal government should do more to fund research and facilitate collaboration that helps cities tap the benefits of artificial intelligence (AI) and other emerging technologies, says a new report from the non-profit think tank the Information Technology and Innovation Foundation (ITIF). "Smart cities offer an important opportunity to address both infrastructure needs and strained state and local budgets at the same time," the report says, noting the large revenue shortfalls many cities face due to the pandemic. Cities can use AI in transport, the electrical grid, buildings, city operations and more. A 2020 report from Microsoft and PwC, meanwhile, found that AI-enabled decarbonisation technologies could reduce the carbon intensity of the global economy. ITIF's research also outlines several key challenges to deployment.
Researchers have developed a method based on Artificial Intelligence (AI) that rapidly identifies currently available medications that may treat Alzheimer's disease. The method could represent a rapid and inexpensive way to repurpose existing therapies into new treatments for this progressive, debilitating neurodegenerative condition. Importantly, it could also help reveal new, unexplored targets for therapy by pointing to mechanisms of drug action. "Repurposing FDA-approved drugs for Alzheimer's disease is an attractive idea that can help accelerate the arrival of effective treatment -- but unfortunately, even for previously approved drugs, clinical trials require substantial resources, making it impossible to evaluate every drug in patients with Alzheimer's disease," said researcher Artem Sokolov from Harvard Medical School. "We therefore built a framework for prioritising drugs, helping clinical studies to focus on the most promising ones," Sokolov added.
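At its core, the framework the researchers describe is a prioritisation step: score candidate drugs by predicted relevance to the disease, then rank them so clinical studies can focus on the top of the list. A minimal sketch of that ranking step is below; the drug names and scores are entirely hypothetical, and the actual Harvard framework uses far richer evidence than a single score.

```python
# Hypothetical scores: higher = stronger predicted relevance to
# Alzheimer's disease (the real framework derives such evidence from
# drug-action mechanisms; these numbers are made up for illustration).
def prioritise(candidates):
    """Rank drug candidates by predicted score, most promising first."""
    return sorted(candidates, key=lambda d: d["score"], reverse=True)

candidates = [
    {"drug": "drug_A", "score": 0.41},
    {"drug": "drug_B", "score": 0.87},
    {"drug": "drug_C", "score": 0.63},
]
for entry in prioritise(candidates):
    print(entry["drug"], entry["score"])
```

The point of such a ranking is purely economic: trials are too expensive to run on every approved drug, so an ordered shortlist decides where the limited trial budget goes first.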
In many ways, cybersecurity has always been a contest; vendors race to develop security products that can identify and mitigate any threats, while cybercriminals aim to develop malware and exploits capable of bypassing protections. With the emergence of artificial intelligence (AI), however, this combative exchange between attackers and defenders is about to become more complex and increasingly ferocious. According to Max Heinemeyer, Director of Threat Hunting at AI security firm Darktrace, it is only a matter of time before AI is co-opted by malicious actors to automate attacks and expedite the discovery of vulnerabilities. "We don't know precisely when offensive AI will begin to emerge, but it could already be happening behind closed doors," he told TechRadar Pro. "If we are able to [build complex AI products] here in our labs with a few researchers, imagine what nation states that invest heavily in cyberwar could be capable of." When this trend starts to play out, as seems inevitable, Heinemeyer says cybersecurity will become a "battle of the algorithms", with AI pitted against AI.
Machine learning is increasingly relevant to information security. The general idea is that it can offer businesses better threat analysis while improving the security of their entire IT infrastructure. ML can also help automate the menial, low-skill tasks that are often handed to security teams. With data security at severe risk in today's threat environment, machine learning continues to grow in impact and adoption across cybersecurity solutions.
SHENZHEN – Chinese drone giant DJI Technology Co. built up such a successful U.S. business over the past decade that it nearly drove all competitors out of the market. Yet its North American operations have been hit by internal turmoil in recent weeks and months, with a raft of staff cuts and departures, according to interviews with more than two dozen current and former employees. The loss of key managers, including some who have joined rivals, has compounded problems caused by U.S. government restrictions on Chinese companies and raised the once-remote prospect of DJI's dominance being eroded, said four of the people, including two senior executives who were at the company until late 2020. About a third of DJI's 200-strong team in the region was laid off or resigned last year from offices in Palo Alto, Burbank and New York, according to three former employees and one current employee. In February this year, DJI's head of U.S. R&D left and the company laid off the remaining R&D staff, roughly 10 people, at its flagship U.S. research center in Palo Alto, California, four people said.
Researchers from the University of Missouri and the University of North Carolina at Charlotte with expertise in image processing and cybersecurity have been awarded nearly $1.2 million from the National Science Foundation to advance deepfake detection. They are designing an AI program they believe will need only a small number of deepfake examples to start building its knowledge base. As it learns, the program will be able to spot new deepfake techniques, detecting manipulated content more accurately and avoiding misidentifications. Relying on a small number of examples overcomes a key limitation of current algorithms, which typically need a vast number of labeled samples to learn from. By leveraging accumulated knowledge, the deepfake detector will also learn to prevent camouflaged or obscured visual content from being classified as genuine.
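The idea of learning from only a handful of labelled examples is the essence of few-shot classification. One common minimal formulation, shown below purely as an illustration (it is not the researchers' actual system, and the toy 2-D "features" stand in for whatever representations a real detector would extract), summarises each class by the centroid of its few support examples and assigns new inputs to the nearest centroid:

```python
import numpy as np

def nearest_centroid_classify(support_real, support_fake, queries):
    """Classify query feature vectors by distance to class centroids.

    A few-shot approach: each class is summarised by the mean of a
    handful of labelled examples, so only a small support set is needed
    rather than the vast labelled datasets conventional detectors use.
    """
    centroid_real = support_real.mean(axis=0)
    centroid_fake = support_fake.mean(axis=0)
    d_real = np.linalg.norm(queries - centroid_real, axis=1)
    d_fake = np.linalg.norm(queries - centroid_fake, axis=1)
    return np.where(d_fake < d_real, "fake", "real")

# Toy 2-D "features": genuine content clusters near the origin,
# manipulated content clusters near (5, 5). Five examples per class.
rng = np.random.default_rng(0)
support_real = rng.normal(0.0, 0.5, size=(5, 2))
support_fake = rng.normal(5.0, 0.5, size=(5, 2))
queries = np.array([[0.1, -0.2], [4.8, 5.1]])
print(nearest_centroid_classify(support_real, support_fake, queries))
```

Because the class summaries are just means, new deepfake techniques can be folded in by adding a few fresh examples and recomputing the centroids, which mirrors the "accumulated knowledge" behaviour the article describes.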
Historically, cybersecurity has been a field dominated by resource-intensive efforts. Monitoring, threat hunting, incident response, and other duties are often manual and time-intensive, which can delay remediation activities, increase exposure, and heighten vulnerability to cyber adversaries. Over the past few years, artificial intelligence solutions have rapidly matured to the point where they can bring substantial benefits to cyber defensive operations across a broad range of organizations and missions. By automating key elements of labor-heavy core functions, AI can transform cyber workflows into streamlined, autonomous, continuous processes that speed remediation and maximize protection.
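As a concrete illustration of the kind of labor-heavy monitoring task that can be automated, the sketch below flags unusual spikes in hourly event volume using a simple statistical baseline. This is an assumption-laden toy (real AI-driven defenses are far more sophisticated), but it shows the shape of the workflow: learn what normal looks like, then surface only the outliers for analyst attention instead of requiring manual log inspection.

```python
import numpy as np

def flag_anomalies(event_counts, threshold=3.0):
    """Return indices of hours whose event volume deviates strongly
    from the historical baseline (z-score above the threshold)."""
    counts = np.asarray(event_counts, dtype=float)
    mean, std = counts.mean(), counts.std()
    z = np.abs(counts - mean) / std
    return np.flatnonzero(z > threshold)

# 23 quiet hours of failed-login counts plus one sudden burst.
history = [12, 9, 11, 10, 8, 12, 11, 9, 10, 11, 12, 10,
           9, 11, 10, 12, 9, 10, 11, 12, 10, 9, 240, 11]
print(flag_anomalies(history))
```

Chaining detection like this to automated responses (ticket creation, credential lockout, traffic isolation) is what turns a manual, time-intensive workflow into the streamlined, continuous process the passage describes.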