On February 12, 2019, the Department of Defense released a summary and supplementary fact sheet of its artificial intelligence strategy ("AI Strategy"). The AI Strategy has been a couple of years in the making, as the Trump administration has scrutinized the relative investments and advancements in artificial intelligence by the United States, its allies and partners, and potential strategic competitors such as China and Russia. The animating concern was articulated in the Trump administration's National Defense Strategy ("NDS"): strategic competitors such as China and Russia have made investments in technological modernization, including artificial intelligence, and in conventional military capability that are eroding the U.S. military advantage and changing how we think about conventional deterrence. As the NDS notes, "[t]he reemergence of long-term strategic competition" and the "rapid dispersion of technologies" such as "advanced computing, 'big data' analytics, [and] artificial intelligence" mean that mastering these capabilities will be necessary to "ensure we will be able to fight and win the wars of the future." The AI Strategy offers that "[t]he United States, together with its allies and partners, must adopt AI to maintain its strategic position, prevail on future battlefields, and safeguard [a free and open international] order. We will also seek to develop and use AI technologies in ways that advance security, peace, and stability in the long run. We will lead in the responsible use and development of AI by articulating our vision and guiding principles for using AI in a lawful and ethical manner."
With competitive pressure increasing drastically and the digital economy advancing considerably, enterprises need to find new ways to plan, develop, and deliver value. To adapt to digital transformation efficiently, DevOps has become a necessity, eliminating the technical and cultural constraints that stand in the way of delivering value rapidly. Unfortunately, the conservative nature of IT enterprises has led to slower adoption of DevOps, and many organizations have yet to embrace these practices. Adopting them would improve the efficiency of operational processes and reduce downtime across the software development life cycle.
Every year, it seems, pundits and egg-heads (like me) write down our predictions. Most of the time they are self-serving. However, looking into my crystal ball, I think 2019 is going to be a defining year at the intersection of physical security and cybersecurity. The Internet of Things (IoT) and advances in artificial intelligence are among the main driving forces in this evolution. Further, with chaos unfolding throughout the world, and the lead-up to a turbulent US presidential election, there has never been a more perilous, and more opportune, time for those in the IoT and security business.
Darktrace helped pave the way for using artificial intelligence to combat malicious hacking and enterprise security breaches. Now a new UK startup founded by an ex-Darktrace executive has raised funding to take the use of AI in cybersecurity to the next level. Senseon, which has pioneered a new model it calls "AI triangulation" -- simultaneously applying artificial intelligence algorithms to oversee, monitor, and defend an organization's network appliances, endpoints, and "investigator bots" covering multiple microservices -- has raised $6.4 million in seed funding. David Atkinson -- the startup's CEO and founder, who had previously been the commercial director for Darktrace and before that helped pioneer new cybersecurity techniques as an operative at the UK's Ministry of Defence -- said that Senseon will use the funding to continue expanding its business in both Europe and the US. The deal was co-led by MMC Ventures and Mark Weatherford, who is chief cyber security strategist at vArmour (which itself raised money in recent weeks) and previously served as Deputy Under Secretary for Cybersecurity at the U.S. Department of Homeland Security.
The American military wants to expand its use of artificial intelligence, or AI, for war. But it says the technology will be deployed in keeping with the nation's values. The United States Defense Department released its first AI strategy this week. The strategy calls for increasing the use of AI systems throughout the military, from decision-making to predicting maintenance problems in planes or ships. It urges the military to provide AI training to change "its culture, skills and approaches."
The Pentagon's research office is exploring how artificial intelligence can improve technologies that link troops' brains and bodies to military systems. The Defense Advanced Research Projects Agency recently began recruiting teams to research how AI tools could augment and enhance "next-generation neurotechnology." Through the program, officials ultimately aim to build AI into neural interfaces, a technology that lets people control, feel and interact with remote machines as though they were a part of their own body. Impossible as they may sound, neural interfaces have already been used to allow people to control prosthetic limbs, translate thoughts into text, and telepathically fly drones. Through the Intelligent Neural Interfaces program, DARPA will explore how AI can make these systems more durable, efficient and effective.
Building smart factories is a substantial endeavor for organizations. The initial steps involve understanding what makes them unique and what new advantages they offer. However, a realistic view of smart factories also involves acknowledging the risks and threats that may arise in their converged virtual and physical environment. As with many systems that integrate with the industrial internet of things (IIoT), the convergence of information technology (IT) and operational technology (OT) in smart factories enables capabilities such as real-time monitoring, interoperability, and virtualization. But it also means an expanded attack surface.
Technology is evolving at such a rapid pace that yearly trend predictions can seem obsolete before they are even published. As technology evolves, it enables ever faster change and progress, accelerating the rate of change until it eventually becomes exponential. The newest member community set up by CompTIA, the leading trade association for the global technology industry, is helping companies select new and emerging technologies to improve business outcomes, but in a rational, thoughtful way that makes sense for tech organizations and their customers alike. "It's an energizing time for innovation on numerous fronts," said Estelle Johannes, CompTIA's staff liaison to the Emerging Technology Community. Artificial intelligence, or AI, has received plenty of buzz in recent years, but it remains a trend to watch because its effects on how we live, work, and play are still in their early stages.
What makes AI cybersecurity different is its adaptability: it does not need to follow specific rules; rather, it can watch patterns and learn. "Unlike a signature-based approach that delivers a 1-for-1 mapping of threats to countermeasures, data science uses the collective learning of all threats observed in the past to proactively identify new ones that haven't been seen before," said Chris Morales, head of security analytics at Vectra, an AI threat detection vendor. Once downloaded, ransomware typically scans your files, singles out what it finds important, makes encrypted copies of those files, deletes the originals, and sends the encryption keys to the ransomware operators so they hold a unique key for every victim. "That sequence of events is pretty unique; you're not going to see a lot of credible software doing that," said Doug Shepherd, chief security officer at Nisos. This behavior-based approach addresses a key limitation of traditional antivirus software, which looks for signatures detected in known ransomware in order to block a new attack.
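The contrast between the two detection approaches can be illustrated with a minimal Python sketch. Everything here is hypothetical and illustrative: the hash value, the event names, and the functions are invented for the example, not drawn from any real product. It shows why a signature lookup only catches samples seen before, while matching the characteristic sequence of ransomware behaviors (scan, encrypt, delete, exfiltrate keys) can flag a never-before-seen binary.

```python
# Illustrative sketch: signature-based vs. behavior-based detection.
# All names and values are hypothetical examples, not real product APIs.

# Signature-based: a 1-for-1 mapping of known threats to a blocklist.
KNOWN_SIGNATURES = {"e99a18c428cb38d5f260853678922e03"}  # hashes of known malware

def signature_scan(file_hash: str) -> bool:
    """Flags a sample only if its hash has been seen before."""
    return file_hash in KNOWN_SIGNATURES

# Behavior-based: flag the characteristic ransomware sequence of events,
# regardless of whether the binary itself is known.
RANSOMWARE_SEQUENCE = ["scan_files", "encrypt_copy", "delete_original", "send_key"]

def behavior_scan(observed_events: list[str]) -> bool:
    """Returns True if the ransomware steps appear in order (as a
    subsequence) in a process's observed event stream."""
    it = iter(observed_events)
    # `step in it` advances the iterator, so steps must occur in order.
    return all(step in it for step in RANSOMWARE_SEQUENCE)
```

A real system would of course learn these behavioral patterns from data rather than hard-code them, but the sketch captures the distinction Morales and Shepherd describe: the signature check can never fire on a new sample, while the behavior check can.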