Protect AI lands a $13.5M investment to harden AI projects from attack • TechCrunch
Seeking to bring greater security to AI systems, Protect AI today raised $13.5 million in a seed-funding round co-led by Acrew Capital and Boldstart Ventures with participation from Knollwood Capital, Pelion Ventures and Aviso Ventures. Ian Swanson, the co-founder and CEO, said that the capital will be put toward product development and customer outreach as Protect AI emerges from stealth. Protect AI claims to be one of the few security companies focused entirely on developing tools to defend AI systems and machine learning models from exploits. Its product suite aims to help developers identify and fix AI and machine learning security vulnerabilities at various stages of the machine learning life cycle, Swanson explains, including vulnerabilities that could expose sensitive data. "As machine learning models usage grows exponentially in production use cases, we see AI builders needing products and solutions to make AI systems more secure, while recognizing the unique needs and threats surrounding machine learning code," Swanson told TechCrunch in an email interview. "We have researched and uncovered unique exploits and provide tools to reduce risk inherent in [machine learning] pipelines."
How to protect AI from cyberattacks – start with the data
Artificial intelligence is certainly a game-changer when it comes to security. Not only does it greatly expand the capability to manage and monitor systems and data, it adds a level of dynamism to both protection and recovery that significantly increases the difficulty, and lessens the rewards, of mounting a successful attack. But AI is still a digital technology, which means it can be compromised as well, particularly when confronted by an intelligent attack. As the world becomes more dependent on systems intelligence and autonomy for everything from business processes to transportation to healthcare, the consequences of a security breach rise even as the likelihood declines.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.41)
Can you trust AI to protect AI?
Now that AI is heading into the mainstream of IT architecture, the race is on to ensure that it remains secure when exposed to sources of data that are beyond the enterprise's control. From the data center to the cloud to the edge, AI will have to contend with a wide variety of vulnerabilities and an increasingly complex array of threats, nearly all of which will be driven by AI itself. Meanwhile, the stakes will be increasingly high, given that AI is likely to provide the backbone of our healthcare, transportation, finance, and other sectors that are crucial to support our modern way of life. So before organizations start to push AI into these distributed architectures too deeply, it might help to pause for a moment to ensure that it can be adequately protected. In a recent interview with VentureBeat, IBM chief AI officer Seth Dobrin noted that building trust and transparency into the entire AI data chain is crucial if the enterprise hopes to derive maximum value from its investment.
New Cybersecurity Tools and Techniques are Needed to Protect AI
Artificial intelligence is reorganizing the world, introducing innovations that will likely exceed those that came with the World Wide Web. And as with the Web, there were, and still are, security concerns. Today, trust in artificial intelligence is probably the single greatest risk to continuing AI innovation and adoption. A simple framework for the operational elements that need to be addressed for the responsible deployment of artificial intelligence must include 'Fairness and Bias', 'Interpretability and Explainability', as well as the newest and equally important element of 'Robustness and Security'. Note that these are operational considerations, and privacy should be inherent in the design and implementation of responsible AI -- e.g., privacy must be foundational in every step of the process.
- North America > United States (0.16)
- Asia > China (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.43)
Intelligence Community Searches for Ways to Protect AI from Tampering
The intelligence community is investing in artificial intelligence as a way to augment the capabilities of intelligence analysts, with the National Geospatial-Intelligence Agency promoting computer vision technology and the CIA looking to AI and machine learning to help it sift through large volumes of data. Yet AI is only as strong and useful as the protections around it. That's why the intelligence community's research arm is looking for ways to predict if AI has been tampered with. In a draft broad agency announcement released last month, the Intelligence Advanced Research Projects Activity sought industry input on its TrojAI program. The program is designed to create software to automatically inspect an AI system and predict if it contains a "Trojan" attack that has tampered with its training.
SXSW 2018: Protect AI, robots, cars (and us) from bias
As Mark Hamill humorously shared the behind-the-scenes of "Star Wars: The Last Jedi" with a packed SXSW audience, two floors below on the exhibit floor Universal Robots recreated General Grievous' famed lightsaber battles. The battling machines were steps away from a twelve-foot dancing Kuka robot and an automated coffee dispensary. Somehow the famed interactive festival known for its late-night drinking, dancing and concerts had a very mechanical feel this year. Everywhere debates ensued between utopian tech visionaries and dystopia-fearing humanists. Even my panel on "Investing In The Autonomy Economy" took a very social turn when discussing the opportunities of utilizing robots for the growing aging population.
- Media > Film (0.90)
- Leisure & Entertainment (0.90)
- Government > Regional Government > North America Government > United States Government (0.50)
To Protect AI, Machine Learning Advances, US Wants To Limit Chinese Investment Over Military Fears
U.S. officials reportedly are rethinking the advisability of allowing the Chinese to invest in sensitive technologies seen as vital to national security. Reuters reported Wednesday that U.S. officials are concerned cutting-edge technologies such as artificial intelligence and machine learning could be used by the Chinese to augment their military capabilities and achieve greater advancements in strategic industries. Technology is the fastest-growing industry in the United States, and China has funneled $45.6 billion into U.S. acquisitions and greenfield investments in the last year, Rhodium Group found. That investment is expected to double this year.
- Asia > China (0.30)
- North America > United States > Texas (0.06)
- North America > United States > California (0.06)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)