If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Forbes published an intriguing story about the capacity of AI to serve as a kind of cybersecurity sheriff. The story, published on February 6, stated that AI has already displayed limitless potential in applications across different industries. That much is certainly true. It goes on to say that deploying AI for cybersecurity solutions will help protect organizations from existing cyber threats and help identify newer malware types too. Additionally, AI-powered cybersecurity systems can ensure effective security standards and help in the creation of better prevention and recovery strategies.
Dating is hard enough without the added stress of worrying about your digital safety online. But social media and dating apps are pretty inevitably involved in romance these days--which makes it a shame that so many of them have had security lapses in such a short amount of time. Within days of each other this week, the dating apps OkCupid, Coffee Meets Bagel, and Jack'd all disclosed an array of security incidents that serve as a grave reminder of the stakes of digital profiles that both store your personal information and introduce you to total strangers. "Dating sites are designed by default to share a ton of information about you; however, there's a limit to what should be shared," says David Kennedy, CEO of the threat tracking firm Binary Defense Systems. "And oftentimes these dating sites provide little to no security, as we have seen with breaches going back several years from these sites."
Security vulnerabilities discovered in the Android version of a popular online dating application could have allowed hackers to access usernames, passwords and personal information, according to security researchers. The flaws in the Android version of the OkCupid dating app - which the Google Play Store lists as having over 10 million downloads - were discovered by researchers at cyber security firm Checkmarx. The researchers have previously disclosed exploits which could be abused by hackers in another dating app. The researchers found that the app's built-in WebView browser contained vulnerabilities which could be exploited by attackers. While most links in the app will open in the user's browser of choice, researchers found it was possible to mimic certain links which open within the application.
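The link-mimicking risk described above usually comes down to how an app decides whether a URL counts as "internal." As a hedged illustration (this is not Checkmarx's actual finding or OkCupid's code, and the domain names are purely hypothetical), a naive string-prefix check can be spoofed by a hostname that merely starts with the trusted domain, while parsing and comparing the actual host closes that gap:

```python
from urllib.parse import urlparse

# Hypothetical trusted domain, for illustration only.
TRUSTED_HOST = "okcupid.com"

def naive_is_internal(url: str) -> bool:
    # Flawed check: any URL whose text starts with the trusted string passes.
    return url.startswith("https://okcupid.com")

def safer_is_internal(url: str) -> bool:
    # Parse the URL and compare the actual hostname instead.
    host = urlparse(url).hostname or ""
    return host == TRUSTED_HOST or host.endswith("." + TRUSTED_HOST)

# A hostname that merely begins with the trusted string fools the naive check.
attacker_url = "https://okcupid.com.evil.example/login"
print(naive_is_internal(attacker_url))   # True  - spoofed link slips through
print(safer_is_internal(attacker_url))   # False - host comparison rejects it
```

The same class of mistake appears in many WebView URL filters: the fix is always to compare the parsed host, never the raw URL string.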
Ben Gurion, the main international airport in Israel, is one of the most protected airports in the world. It is known for its multilayered security. On the way to the airport, you get caught in the lens of airport cameras. The road curves for several kilometers to the terminal, and while you are driving, the security system has enough time to analyze your identity. If there are any signs of danger, you will be intercepted.
Google is telling Nest camera owners that it's not to blame for a recent string of creepy security incidents. The search giant, which owns Nest, sent an email to owners of its security devices telling them to reset their passwords and enable stronger account authentication settings in light of an uptick in hacked cameras. Last month, users began reporting a number of bizarre cases, where hackers appeared to take over their Nest security cameras to hurl insults at them, spy on their sleeping baby and even tell Amazon's Alexa to play 'Despacito' by Justin Bieber. Nest told users that it notifies them if it detects their email was part of a breach on another website. When this happens, the firm will proactively disable their Nest account as a security measure.
WASHINGTON - If you see a video of a politician speaking words he never would utter, or a Hollywood star improbably appearing in a cheap adult movie, don't adjust your television set -- you may just be witnessing the future of "fake news." "Deepfake" videos that manipulate reality are becoming more sophisticated due to advances in artificial intelligence, creating the potential for new kinds of misinformation with devastating consequences. As the technology advances, worries are growing about how deepfakes can be used for nefarious purposes by hackers or state actors. "We're not quite to the stage where we are seeing deepfakes weaponized, but that moment is coming," said Robert Chesney, a University of Texas law professor who has researched the topic. Chesney argues that deepfakes could add to the current turmoil over disinformation and influence operations.
As happens infrequently--but definitely not never--Apple wrestled with an embarrassing and problematic security bug this week in its iOS FaceTime group calling feature. The flaw was bad enough that Apple took the drastic step of pulling group FaceTime functionality altogether. A full fix will come next week. Meanwhile, Facebook faced criticism for paying users as young as 13 to download a mobile research app that gave the company invasive access to all sorts of user data and activity, including web browsing. The app didn't meet Apple's privacy standards for iOS, and Facebook was distributing it through a loophole in the platform.
As the volume of digital information in corporate networks continues to grow, so grows the number of cyberattacks, and their cost. One cybersecurity vendor, Juniper Networks, estimates that the cost of data breaches worldwide will reach $2.1 trillion in 2019, roughly four times the cost of breaches in 2015. Now, two Boston University computer scientists, working with researchers at Draper, a not-for-profit engineering solutions company located in Cambridge, have developed a tool that could make it harder for hackers to find their way into networks where they don't belong. Peter Chin, a research professor of computer science and an affiliate of the Rafik B. Hariri Institute for Computing and Computational Science & Engineering, and Jacob Harer, a fourth-year Ph.D. student in computer science, worked with Draper researchers to develop technology that can scan software systems for the kinds of vulnerabilities that are often used by cybercriminals to gain entry. The tool, which used deep learning to train neural networks to identify patterns that indicate software flaws, can scan millions of lines of code in seconds, and will someday have the ability to fix the coding errors that it spots.
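The scanning idea behind the Draper/BU tool can be made concrete with a toy example. The sketch below is not their deep-learning system: where their neural networks learn vulnerability patterns from training data, this stand-in hard-codes a few known-risky C function names and flags the lines that call them, just to show the scan-and-report shape of such a tool.

```python
import re

# Hand-written pattern list: a toy stand-in for the learned features a
# trained neural network would extract automatically.
RISKY_CALLS = {
    "gets": "unbounded read into a buffer",
    "strcpy": "no length check on copy",
    "sprintf": "possible buffer overflow in formatting",
}

def scan_source(code: str) -> list[tuple[int, str, str]]:
    """Flag lines that call a known-risky C function.

    Returns (line_number, function_name, reason) tuples.
    """
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for func, reason in RISKY_CALLS.items():
            # \b ... \s*\( matches a call site, not a substring of a name.
            if re.search(rf"\b{func}\s*\(", line):
                findings.append((lineno, func, reason))
    return findings

sample = """\
char buf[16];
gets(buf);
strcpy(buf, user_input);
"""
for lineno, func, reason in scan_source(sample):
    print(f"line {lineno}: {func} - {reason}")
```

A learned model replaces the fixed dictionary with statistical patterns over code tokens, which is what lets it generalize beyond a hand-curated list and, as the researchers hope, eventually suggest fixes.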
In an era of digital disruption where Internet of Things (IoT) and mobility are invading IT perimeters, artificial intelligence is emerging as the future of cybersecurity. With the expansion of the modern threat landscape, the inclusion of AI in the security strategy has become imperative for the establishment and maintenance of an effective security posture. Given the urgent need for protecting both data and high-value assets, organisations have started incorporating elements of machine learning and AI. With a series of investments, a raft of new products, and a rising tide of enterprise deployments, artificial intelligence is making a splash in the IoT ecosystem. Large organisations across sectors are already exploring and leveraging the power of AI with IoT to deliver new offerings and operate more efficiently.
For decades, the police and drivers with lead feet have engaged in a war of radars and radar detectors. Every time police radar technology improves, so do the radar detectors built to outsmart it. The same is true of cybersecurity: every time defensive technology improves, hackers improve their tools in turn. It makes little sense to continue this way.