
Collaborating Authors

Check Point


FBI warns seniors about billion-dollar scam draining retirement funds, expert says AI driving it

FOX News

Pete Nicoletti, chief information security officer at Check Point, told Fox News Digital that a scam the FBI has warned about is now using AI to target seniors. A cybersecurity expert warns that the scam, which has been used to drain entire life savings and retirement accounts, has become "devastating" for seniors. FBI Los Angeles on July 15 posted a reminder on X about the Phantom Hacker Scam, which has cost Americans over $1 billion since at least 2024, according to the agency. The FBI said the scam targets senior citizens and warns that victims could lose their "life savings." The scam operates in three phases: a "tech support impostor," a "financial institution impostor" and a "US government impostor." In the first phase, a tech support impostor contacts victims through text, phone call or email, then directs them to download a program that gives the scammer remote access to their computer.


The Digital Insider

#artificialintelligence

Pål (Paul) has more than 30 years of experience in the IT industry and has worked with both domestic and international clients on a local and global scale. Pål has a very broad competence base that covers everything from general security to datacenter security, cloud security services and development. For the past 10 years, he has worked primarily in the private sector, with a focus on both large and medium-sized companies across most verticals. In this interview, Pål Aaserudseter, a Security Engineer for Check Point, discusses artificial intelligence, cyber security and how to keep your organization safe in an era of eerie and daunting digital innovation. Read on to learn more!


Is ChatGPT a cybersecurity threat? • TechCrunch

#artificialintelligence

Since its debut in November, ChatGPT has become the internet's new favorite plaything. The AI-driven natural language processing tool rapidly amassed more than 1 million users, who have used the web-based chatbot for everything from generating wedding speeches and hip-hop lyrics to crafting academic essays and writing computer code. Not only have ChatGPT's human-like abilities taken the internet by storm, but it has also set a number of industries on edge: a New York school banned ChatGPT over fears that it could be used to cheat, copywriters are already being replaced, and reports claim Google is so alarmed by ChatGPT's capabilities that it issued a "code red" to ensure the survival of the company's search business. It appears the cybersecurity industry, a community that has long been skeptical about the potential implications of modern AI, is also taking notice amid concerns that ChatGPT could be abused by hackers with limited resources and zero technical knowledge. Just weeks after ChatGPT debuted, Israeli cybersecurity company Check Point demonstrated how the web-based chatbot, when used in tandem with OpenAI's code-writing system Codex, could create a phishing email capable of carrying a malicious payload. Check Point threat intelligence group manager Sergey Shykevich told TechCrunch that he believes use cases like this illustrate that ChatGPT has the "potential to significantly alter the cyber threat landscape," adding that it represents "another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities."


People are already trying to get ChatGPT to write malware

#artificialintelligence

The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available, and now it seems some are enlisting it in attempts to help generate malicious code. AI writing tools can help lighten your workload by writing emails and essays and even doing math. They use artificial intelligence to generate text or answer queries based on user input. ChatGPT is one popular example, but there are other noteworthy AI writers. ChatGPT is an AI-driven natural language processing tool that interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks like composing emails, essays and code.


The Digital Insider

#artificialintelligence

Late last year, OpenAI released an artificially intelligent chatbot that has taken the world by storm. The chatbot, known as ChatGPT, is well-versed in a wide range of topics and is versatile in its capabilities. For instance, ChatGPT can write computer programs, debug computer programs, compose music and create student essays. Within a week of its launch, ChatGPT had over a million users. The servers couldn't keep up. Microsoft has poured more than a billion dollars into the technology, and speculators say that it's "looking like an excellent value for [the] money."



ChatGPT can be used to generate malicious code, finds research

#artificialintelligence

OpenAI's ChatGPT, the large language model (LLM)-based artificial intelligence (AI) text generator, can seemingly be used to generate code for malicious tasks, a research note by cyber security firm Check Point observed on Tuesday. Researchers at Check Point used ChatGPT and Codex, a fellow OpenAI natural-language-to-code generator, with standard English instructions to create code that can be used to launch spear phishing attacks. The biggest issue with such AI code generators lies in the fact that these natural language processing (NLP) tools lower the entry barrier for hackers with malicious intent. Because the code generators do not require users to be well versed in coding, any user can collate the logical flow of a malicious tool from the open web and use that logic to generate the syntax for malicious tools. Demonstrating the issue, Check Point showed how the AI code generator could create a basic code template for a phishing email scam, then apply subsequent instructions in plain English to keep improving the code.


Security experts find major vulnerabilities in Amazon Alexa that let hackers control the device

Daily Mail - Science & tech

More than 200 million Amazon Alexa devices were at risk of cyber attacks due to a bug found lurking in the smart assistant. Security researchers found a vulnerability that lets cybercriminals obtain voice history data, as well as delete and install commands and apps. The team discovered a misconfiguration in the system that permitted them to perform actions on the victim's behalf and view personal information. Amazon has since rolled out a patch after the issue was reported to the tech giant and notes it is not aware of any incidents related to the bug.


Amazon's Alexa has serious privacy flaws, researchers find

FOX News

Fox News Flash top headlines are here. Check out what's clicking on FoxNews.com. Flaws in Amazon's Alexa were serious enough that a user "in just one-click" could have handed over their voice history, home address and control of their Amazon account, cybersecurity firm Check Point said in a recent report. An attacker could have also silently installed, viewed and removed Alexa skills, Check Point said, referring to voice-driven Alexa apps. A hacker could have also accessed a victim's personal information, such as banking data history and usernames.


An Alexa Bug Could Have Exposed Your Voice History to Hackers

WIRED

Smart-assistant devices have had their share of privacy missteps, but they're generally considered safe enough for most people. New research into vulnerabilities in Amazon's Alexa platform, though, highlights the importance of thinking about the personal data your smart assistant stores about you--and minimizing it as much as you can. Findings published on Thursday by the security firm Check Point reveal that Alexa's web services had bugs that a hacker could have exploited to grab a target's entire voice history, meaning their recorded audio interactions with Alexa. Amazon has patched the flaws, but the vulnerability could have also yielded profile information, including home address, as well as all of the "skills," or apps, the user had added for Alexa. An attacker could have even deleted an existing skill and installed a malicious one to grab more data after the initial attack.