It's not just you: Cybercriminals are also using ChatGPT to make their jobs easier
Whether it is writing essays or analyzing data, ChatGPT can lighten a person's workload. That goes for cybercriminals too. Sergey Shykevich, a lead ChatGPT researcher at the cybersecurity company Check Point, has already seen cybercriminals harness the AI's power to create code that can be used in a ransomware attack. Shykevich's team began studying the potential for AI to lend itself to cybercrime in December 2021, using the AI's large language model to create phishing emails and malicious code.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
People are already trying to get ChatGPT to write malware
The ChatGPT AI chatbot has generated plenty of excitement in the short time it has been available, and now it seems some are enlisting it in attempts to produce malicious code. AI writing tools can lighten your workload by writing emails and essays and even doing math; they use artificial intelligence to generate text or answer queries based on user input. ChatGPT is one popular example among several noteworthy AI writers: an AI-driven natural language processing tool that interacts with users in a human-like, conversational way and can help with tasks such as composing emails, essays and code.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.34)
How AI chatbot ChatGPT changes the phishing game
ChatGPT, OpenAI's free chatbot based on GPT-3.5, was released on 30 November 2022 and racked up a million users in five days. By comparison, it took Twitter two years to reach a million users, Facebook ten months, Dropbox seven months, Spotify five months and Instagram six weeks. Pokemon Go took ten hours, so don't break out the champagne bottles just yet, but five days is still pretty impressive for a web-based tool that didn't have any built-in name recognition. The chatbot is capable of writing emails, essays, code and, if the user knows how to ask, phishing emails.
Russian Hackers Try to Bypass ChatGPT's Restrictions For Malicious Purposes - Infosecurity Magazine
Russian cyber-criminals have been observed on dark web forums trying to bypass OpenAI's API restrictions to gain access to the ChatGPT chatbot for nefarious purposes. Various individuals have been observed, for instance, discussing how to use stolen payment cards to pay for upgraded accounts on OpenAI (thus circumventing the limitations of free accounts). Others have created blog posts on how to bypass OpenAI's geo-controls, and others still have created tutorials explaining how to use semi-legal online SMS services to register for ChatGPT. "Generally, there are a lot of tutorials in Russian about semi-legal online SMS services and how to use them to register to ChatGPT, and we have examples that it is already being used," wrote Check Point Research (CPR), which shared the findings with Infosecurity ahead of publication. "It is not extremely difficult to bypass OpenAI's restricting measures for specific countries to access ChatGPT," said Sergey Shykevich, threat intelligence group manager at Check Point Software Technologies.
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government > Russia Government (0.43)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.97)
Hackers Exploiting ChatGPT To Write Malicious Codes To Steal Your Data
New Delhi, Jan 8 (IANS) Artificial intelligence (AI)-driven ChatGPT, which gives human-like answers to questions, is also being used by cybercriminals to develop malicious tools that can steal your data, a report has warned. The first such instances of cybercriminals using ChatGPT to write malicious code have been spotted by Check Point Research (CPR) researchers. On underground hacking forums, threat actors are creating "infostealers" and encryption tools and facilitating fraud activity. The researchers warned of fast-growing interest in ChatGPT among cybercriminals seeking to scale and teach malicious activity: "In recent weeks, we're seeing evidence of hackers starting to use it to write malicious code."
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)