Identifying the top nation-state actors can depend on who you ask, which underscores the need to gather threat intelligence from varied data sources. In a climate where geopolitical issues now can drive industry discussions, organizations will be better served if they formulate their cybersecurity strategy on information that reflects threat activities on an international scale. For some organizations, this requirement means gathering threat intel that is comprehensive and, in particular, diverse. Most threat intelligence houses currently originate from the West or are Western-oriented, and this can result in bias or skewed representations of the threat landscape, noted Minhan Lim, head of research and development at Ensign Labs. The Singapore-based cybersecurity vendor was formed through a joint venture between local telco StarHub and state-owned investment firm, Temasek Holdings.
Earlier this year, ChatGPT was briefly banned in Italy due to a suspected privacy breach. To help overturn the ban, the chatbot's parent company, OpenAI, committed to providing a way for citizens to object to the use of their personal data to train artificial intelligence (AI) models. The right to be forgotten (RTBF) law plays an important role in the online privacy rights of some countries. It gives individuals the right to ask technology companies to delete their personal data. It was established via a landmark case in the European Union (EU) involving search engines in 2014.
A Microsoft AI research team that uploaded training data on GitHub in an effort to offer other researchers open-source code and AI models for image recognition inadvertently exposed 38TB of personal data. Wiz, a cybersecurity firm, discovered a link included in the files that contained backups of Microsoft employees' computers. Those backups contained passwords to Microsoft services, secret keys and over 30,000 internal Teams messages from hundreds of the tech giant's employees, Wiz says. Microsoft assures in its own report of the incident, however, that "no customer data was exposed, and no other internal services were put at risk." The link was deliberately included with the files so that interested researchers could download pretrained models -- that part was no accident.
AI researchers at Microsoft have made a huge mistake. According to a new report from cloud security company Wiz, the Microsoft AI research team accidentally leaked 38TB of the company's private data. The exposed data included full backups of two employees' computers. These backups contained sensitive personal data, including passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from more than 350 Microsoft employees. So, how did this happen?
Last week, WIRED published a deep-dive investigation into Trickbot, the prolific Russian ransomware gang. This week, US and UK authorities sanctioned 11 alleged members of Trickbot and its related group, Conti, including Maksim Galochkin, aka Bentley, one of the alleged members whose real-world identity we confirmed through our investigation. In addition to the US and UK sanctions, the US Justice Department also unsealed indictments filed in three US federal courts against Galochkin and eight other alleged Trickbot members for ransomware attacks against entities in Ohio, Tennessee, and California. Because everyone charged is a Russian national, however, it is unlikely they will ever be arrested or face trial. While Russian cybercriminals typically enjoy immunity, the same may not remain true for the country's military hackers.
Organizations have to look at how artificial intelligence (AI) can enable them to do things differently, rather than at a lower cost, in order to stay relevant in the future. In fact, 21st-century companies will not be defined by the quality or the price of their products and services, but by their use of AI, said Mike Walsh, futurist and CEO of tech consultancy Tomorrow. There will be significant shifts in the way these businesses operate in the future, said Walsh, who was speaking at ST Engineering's InnoTech Conference held this week in Singapore. Also: AI is coming to a business near you. But let's sort these problems first Future-oriented companies will move from building products and services to developing data-powered platforms that can be reapplied to adjacent markets, he said.
In one experiment in February, security researchers forced Microsoft's Bing chatbot to behave like a scammer. Hidden instructions on a web page the researchers created told the chatbot to ask the person using it to hand over their bank account details. This kind of attack, where concealed information can make the AI system behave in unintended ways, is just the beginning. Hundreds of examples of "indirect prompt injection" attacks have been created since then. This type of attack is now considered one of the most concerning ways that language models could be abused by hackers.
OpenAI allegedly violated European privacy laws in a bunch of different ways according to a complaint filed in Poland. On Tuesday, cybersecurity and privacy researcher Lukasz Olejnik filed a complaint with the Polish Data Protection Authorities, for breach of the European Union's sweeping General Data Protection Regulation (GDPR). Olejnik, who is represented by Warsaw-based law firm GP Partners, alleges OpenAI violated several of the GDPR's provisions regarding lawful basis, transparency, fairness, data access rights, and privacy by design, according to TechCrunch which reviewed the 17-page complaint. This complaint is one of several legal issues OpenAI is now confronted with, both abroad and in the U.S., where it's based. In June, OpenAI was hit with a class-action lawsuit by a California law firm for allegedly training ChatGPT with "stolen" data.
The UK's cybersecurity agency has warned that chatbots can be manipulated by hackers to cause scary real-world consequences. The National Cyber Security Centre (NCSC) has said there are growing cybersecurity risks of individuals manipulating the prompts through "prompt injection" attacks. This is where a user creates an input or a prompt that is designed to make a language model – the technology behind chatbots – behave in an unintended manner. A chatbot runs on artificial intelligence and is able to give answers to prompted questions by users. They mimic human-like conversations, which they have been trained to do through scraping large amounts of data.
Fox News Flash top headlines are here. Check out what's clicking on Foxnews.com. Six individuals have been arrested in Hong Kong after allegedly using artificial intelligence to generate images for a loan scam. The six accused scammers are charged with doctoring pictures to deceive banks and moneylenders in a loose fraud syndicate busted by city police. "The racket used an AI face-changing program, commonly known as deepfake technology, to apply for loans online with financial institutions," said Cyber Security and Technology Crime Bureau Superintendent Dicken Ko Tik on Friday.