bug bounty
Researchers Propose a Better Way to Report Dangerous AI Flaws
In late 2023, a team of third-party researchers discovered a troubling glitch in OpenAI's widely used artificial intelligence model GPT-3.5. When asked to repeat certain words a thousand times, the model complied at first, then suddenly switched to spitting out incoherent text and snippets of personal information drawn from its training data, including parts of names, phone numbers, and email addresses. The team that discovered the problem worked with OpenAI to ensure the flaw was fixed before revealing it publicly. It is just one of scores of problems found in major AI models in recent years. In a proposal released today, more than 30 prominent AI researchers, including some who found the GPT-3.5 flaw, say that many other vulnerabilities affecting popular models are reported in problematic ways.
- Information Technology > Security & Privacy (0.53)
- Law (0.36)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.47)
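The divergence behavior described above can be checked mechanically once a model's output has been captured. A minimal sketch, assuming the response is plain whitespace-separated text; the function names, regex, and sample strings are illustrative, not taken from the researchers' code:

```python
import re

# Crude heuristic for substrings that merely LOOK like emails or
# phone-number fragments; a real audit would use far stricter checks.
PII_HINTS = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+|\b\d{3}[-.]\d{4}\b")

def divergence_point(output, word):
    """Index of the first whitespace-separated token that is not `word`,
    or None if the output is pure repetition."""
    for i, token in enumerate(output.split()):
        if token != word:
            return i
    return None

def pii_like(output):
    """Return substrings of the output that resemble contact details."""
    return PII_HINTS.findall(output)
```

Running `divergence_point` over a batch of responses gives a quick signal of how often the model breaks out of the repetition loop, and `pii_like` flags which of those breakouts warrant a human look.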
Did That Newly Announced ChatGPT Bug Bounty Initiative By OpenAI Undershoot Its Intended Aims, Asks AI Ethics And AI Law
Is the OpenAI bug bounty for ChatGPT all that it could be? Some wonder. Skeptics and cynics have taken aim at the recently announced Bug Bounty initiative that OpenAI has proclaimed for ChatGPT and its other AI apps, such as GPT-4 (successor to ChatGPT). In essence, they suggest that the Bug Bounty is not up to par and misses the boat in a variety of crucial ways. They carp that it undershoots what could have been a much more robust and momentous proclamation aiming to curtail AI-related woes. Not everyone sees the announcement quite so dismally; you might have thought that proffering a bug bounty effort would be appreciated and applauded.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.99)
DoD Chief Digital and Artificial Intelligence Office Launches Hack the Pentagon Website > U.S. Department of Defense > Release
The Chief Digital and Artificial Intelligence Office (CDAO) Directorate for Digital Services (DDS) has launched a website (www.hackthepentagon.mil) to accompany their long-running program: Hack the Pentagon (HtP). DDS launched HtP in 2016, using bug bounties as an innovative way to secure critical Department of Defense (DoD) systems and assets. HtP invites vetted, independent security researchers, known as "ethical hackers", to discover, investigate, and report vulnerabilities, which DoD can then remediate. DDS built the HtP website as a resource for Department of Defense organizations, vendors, and security researchers to learn how to conduct a bug bounty, partner with the CDAO DDS team to support bug bounties, and participate in DoD-wide bug bounties. "With the HtP website launch, CDAO is scaling a long running program, which historically offered services on a project-by-project basis, by offering the Department better access to lessons learned and best practices for hosting bug bounties," said Dr. Craig Martell, Chief Digital and Artificial Intelligence Officer.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Use ChatGPT To Automate Your Bug Bounty
Let's request a straightforward Python script to automate recon from ChatGPT. Next, let's ask ChatGPT to develop a more advanced recon program. ChatGPT demurs: "Sorry, but it wouldn't be possible to provide a comprehensive program that uses all of the tools you mentioned to automate your bug bounty recon process. It is highly recommended that you have a solid understanding of each tool and how to use it before attempting to automate it, because the process of automating reconnaissance tasks can be complicated. But I can show you how to use some of the tools you mentioned in a Python script example."
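A minimal version of the kind of recon script the article has in mind might look like the following. The domain, wordlist, and function names are illustrative assumptions; a real workflow would shell out to dedicated recon tools rather than rely on bare DNS lookups:

```python
import socket

def candidate_subdomains(domain, words):
    """Expand a wordlist into candidate subdomain names."""
    return [f"{word}.{domain}" for word in words]

def resolve_live(hostnames):
    """Return {hostname: ip} for the candidates that actually resolve."""
    live = {}
    for host in hostnames:
        try:
            live[host] = socket.gethostbyname(host)
        except socket.gaierror:
            pass  # unresolvable candidate, skip it
    return live

# Example (network lookups only happen when you call resolve_live):
# targets = candidate_subdomains("example.com", ["www", "mail", "dev"])
# print(resolve_live(targets))
```

A fuller pipeline would chain port scanning and screenshotting after resolution, which is exactly the point the ChatGPT reply makes: the generated boilerplate still has to be reviewed and understood tool by tool before it is trusted.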
AIhub monthly digest: January 2022 – new voices in AI, bug bounties, and arXiv hits two million
Welcome to our first monthly digest of 2022! This is the place where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we cover our new series New voices in AI, hear from an ACML award winner, and celebrate an arXiv milestone. We're excited to announce the launch of a new series for AIhub: New voices in AI. Hosted by Joe Daly, this series will highlight the work of PhD students, early career researchers, and those in the field of AI with a fresh perspective.
AI bias is rampant. Bug bounties could help catch it.
The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s. Back then, some companies found they could actually make themselves safer by incentivizing the work of independent "white hat" security researchers who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That's how the practice of bug bounties became a cornerstone of cybersecurity today. In a research paper unveiled Thursday, researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji and Joy Buolamwini argue that companies should once again invite their most ardent critics in -- this time, by putting bounties on harms that might originate in their artificial intelligence systems. François, a Fulbright scholar who has advised the French CTO and who played a key role in the U.S. Senate's probe of Russia's attempts to influence the 2016 election, published the report through the Algorithmic Justice League, which was founded in 2016 and "combines art and research to illuminate the social implications and harms of artificial intelligence."
- North America > United States (1.00)
- Europe > Russia (0.25)
- Asia > Russia (0.25)
Clearview AI exposes source code to controversial facial recognition app and company credentials
Security researchers say a misconfigured server owned by the controversial facial recognition company, Clearview AI, exposed its software's source code as well as internal credentials and keys. According to TechCrunch, which first reported on the flaw, Mossab Hussein, the chief security officer at SpiderSilk, a security firm based in Dubai, uncovered a flawed Clearview server storing sensitive data, allowing users to bypass its password protection. Specifically, Hussein found that a misconfiguration allowed anyone to register as a new user and access the database containing Clearview's code regardless of whether they had entered a password. TechCrunch reports that, in addition to source code that would allow anyone to use Clearview's software, the database also contained passwords and other keys that would allow one to access the company's cloud storage buckets. Finished versions of Clearview's apps for iOS and Android as well as pre-release developer versions were contained in those buckets, TechCrunch reports.
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.26)
- North America > United States > Illinois > Cook County > Chicago (0.06)
We Need Bug Bounties for Bad Algorithms
Amit Elazari Bar On is a doctoral law candidate (J.S.D.) at UC Berkeley School of Law and a CLTC (Center for Long-Term Cybersecurity) Grantee, Berkeley School of Information, as well as a member of AFOG, the Algorithmic Fairness and Opacity Working Group at Berkeley. In 2017, Amit was a CTSP Fellow. We are told opaque algorithms and black boxes are going to control our world, shaping every aspect of our lives. They warn us that without accountability and transparency, and generally without better laws, humanity is doomed to a future of machine-generated bias and deception. From calls to open the black box to the limitations of explanations of inscrutable machine-learning models, the regulation of algorithms is one of the most pressing policy concerns in today's digital society.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Education > Educational Setting > Higher Education (0.55)
Hacken Joins SingularityNET to Pursue Artificial Intelligence-Powered Cybersecurity
As we stated following our presentation at the World Economic Forum in Davos, there is an arms race for cutting-edge AI tech. SingularityNET is positioned at the center of that opportunity. "Google recently announced a major initiative in the cybersecurity space, and with the data and computing power and AI chops at their disposal, I have no doubt they can do some quality work. However, I worry about the prospect of advanced cybersecurity becoming monopolized by big tech firms, particularly given recent revelations of the close connections between these firms and government surveillance projects," said SingularityNET CEO, Ben Goertzel. Our team is working incessantly to meet the mounting demand for our network.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
How the new age of antivirus software will protect your PC
Antivirus software ain't what it used to be. The sneaky, sophisticated security threats your PC faces now have gone far beyond what traditional software can do. The future of protecting your PC will require a multi-pronged approach involving vigilant updates, bug bounties, and artificial intelligence. Like any software, antivirus is susceptible to bugs. Earlier this summer, Google's Project Zero discovered serious flaws in enterprise and consumer products from Symantec that allowed malicious actors to take control of a computer.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.62)