Matrix-Free Two-to-Infinity and One-to-Two Norms Estimation
Askar Tsyganov, Evgeny Frolov, Sergey Samsonov, Maxim Rakhuba
In this paper, we propose new randomized algorithms for estimating the two-to-infinity and one-to-two norms in a matrix-free setting, using only matrix-vector multiplications. Our methods are based on appropriate modifications of Hutchinson's diagonal estimator and its Hutch++ version. We provide oracle complexity bounds for both modifications. We further illustrate the practical utility of our algorithms for Jacobian-based regularization in deep neural network training on image classification tasks. We also demonstrate that our methodology can be applied to mitigate the effect of adversarial attacks in the domain of recommender systems.
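The abstract's core idea can be sketched in a few lines. For a random vector v with i.i.d. Rademacher entries, E[(Av)_i^2] = ||A_{i,:}||_2^2, so averaging squared matvec outputs estimates the squared row norms, a row-wise analogue of Hutchinson's diagonal estimator applied to AA^T. The paper's actual algorithms (and the Hutch++-style variant) are not reproduced here; the function below is a hedged, minimal sketch of the plain Hutchinson-style variant, with all names chosen for illustration:

```python
import numpy as np

def two_to_inf_estimate(matvec, n, num_queries=100, rng=None):
    """Estimate ||A||_{2->inf} = max_i ||A_{i,:}||_2 using only
    matrix-vector products v -> A v (matrix-free).

    Averages (A v)_i^2 over random Rademacher vectors v, which is an
    unbiased estimate of the squared i-th row norm of A.
    """
    rng = np.random.default_rng(rng)
    acc = None
    for _ in range(num_queries):
        v = rng.choice([-1.0, 1.0], size=n)
        y = matvec(v)
        acc = y**2 if acc is None else acc + y**2
    row_norms_sq = acc / num_queries  # estimated diag(A A^T)
    return np.sqrt(row_norms_sq.max())
```

Since ||A||_{1->2} is the maximum column norm, i.e. ||A^T||_{2->inf}, the same routine estimates the one-to-two norm when given a matvec with A^T instead.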
Developer Designs AI that Creates Infinite Bored Ape NFTs - insideBIGDATA
Although the full concept of the NFT market seems to escape us, the exponential growth of this market – peaking at $11.6 billion in 2021 – has brought widespread attention to the subject. This, in turn, has opened the floodgates for new innovation and technology to power the creation of these virtual works of art. One such innovation is AI-driven and machine-learning technology that uses data attributes from existing NFTs listed on publicly accessible websites to randomly generate thousands of new, unique NFTs a second that could pass as originals. Experts project that these AI-driven NFTs will emerge this year. This development could be a positive step toward inclusivity, but it could also depress the valuation of the current NFT population.
Lessons from the GPT-4Chan Controversy
On June 3rd, 2022, YouTuber and AI researcher Yannic Kilcher released a video about how he developed an AI model named 'GPT-4chan' and then deployed bots to pose as humans on the message board 4chan. GPT-4chan is a large language model, and so is essentially trained to 'autocomplete' text -- given some text as input, it predicts what text is likely to follow -- by being optimized to mimic typical patterns in a large corpus of text. In this case, the model was made by fine-tuning GPT-J with a previously published dataset to mimic the users of 4chan's /pol/ anonymous message board; many of these users frequently express racist, white supremacist, antisemitic, anti-Muslim, misogynist, and anti-LGBT views. The model thus learned to output all sorts of hate speech, leading Yannic to describe it in his video as "The most horrible model on the internet". The video also contains the following: a brief set of disclaimers, some discussion of bots on the internet, a high-level explanation of how the model was developed, some other thoughts on how good the model is, and a description of how a number of bots powered by the model were deployed to post on the /pol/ message board anonymously. The bots collectively wrote over 30,000 posts over the span of a few days, with 15,000 posted over a span of 24 hours. Many users were at first confused, but the frequency of posting all over the message board soon led them to conclude this was a bot.
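The 'autocomplete' objective described above can be illustrated at toy scale with a bigram model: count which word follows which in a corpus, then predict the most frequent continuation. This is only a conceptual miniature (GPT-J is a large transformer, not a bigram counter), but the training signal is the same in spirit -- the model is fit purely to mimic patterns in its training text:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(model, word):
    """'Autocomplete': return the most frequent continuation seen in training."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]
```

Whatever biases the corpus carries, the model reproduces -- which is exactly why training on /pol/ yields a model that outputs hate speech.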
Is GPT-4chan the worst AI ever?
The bot was trained on three years' worth of posts from 4chan, the repulsive cousin of Reddit. Kilcher fed the bot threads from the Politically Incorrect /pol/ board, a 4chan message board notorious for racist, xenophobic, and hateful content. The bot sparked a heated debate on social media before it went offline. "This is the worst AI ever! I trained a language model on 4chan's /pol/ board and the result is…. Watch here (warning: may be offensive): https://t.co/lihsaYAm7l pic.twitter.com/xs7rgtucQb"
5 Min AI Newsletter #3
No, Google's AI is not sentient. Tech companies constantly hype the capabilities of their ever-improving artificial intelligence, but Google was quick to shut down claims that one of its programs had advanced so much that it had become sentient. According to an eye-opening tale in the Washington Post on Saturday, one Google engineer said that after hundreds of interactions with a cutting-edge, unreleased AI system called LaMDA, he believed the program had achieved a level of consciousness.

SymForce is a library for symbolic computation and code generation. It lets you express a problem in Python, experiment with it symbolically, generate optimized code, and efficiently solve optimization problems that depend on the original problem definition.
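The SymForce workflow the blurb describes -- write an expression symbolically, manipulate it (e.g. differentiate), then emit plain executable code -- can be sketched in miniature without the library itself. SymForce's actual API is not quoted in the snippet, so everything below is a hand-rolled, illustrative stand-in using nested tuples as expression trees:

```python
# Expressions are nested tuples: ('add'|'mul', left, right),
# ('var', name), or ('const', value).

def diff(e, x):
    """Symbolic derivative of expression e with respect to variable x."""
    kind = e[0]
    if kind == 'const':
        return ('const', 0)
    if kind == 'var':
        return ('const', 1 if e[1] == x else 0)
    _, a, b = e
    if kind == 'add':
        return ('add', diff(a, x), diff(b, x))
    # product rule: (a*b)' = a'*b + a*b'
    return ('add', ('mul', diff(a, x), b), ('mul', a, diff(b, x)))

def emit(e):
    """Generate Python source for an expression (the 'codegen' step)."""
    kind = e[0]
    if kind == 'const':
        return str(e[1])
    if kind == 'var':
        return e[1]
    op = '+' if kind == 'add' else '*'
    return f"({emit(e[1])} {op} {emit(e[2])})"

def compile_fn(e, x):
    """Turn an expression into an executable one-argument function."""
    return eval(f"lambda {x}: {emit(e)}")
```

For f(x) = x*x + 3*x, `compile_fn(diff(expr, 'x'), 'x')` yields an ordinary Python function computing 2x + 3 -- the same symbolic-to-generated-code pipeline, shrunk to a few lines.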
'Hate Speech Machine' Created By AI YouTuber & Researcher On 4Chan
We've all heard the adage, "Every coin has two sides." The same is true for AI; just as it has benefits, it may also have drawbacks if trained improperly. Microsoft discovered the dangers of developing racist AI, but what happens if the intelligence is actively directed at a poisonous forum? Yannic Kilcher, an AI researcher and YouTuber, trained an AI on 3.3 million postings from 4chan's infamously toxic Politically Incorrect /pol/ board. Kilcher released the AI on the board after implementing the model in 10 bots, which resulted in a wave of hatred.
La veille de la cybersécurité
AI researcher and YouTuber Yannic Kilcher trained an AI using 3.3 million threads from 4chan's infamously toxic Politically Incorrect /pol/ board. He then unleashed the bot back onto 4chan with predictable results -- the AI was just as vile as the posts it was trained on, spouting racial slurs and engaging with antisemitic threads. After Kilcher posted his video and a copy of the program to Hugging Face, a kind of GitHub for AI, ethicists and researchers in the AI field expressed concern. The bot, which Kilcher named GPT-4chan (a nod to GPT-3, the language model developed by OpenAI that uses deep learning to produce text) and called "the most horrible model on the internet", was shockingly effective at replicating the tone and feel of 4chan posts. "The model was good in a terrible sense," Kilcher said in a video about the project.
Oh no... Someone trained an AI on 4chan
If you're concerned about the biases and bigotry of AI models, you're gonna love the latest addition to the ranks: a text generator trained on 4chan's /pol/ board. Short for "Politically Incorrect," /pol/ is a bastion of hate speech, conspiracy theories, and far-right extremism. These attributes attracted Yannic Kilcher, an AI whizz and YouTuber, to use /pol/ as a testing ground for bots. Kilcher first fine-tuned the GPT-J language model on over 134.5 million posts made on /pol/ across three and a half years. He then incorporated the board's thread structure into the system.
AI trained on 4chan's most hateful board is just as toxic as you'd expect
Microsoft inadvertently learned the risks of creating racist AI, but what happens if you deliberately point the intelligence at a toxic forum? As Motherboard and The Verge note, YouTuber Yannic Kilcher trained an AI language model using three years of content from 4chan's Politically Incorrect (/pol/) board, a place infamous for its racism and other forms of bigotry. After implementing the model in ten bots, Kilcher set the AI loose on the board -- and it unsurprisingly created a wave of hate. In the space of 24 hours, the bots wrote 15,000 posts that frequently included or interacted with racist content. They represented more than 10 percent of posts on /pol/ that day, Kilcher claimed. Nicknamed GPT-4chan (after OpenAI's GPT-3), the model learned to not only pick up the words used in /pol/ posts, but an overall tone that Kilcher said blended "offensiveness, nihilism, trolling and deep distrust."