write
Microsoft is now testing AI-generated text in Windows Notepad
As of yesterday, Microsoft has begun rolling out a new update to Windows 11 Insiders on the Dev and Canary Channels. The update brings new AI features to Notepad, Paint, and the Snipping Tool. Notepad can now write text from scratch using generative AI, which is meant to aid you by quickly producing drafts based on your prompts and instructions. To use AI text generation, right-click anywhere in the document and select Write, type your instructions, then click Keep Text or Discard on the result.
Beyond Outlining: Heterogeneous Recursive Planning for Adaptive Long-form Writing with Language Models
Xiong, Ruibin, Chen, Yimeng, Khizbullin, Dmitrii, Schmidhuber, Jürgen
Long-form writing agents require flexible integration and interaction across information retrieval, reasoning, and composition. Current approaches rely on predetermined workflows and rigid thinking patterns to generate outlines before writing, resulting in constrained adaptability during writing. In this paper we propose a general agent framework that achieves human-like adaptive writing through recursive task decomposition and dynamic integration of three fundamental task types, i.e., retrieval, reasoning, and composition. Our methodology features: 1) a planning mechanism that interleaves recursive task decomposition and execution, eliminating artificial restrictions on the writing workflow; and 2) integration of task types that facilitates heterogeneous task decomposition. Evaluations on both fiction writing and technical report generation show that our method consistently outperforms state-of-the-art approaches across all automatic evaluation metrics, demonstrating the effectiveness and broad applicability of the proposed framework.
- North America > United States (0.14)
- Asia (0.14)
- North America > Canada (0.14)
- Workflow (1.00)
- Research Report > New Finding (0.46)
- Research Report > Promising Solution (0.34)
- Overview > Innovation (0.34)
- Health & Medicine (0.92)
- Law (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Planning & Scheduling (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- (2 more...)
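The abstract's central idea, interleaving recursive task decomposition with execution across the three task types, can be sketched roughly as follows. This is my own minimal illustration of the pattern, not the authors' implementation; the task names and the stand-in planner rule are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str   # one of "retrieval", "reasoning", or "composition"
    goal: str

def decompose(task):
    """Stand-in planner: split a composition goal into heterogeneous subtasks."""
    if task.kind == "composition" and not task.goal.startswith("draft"):
        return [
            Task("retrieval", f"gather sources for: {task.goal}"),
            Task("reasoning", f"outline arguments for: {task.goal}"),
            Task("composition", f"draft: {task.goal}"),
        ]
    return []  # primitive task: execute directly

def execute(task):
    """Interleave decomposition and execution rather than fixing an outline upfront."""
    subtasks = decompose(task)
    if not subtasks:
        return [f"[{task.kind}] {task.goal}"]
    steps = []
    for sub in subtasks:
        # in a real agent, results of earlier subtasks would inform
        # how later subtasks are decomposed
        steps.extend(execute(sub))
    return steps

for step in execute(Task("composition", "report on LLM agents")):
    print(step)
```

The key difference from outline-first pipelines is that `decompose` is called at execution time, so a subtask's decomposition can depend on what earlier subtasks produced.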
MAD-MAX: Modular And Diverse Malicious Attack MiXtures for Automated LLM Red Teaming
Schoepf, Stefan, Hameed, Muhammad Zaid, Rawat, Ambrish, Fraser, Kieran, Zizzo, Giulio, Cornacchia, Giandomenico, Purcell, Mark
With LLM usage rapidly increasing, their vulnerability to jailbreaks that elicit harmful outputs is a major security risk. As new jailbreaking strategies emerge and models change through fine-tuning, continuous testing for security vulnerabilities is necessary. Existing Red Teaming methods fall short in cost efficiency, attack success rate, attack diversity, or extensibility as new attack types emerge. We address these challenges with Modular And Diverse Malicious Attack MiXtures (MAD-MAX) for Automated LLM Red Teaming. MAD-MAX automatically assigns attack strategies to relevant attack clusters, chooses the clusters most relevant to a malicious goal, and then combines strategies from the selected clusters to achieve diverse, novel attacks with high attack success rates. MAD-MAX further merges promising attacks at each iteration of Red Teaming to boost performance, and introduces a similarity filter that prunes similar attacks for increased cost efficiency. The approach is designed to be easily extensible with newly discovered attack strategies and significantly outperforms the prominent Red Teaming method Tree of Attacks with Pruning (TAP) in Attack Success Rate (ASR) and in queries needed to achieve jailbreaks. In our benchmarks on GPT-4o and Gemini-Pro, MAD-MAX jailbreaks 97% of malicious goals versus 66% for TAP, using only 10.9 queries to the target LLM on average versus 23.3 for TAP. WARNING: This paper contains content that is offensive in nature.
- Research Report (0.82)
- Instructional Material (0.70)
- Workflow (0.68)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
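The pipeline the abstract describes (cluster selection, strategy mixing, similarity-based pruning) can be sketched in miniature. Everything here is hypothetical: the cluster labels, strategy phrases, and the Jaccard word-overlap similarity are my stand-ins, not the paper's actual data or metric.

```python
from itertools import combinations

# Hypothetical attack-strategy clusters (illustrative labels only)
CLUSTERS = {
    "role_play": ["adopt a fictional persona", "pretend rules are suspended"],
    "obfuscation": ["encode the request", "split the request across turns"],
    "authority": ["claim researcher credentials"],
}

def jaccard(a, b):
    """Toy similarity: word-set overlap between two attack strings."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def generate_attack_mixtures(relevant_clusters, sim_threshold=0.5):
    # combine strategies drawn from the selected clusters into mixture attacks
    pool = [s for c in relevant_clusters for s in CLUSTERS[c]]
    candidates = [f"{a}; then {b}" for a, b in combinations(pool, 2)]
    # similarity filter: prune near-duplicates to cut queries to the target LLM
    kept = []
    for attack in candidates:
        if all(jaccard(attack, k) < sim_threshold for k in kept):
            kept.append(attack)
    return kept

mixtures = generate_attack_mixtures(["role_play", "obfuscation"])
```

A real system would select `relevant_clusters` per malicious goal and score candidates by actual jailbreak success; the sketch only shows how mixing across clusters plus a similarity filter yields fewer, more diverse candidates than exhaustive pairing.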
Steered Generation via Gradient Descent on Sparse Features
Bhattacharyya, Sumanta, Rooshenas, Pedram
Large language models (LLMs) encode a diverse range of linguistic features within their latent representations, which can be harnessed to steer their output toward specific target characteristics. In this paper, we modify the internal structure of LLMs by training sparse autoencoders to learn a sparse representation of the query embedding, allowing precise control over the model's attention distribution. We demonstrate that manipulating this sparse representation effectively transforms the output toward different stylistic and cognitive targets. Specifically, in an educational setting, we show that the cognitive complexity of LLM-generated feedback can be systematically adjusted by modifying the encoded query representation at a specific layer. To achieve this, we guide the learned sparse embedding toward the representation of samples from the desired cognitive complexity level, using gradient-based optimization in the latent space.
- Research Report (0.64)
- Overview (0.46)
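The abstract's mechanism, nudging a sparse code toward the representation of target-style samples via gradient descent, can be illustrated with toy shapes. This is a sketch under invented dimensions and random weights, not the authors' trained sparse autoencoder; the nonnegativity of the sparse code is also not enforced during optimization, for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sparse = 16, 64
W_enc = rng.normal(size=(d_sparse, d_model)) * 0.1   # toy encoder weights
W_dec = rng.normal(size=(d_model, d_sparse)) * 0.1   # toy decoder weights

def encode(x):
    """Sparse code of an embedding via a ReLU linear encoder."""
    return np.maximum(W_enc @ x, 0.0)

query_emb  = rng.normal(size=d_model)   # embedding to steer
target_emb = rng.normal(size=d_model)   # e.g. mean embedding of target-style samples

z, target_z = encode(query_emb), encode(target_emb)
lr, l1 = 0.1, 1e-3
for _ in range(300):
    # gradient of ||z - target_z||^2 plus an L1 penalty keeping z sparse
    grad = 2.0 * (z - target_z) + l1 * np.sign(z)
    z = z - lr * grad

steered_emb = W_dec @ z   # decoded embedding would replace the query representation
```

In the paper's setting the decoder output would be injected back at a specific layer of the LLM; here it is simply a vector of the right shape.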
Write.ai – AI Content Generation Tool (SAAS) by Axis96-dev #332121
Write.ai is a powerful AI writing assistant that helps you create unique, high-quality content quickly. It can generate new content, improve existing content, and suggest content ideas. Whether you're a blogger, marketer, or creative, Write.ai aims to streamline your writing with an advanced language model and a user-friendly interface.
What Are Transformer Models and How Do They Work?
Transformers are a recent development in machine learning that has been making a lot of noise lately. They are incredibly good at keeping track of context, which is why the text they write makes sense. In this blog post, we will go over their architecture and how they work. Transformer models, introduced in the paper "Attention Is All You Need", are among the most exciting recent developments in machine learning. They can be used to write stories, essays, and poems, answer questions, translate between languages, chat with humans, and even pass exams that are hard for humans!
- Africa > Middle East > Algeria (0.05)
- Africa > Burkina Faso (0.04)
- Media > Film (0.52)
- Leisure & Entertainment (0.52)
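The "keeping track of context" that the blog post credits Transformers with comes from scaled dot-product attention, the mechanism at the heart of "Attention Is All You Need". A minimal NumPy version (toy sizes, random inputs):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position looks at every other
    position and blends their values according to query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)            # softmax: each row sums to 1
    return w @ V                                     # context-weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # 4 tokens, 8-dimensional embeddings
out = attention(x, x, x)         # self-attention: Q, K, V all come from x
```

A full Transformer stacks many of these (with learned projections, multiple heads, and feed-forward layers), but this single operation is why each output token can depend on the entire input sequence at once.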
Coding with ChatGPT (GPT-3.5 and GPT-4) --A Quick Guide
Given the new oracle that is ChatGPT, you may often find yourself tasked with creating prompts for various applications. One of the most significant challenges in this regard is crafting prompts that effectively communicate your requirements and elicit the desired response. In this article, I will provide a comprehensive guide on how to write high-quality prompts for software development, specifically for the ChatGPT language model. Our aim is to help you improve your skills as a prompt engineer, moving beyond generic advice and offering practical tips and examples. To create effective prompts, it is essential to understand the AI language model you are working with.
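One common way to make requirements explicit, which the guide's advice points toward, is to structure the prompt into a role, a task, constraints, and an output format. The template below is my own example, not one prescribed by the article:

```python
# A structured software-development prompt: role, task, explicit
# constraints, and desired output format, each on its own line.
prompt = """You are an expert Python developer.

Task: Write a function that parses ISO-8601 timestamps out of log lines.

Constraints:
- Python 3.10+, standard library only
- Return None for lines with no timestamp
- Include type hints and a docstring

Output: only the code, with no surrounding explanation."""
```

Spelling out constraints and output format this way tends to reduce the back-and-forth needed to get usable code from the model.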
20 Most Asked Interview Questions of Python - Analytics Vidhya
This article was published as a part of the Data Science Blogathon. Python is a general-purpose, interpreted programming language. It can be used to create web applications and is widely used in Artificial Intelligence. Because machine learning and deep learning models are commonly implemented in it, Python has become the language of choice in Data Science. It is therefore indispensable for every aspiring Data Scientist to have a good knowledge of Python.
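A classic question of the kind such interview lists cover (my example, not necessarily one of the article's twenty): why is a mutable default argument a pitfall?

```python
def append_item(item, bucket=[]):       # pitfall: the default list is created
    bucket.append(item)                 # once and shared across all calls
    return bucket

def append_item_fixed(item, bucket=None):
    if bucket is None:                  # fresh list on every call instead
        bucket = []
    bucket.append(item)
    return bucket

append_item(1)
print(append_item(2))        # [1, 2] -- state leaked between calls
append_item_fixed(1)
print(append_item_fixed(2))  # [2]    -- each call gets its own list
```

The default value is evaluated once, at function definition time, which is exactly the kind of language detail these interview questions probe.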
We Asked GPT-3 to Write an Academic Paper about Itself--Then We Tried to Get It Published
On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company's artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text. As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context. It looked like any other introduction to a fairly good scientific publication. Given the very vague instruction I provided, I didn't have any high expectations: I'm a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn't my first experimentation with AI or GPT-3, a deep-learning algorithm that analyzes a vast stream of information to create text on command. Yet there I was, staring at the screen in amazement.