Windows 11 can still run the PC games you grew up with. Here's how
PCWorld demonstrates how Windows 11 users can run classic PC games from the 80s and 90s using DOSBox, a free emulator that simulates MS-DOS environments. DOSBox supports vintage titles like Shadowlands, The Dig, and Maniac Mansion by emulating essential hardware components including x86 processors and sound cards. The setup involves creating dedicated folders, using mount commands to access drives, and installing games from original floppy disks, CDs, or downloaded disk images.

Who doesn't remember PC games such as Maniac Mansion, the King's Quest series, and the dubious adventures of Leisure Suit Larry, or software such as Microsoft Works and Lotus SmartSuite? These titles originally came from the 80s and 90s, ran under MS-DOS (whose ancestor, 86-DOS, Microsoft recently open-sourced) or Windows 3.1, and were delivered on floppy disks or CD-ROMs. In our guide, we want to breathe new life into these treasures from the past and get them running on a current PC with Windows 11.
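A typical session in the DOSBox console looks like the sketch below; `mount` and `imgmount` are DOSBox built-in commands, while the folder and file names are placeholders to adapt to your own setup:

'''
REM Map a Windows folder as DOS drive C: (create c:\dosgames first)
mount c c:\dosgames
REM Mount a downloaded CD image as drive D:
imgmount d c:\dosgames\maniac.iso -t iso
REM Switch to the new drive and launch the game
c:
cd maniac
maniac.exe
'''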
- Leisure & Entertainment > Games > Computer Games (1.00)
- Information Technology (1.00)
- Information Technology > Hardware (1.00)
- Information Technology > Artificial Intelligence (1.00)
NetworkGym: Reinforcement Learning Environments
We make use of four internal 12 GB NVIDIA TITAN Xp GPUs to perform our experiments. At the initialization of each environment, four UEs are randomly stationed 1.5 meters above the ground. The LTE base station lies at (x, z) = (40 m, 3 m). We use random seed values from 0 to 63, inclusive, for this parameter. We train PTD3 for 10,000 steps, instead of the 1,000,000 steps we use for TD3+BC.
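Read as an experiment loop, the setup above amounts to a sweep over 64 seeded runs; the sketch below illustrates it (the placement area and all function names are our assumptions, not NetworkGym's actual API):

'''python
import random

NUM_UES = 4
UE_HEIGHT_M = 1.5          # UEs stationed 1.5 m above the ground
BS_POSITION = (40.0, 3.0)  # LTE base station at (x, z) = (40 m, 3 m)

def place_ues(rng, area_m=80.0):
    # Draw a random ground-plane (x, z) position for each UE;
    # the area size is an assumed value for illustration.
    return [(rng.uniform(0.0, area_m), rng.uniform(0.0, area_m), UE_HEIGHT_M)
            for _ in range(NUM_UES)]

for seed in range(64):  # seeds 0..63, inclusive
    rng = random.Random(seed)
    ue_positions = place_ues(rng)
    # ... build the environment, then train PTD3 for 10,000 steps
    # (versus 1,000,000 steps for the TD3+BC baseline) ...
'''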
- Information Technology (1.00)
- Health & Medicine (1.00)
- Law (0.93)
- (2 more...)
How Do LLMs Fail In Agentic Scenarios? A Qualitative Analysis of Success and Failure Scenarios of Various LLMs in Agentic Simulations
We investigate how large language models (LLMs) fail when operating as autonomous agents with tool-use capabilities. Using the Kamiwaza Agentic Merit Index (KAMI) v0.1 benchmark, we analyze 900 execution traces from three representative models - Granite 4 Small, Llama 4 Maverick, and DeepSeek V3.1 - across filesystem, text extraction, CSV analysis, and SQL scenarios. Rather than focusing on aggregate scores, we perform fine-grained, per-trial behavioral analysis to surface the strategies that enable successful multi-step tool execution and the recurrent failure modes that undermine reliability. Our findings show that model scale alone does not predict agentic robustness: Llama 4 Maverick (400B) performs only marginally better than Granite 4 Small (32B) in some uncertainty-driven tasks, while DeepSeek V3.1's superior reliability derives primarily from post-training reinforcement learning rather than architecture or size. Across models, we identify four recurring failure archetypes: premature action without grounding, over-helpfulness that substitutes missing entities, vulnerability to distractor-induced context pollution, and fragile execution under load. These patterns highlight the need for agentic evaluation methods that emphasize interactive grounding, recovery behavior, and environment-aware adaptation, suggesting that reliable enterprise deployment requires not just stronger models but deliberate training and design choices that reinforce verification, constraint discovery, and adherence to source-of-truth data.
Kimi-Dev: Agentless Training as Skill Prior for SWE-Agents
Yang, Zonghan, Wang, Shengjie, Fu, Kelin, He, Wenyang, Xiong, Weimin, Liu, Yibo, Miao, Yibo, Gao, Bofei, Wang, Yejie, Ma, Yingwei, Li, Yanhao, Liu, Yue, Hu, Zhenxing, Zhang, Kaitai, Wang, Shuyi, Chen, Huarong, Sung, Flood, Liu, Yang, Gao, Yang, Yang, Zhilin, Liu, Tianyu
A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
'''python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
'''
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code! Wrap the *SEARCH/REPLACE* edit in blocks '''python...'''. The summary of the key differences between the trajectories should be in the thinking part.
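To make the format concrete, the sketch below (ours, not from the paper) applies a single SEARCH/REPLACE block to a file's contents:

'''python
def apply_search_replace(source: str, search: str, replace: str) -> str:
    # The SEARCH chunk must appear verbatim in the file, indentation included.
    if search not in source:
        raise ValueError("SEARCH block not found verbatim in source")
    # Apply only the first occurrence, mirroring a single edit block.
    return source.replace(search, replace, 1)

original = "from flask import Flask\n"
patched = apply_search_replace(
    original,
    "from flask import Flask\n",
    "import math\nfrom flask import Flask\n",
)
print(patched)
'''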
ClimateAgent: Multi-Agent Orchestration for Complex Climate Data Science Workflows
Kim, Hyeonjae, Li, Chenyue, Deng, Wen, Jin, Mengxi, Huang, Wen, Lu, Mengqian, Yuan, Binhang
Climate science demands automated workflows to transform comprehensive questions into data-driven statements across massive, heterogeneous datasets. However, generic LLM agents and static scripting pipelines lack climate-specific context and flexibility, and thus perform poorly in practice. We present ClimateAgent, an autonomous multi-agent framework that orchestrates end-to-end climate data analytic workflows. ClimateAgent decomposes user questions into executable sub-tasks coordinated by an Orchestrate-Agent and a Plan-Agent; acquires data via specialized Data-Agents that dynamically introspect APIs to synthesize robust download scripts; and completes analysis and reporting with a Coding-Agent that generates Python code, visualizations, and a final report with a built-in self-correction loop. To enable systematic evaluation, we introduce Climate-Agent-Bench-85, a benchmark of 85 real-world tasks spanning atmospheric rivers, drought, extreme precipitation, heat waves, sea surface temperature, and tropical cyclones. On Climate-Agent-Bench-85, ClimateAgent achieves 100% task completion and a report quality score of 8.32, outperforming GitHub-Copilot (6.27) and a GPT-5 baseline (3.26). These results demonstrate that our multi-agent orchestration with dynamic API awareness and self-correcting execution substantially advances reliable, end-to-end automation for climate science analytic tasks.
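The built-in self-correction loop mentioned above follows a common generate-execute-retry pattern; here is a minimal illustrative sketch (hypothetical, not ClimateAgent's actual code):

'''python
import traceback

def generate_code(task, feedback=""):
    # Stand-in for the Coding-Agent's LLM call; a real implementation
    # would condition on the task and on any error feedback.
    return "result = 2 + 2\nprint(result)"

def run_with_self_correction(task, max_retries=3):
    feedback = ""
    for _ in range(max_retries):
        code = generate_code(task, feedback)
        try:
            exec(code, {})  # execute the generated analysis code
            return          # success: stop retrying
        except Exception:
            # Feed the traceback back to the model for the next attempt.
            feedback = traceback.format_exc()
    raise RuntimeError(f"task failed after {max_retries} attempts")

run_with_self_correction("compute a trivial statistic")
'''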
- Asia > China (0.46)
- North America > United States (0.28)
- Asia > Middle East > UAE (0.28)
- Workflow (1.00)
- Research Report > New Finding (0.48)
Standardising the NLP Workflow: A Framework for Reproducible Linguistic Analysis
Pauli, Yves, Marsman, Jan-Bernard, Rabe, Finn, Edkins, Victoria, Hüppi, Roya, Ciampelli, Silvia, Misra, Akhil Ratan, Lang, Nils, Hinzen, Wolfram, Sommer, Iris, Homan, Philipp
The introduction of large language models and other influential developments in AI-based language processing have led to an evolution in the methods available to quantitatively analyse language data. With the resultant growth of attention on language processing, significant challenges have emerged, including the lack of standardisation in organising and sharing linguistic data and the absence of standardised and reproducible processing methodologies. Striving for future standardisation, we first propose the Language Processing Data Structure (LPDS), a data structure inspired by the Brain Imaging Data Structure (BIDS), a widely adopted standard for handling neuroscience data. It provides a folder structure and file naming conventions for linguistic research. Second, we introduce pelican nlp, a modular and extensible Python package designed to enable streamlined language processing, from initial data cleaning and task-specific preprocessing to the extraction of sophisticated linguistic and acoustic features, such as semantic embeddings and prosodic metrics. The entire processing workflow can be specified within a single, shareable configuration file, which pelican nlp then executes on LPDS-formatted data. Depending on the specifications, the reproducible output can consist of preprocessed language data or standardised extraction of both linguistic and acoustic features and corresponding result aggregations. LPDS and pelican nlp collectively offer an end-to-end processing pipeline for linguistic data, designed to ensure methodological transparency and enhance reproducibility.
- Europe (1.00)
- North America > United States > Minnesota (0.28)
- Workflow (1.00)
- Research Report (1.00)
FunReason-MT Technical Report: Advanced Data Synthesis Solution for Real-world Multi-Turn Tool-use
Xu, Zengzhuang, Hao, Bingguang, Wang, Zechuan, Wen, Yuntao, Xu, Xinyi, Liu, Yang, Chen, Long, Wang, Dong, Wang, Maolin, Zhao, Tong, Chen, Yicheng, Peng, Cunyin, Gu, Jinjie, Gan, Leilei, Zhao, Xiangyu, Zhuang, Chenyi, Gu, Shi
Function calling (FC) empowers large language models (LLMs) and autonomous agents to interface with external tools, a critical capability for solving complex, real-world problems. As this ability becomes increasingly central to advanced AI systems, the need for high-quality, multi-turn training data to develop and refine it cannot be overstated. Existing data synthesis methods, such as random environment sampling or multi-agent role-playing, are not powerful enough to generate high-quality data in real-world environments. The practical challenges are threefold: targeted data synthesis, hard query construction, and multi-turn logical dependency. To address these structural deficiencies, we present FunReason-MT, a novel data synthesis framework for real-world multi-turn tool use. FunReason-MT resolves the complexity barrier in multi-turn FC data by employing 1) Environment-API Graph Interactions to gather varied high-quality trajectories with targeted tools, 2) Advanced Tool-Query Synthesis to simplify hard query construction, and 3) Guided Iterative Chain for sophisticated CoT generation. Evaluations on the Berkeley Function-Calling Leaderboard (BFCLv3) demonstrate the power of our framework: a 4B model built upon FunReason-MT-generated data achieves state-of-the-art performance among comparable-sized models. Further performance improvements on BFCLv4 confirm that FunReason-MT provides a reliable and robust source for agentic learning.
Simpliflow: A Lightweight Open-Source Framework for Rapid Creation and Deployment of Generative Agentic AI Workflows
Generative Agentic AI systems are emerging as a powerful paradigm for automating complex, multi-step tasks. However, many existing frameworks for building these systems introduce significant complexity, a steep learning curve, and substantial boilerplate code, hindering rapid prototyping and deployment. This paper introduces simpliflow, a lightweight, open-source Python framework designed to address these challenges. simpliflow enables the rapid development and orchestration of linear, deterministic agentic workflows through a declarative, JSON-based configuration. Its modular architecture decouples agent management, workflow execution, and post-processing, promoting ease of use and extensibility. By integrating with LiteLLM, it supports over 100 Large Language Models (LLMs) out-of-the-box. We present the architecture, operational flow, and core features of simpliflow, demonstrating its utility through diverse use cases ranging from software development simulation to real-time system interaction. A comparative analysis with prominent frameworks like LangChain and AutoGen highlights simpliflow's unique position as a tool optimized for simplicity, control, and speed in deterministic workflow environments.
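As an illustration of the declarative style described above, the sketch below runs a linear workflow from a JSON spec (a hypothetical schema for illustration, not simpliflow's actual format or API):

'''python
import json

WORKFLOW = """
{
  "steps": [
    {"agent": "coder",    "prompt": "Write a function that reverses a string."},
    {"agent": "reviewer", "prompt": "Review the code above for bugs."}
  ]
}
"""

def call_llm(prompt):
    # Stand-in for a model call (e.g., one routed through LiteLLM).
    return f"[model output for: {prompt[:40]}...]"

context = ""
for step in json.loads(WORKFLOW)["steps"]:
    # Each step sees the accumulated context, keeping execution
    # linear and deterministic.
    reply = call_llm(context + step["prompt"])
    context += f"\n{step['agent']}: {reply}\n"
    print(f"{step['agent']}: {reply}")
'''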
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.93)
SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents
Badertdinov, Ibragim, Golubev, Alexander, Nekrashevich, Maksim, Shevtsov, Anton, Karasik, Simon, Andriushchenko, Andrei, Trofimova, Maria, Litvintseva, Daria, Yangel, Boris
LLM-based agents have shown promising capabilities in a growing range of software engineering (SWE) tasks. However, advancing this field faces two critical challenges. First, high-quality training data is scarce, especially data that reflects real-world SWE scenarios, where agents must interact with development environments, execute code and adapt behavior based on the outcomes of their actions. Existing datasets are either limited to one-shot code generation or comprise small, manually curated collections of interactive tasks, lacking both scale and diversity. Second, the lack of fresh interactive SWE tasks affects evaluation of rapidly improving models, as static benchmarks quickly become outdated due to contamination issues. To address these limitations, we introduce a novel, automated, and scalable pipeline to continuously extract real-world interactive SWE tasks from diverse GitHub repositories. Using this pipeline, we construct SWE-rebench, a public dataset comprising over 21,000 interactive Python-based SWE tasks, suitable for reinforcement learning of SWE agents at scale. Additionally, we use a continuous supply of fresh tasks collected using the SWE-rebench methodology to build a contamination-free benchmark for agentic software engineering. We compare results of various LLMs on this benchmark to results on SWE-bench Verified and show that the performance of some language models might be inflated due to contamination issues.