
Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model

Aaron Sidford, Mengdi Wang, Xian Wu, Lin Yang, Yinyu Ye

Mar-13-2026, 14:20:54 GMT – Neural Information Processing Systems

Computing an approximately optimal policy with high probability in this case is known as PAC RL with a generative model.
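The "generative model" in the title refers to a sampling oracle: the learner may query any state–action pair and receive an independent sample of the next state. A minimal sketch of such an oracle for a toy tabular MDP is below; `make_generative_model` and the transition table `P` are illustrative assumptions, not code from the paper.

```python
import random

def make_generative_model(P):
    """Build a sampling oracle from transition probabilities
    P[state][action] = list of (next_state, probability) pairs.
    Illustrates the generative-model setting: any (state, action)
    pair may be queried, and a next state is sampled in response."""
    def sample(state, action):
        next_states, probs = zip(*P[state][action])
        return random.choices(next_states, weights=probs, k=1)[0]
    return sample

# Toy 2-state, 1-action MDP used only for illustration.
P = {
    0: {0: [(0, 0.9), (1, 0.1)]},
    1: {0: [(0, 0.5), (1, 0.5)]},
}
oracle = make_generative_model(P)
samples = [oracle(0, 0) for _ in range(1000)]
```

Sample-complexity results in this setting bound how many such oracle queries suffice to return an approximately optimal policy with high probability (the PAC guarantee).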

  algorithm, artificial intelligence, machine learning, (17 more...)


  • Country:
    • Europe > United Kingdom
      • England (0.04)
    • North America
      • Canada (0.04)
      • United States
        • California > Santa Clara County
          • Palo Alto (0.05)
        • Massachusetts > Middlesex County
          • Belmont (0.04)
          • Cambridge (0.04)
        • New Jersey > Mercer County
          • Princeton (0.04)
  • Technology:
    • Information Technology > Artificial Intelligence
      • Machine Learning > Learning Graphical Models
        • Undirected Networks > Markov Models (0.47)
      • Representation & Reasoning (0.94)


© 2026, i2k Connect Inc  ·  All Rights Reserved.