
Learning convex bounds for linear quadratic control policy synthesis

Jack Umenberger, Thomas B. Schön

Nov-20-2025, 21:18:16 GMT · Neural Information Processing Systems

In addition, there is also a performance objective to optimize, i.e. a reward to be maximized, or …

  artificial intelligence, machine learning, reinforcement learning, (19 more...)
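The excerpt above refers to the linear quadratic (LQ) control setting, in which a quadratic performance objective is optimized subject to linear dynamics. As a minimal sketch of that setting only (not the paper's method, and with hypothetical scalar parameters `a`, `b`, `q`, `r` chosen for illustration), the classical LQR feedback gain can be obtained by iterating the discrete-time Riccati recursion to a fixed point:

```python
# Minimal illustration of the linear quadratic regulator (LQR) setting
# referenced in the excerpt. The scalar system x_{t+1} = a*x_t + b*u_t
# and cost sum_t (q*x_t^2 + r*u_t^2) are made-up examples, not taken
# from the paper.

def lqr_gain(a, b, q, r, iters=1000, tol=1e-12):
    """Iterate the discrete-time Riccati recursion to a fixed point
    and return the optimal state-feedback gain k (so u = -k * x)."""
    p = q  # initialize the quadratic cost-to-go parameter
    for _ in range(iters):
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            p = p_next
            break
        p = p_next
    return a * b * p / (r + b * b * p)

# Hypothetical unstable open-loop system (|a| > 1); the LQR feedback
# u = -k*x places the closed-loop pole a - b*k inside the unit circle.
k = lqr_gain(a=1.1, b=1.0, q=1.0, r=1.0)
print(abs(1.1 - 1.0 * k) < 1.0)  # closed loop is stable
```

The paper itself concerns learning convex *bounds* on such objectives when the dynamics are uncertain; the sketch above only shows the nominal LQ problem the objective comes from.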


  • Country:
    • Asia > Middle East
      • Jordan (0.04)
    • Europe > Sweden
      • Uppsala County > Uppsala (0.04)
    • North America
      • Canada > Quebec
        • Montreal (0.04)
      • United States
        • Massachusetts > Middlesex County
          • Belmont (0.04)
        • New Jersey (0.04)
  • Technology:
    • Information Technology > Artificial Intelligence
      • Machine Learning
        • Learning Graphical Models > Directed Networks
          • Bayesian Learning (0.68)
        • Reinforcement Learning (0.94)
      • Representation & Reasoning
        • Optimization (0.93)
        • Uncertainty > Bayesian Inference (0.93)
      • Robots (1.00)
