
Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems

by Sébastien Bubeck and Nicolò Cesa-Bianchi, Now Publishers Inc

  • Price: ₹ 20,110.00 (7.00% off)

    Seller Price: ₹ 18,702.00

Estimated Delivery Time: 4-5 Business Days

Sold By: Meripustak

Free Shipping (for orders above ₹ 499) *T&C apply.

In Stock

We deliver across all postal codes in India

Orders Outside India


Estimated Delivery Time: 7-10 Business Days


  • We Deliver Across 100+ Countries

  • MeriPustak’s Books are 100% New & Original
  • General Information
    Author(s): Sébastien Bubeck, Nicolò Cesa-Bianchi
    Publisher: Now Publishers Inc
    ISBN: 9781601986269
    Pages: 138
    Binding: Paperback
    Language: English
    Publish Year: December 2012

    Description


    A multi-armed bandit problem - or, simply, a bandit problem - is a sequential allocation problem defined by a set of actions. At each time step, a unit resource is allocated to an action and some observable payoff is obtained. The goal is to maximize the total payoff obtained in a sequence of allocations. The name bandit refers to the colloquial term for a slot machine (a "one-armed bandit" in American slang). In a casino, a sequential allocation problem is obtained when the player is facing many slot machines at once (a "multi-armed bandit") and must repeatedly choose where to insert the next coin.

    Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off: the balance between staying with the option that gave the highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the 1930s, exploration-exploitation trade-offs arise in several modern applications, such as ad placement, website optimization, and packet routing. Mathematically, a multi-armed bandit is defined by the payoff process associated with each option.

    In this book, the focus is on two extreme cases in which the analysis of regret is particularly simple and elegant: independent and identically distributed payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, it also analyzes some of the most important variants and extensions, such as the contextual bandit model. This monograph is an ideal reference for students and researchers with an interest in bandit problems.

    Table of Contents
    1: Introduction
    2: Stochastic bandits: fundamental results
    3: Adversarial bandits: fundamental results
    4: Contextual Bandits
    5: Linear bandits
    6: Nonlinear bandits
    7: Variants
    Acknowledgements. References
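
    To make the exploration-exploitation trade-off described above concrete, here is a minimal, illustrative sketch (not taken from the book's text) of the classic UCB1 strategy for the stochastic, i.i.d.-payoff setting treated in Chapter 2. The Bernoulli payoff model, the arm means, and the horizon below are assumptions chosen purely for the demonstration.

import math
import random

def ucb1(means, horizon, seed=0):
    """Simulate the classic UCB1 strategy on a Bernoulli bandit.

    `means` holds each arm's true payoff probability (hidden from the
    learner); `horizon` is the number of pulls. Returns the pseudo-regret:
    the expected payoff of always pulling the best arm minus the expected
    payoff of the arms actually pulled.
    """
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms    # pulls per arm
    totals = [0.0] * n_arms  # observed payoff per arm
    collected = 0.0          # expected payoff of the pulls made so far

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: pull every arm once
        else:
            # Empirical mean (exploitation) plus a confidence-width
            # bonus (exploration) that shrinks as an arm is sampled.
            arm = max(
                range(n_arms),
                key=lambda i: totals[i] / counts[i]
                + math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        payoff = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += payoff
        collected += means[arm]

    return horizon * max(means) - collected

# Regret should grow only logarithmically in the horizon, matching
# the kind of bound the monograph proves for UCB-type strategies.
print(ucb1([0.3, 0.5, 0.7], horizon=10000))

    Pulling the arm with the largest "empirical mean plus confidence bonus" is the optimism-in-the-face-of-uncertainty principle; for i.i.d. payoffs, the monograph shows that strategies of this type keep the regret logarithmic in the number of pulls.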


