5.0 out of 5 stars: "many examples are very helpful for readers like me. However"
28 August 2016 - Published on Amazon.com
Verified Purchase
I've read the textbook's treatment of DP up to chapter 6. Since the book takes a mathematical approach, I think it is very well made, apart from a few typos. One small dissatisfaction for me is the style of the book. I prefer a style that introduces the general result first, then proves why it holds and where it comes from; after that, many examples are very helpful for readers like me. This book, however, introduces examples first and then constructs the general form from those examples. I am sure everybody has a different preference; my point is just that this is not my type. Nevertheless, the content the textbook covers is wonderful.
This set of two books is just an absolute archive of knowledge. Everything you need to know about optimal control and dynamic programming, from beginner level to advanced intermediate, is here. Plus, the worked examples are great; they aren't boring examples, either. This set pairs well with Simulation-Based Optimization by Abhijit Gosavi. I guess the point is, this book should be the central framework of any graduate course in optimal control and operations research.
This is easily the best book on dynamic programming. It is certainly the most up-to-date book on the topic. The first volume covers numerous topics such as deterministic control, the HJB equation for the deterministic case, the Pontryagin principle, finite horizon MDPs, partially observable MDPs, and rollout heuristics. The second volume treats the infinite horizon case for the regular MDP: average reward, discounted reward, semi-Markov control, and even some reinforcement learning.

I love the notation. The proofs in this book are much easier than those you will find elsewhere. (This opinion is based on my study of proofs in other texts.) The treatment is very sophisticated and yet very accessible! Furthermore, what is a real bonus here, something you won't find in the other books, is a discussion of the stochastic shortest path (SSP). The SSP makes it easy to analyze the average reward problem and the finite horizon problem with a stationary transition probability structure.

I strongly recommend this book to all readers interested in understanding the basics of DP and the convergence proofs underlying the DP machinery. It is a must for your bookshelf if you are working on research in DP or related topics such as reinforcement learning or adaptive (approximate) DP.
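To make the DP machinery mentioned above concrete, here is a minimal value-iteration sketch for a stochastic shortest path problem. The states, one-stage costs, and transition probabilities are hypothetical numbers chosen for this illustration, not examples from the book.

```python
import numpy as np

# Minimal value iteration for a small stochastic shortest path (SSP) problem.
# All data below is hypothetical, purely for illustration.
# States 0 and 1 are non-terminal; state 2 is the cost-free, absorbing terminal.
# P[a][s][s'] = probability of moving from s to s' under action a.
P = np.array([
    [[0.5, 0.3, 0.2],   # action 0
     [0.1, 0.6, 0.3],
     [0.0, 0.0, 1.0]],  # terminal state is absorbing
    [[0.2, 0.4, 0.4],   # action 1
     [0.3, 0.2, 0.5],
     [0.0, 0.0, 1.0]],
])
# C[a][s] = expected one-stage cost of taking action a in state s.
C = np.array([
    [2.0, 1.0, 0.0],
    [3.0, 0.5, 0.0],
])

J = np.zeros(3)  # cost-to-go estimate; the terminal state stays at 0
for _ in range(1000):
    # Bellman update: J(s) <- min_a [ c(s, a) + sum_s' p(s' | s, a) J(s') ]
    J_new = np.min(C + P @ J, axis=0)
    J_new[2] = 0.0  # terminal state has zero cost-to-go by definition
    if np.max(np.abs(J_new - J)) < 1e-9:
        break
    J = J_new

policy = np.argmin(C + P @ J, axis=0)
print("Optimal cost-to-go:", J)
print("Optimal actions for states 0 and 1:", policy[:2])
```

The loop repeats the Bellman update until the cost-to-go estimates stop changing; because every policy in this toy problem reaches the absorbing terminal state with positive probability, value iteration converges to the optimal cost-to-go.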