『Simplified-Chinese Book』 统计策略搜索强化学习方法及应用 (Statistical Policy Search Reinforcement Learning: Methods and Applications)

Store catalogue code: 3675062
Category: Simplified-Chinese Books → Mainland China → Computers/Networking → Artificial Intelligence
Author: 赵婷婷
ISBN: 9787121419591
Publisher: 电子工业出版社 (Publishing House of Electronics Industry)
Publication date: 2021-09-01
Pages/word count: /
Format: 16开 (16mo); Binding: paperback
Price: HK$96.4


Synopsis:
The victory of the agent AlphaGo over human Go professionals reshaped public understanding of artificial intelligence and drew wide academic attention to its core technique, reinforcement learning. Against this background, this book presents the author's years of research on reinforcement learning theory and applications together with the latest developments in the field at home and abroad, making it one of the few specialized monographs on reinforcement learning. It focuses on reinforcement learning methods based on direct policy search, drawing on many techniques from statistical learning to analyze, improve, and apply them. The book describes policy search reinforcement learning algorithms from a fresh, modern perspective. Starting from different reinforcement learning scenarios, it discusses the many difficulties reinforcement learning faces in practical applications. For each scenario it presents a concrete policy search algorithm, analyzes the statistical properties of the algorithm's estimators and learned parameters, and demonstrates and quantitatively compares the algorithms on application examples. In particular, combining frontier techniques in reinforcement learning, the book applies policy search algorithms to robot control and digital-art rendering, giving the material a refreshingly novel flavor. Finally, drawing on the author's long research experience, it briefly reviews and summarizes trends in the development of reinforcement learning. The material is classic and comprehensive, the concepts are clear, and the derivations are rigorous, with the aim of forming a complete body of knowledge integrating fundamental theory, algorithms, and applications.
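The synopsis centers on direct policy search. As a purely illustrative sketch of the classical REINFORCE policy-gradient idea this family of methods builds on — the two-armed bandit environment, learning rate, and iteration count below are hypothetical and not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed bandit: arm 0 pays off with probability 0.2,
# arm 1 with probability 0.8. The policy should learn to prefer arm 1.
def pull(arm):
    return float(rng.random() < (0.2, 0.8)[arm])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(2)   # policy parameters (arm preferences)
alpha = 0.1           # learning rate (assumed, for illustration)

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)
    r = pull(a)
    # REINFORCE update: for a softmax policy,
    # grad_theta log pi(a) = one_hot(a) - p
    grad_log_pi = -p
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi

print(softmax(theta))   # probability mass concentrates on arm 1
```

The update nudges parameters in the direction of the log-likelihood gradient of the sampled action, scaled by the observed return — the basic estimator whose variance the later chapters analyze and reduce.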
About the Author:
赵婷婷 (Zhao Tingting) is an associate professor in the College of Artificial Intelligence at Tianjin University of Science and Technology, working mainly on artificial intelligence and machine learning. She is a member of the China Computer Federation (CCF), of YOCSEF, and of the Chinese Association for Artificial Intelligence, and serves on the CAAI Pattern Recognition Technical Committee. In 2017 she was selected for the second tier of Tianjin's "131" Innovative Talent Training Project.
Table of Contents
Chapter 1  Overview of Reinforcement Learning  1
1.1  Reinforcement Learning within Machine Learning  1
1.2  Reinforcement Learning in Intelligent Control  4
1.3  Branches of Reinforcement Learning  8
1.4  Contributions of This Book  11
1.5  Structure of This Book  12
References  14
Chapter 2  Related Work and Background  19
2.1  Markov Decision Processes  19
2.2  Value-Function-Based Policy Learning Algorithms  21
2.2.1  Value Functions  21
2.2.2  Policy Iteration and Value Iteration  23
2.2.3  Q-learning  25
2.2.4  Least-Squares Policy Iteration  27
2.2.5  Value-Function-Based Deep Reinforcement Learning  29
2.3  Policy Search Algorithms  30
2.3.1  Modeling Policy Search  31
2.3.2  Classical Policy Gradients (the REINFORCE Algorithm)  32
2.3.3  Natural Policy Gradient  33
2.3.4  Expectation-Maximization-Based Policy Search  35
2.3.5  Policy-Based Deep Reinforcement Learning  37
2.4  Chapter Summary  38
References  39
Chapter 3  Analysis and Improvement of Policy Gradient Estimation  42
3.1  Research Background  42
3.2  Policy Gradients with Parameter-Based Exploration (PGPE)  44
3.3  Variance Analysis of Gradient Estimates  46
3.4  Baseline-Based Improvement and Analysis  48
3.4.1  The Basic Idea of Baselines  48
3.4.2  Baselines for PGPE  49
3.5  Experiments  51
3.5.1  Illustrative Example  51
3.5.2  Inverted-Pendulum Balancing  57
3.6  Summary and Discussion  58
References  60
Chapter 4  Importance-Sampling-Based Parameter-Exploring Policy Gradients  63
4.1  Research Background  63
4.2  PGPE in the Off-Policy Setting  64
4.2.1  Importance-Weighted PGPE (IW-PGPE)  65
4.2.2  Variance Reduction in IW-PGPE via Baseline Subtraction  66
4.3  Experimental Results  68
4.3.1  Illustrative Example  69
4.3.2  Mountain-Car Task  78
4.3.3  Simulated Robot Control Task  81
4.4  Summary and Discussion  88
References
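Chapter 3 revolves around PGPE, which moves exploration into parameter space: whole policy parameters are drawn from a Gaussian hyper-policy, and the hyper-policy mean is updated with a baseline-subtracted likelihood-ratio gradient estimate. A minimal sketch under assumed toy settings — the one-dimensional quadratic task, batch size, and learning rate below are all hypothetical, not from the book:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D task: a deterministic controller parameterized by theta
# earns episodic return R(theta) = -(theta - 3)^2, maximized at theta = 3.
def episode_return(theta):
    return -(theta - 3.0) ** 2

mu, sigma = 0.0, 1.0   # Gaussian hyper-policy over the policy parameter
alpha = 0.05           # learning rate (assumed)

for _ in range(3000):
    thetas = mu + sigma * rng.standard_normal(8)   # sample a batch of parameters
    returns = episode_return(thetas)
    b = returns.mean()                             # batch-mean baseline
    # PGPE ascent on the hyper-policy mean, using the likelihood-ratio
    # identity grad_mu log p(theta) = (theta - mu) / sigma^2:
    mu += alpha * np.mean((returns - b) * (thetas - mu) / sigma**2)

print(mu)   # mu drifts toward the task optimum near 3
```

Subtracting the baseline leaves the gradient direction unchanged in expectation while reducing the estimator's variance — the effect Sections 3.3 and 3.4 quantify.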

 

 

megBook.com.hk
Copyright © 2013 - 2024 (香港)大書城有限公司  All Rights Reserved.