
Must-Read! A Roundup of 52 Accepted Deep Reinforcement Learning Papers | AAAI 2020


Source | 深度强化学习实验室 / Deep Reinforcement Learning Lab (ID: Deep-RL)

Author | DeepRL

AAAI 2020 received more than 8,800 valid submissions, of which 7,737 entered the review process; 1,591 were ultimately accepted, for an acceptance rate of 20.6%. Among the accepted papers, 52+ concern reinforcement learning, roughly 3% of all accepted work. By institution, Google Brain, DeepMind, Tsinghua University, UCL, Tencent AI Lab, Peking University, IBM, Facebook, and others are well represented; by author, the list includes both RL pioneer Richard Sutton (paper [48]) and many rising researchers.
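
As a quick sanity check on these figures, here is a minimal Python sketch; the numbers are taken directly from the paragraph above, and note that the roughly 3% is the share of RL papers among accepted papers, not among total submissions:

# AAAI 2020 acceptance statistics quoted above
reviewed = 7737      # submissions that entered the review process
accepted = 1591      # papers ultimately accepted
rl_accepted = 52     # accepted papers on reinforcement learning

print(f"Acceptance rate: {accepted / reviewed:.1%}")                 # -> 20.6%
print(f"RL share of accepted papers: {rl_accepted / accepted:.1%}")  # -> 3.3%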

The papers span environments, theory and algorithms, applications, multi-agent systems, and other directions. The full list follows:

[1]. Google Research Football: A Novel Reinforcement Learning Environment

Karol Kurach (Google Brain)*; Anton Raichuk (Google); Piotr Stańczyk (Google Brain); Michał Zając (Google Brain); Olivier Bachem (Google Brain); Lasse Espeholt (DeepMind); Carlos Riquelme (Google Brain); Damien Vincent (Google Brain); Marcin Michalski (Google); Olivier Bousquet (Google); Sylvain Gelly (Google Brain)

[2]. Reinforcement Learning from Imperfect Demonstrations under Soft Expert Guidance

Xiaojian Ma (University of California, Los Angeles)*; Mingxuan Jing (Tsinghua University); Wenbing Huang (Tsinghua University); Chao Yang (Tsinghua University); Fuchun Sun (Tsinghua); Huaping Liu (Tsinghua University); Bin Fang (Tsinghua University)

[3]. Proximal Distilled Evolutionary Reinforcement Learning

Cristian Bodnar (University of Cambridge)*; Ben Day (University of Cambridge); Pietro Lió (University of Cambridge)

[4]. Tree-Structured Policy based Progressive Reinforcement Learning for Temporally Language Grounding in Video

Jie Wu (Sun Yat-sen University)*; Guanbin Li (Sun Yat-sen University); Si Liu (Beihang University); Liang Lin (DarkMatter AI)

[5]. RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning

Nan Jiang (Tsinghua University)*; Sheng Jin (Tsinghua University); Zhiyao Duan (University of Rochester); Changshui Zhang (Tsinghua University)

[6]. Mastering Complex Control in MOBA Games with Deep Reinforcement Learning

Deheng Ye (Tencent)*; Zhao Liu (Tencent); Mingfei Sun (Tencent); Bei Shi (Tencent AI Lab); Peilin Zhao (Tencent AI Lab); Hao Wu (Tencent); Hongsheng Yu (Tencent); Shaojie Yang (Tencent); Xipeng Wu (Tencent); Qingwei Guo (Tsinghua University); Qiaobo Chen (Tencent); Yinyuting Yin (Tencent); Hao Zhang (Tencent); Tengfei Shi (Tencent); Liang Wang (Tencent); Qiang Fu (Tencent AI Lab); Wei Yang (Tencent AI Lab); Lanxiao Huang (Tencent)

[7]. Partner Selection for the Emergence of Cooperation in Multi-Agent Systems using Reinforcement Learning

Nicolas Anastassacos (The Alan Turing Institute)*; Steve Hailes (University College London); Mirco Musolesi (UCL)

[8]. Uncertainty-Aware Action Advising for Deep Reinforcement Learning Agents

Felipe Leno da Silva (University of Sao Paulo)*; Pablo Hernandez-Leal (Borealis AI); Bilal Kartal (Borealis AI); Matthew Taylor (Borealis AI)

[9]. MetaLight: Value-based Meta-reinforcement Learning for Traffic Signal Control

Xinshi Zang (Shanghai Jiao Tong University)*; Huaxiu Yao (Pennsylvania State University); Guanjie Zheng (Pennsylvania State University); Nan Xu (University of Southern California); Kai Xu (Shanghai Tianrang Intelligent Technology Co., Ltd); Zhenhui (Jessie) Li (Penn State University)

[10]. Adaptive Quantitative Trading: An Imitative Deep Reinforcement Learning Approach

Yang Liu (University of Science and Technology of China)*; Qi Liu (University of Science and Technology of China); Hongke Zhao (Tianjin University); Zhen Pan (University of Science and Technology of China); Chuanren Liu (The University of Tennessee Knoxville)

[11]. Neighborhood Cognition Consistent Multi-Agent Reinforcement Learning

Hangyu Mao (Peking University)*; Wulong Liu (Huawei Noah's Ark Lab); Jianye Hao (Tianjin University); Jun Luo (Huawei Technologies Canada Co. Ltd.); Dong Li (Huawei Noah's Ark Lab); Zhengchao Zhang (Peking University); Jun Wang (UCL); Zhen Xiao (Peking University)

[12]. SMIX(λ): Enhancing Centralized Value Functions for Cooperative Multi-Agent Reinforcement Learning

Chao Wen (Nanjing University of Aeronautics and Astronautics)*; Xinghu Yao (Nanjing University of Aeronautics and Astronautics); Yuhui Wang (Nanjing University of Aeronautics and Astronautics, China); Xiaoyang Tan (Nanjing University of Aeronautics and Astronautics, China)

[13]. Unpaired Image Enhancement Featuring Reinforcement-Learning-Controlled Image Editing Software

Satoshi Kosugi (The University of Tokyo)*; Toshihiko Yamasaki (The University of Tokyo)

[14]. Crowdfunding Dynamics Tracking: A Reinforcement Learning Approach

Jun Wang (University of Science and Technology of China)*; Hefu Zhang (University of Science and Technology of China); Qi Liu (University of Science and Technology of China); Zhen Pan (University of Science and Technology of China); Hanqing Tao (University of Science and Technology of China (USTC))

[15]. Model and Reinforcement Learning for Markov Games with Risk Preferences

Wenjie Huang (Shenzhen Research Institute of Big Data)*; Hai Pham Viet (Department of Computer Science, School of Computing, National University of Singapore); William Benjamin Haskell (Supply Chain and Operations Management Area, Krannert School of Management, Purdue University)

[16]. Finding Needles in a Moving Haystack: Prioritizing Alerts with Adversarial Reinforcement Learning

Liang Tong (Washington University in Saint Louis)*; Aron Laszka (University of Houston); Chao Yan (Vanderbilt University); Ning Zhang (Washington University in St. Louis); Yevgeniy Vorobeychik (Washington University in St. Louis)

[17]. Toward A Thousand Lights: Decentralized Deep Reinforcement Learning for Large-Scale Traffic Signal Control

Chacha Chen (Pennsylvania State University)*; Hua Wei (Pennsylvania State University); Nan Xu (University of Southern California); Guanjie Zheng (Pennsylvania State University); Ming Yang (Shanghai Tianrang Intelligent Technology Co., Ltd); Yuanhao Xiong (Zhejiang University); Kai Xu (Shanghai Tianrang Intelligent Technology Co., Ltd); Zhenhui (Jessie) Li (Penn State University)

[18]. Deep Reinforcement Learning for Active Human Pose Estimation

Erik Gärtner (Lund University)*; Aleksis Pirinen (Lund University); Cristian Sminchisescu (Lund University)

[19]. Be Relevant, Non-redundant, Timely: Deep Reinforcement Learning for Real-time Event Summarization

Min Yang (Chinese Academy of Sciences)*; Chengming Li (Chinese Academy of Sciences); Fei Sun (Alibaba Group); Zhou Zhao (Zhejiang University); Ying Shen (Peking University Shenzhen Graduate School); Chenglin Wu (fuzhi.ai)

[20]. A Tale of Two-Timescale Reinforcement Learning with the Tightest Finite-Time Bound

Gal Dalal (Technion)*; Balazs Szorenyi (Yahoo Research); Gugan Thoppe (Duke University)

[21]. Reinforcement Learning with Perturbed Rewards

Jingkang Wang (University of Toronto); Yang Liu (UCSC); Bo Li (University of Illinois at Urbana–Champaign)*

[22]. Exploratory Combinatorial Optimization with Reinforcement Learning

Thomas Barrett (University of Oxford)*; William Clements (Unchartech); Jakob Foerster (Facebook AI Research); Alexander Lvovsky (Oxford University)

[23]. Algorithmic Improvements for Deep Reinforcement Learning applied to Interactive Fiction

Vishal Jain (Mila, McGill University)*; Liam Fedus (Google); Hugo Larochelle (Google); Doina Precup (McGill University); Marc G. Bellemare (Google Brain)

[24]. Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents

Xian Yeow Lee (Iowa State University)*; Sambit Ghadai (Iowa State University); Kai Liang Tan (Iowa State University); Chinmay Hegde (New York University); Soumik Sarkar (Iowa State University)

[25]. Modelling Sentence Pairs via Reinforcement Learning: An Actor-Critic Approach to Learn the Irrelevant Words

Mahtab Ahmed (The University of Western Ontario)*; Robert Mercer (The University of Western Ontario)

[26]. Transfer Reinforcement Learning using Output-Gated Working Memory

Arthur Williams (Middle Tennessee State University)*; Joshua Phillips (Middle Tennessee State University)

[27]. Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States

Yunan Ye (Zhejiang University)*; Hengzhi Pei (Fudan University); Boxin Wang (University of Illinois at Urbana-Champaign); Pin-Yu Chen (IBM Research); Yada Zhu (IBM Research); Jun Xiao (Zhejiang University); Bo Li (University of Illinois at Urbana-Champaign)

[28]. Deep Reinforcement Learning for General Game Playing

Adrian Goldwaser (University of New South Wales)*; Michael Thielscher (University of New South Wales)

[29]. Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning

Jianwen Sun (Nanyang Technological University)*; Tianwei Zhang (Nanyang Technological University); Xiaofei Xie (Nanyang Technological University); Lei Ma (Kyushu University); Yan Zheng (Tianjin University); Kangjie Chen (Tianjin University); Yang Liu (Nanyang Technological University, Singapore)

[30]. LeDeepChef: Deep Reinforcement Learning Agent for Families of Text-Based Games

Leonard Adolphs (ETHZ)*; Thomas Hofmann (ETH Zurich)

[31]. Induction of Subgoal Automata for Reinforcement Learning

Daniel Furelos-Blanco (Imperial College London)*; Mark Law (Imperial College London); Alessandra Russo (Imperial College London); Krysia Broda (Imperial College London); Anders Jonsson (UPF)

[32]. MRI Reconstruction with Interpretable Pixel-Wise Operations Using Reinforcement Learning

Wentian Li (Tsinghua University)*; Xidong Feng (Department of Automation, Tsinghua University); Haotian An (Tsinghua University); Xiang Yao Ng (Tsinghua University); Yu-Jin Zhang (Tsinghua University)

[33]. Explainable Reinforcement Learning Through a Causal Lens

Prashan Madumal (University of Melbourne)*; Tim Miller (University of Melbourne); Liz Sonenberg (University of Melbourne); Frank Vetere (University of Melbourne)

[34]. Reinforcement Learning based Metapath Discovery in Large-scale Heterogeneous Information Networks

Guojia Wan (Wuhan University); Bo Du (School of Computer Science, Wuhan University)*; Shirui Pan (Monash University); Reza Haffari (Monash University, Australia)

[35]. Reinforcement Learning When All Actions are Not Always Available

Yash Chandak (University of Massachusetts Amherst)*; Georgios Theocharous (Adobe Research, USA); Blossom Metevier (University of Massachusetts, Amherst); Philip Thomas (University of Massachusetts Amherst)

[36]. Reinforcement Mechanism Design: With Applications to Dynamic Pricing in Sponsored Search Auctions

Weiran Shen (Carnegie Mellon University)*; Binghui Peng (Columbia University); Hanpeng Liu (Tsinghua University); Michael Zhang (Chinese University of Hong Kong); Ruohan Qian (Baidu Inc.); Yan Hong (Baidu Inc.); Zhi Guo (Baidu Inc.); Zongyao Ding (Baidu Inc.); Pengjun Lu (Baidu Inc.); Pingzhong Tang (Tsinghua University)

[37]. Metareasoning in Modular Software Systems: On-the-Fly Configuration Using Reinforcement Learning with Rich Contextual Representations

Aditya Modi (Univ. of Michigan Ann Arbor)*; Debadeepta Dey (Microsoft); Alekh Agarwal (Microsoft); Adith Swaminathan (Microsoft Research); Besmira Nushi (Microsoft Research); Sean Andrist (Microsoft Research); Eric Horvitz (MSR)

[38]. Joint Entity and Relation Extraction with a Hybrid Transformer and Reinforcement Learning Based Model

Ya Xiao (Tongji University)*; Chengxiang Tan (Tongji University); Zhijie Fan (The Third Research Institute of the Ministry of Public Security); Qian Xu (Tongji University); Wenye Zhu (Tongji University)

[39]. Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes

Tomas Brazdil (Masaryk University); Krishnendu Chatterjee (IST Austria); Petr Novotný (Masaryk University)*; Jiří Vahala (Masaryk University)

[40]. Deep Model-Based Reinforcement Learning via Estimated Uncertainty and Conservative Policy Optimization

Qi Zhou (University of Science and Technology of China); Houqiang Li (University of Science and Technology of China); Jie Wang (University of Science and Technology of China)*

[41]. Reinforcement Learning with Non-Markovian Rewards

Maor Gaon (Ben-Gurion University); Ronen Brafman (BGU)*

[42]. Modular Robot Design Synthesis with Deep Reinforcement Learning

Julian Whitman (Carnegie Mellon University)*; Raunaq Bhirangi (Carnegie Mellon University); Matthew Travers (CMU); Howie Choset (Carnegie Mellon University)

[43]. BAR - A Reinforcement Learning Agent for Bounding-Box Automated Refinement

Morgane Ayle (American University of Beirut - AUB)*; Jimmy Tekli (BMW Group / Université de Franche-Comté - UFC); Julia Zini (American University of Beirut - AUB); Boulos El Asmar (BMW Group / Karlsruher Institut für Technologie - KIT); Mariette Awad (American University of Beirut - AUB)

[44]. Hierarchical Reinforcement Learning for Open-Domain Dialog

Abdelrhman Saleh (Harvard University)*; Natasha Jaques (MIT); Asma Ghandeharioun (MIT); Judy Hanwen Shen (MIT); Rosalind Picard (MIT Media Lab)

[45]. Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning

Liqiang Xiao (Artificial Intelligence Institute, SJTU)*; Lu Wang (Khoury College of Computer Science, Northeastern University); Hao He (Shanghai Jiao Tong University); Yaohui Jin (Artificial Intelligence Institute, SJTU)

[46]. Generalizable Resource Allocation in Stream Processing via Deep Reinforcement Learning

Xiang Ni (IBM Research); Jing Li (NJIT); Wang Zhou (IBM Research); Mo Yu (IBM T. J. Watson)*; Kun-Lung Wu (IBM Research)

[47]. Actor Critic Deep Reinforcement Learning for Neural Malware Control

Yu Wang (Microsoft)*; Jack Stokes (Microsoft Research); Mady Marinescu (Microsoft Corporation)

[48]. Fixed-Horizon Temporal Difference Methods for Stable Reinforcement Learning

Kristopher De Asis (University of Alberta)*; Alan Chan (University of Alberta); Silviu Pitis (University of Toronto); Richard Sutton (University of Alberta); Daniel Graves (Huawei)

[49]. Sequence Generation with Optimal-Transport-Enhanced Reinforcement Learning

Liqun Chen (Duke University)*; Ke Bai (Duke University); Chenyang Tao (Duke University); Yizhe Zhang (Microsoft Research); Guoyin Wang (Duke University); Wenlin Wang (Duke University); Ricardo Henao (Duke University); Lawrence Carin (Duke University)

[50]. Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks

Fabio Pardo (Imperial College London)*; Vitaly Levdik (Imperial College London); Petar Kormushev (Imperial College London)

[51]. Parameterized Indexed Value Function for Efficient Exploration in Reinforcement Learning

Tian Tan (Stanford University)*; Zhihan Xiong (Stanford University); Vikranth Dwaracherla (Stanford University)

[52]. Solving Online Threat Screening Games using Constrained Action Space Reinforcement Learning

Sanket Shah (Singapore Management University)*; Arunesh Sinha (Singapore Management University); Pradeep Varakantham (Singapore Management University); Andrew Perrault (Harvard University); Milind Tambe (Harvard University)

For detailed notes and interpretations of these papers, see GitHub:

https://github.com/NeuronDance/DeepRL/tree/master/DRL-ConferencePaper/AAAI/2020

