# Deep-Reinforcement-Learning-in-Stock-Trading

**Repository Path**: zero2hero/Deep-Reinforcement-Learning-in-Stock-Trading

## Basic Information

- **Project Name**: Deep-Reinforcement-Learning-in-Stock-Trading
- **Description**: Using a deep actor-critic model to learn the best strategies in pairs trading
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-08-08
- **Last Updated**: 2020-12-19

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Deep-Reinforcement-Learning-in-Stock-Trading

Using a deep actor-critic model to learn the best strategies in pairs trading.

## Abstract

Pairs trading poses a challenging partially observed Markov decision process problem in algorithmic trading. In this work, we tackle it with the advantage actor-critic deep reinforcement learning algorithm, extending the policy network with a critic network so that both the stochastic policy gradient and the value gradient are incorporated. We also use a recurrent neural network with long short-term memory (LSTM) cells to preserve information from the stock-market time series. A memory buffer for experience replay and a target network are employed to reduce the variance arising from the noisy, correlated environment. Our results demonstrate that a well-performing, profitable model can be learned directly from publicly available data, and they suggest extensions to other time-sensitive applications. An illustrative sketch of this kind of architecture is given at the end of this README.

## Usage

Customize the stock pair and simulation period in `runner.py`, then run:

`python RLMDP/runner.py`

## Credit to

- Yichen Shen
- Yiding Zhao

## Based on the previous work by

- Su Hang
- Zhaoming Wu
- Sam Norris
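
## Illustrative model sketch

The following is a minimal sketch of a recurrent advantage actor-critic of the kind described in the abstract: a shared LSTM encoder feeding a policy head and a value head, updated with a one-step advantage. It is an assumption-based illustration, not this repository's implementation; PyTorch is assumed as the framework, and the class name `RecurrentActorCritic`, the observation/action dimensions, and the hyperparameters are hypothetical.

```python
# Hypothetical sketch of a recurrent advantage actor-critic (A2C).
# Not taken from this repository; names and dimensions are invented for illustration.
import torch
import torch.nn as nn


class RecurrentActorCritic(nn.Module):
    """Shared LSTM encoder with separate policy (actor) and value (critic) heads."""

    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        # The LSTM preserves information across the price time series.
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.actor = nn.Linear(hidden_dim, n_actions)   # action logits
        self.critic = nn.Linear(hidden_dim, 1)          # state-value estimate

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim), e.g. a window of spread/price features
        out, hidden = self.lstm(obs_seq, hidden)
        last = out[:, -1]                               # most recent time step
        return self.actor(last), self.critic(last).squeeze(-1), hidden


# One A2C update on a dummy transition (fake data, purely for illustration).
model = RecurrentActorCritic(obs_dim=4, n_actions=3)    # e.g. long / flat / short
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

obs = torch.randn(1, 20, 4)            # 20-step observation window
reward = torch.tensor([0.1])           # reward from the simulated trade
next_value = torch.tensor([0.0])       # bootstrap value of the next state
gamma = 0.99

logits, value, _ = model(obs)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()

# One-step advantage combines the reward signal with the critic's estimates.
advantage = reward + gamma * next_value - value
policy_loss = -(dist.log_prob(action) * advantage.detach()).mean()
value_loss = advantage.pow(2).mean()
loss = policy_loss + 0.5 * value_loss

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a fuller version, transitions would be drawn from an experience-replay buffer and `next_value` would come from a slowly updated target network, as mentioned in the abstract; this sketch omits both for brevity.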