Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation

Written by Andrew W. Lo, Harry Mamaysky, and Jiang Wang. Presented by Xiaodai Guo.

Presentation Transcript


  1. Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation Written by Andrew W. Lo, Harry Mamaysky, and Jiang Wang Presented by Xiaodai Guo

  2. Main Idea of the Paper Combine the chart patterns of technical analysis with quantitative trading by achieving three sub-goals: smooth the data; define technical patterns mathematically, identify them, and use them for algorithmic trading; test the statistical significance of the results (skipped).

  3. Example: What is a technical pattern? The Head-and-Shoulders Top, a signal to sell.

  4. Part I: Smoothing the data Why smooth the data? Raw stock price data is very noisy; to observe the patterns behind the data, we must first filter out the noise.

  5. Part I: Smoothing the data How to smooth the data? A traditional way used by technical analysts: the simple moving average (SMA). For any day $M$, $\mathrm{SMA}_M = \frac{1}{n}\sum_{t=M-n+1}^{M} P_t$, where $P_t$ is the closing price on day $t$ and $n$ is the window length. Shortcomings: every data point is assigned the same weight, and at every point in time $M$, only the information at and before $M$ is used for smoothing.
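A minimal sketch of this SMA in Python (my illustration, not code from the paper); the window length n is a free parameter:

```python
import numpy as np

def sma(prices, n):
    """Simple moving average: for each day M, the unweighted mean of the
    n closing prices up to and including day M (NaN until n points exist)."""
    prices = np.asarray(prices, dtype=float)
    out = np.full(prices.shape, np.nan)
    for m in range(n - 1, len(prices)):
        out[m] = prices[m - n + 1 : m + 1].mean()
    return out

# Example: a 5-day SMA; note the equal weights and the backward-looking window.
print(sma([10, 11, 12, 11, 13, 14, 13], n=5))
```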

  6. Part I: Smoothing the data How to smooth the data? A new way proposed by this paper: a smoothing estimator using kernel regression, $\hat{m}_h(x) = \frac{1}{T}\sum_{t=1}^{T} \omega_t(x)\, P_t$, where $\omega_t(x)$ is a weight calculated using the Gaussian kernel.

  7. Part I: Smoothing the data How to smooth the data? Intuition: for any time point $x$, its smoothed estimate is a weighted average of all the time points $t$ in the window ($t$ ranges from 1 to $T$). The weight $\omega_t(x)$ is proportional to $K_h(x - t)$, so the farther $t$ is from $x$, the less weight point $t$ receives when estimating the value at $x$.

  8. Part I: Smoothing the data How to smooth the data? (skipped) Important concepts: Kernel: a weight function constructed from a probability density function. Gaussian kernel: a weight function constructed from the density of the normal distribution. Important formula: $K_h(u) = \frac{1}{h\sqrt{2\pi}}\, e^{-u^2/(2h^2)}$, with the bandwidth $h$ controlling how quickly the weights decay.
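The following sketch puts the last three slides together: a Nadaraya-Watson kernel regression with a Gaussian kernel, which is the form of estimator the paper uses. The function names and the bandwidth value in the example are my choices:

```python
import numpy as np

def gaussian_kernel(u, h):
    """Gaussian kernel with bandwidth h: K_h(u) = exp(-u^2/(2h^2)) / (h*sqrt(2*pi))."""
    return np.exp(-((u / h) ** 2) / 2) / (h * np.sqrt(2 * np.pi))

def kernel_smooth(prices, h):
    """Smoothed value at each day x = weighted average of all T prices,
    with weight omega_t(x) proportional to K_h(x - t)."""
    prices = np.asarray(prices, dtype=float)
    t = np.arange(len(prices))
    smoothed = np.empty_like(prices)
    for x in t:
        w = gaussian_kernel(x - t, h)
        smoothed[x] = np.sum(w * prices) / np.sum(w)  # weights normalized to sum to 1
    return smoothed

# Example: smooth a random-walk "price" series. h = 2.5 is arbitrary here;
# the paper picks h by cross-validation, as discussed at the end.
rng = np.random.default_rng(0)
noisy = 100 + np.cumsum(rng.normal(0, 1, 60))
smoothed = kernel_smooth(noisy, h=2.5)
```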

  9. Part I: Smoothing the data Example: after smoothing vs. before smoothing.

  10. Part I: Smoothing the data Another example: after smoothing vs. before smoothing.

  11. Part II: Identifying patterns A. What are "local extrema"? A local maximum (minimum) is a day whose stock price is higher (lower) than the stock prices of the days immediately before and after it.
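A direct transcription of that definition (my helper; it would be applied to the smoothed series, since that is where the paper searches for extrema):

```python
def local_extrema(smoothed):
    """Return (maxima, minima) index lists: day t is a local maximum
    (minimum) if its smoothed value is higher (lower) than both neighbors."""
    maxima, minima = [], []
    for t in range(1, len(smoothed) - 1):
        if smoothed[t - 1] < smoothed[t] > smoothed[t + 1]:
            maxima.append(t)
        elif smoothed[t - 1] > smoothed[t] < smoothed[t + 1]:
            minima.append(t)
    return maxima, minima
```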

  12. Part II: Identifying patterns B. Define the technical patterns mathematically

  13. Part II: Identifying patterns B. Define the technical patterns mathematically. The paper defines ten patterns: head-and-shoulders, inverse head-and-shoulders, broadening tops, broadening bottoms, triangle tops, triangle bottoms, rectangle tops, rectangle bottoms, double tops, and double bottoms.
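As one example of these definitions, here is a sketch of the head-and-shoulders test as the paper states it: five consecutive alternating extrema E1..E5 with E1 a maximum, the head E3 above both shoulders, and the shoulders (likewise the two troughs) each within 1.5 percent of their respective averages. The function name is mine; the 1.5 percent band is the paper's:

```python
def is_head_and_shoulders(e1, e2, e3, e4, e5):
    """e1, e3, e5 are consecutive local maxima and e2, e4 local minima
    (values from the smoothed series). e3 is the head, e1 and e5 the shoulders."""
    band = 0.015  # the paper's 1.5% tolerance
    shoulder_avg = (e1 + e5) / 2
    trough_avg = (e2 + e4) / 2
    return (e3 > e1 and e3 > e5
            and abs(e1 - shoulder_avg) <= band * shoulder_avg
            and abs(e5 - shoulder_avg) <= band * shoulder_avg
            and abs(e2 - trough_avg) <= band * trough_avg
            and abs(e4 - trough_avg) <= band * trough_avg)
```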

  14. Part II: Identifying patterns C. Test whether patterns exist in the smoothed data How do we look for patterns: for every time window of 38 days, smooth the data, then test for patterns using the first 35 days of smoothed data, as in the sketch below. Constraint: the last local extremum of the pattern must occur on the 35th day.
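A sketch of that scanning loop, assuming `smooth` and `detect` are callables like the kernel smoother and pattern tests sketched above; `detect` returning (pattern name, index of last extremum) pairs is my interface choice, not the paper's:

```python
def scan_windows(prices, smooth, detect, window=38, pattern_days=35):
    """Slide a 38-day window over the price series; smooth each window and
    test the first 35 smoothed days for patterns. A detection counts only
    if the pattern's last extremum falls on day 35 of the window (index 34)."""
    hits = []
    for start in range(len(prices) - window + 1):
        smoothed = smooth(prices[start:start + window])
        for name, last_ext in detect(smoothed[:pattern_days]):
            if last_ext == pattern_days - 1:
                hits.append((start, name))
    return hits
```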

  15. Part II: Identifying patterns D. Calculate the results How do we trade: according to the authors, for every 38-day window in which a pattern is observed, we long/short the stock at the closing price of the 38th day and close the position at the closing price of the 39th day. A modification: enter at the closing price of the 39th day instead, and close the position at the closing price of the 40th day. See the sketch below.
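Both timing conventions fit into one hedged helper (the names and the `hold` parameter are mine; `hold` also anticipates the five-day variant on slide 18):

```python
def pattern_trade_return(close, day38, direction, lag=0, hold=1):
    """Return on one trade after a pattern is detected in a window whose
    38th day has index day38. lag=0: enter at the close of day 38 and exit
    `hold` days later (the rule attributed to the authors). lag=1: the
    presenter's modification, entering at the close of day 39 instead.
    direction is +1 for a buy signal, -1 for a sell signal."""
    entry = close[day38 + lag]
    exit_price = close[day38 + lag + hold]
    return direction * (exit_price / entry - 1.0)
```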

  16. Part III: Results A. Calculate the results Implement a back-test using Ford's daily stock prices from 1993/9/24 to 2013/9/24.

  17. Part III: Results B. The results of back-testing Number of transactions: 130. Probability that a transaction makes money: 49.2%. Mean return per transaction: 0.207%. Standard deviation of the mean return: 0.263%. P-value of the mean return under a t-test: 0.2927.
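These summary statistics are straightforward to compute from the list of per-transaction returns; here is a sketch using scipy (the slide does not say whether the t-test is one- or two-sided; scipy's default is two-sided):

```python
import numpy as np
from scipy import stats

def summarize_trades(returns):
    """Number of transactions, hit rate, mean return, standard error of
    the mean, and the one-sample t-test p-value against a zero mean."""
    r = np.asarray(returns, dtype=float)
    t_stat, p_value = stats.ttest_1samp(r, 0.0)
    return {
        "n_transactions": len(r),
        "hit_rate": float(np.mean(r > 0)),
        "mean_return": float(r.mean()),
        "std_error_of_mean": float(r.std(ddof=1) / np.sqrt(len(r))),
        "p_value": float(p_value),
    }
```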

  18. Part III: Results C. An improved trading strategy After detecting a pattern, instead of holding the position for one day, hold it for five days (hold = 5 in the sketch above). The rationale: practitioners want to take full advantage of the technical patterns discovered.

  19. Part III: Results D. Back-testing results of the improved trading strategy Number of transactions: 130. Probability that a transaction makes money: 53.8%. Mean return per transaction: 0.816%. Standard deviation of the mean return: 0.488%. P-value of the mean return under a t-test: 0.0986.

  20. Part IV: Pros and cons • What we can learn from this paper: • How to smooth the data with kernel regression. • How to define technical patterns numerically. • How to use patterns to trade quantitatively.

  21. Part IV: Pros and cons • Criticism: • When optimizing the bandwidth h for the kernel regression, the authors use cross-validation, which seems inappropriate for this problem; they then multiply the optimized h by 0.3, which makes the procedure even less rigorous (a sketch of this procedure follows). • Many of the parameters are ad hoc, drawn from "empirical experience" that is not well explained in the paper. • Examples: the length of the time window; the percentage thresholds used in the pattern definitions.
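For concreteness, a sketch of the bandwidth selection being criticized, following the paper's description: leave-one-out cross-validation over a candidate grid, followed by the ad hoc 0.3 multiplier. The candidate grid itself is my assumption:

```python
import numpy as np

def cv_bandwidth(prices, candidates=np.linspace(1.0, 10.0, 19)):
    """Pick the bandwidth h minimizing leave-one-out mean squared error,
    then scale by 0.3 as the paper does. The kernel's normalizing constant
    cancels in the weighted average, so it is omitted here."""
    prices = np.asarray(prices, dtype=float)
    t = np.arange(len(prices))
    best_h, best_cv = None, np.inf
    for h in candidates:
        sq_err = 0.0
        for x in t:
            w = np.exp(-((x - t) / h) ** 2 / 2)
            w[x] = 0.0                              # leave day x out
            sq_err += (prices[x] - np.sum(w * prices) / np.sum(w)) ** 2
        cv = sq_err / len(prices)
        if cv < best_cv:
            best_h, best_cv = h, cv
    return 0.3 * best_h
```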
