
Mocha.jl




Presentation Transcript


  1. Mocha.jl Deep Learning for Julia Chiyuan Zhang (@pluskid) CSAIL, MIT

  2. Julia Basics 10-minute Introduction to Julia

  3. A glance at basic syntax
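
      The slide's code is not reproduced in the transcript; below is a small, hypothetical sketch of the kind of syntax a 10-minute Julia introduction typically covers (functions, multiple dispatch, comprehensions, string interpolation):

      # define a function and add a method that dispatches on the argument type
      sq(x) = x^2
      sq(x::AbstractString) = "cannot square a string"

      squares = [sq(i) for i in 1:5]          # comprehension: [1, 4, 9, 16, 25]

      scores = Dict("alice" => 0.9, "bob" => 0.8)
      for (name, s) in scores
          println("$name scored $s")          # string interpolation
      end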

  4. Beyond Syntax Close-to-C performance in native Julia code; you typically do not need to explicitly vectorize your code (as you would in Matlab). Type annotations, an LLVM-based just-in-time (JIT) compiler, easy parallelization with coroutines on a single machine or across the nodes of a cluster, and more. A plain-loop example is sketched below.
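
      As a hedged illustration of the performance claim (not from the slides), a plain scalar loop like the following is compiled by the LLVM-based JIT and runs at close-to-C speed, with no need to rewrite it in vectorized form:

      function sumsq(xs::Vector{Float64})
          s = 0.0
          for x in xs
              s += x * x      # ordinary scalar loop; no vectorization tricks needed
          end
          return s
      end

      sumsq(rand(1_000_000))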

  5. Convenient FFI
     • Calling C/Fortran functions
     • Calling Python functions (PyCall.jl, PyPlot.jl, IJulia, …)
     • Interacting with C++ functions / objects directly, see Cxx.jl
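
      A minimal sketch of the FFI in action (hypothetical example; PyCall.jl must be installed separately via Pkg.add("PyCall"), and the C library name may differ by platform):

      # call a C function from libc directly, with no wrapper code
      t = ccall((:clock, "libc"), Int32, ())

      # call a Python function from Julia via PyCall.jl
      using PyCall
      @pyimport math as pymath
      pymath.cos(pi / 4)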

  6. Powerful Macros
     • OpenPPL: probabilistic programming
     • JuMP: optimization models
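
      A toy macro (hypothetical, not from the slides) illustrates why macros enable packages like JuMP: a macro receives the expression itself before evaluation and can rewrite it:

      macro logged(ex)
          return quote
              println("evaluating: ", $(string(ex)))
              $(esc(ex))
          end
      end

      @logged 1 + 2 * 3     # prints "evaluating: 1 + 2 * 3", then returns 7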

  7. Disadvantages of Julia
     • Still at an early stage, so:
       • The ecosystem is still young (653 Julia packages vs. 66,687 PyPI packages), e.g. Images.jl still does not have a resize function…
       • The core language is still evolving, e.g. the current v0.4-RC introduced a lot of breaking changes (and also exciting new features)
     • ??

  8. Mocha.jl Deep Learning in Julia

  9. Image sources:
     • Li Deng and Dong Yu. Deep Learning – Methods and Applications.
     • Zheng, Shuai et al. “Conditional Random Fields as Recurrent Neural Networks.” arXiv cs.CV (2015).
     • Google DeepMind. Human-level control through deep reinforcement learning. Nature, Feb. 2015.
     • Andrej Karpathy and Li Fei-Fei. Deep Visual-Semantic Alignments for Generating Image Descriptions. CVPR 2015.

  10. Why is Deep Learning Successful?
      • Theoretical point of view: nowhere near a complete theoretical understanding of deep learning yet
      • Practical point of view:
        • Big Data: large amounts of data (thanks to the Internet) that are labeled (thanks to Amazon Mechanical Turk) and high-dimensional (large images, videos, speech and text corpora, etc.)
        • Computational power: GPUs, large clusters
        • Human power: the “deep learning conspiracy”
        • Software engineering: network architecture & computation components decoupled

  11. Layers & back-propagation
      Top: the typical way of visualizing a neural network is clear and intuitive, but it does not decompose the computation cleanly into layers.
      Bottom: an alternative way of thinking about neural networks: each layer is a black box that can carry out forward and backward computation. The important point is that the computation is completely encapsulated inside the layer; the black box does NOT need to know anything about the external environment (e.g. the overall network architecture) to do its computation.
      E.g. a linear layer with input x and output y:
      Forward: y = W x + b
      Backward: ∂L/∂x = Wᵀ (∂L/∂y), ∂L/∂W = (∂L/∂y) xᵀ, ∂L/∂b = ∂L/∂y
      More generally, a deep neural network can be viewed as a directed acyclic graph (optionally with time-delayed recurrent connections). A minimal sketch of such a black-box layer follows below.
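
      A minimal Julia sketch of the black-box linear layer above (illustrative only, not Mocha.jl's actual implementation): forward and backward need only the layer's own parameters, never the surrounding network.

      linear_forward(W, b, x) = W * x + b

      # grad_output is ∂L/∂y, the gradient flowing back from the layer above
      function linear_backward(W, b, x, grad_output)
          grad_input = W' * grad_output     # ∂L/∂x, passed down to the layer below
          grad_W     = grad_output * x'     # ∂L/∂W
          grad_b     = grad_output          # ∂L/∂b
          return grad_input, grad_W, grad_b
      end

      W = randn(3, 5); b = randn(3); x = randn(5)
      y = linear_forward(W, b, x)
      dx, dW, db = linear_backward(W, b, x, ones(3))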

  12. Advantages of the decoupled view of NNs
      • Highly efficient computation components can be written by programmers and experts in high-performance computing and code optimization, e.g. the cuDNN library from Nvidia.
      • Researchers can try out novel architectures easily without worrying too much about the internal implementation of commonly used layers.
      • Some examples of complicated networks built from standard components: Network-in-Network, Siamese Networks, Fully-Convolutional Networks, etc.
      Image source: J. Long, E. Shelhamer, T. Darrell. Fully Convolutional Networks for Semantic Segmentation. CVPR 2015.

  13. Deep Learning Libraries
      • C++: Caffe (widely adopted in academia), dmlc/cxxnet, cuda-convnet, CNTK (by MSR), etc.
      • Python: Theano (auto-differentiation) and various wrappers; NervanaSystems/neon; etc.
      • Lua: Torch 7 (supported by Facebook and Google DeepMind)
      • Matlab: MatConvNet (by VGG)
      • Julia: pluskid/Mocha.jl
      • …

  14. Why Mocha.jl?
      • Julia: written in Julia, with easy interaction with the rest of the Julia ecosystem.
      • Minimum dependency: the Julia backend runs out of the box; the CUDA backend depends on Nvidia cuDNN.
      • Correctness: all computation components are unit-tested.
      • Modular architecture: layers, activation functions, regularizers, network topology, solvers, etc.
      Julia compiles via LLVM, so Julia code could potentially be compiled directly to GPU devices in the future. After that, writing neural networks in Julia will be really enjoyable!

  15. Image Classification IJulia Demo

  16. Mini-Tutorial: ConvNets on MNIST
      • MNIST: handwritten digits
      • Data preparation:
        • Following convention, images are represented as a 4D tensor: width-by-height-by-channels-by-count; for MNIST: 28-by-28-by-1-by-64
        • Mocha.jl supports general ND tensors
        • Data are stored in the HDF5 file format, commonly supported by Matlab, Numpy, etc.
        • See examples/mnist/gen-mnist.sh; a sketch of writing such a file from Julia is shown below.
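
      A hedged sketch of the data preparation step (assuming the HDF5.jl package and an in-memory array; the shell script in the examples directory is the authoritative recipe). Mocha's HDF5 data layers look up datasets by the names used in tops, here "data" and "label":

      using HDF5

      data  = rand(Float32, 28, 28, 1, 64)     # width x height x channels x count
      label = rand(0:9, 1, 64)                 # one label per image

      h5open("mnist-train.hdf5", "w") do file
          write(file, "data",  data)
          write(file, "label", convert(Array{Float32}, label))
      end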

  17. Defining Network Architecture
      • A network starts with data layers (inputs) and ends with prediction or loss layers.
      data_layer = AsyncHDF5DataLayer(name="train-data", source="data/train.txt", batch_size=64, shuffle=true)
      • The source file data/train.txt lists the HDF5 files of the training set
      • 64 images are provided in each mini-batch
      • The data is shuffled to improve convergence
      • The async data layer uses Julia’s @async to pre-read data while waiting for computation on the CPU / GPU

  18. Convolution Layer
      conv_layer = ConvolutionLayer(name="conv1", n_filter=20, kernel=(5,5), bottoms=[:data], tops=[:conv])
      LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.

  19. Pooling Layer
      pool_layer = PoolingLayer(name="pool1", kernel=(2,2), stride=(2,2), bottoms=[:conv], tops=[:pool])
      • The pooling layer operates on the output of the convolution layer
      • By default, MAX pooling is performed; switch to MEAN pooling by specifying pooling=Pooling.Mean()

  20. Constructing a DAG with tops and bottoms
      • The network architecture is determined by connecting tops (output) blobs to bottoms (input) blobs with matching blob names.
      • Layers are automatically sorted and connected as a directed acyclic graph (DAG).
      • The figure on the original slide shows a visualization of LeNet for MNIST: (conv + pool) ×2 followed by two dense layers.

  21. Definition of the rest of the layers
      conv2_layer = ConvolutionLayer(name="conv2", n_filter=50, kernel=(5,5), bottoms=[:pool], tops=[:conv2])
      pool2_layer = PoolingLayer(name="pool2", kernel=(2,2), stride=(2,2), bottoms=[:conv2], tops=[:pool2])
      fc1_layer = InnerProductLayer(name="ip1", output_dim=500, neuron=Neurons.ReLU(), bottoms=[:pool2], tops=[:ip1])
      fc2_layer = InnerProductLayer(name="ip2", output_dim=10, bottoms=[:ip1], tops=[:ip2])
      loss_layer = SoftmaxLossLayer(name="loss", bottoms=[:ip2, :label])

  22. The Stochastic Gradient Descent Solver
      method = SGD()
      params = make_solver_parameters(method, max_iter=10000, regu_coef=0.0005, mom_policy=MomPolicy.Fixed(0.9), lr_policy=LRPolicy.Inv(0.01, 0.0001, 0.75), load_from=exp_dir)
      solver = Solver(method, params)
      • Solvers have many customizable parameters, including the learning-rate policy, the momentum policy, etc. Advanced policies, such as halving the learning rate when performance on a validation set drops, are also supported.
      • See the Mocha.jl documentation for other available solvers.

  23. Coffee Breaks … for the solver
      setup_coffee_lounge(solver, save_into="$exp_dir/statistics.jld", every_n_iter=1000)
      # report training progress every 100 iterations
      add_coffee_break(solver, TrainingSummary(), every_n_iter=100)
      # save snapshots every 5000 iterations
      add_coffee_break(solver, Snapshot(exp_dir), every_n_iter=5000)

  24. Solver Statistics Solver statistics are saved automatically when the coffee lounge is set up. Snapshots save the training progress periodically, so training can resume from the last snapshot after an interruption.

  25. Switching Backends: CPU vs GPU backend = use_gpu ? GPUBackend() : CPUBackend()
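
      To show where the backend fits, here is a hedged end-to-end sketch in the spirit of Mocha's MNIST example (the layer and solver variables are the ones defined on the previous slides; exact calls may differ between Mocha.jl versions):

      using Mocha

      backend = use_gpu ? GPUBackend() : CPUBackend()
      init(backend)

      # assemble the layers from the earlier slides into a network on this backend
      net = Net("MNIST-train", backend,
                [data_layer, conv_layer, pool_layer, conv2_layer, pool2_layer,
                 fc1_layer, fc2_layer, loss_layer])

      solve(solver, net)      # run the SGD solver configured earlier

      destroy(net)
      shutdown(backend)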

  26. Thank you! http://julialang.org https://github.com/pluskid/Mocha.jl
