Check out these DLI training courses at GTC 2019 designed for developers, data scientists & researchers looking to solve the world’s most challenging problems with accelerated computing.
EXPECTED TO BE THE BIGGEST YET, GTC FEATURES SESSIONS AND DLI TRAINING ON THE MOST IMPORTANT TOPICS IN COMPUTING TODAY
WHY DLI HANDS-ON TRAINING?
● LEARN HOW TO BUILD APPS ACROSS INDUSTRY SEGMENTS
● GET HANDS-ON EXPERIENCE USING INDUSTRY-STANDARD SOFTWARE, TOOLS & FRAMEWORKS
● GAIN EXPERTISE THROUGH CONTENT DESIGNED WITH INDUSTRY LEADERS
FUNDAMENTALS OF ACCELERATED COMPUTING WITH CUDA PYTHON
This course explores how to use Numba, the just-in-time, type-specializing Python function compiler, to accelerate Python programs to run on massively parallel NVIDIA GPUs. You'll learn how to:
● Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs)
● Use Numba to create and launch custom CUDA kernels
● Apply key GPU memory management techniques
ADD TO MY SCHEDULE
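For a concrete picture of the custom-kernel and memory-management topics this course covers, here is a minimal sketch (illustrative only, not taken from the course materials; array sizes and names are made up) using Numba's cuda.jit decorator and explicit device allocations.

# Minimal sketch: a custom CUDA kernel written and launched with Numba,
# with explicit GPU memory management. Names and sizes are illustrative.
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)            # absolute thread index
    if i < x.size:              # guard against out-of-range threads
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = 2 * x

# Copy inputs to the GPU once and allocate the output on the device,
# avoiding implicit transfers on every kernel launch.
d_x = cuda.to_device(x)
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(d_x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](d_x, d_y, d_out)

result = d_out.copy_to_host()   # explicit copy back to the CPU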
ACCELERATING APPLICATIONS WITH CUDA C/C++
The CUDA computing platform enables acceleration of CPU-only applications to run on the world's fastest massively parallel GPUs. Learn how to accelerate C/C++ applications by:
● Exposing the parallelism in CPU-only applications and refactoring them to run in parallel on GPUs
● Successfully managing memory
● Utilizing the CUDA parallel thread hierarchy to further increase performance
ADD TO MY SCHEDULE
CUDA ON DRIVE AGX
Explore how to write CUDA code and run it on DRIVE AGX. You'll learn about:
● Hardware architecture of DRIVE AGX
● Memory management of iGPU and dGPU
● GPU acceleration for inferencing
ADD TO MY SCHEDULE
ACCELERATING DATA SCIENCE WORKFLOWS WITH RAPIDS
The open source RAPIDS project allows data scientists to GPU-accelerate their data science and data analytics applications from beginning to end, creating possibilities for drastic performance gains and techniques not available through traditional CPU-only workflows. Learn how to GPU-accelerate your data science applications by:
● Utilizing key RAPIDS libraries like cuDF & cuML
● Learning techniques and approaches to end-to-end data science
● Understanding key differences between CPU-driven and GPU-driven data science
ADD TO MY SCHEDULE
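As a rough illustration of the cuDF/cuML workflow this session describes (not course material; the input file and feature names are hypothetical), a single end-to-end step stays entirely on the GPU:

# Minimal sketch: load and prepare data with cuDF, then fit a GPU model with cuML.
import cudf
from cuml.cluster import KMeans

# Read and clean the data on the GPU with a cuDF DataFrame (hypothetical CSV).
df = cudf.read_csv("taxi_trips.csv")
features = df[["trip_distance", "fare_amount"]].dropna().astype("float32")

# Fit a GPU-accelerated model through cuML's scikit-learn-like API.
model = KMeans(n_clusters=8)
model.fit(features)
features["cluster"] = model.labels_   # cluster assignments, still on the GPU

print(features.head())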
DEBUGGING AND OPTIMIZING CUDA APPLICATIONS WITH NSIGHT PRODUCTS ON LINUX TRAINING
Learn how NVIDIA tools can improve development productivity by narrowing down bugs and spotting areas of optimization in CUDA applications on a Linux x86_64 system. Through a set of exercises, you'll gain hands-on experience using NVIDIA's new Nsight Systems and Nsight Compute tools for debugging, narrowing down memory issues, and optimizing a CUDA application.
ADD TO MY SCHEDULE
ACCELERATED DATA SCIENCE PIPELINE WITH RAPIDS ON AZURE
Learn how to deploy RAPIDS machine learning jobs on NVIDIA GPUs using Microsoft Azure and explore:
● Azure Portal Permits: a convenient way to perform functional experimentation with RAPIDS
● Azure Machine Learning (AML) SDK: enables a batch experimentation mode where the user can set ranges on different parameters to be run in a RAPIDS program, saving the results for later analysis
ADD TO MY SCHEDULE
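A heavily hedged sketch of the AML batch-experimentation idea, assuming the Azure Machine Learning SDK for Python (azureml-core): the experiment name, script, and parameter sweep below are hypothetical, and compute-target and environment configuration are omitted for brevity.

# Minimal sketch: submit one AML run per parameter value of a RAPIDS script.
from azureml.core import Workspace, Experiment, ScriptRunConfig

ws = Workspace.from_config()                        # reads the workspace from config.json
experiment = Experiment(workspace=ws, name="rapids-batch")   # hypothetical experiment name

for n_clusters in [4, 8, 16]:                       # hypothetical parameter range
    config = ScriptRunConfig(
        source_directory=".",
        script="train_rapids.py",                   # hypothetical script using cuDF/cuML
        arguments=["--n-clusters", str(n_clusters)],
    )
    run = experiment.submit(config)                 # results are stored for later analysis
    print(run.id)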
HIGH PERFORMANCE COMPUTING USING CONTAINERS
Learn to build, deploy, and run containers in an HPC environment. During this session, you will learn:
● The basics of building container images with Docker and Singularity
● How to use HPC Container Maker (HPCCM) to make it easier to build container images for HPC applications
● How to use containers from NGC with Singularity
ADD TO MY SCHEDULE
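HPCCM recipes are themselves short Python scripts; a minimal sketch (the base image tag and building blocks are chosen for illustration, not taken from the session) looks like this:

# Minimal HPCCM recipe sketch. Save as recipe.py and render it with either
#   hpccm --recipe recipe.py --format docker
# or
#   hpccm --recipe recipe.py --format singularity
Stage0 += baseimage(image='nvcr.io/nvidia/cuda:10.1-devel-ubuntu18.04')  # illustrative tag

# HPCCM building blocks emit the right install steps for either container runtime.
Stage0 += gnu()                       # GNU compiler toolchain
Stage0 += openmpi(version='3.1.4')    # MPI library commonly used by HPC applications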
INTRODUCTION TO CUDA PYTHON WITH NUMBA
Explore an introduction to Numba, a just-in-time function compiler that allows developers to utilize the CUDA platform in Python applications. You'll learn how to:
● Decorate Python functions to be compiled by Numba
● Use Numba to GPU-accelerate NumPy ufuncs
ADD TO MY SCHEDULE
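A minimal sketch of the ufunc pattern this introduction covers (illustrative only, not course material): decorating a scalar Python function with Numba's @vectorize so it runs element-wise on the GPU.

# Minimal sketch: a GPU-accelerated NumPy ufunc compiled by Numba.
import math
import numpy as np
from numba import vectorize

@vectorize(['float32(float32, float32)'], target='cuda')
def gaussian(x, sigma):
    return math.exp(-(x * x) / (2.0 * sigma * sigma))

x = np.linspace(-3, 3, 10_000_000, dtype=np.float32)
y = gaussian(x, np.float32(1.0))   # broadcasts element-wise on the GPU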
CUDA PROGRAMMING IN PYTHON WITH NUMBA AND CUPY
Combining Numba, an open source compiler that can translate Python functions for execution on the GPU, with the CuPy GPU array library, a nearly complete implementation of the NumPy API for CUDA, creates a high-productivity GPU development environment. Learn the basics of using Numba with CuPy, techniques for automatically parallelizing custom Python functions on arrays, and how to create and launch CUDA kernels entirely from Python.
ADD TO MY SCHEDULE
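As a rough sketch of the Numba-plus-CuPy interoperability this session teaches (illustrative, not course material): CuPy allocates the array on the GPU and a Numba kernel operates on it in place through the CUDA array interface, so no host transfers are needed.

# Minimal sketch: a Numba CUDA kernel launched directly on a CuPy array.
import cupy as cp
from numba import cuda

@cuda.jit
def scale(arr, factor):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= factor

x = cp.arange(1_000_000, dtype=cp.float32)   # allocated on the GPU by CuPy

threads = 256
blocks = (x.size + threads - 1) // threads
scale[blocks, threads](x, 2.0)               # Numba kernel modifies the CuPy array in place

print(float(x[:3].sum()))                    # subsequent CuPy operations see the updated values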
REGISTER TODAY FOR GTC AND EXPLORE THE FULL LIST OF CUDA TRAINING, TALKS & EXPERT SESSIONS
LEARN MORE