NCP-ADS Study Guide

This documentation-style landing page centralizes study notes and full-guide information, with direct links to all six module pages in exam and priority order.

Admin-only · Official NVIDIA blueprint/study guide sources · 6/6 modules published

Recommended Study Priority

Priority | Domain | Why
Tier 1 | Data Manipulation and Software Literacy | RAPIDS, Dask, scaling, and memory-heavy NVIDIA workflows.
Tier 1 | Machine Learning | Multi-GPU training, mixed precision, profiling, and experimentation depth.
Tier 2 | GPU and Cloud Computing | Infrastructure, benchmarking, and scaling practices for GPU environments.
Tier 2 | Data Preparation | Practical preprocessing foundations used across modeling pipelines.
Tier 3 | MLOps | Deployment and inference operations, including Triton-oriented concepts.
Tier 3 | Data Analysis | EDA, anomaly detection, and graph analysis; less technical depth.

Domain 1 - Data Analysis

Exam Weight: 15%

Exploratory analysis, statistical foundations, and model evaluation metrics.

Domain Overview

Demonstrate understanding of exploratory data analysis, statistical foundations, and evaluation metrics.

Objectives

  • Detect anomalies in time-series datasets.
  • Conduct time-series analysis.
  • Create and analyze graph data using GPU-accelerated tools such as cuGraph.
  • Identify when dataset scale requires accelerated or distributed methods.
  • Perform exploratory data analysis (EDA).
  • Visualize time-series data effectively.
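The anomaly-detection objective can be illustrated with a minimal CPU-side sketch. In the exam context this would run on GPU tools such as cuDF; here a plain-Python rolling z-score stands in: a point is flagged when it deviates from its trailing window by more than a chosen number of standard deviations. The window size, threshold, and sample data are all illustrative.

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing window
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        ref = series[i - window:i]          # trailing window, excludes point i
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

data = [10, 11, 10, 12, 11, 10, 11, 50, 10, 11]
print(rolling_zscore_anomalies(data))  # → [7] (the spike to 50)
```

Note the trade-off in `window`: a short window adapts quickly to trend changes but is noisier; a long window smooths the baseline but reacts slowly.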

Domain 2 - Data Manipulation and Software Literacy

Exam Weight: 20%

Tooling fluency and scalable data manipulation patterns for accelerated data science.

Objectives

  • Design and implement accelerated ETL (extract, transform, load) workflows.
  • Implement caching strategies to reduce shuffle overhead.
  • Use distributed data processing frameworks for large-scale datasets.
  • Implement Dask-based data parallelism for multi-GPU scaling.
  • Profile deep learning workloads with tools such as DLProf.
  • Choose optimal data processing libraries for varying dataset sizes and workloads.
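The partition-and-map pattern behind Dask-style parallelism can be sketched without a GPU using only the standard library: split the data into partitions, transform each partition concurrently, then concatenate ("reduce") the results. The `etl_chunk` transform here is a made-up filter-and-scale step, not anything from the exam blueprint.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(data, n_parts):
    """Split a list into n roughly equal chunks (Dask-style partitions)."""
    k, r = divmod(len(data), n_parts)
    out, start = [], 0
    for i in range(n_parts):
        end = start + k + (1 if i < r else 0)
        out.append(data[start:end])
        start = end
    return out

def etl_chunk(chunk):
    # Illustrative per-partition transform: filter negatives, then scale.
    return [x * 2 for x in chunk if x >= 0]

def parallel_etl(data, n_parts=4):
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        results = pool.map(etl_chunk, partition(data, n_parts))
    return [x for part in results for x in part]  # "reduce": concatenate

print(parallel_etl([3, -1, 4, 1, -5, 9, 2, 6]))  # → [6, 8, 2, 18, 4, 12]
```

In real Dask the partitions live on workers (or GPUs with dask-cuda) and the scheduler handles the mapping; the structure of the computation is the same.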

Domain 3 - Data Preparation

Exam Weight: 15%

Feature engineering, data quality, and preprocessing workflows for reliable outcomes.

Objectives

  • Perform data cleansing and preprocessing with cuDF and pandas.
  • Transform and standardize data for model readiness.
  • Apply standardization to ensure feature uniformity where required.
  • Generate synthetic data for augmentation using cuDF and RAPIDS.
  • Identify and acquire suitable datasets for the task.
  • Monitor processing pipelines to recognize bottlenecks.
  • Process, organize, and store datasets for downstream use.
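Two of the objectives above (cleansing and standardization) can be shown in a small CPU-side sketch; the same operations map directly onto cuDF column methods. Dropping missing values and z-scoring to zero mean, unit variance are the pieces shown; the sample column is illustrative.

```python
from statistics import mean, stdev

def cleanse(column):
    """Drop missing values before computing statistics."""
    return [x for x in column if x is not None]

def standardize(column):
    """Z-score a feature column to zero mean, unit variance."""
    mu, sigma = mean(column), stdev(column)
    if sigma == 0:
        return [0.0] * len(column)  # constant column: no information to scale
    return [(x - mu) / sigma for x in column]

col = cleanse([2.0, None, 4.0, 6.0, 8.0])
z = standardize(col)
print([round(v, 3) for v in z])  # → [-1.162, -0.387, 0.387, 1.162]
```

Whether to drop or impute missing values depends on the pipeline; dropping is used here only to keep the sketch short.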

Domain 4 - GPU and Cloud Computing

Exam Weight: 20%

GPU architecture awareness, cloud orchestration, and performance benchmarking concepts.

Objectives

  • Analyze graph data with GPU-accelerated tools such as cuGraph.
  • Optimize data science performance through GPU acceleration.
  • Describe, follow, and execute CRISP-DM process steps.
  • Use dependency management tools such as Docker and Conda to handle versioning conflicts.
  • Determine optimal data type choices for feature columns.
  • Design and implement benchmarks to compare framework performance.
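The benchmarking objective amounts to a repeatable harness: time each candidate over several repeats and keep the best per-call time (the least noisy estimate). A real comparison would pit, say, pandas against cuDF on the same workload; here two pure-Python stand-ins are compared so the sketch runs anywhere.

```python
import timeit

def benchmark(fn, *args, repeat=5, number=100):
    """Best per-call time in seconds across `repeat` runs of `number` calls."""
    timer = timeit.Timer(lambda: fn(*args))
    return min(timer.repeat(repeat=repeat, number=number)) / number

def builtin_sum(xs):
    return sum(xs)

def loop_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

xs = list(range(10_000))
t_builtin = benchmark(builtin_sum, xs)
t_loop = benchmark(loop_sum, xs)
print(f"builtin sum: {t_builtin:.2e}s  python loop: {t_loop:.2e}s")
```

Taking the minimum over repeats (rather than the mean) filters out interference from other processes, which is why `timeit`'s own documentation recommends it.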

Domain 5 - Machine Learning

Exam Weight: 20%

Modeling workflows, training/evaluation loops, and acceleration-aware experimentation.

Objectives

  • Perform feature engineering for model development.
  • Identify when data scale or workload profile requires acceleration.
  • Run rapid experiments to balance accuracy and inference performance.
  • Optimize machine learning hyperparameters.
  • Train models and compare single-GPU versus multi-GPU strategies.
  • Apply GPU memory optimization techniques such as batching and mixed precision.
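The batching objective reduces to back-of-envelope arithmetic: divide usable device memory by the per-sample cost. Every number below is an illustrative assumption (a 224x224x3 float32 input, a ~20x activation blow-up, a 16 GiB device, a 10% safety reserve), not a measured figure; mixed precision would roughly halve `sample_bytes` and so roughly double the batch.

```python
def max_batch_size(sample_bytes, activation_multiplier, device_bytes,
                   reserve_frac=0.1):
    """Largest batch that fits, assuming each sample costs
    sample_bytes * activation_multiplier and keeping a safety reserve."""
    usable = device_bytes * (1 - reserve_frac)
    per_sample = sample_bytes * activation_multiplier
    return int(usable // per_sample)

# 224x224x3 float32 image = 602,112 bytes; assume activations inflate ~20x.
img = 224 * 224 * 3 * 4
print(max_batch_size(img, 20, 16 * 1024**3))  # → 1283
```

In practice the multiplier is found empirically (profile one batch, then scale); the point of the sketch is the fit check, not the specific constants.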

Domain 6 - MLOps

Exam Weight: 10%

Deployment, monitoring, versioning, and operational lifecycle management for ML systems.

Objectives

  • Determine optimal data type choices for each feature.
  • Assess and verify dataset memory footprint.
  • Compare required memory against available device memory.
  • Benchmark and optimize GPU-accelerated workflows.
  • Deploy and monitor models in production.
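The memory-footprint objectives can be sketched as simple columnar arithmetic: rows times the per-row byte width of the schema, compared against device memory with headroom for intermediates. The schema, row count, and 2x working multiplier below are illustrative assumptions.

```python
DTYPE_BYTES = {"int8": 1, "int32": 4, "int64": 8, "float32": 4, "float64": 8}

def dataset_footprint(n_rows, schema):
    """Estimated in-memory bytes for a columnar dataset, given {column: dtype}."""
    return n_rows * sum(DTYPE_BYTES[dtype] for dtype in schema.values())

def fits_on_device(n_rows, schema, device_bytes, working_multiplier=2.0):
    """Allow headroom for intermediates created during processing."""
    return dataset_footprint(n_rows, schema) * working_multiplier <= device_bytes

schema = {"user_id": "int64", "score": "float32", "ts": "int64"}
gib = dataset_footprint(100_000_000, schema) / 1024**3
print(f"{gib:.2f} GiB")  # 100M rows x 20 bytes/row ≈ 1.86 GiB
```

This also motivates the data-type objective above: downcasting `user_id` and `ts` to `int32` where their ranges allow would cut the footprint by 40%.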