MixedPrecision

Mixed precision training is a technique in deep learning where computations are performed using different numerical precisions—typically a mix of 16-bit floating point (FP16) and 32-bit floating point (FP32)—to accelerate training and reduce memory usage while maintaining model accuracy.
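
A minimal sketch of how this looks in PyTorch with automatic mixed precision: the forward pass runs under autocast while a gradient scaler guards FP16 gradients against underflow. The model, optimizer, and loader below are illustrative placeholders.

    import torch

    model = torch.nn.Linear(512, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    scaler = torch.cuda.amp.GradScaler()  # rescales the loss so FP16 gradients do not underflow

    for inputs, targets in loader:  # `loader` is a placeholder DataLoader yielding CUDA tensors
        optimizer.zero_grad()
        # Eligible ops run in FP16; numerically sensitive ones stay in FP32
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        scaler.scale(loss).backward()  # backward pass on the scaled loss
        scaler.step(optimizer)         # unscales gradients before the optimizer step
        scaler.update()                # adapts the scale factor for the next iteration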

Quantization

With the proliferation of deep learning models in various applications, deploying these models on resource-constrained devices like mobile phones, embedded systems, and IoT devices has become essential. Quantization is a key technique that reduces the model size and computational requirements by converting floating-point numbers to lower-precision representations, such as integers. This tutorial provides an in-depth exploration of quantizing machine learning models. We will delve into the mathematical underpinnings, practical implementations using PyTorch, and advanced topics like mixed precision quantization and layer fusion. By the end of this tutorial, you will have a comprehensive understanding of quantization techniques and how to apply them effectively to optimize your machine learning models.
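
As a preview of the core math, here is a minimal sketch of uniform affine int8 quantization, with the scale and zero-point derived from the tensor's range; the helper names are illustrative, and a constant tensor would need a guard against a zero scale.

    import torch

    def quantize_int8(x):
        # Affine scheme: real = scale * (q - zero_point); assumes x is not constant
        qmin, qmax = -128, 127
        scale = (x.max() - x.min()) / (qmax - qmin)
        zero_point = int(round(qmin - (x.min() / scale).item()))
        q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax).to(torch.int8)
        return q, scale, zero_point

    def dequantize_int8(q, scale, zero_point):
        return (q.float() - zero_point) * scale

    x = torch.randn(8)
    q, scale, zp = quantize_int8(x)
    print(dequantize_int8(q, scale, zp))  # close to x, up to rounding error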

RANSAC

RANSAC (Random Sample Consensus) is an iterative method for robust parameter estimation: it repeatedly fits a model to a minimal random sample of the data and keeps the candidate with the largest consensus set of inliers, so outliers have little influence on the final estimate.
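
A minimal NumPy sketch for 2D line fitting illustrates the loop: draw the two-point minimal sample, fit a candidate line, count inliers, keep the best. The iteration count and inlier threshold here are illustrative choices.

    import numpy as np

    def ransac_line(x, y, n_iters=200, thresh=0.1, seed=0):
        rng = np.random.default_rng(seed)
        best_model, best_count = None, 0
        for _ in range(n_iters):
            i, j = rng.choice(len(x), size=2, replace=False)  # minimal sample: 2 points
            if x[i] == x[j]:
                continue  # degenerate sample, slope undefined
            a = (y[j] - y[i]) / (x[j] - x[i])
            b = y[i] - a * x[i]
            inliers = np.abs(y - (a * x + b)) < thresh        # consensus set
            if inliers.sum() > best_count:
                best_count, best_model = int(inliers.sum()), (a, b)
        return best_model  # in practice, refit on the winning inlier set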

FocalLoss

Focal Loss is a modified version of the standard cross-entropy loss that down-weights well-classified examples so training focuses on hard ones. It is designed to address severe class imbalance, especially in tasks like object detection (e.g., RetinaNet) or extremely imbalanced binary classification.
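
A minimal PyTorch sketch of the binary (sigmoid) variant, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), using the alpha = 0.25, gamma = 2 defaults from the RetinaNet paper:

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # targets are 0/1 floats; bce equals -log(p_t) elementwise
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        p_t = p * targets + (1 - p) * (1 - targets)          # probability of the true class
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** gamma * bce).mean()   # down-weights easy examples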

Kmeans

K-Means is a popular clustering algorithm used to partition a dataset into K clusters by minimizing intra-cluster variance. A crucial factor in its performance is how you initialize the cluster centroids.
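
One widely used answer to the initialization problem is k-means++. A minimal NumPy sketch, where each new centroid is sampled with probability proportional to its squared distance from the nearest centroid chosen so far:

    import numpy as np

    def kmeans_pp_init(X, k, seed=0):
        rng = np.random.default_rng(seed)
        centroids = [X[rng.integers(len(X))]]  # first centroid chosen uniformly
        for _ in range(k - 1):
            # squared distance of every point to its nearest chosen centroid
            d2 = np.min(((X[:, None, :] - np.array(centroids)) ** 2).sum(-1), axis=1)
            centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
        return np.array(centroids)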

Ensemble

Ensemble learning combines multiple base models into a stronger predictor, covering bootstrap sampling, bagging (Random Forest), and boosting (AdaBoost, Gradient Boosting, XGBoost).
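
A minimal scikit-learn sketch of bagging built from scratch: each tree trains on a bootstrap resample of the data, and predictions are averaged into a majority vote (binary 0/1 labels assumed here for simplicity).

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagged_predict(X_train, y_train, X_test, n_trees=25, seed=0):
        rng = np.random.default_rng(seed)
        votes = []
        for _ in range(n_trees):
            idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap: sample with replacement
            tree = DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])
            votes.append(tree.predict(X_test))
        return np.round(np.mean(votes, axis=0))  # majority vote over the ensemble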

DecisionTrees

A Decision Tree is a recursive, rule-based model that partitions the feature space R^n into disjoint regions and assigns a prediction to each region. It works by splitting the dataset at each node based on feature values to reduce some measure of impurity or error.
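
A minimal NumPy sketch of the core splitting step, an exhaustive search for the (feature, threshold) pair that minimizes the weighted Gini impurity of the two children; Gini is one illustrative choice of impurity measure.

    import numpy as np

    def gini(y):
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def best_split(X, y):
        best_feature, best_threshold, best_score = None, None, np.inf
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                left = X[:, f] <= t
                if left.all() or not left.any():
                    continue  # both children must be non-empty
                # child impurities, weighted by their share of the samples
                score = left.mean() * gini(y[left]) + (~left).mean() * gini(y[~left])
                if score < best_score:
                    best_feature, best_threshold, best_score = f, t, score
        return best_feature, best_threshold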

SVM

Support Vector Machines learn a maximum-margin separating hyperplane; covered here are hard-margin SVM, soft-margin SVM, kernel tricks, and Support Vector Regression.
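
A minimal scikit-learn sketch contrasting a soft-margin linear SVM with an RBF-kernel SVM on a toy dataset; C (the margin/slack trade-off) and the dataset are illustrative choices.

    from sklearn.datasets import make_moons
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # non-linearly separable toy data
    linear = SVC(kernel="linear", C=1.0).fit(X, y)               # soft-margin linear SVM
    rbf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)      # kernel trick via RBF
    print(linear.score(X, y), rbf.score(X, y))                   # RBF should separate the moons better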