Home

Stabilization

Image (or video) stabilization seeks to remove unwanted motion (often caused by camera shake or jitter) between consecutive frames of a video.
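As a rough illustration of one classical pipeline (a sketch, not this tutorial's code): estimate a global integer translation per frame with phase correlation, then smooth the accumulated camera path with a moving average. The function names and the window size are illustrative choices.

```python
import numpy as np

def estimate_shift(prev, cur):
    # Phase correlation: the normalized cross-power spectrum peaks at the
    # integer translation d such that cur ~= np.roll(prev, d, axis=(0, 1)).
    F = np.fft.fft2(cur) * np.conj(np.fft.fft2(prev))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = prev.shape
    if dy >= h // 2:  # wrap shifts into [-h/2, h/2)
        dy -= h
    if dx >= w // 2:
        dx -= w
    return int(dy), int(dx)

def stabilizing_corrections(shifts, window=5):
    # Smooth the cumulative camera path with a moving average; the
    # correction to apply to each frame is (smoothed path - raw path).
    path = np.cumsum(np.asarray(shifts, dtype=float), axis=0)
    kernel = np.ones(window) / window
    smooth = np.column_stack(
        [np.convolve(path[:, i], kernel, mode="same") for i in range(path.shape[1])]
    )
    return smooth - path
```

In a real stabilizer the corrections would then be applied by warping each frame; the sketch stops at estimating them.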

DVC

Video compression is essential for efficiently storing and transmitting video data. Traditional video compression standards like H.264/AVC and H.265/HEVC rely on hand-crafted algorithms and heuristics to reduce redundancy in video sequences. With the advent of deep learning, researchers have started exploring data-driven approaches to video compression, aiming to learn optimal representations directly from data.

VC

This tutorial explains the entire workflow of traditional video compression, combining motion estimation, motion compensation (including warping), residual calculation, DCT transformation, quantization, and encoding. We'll add the mathematical relationships between the blocks to provide a deeper understanding.
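The transform and quantization stages of that workflow can be sketched in a few lines; the block size (8) and uniform quantization step `q` below are illustrative choices, not values from the tutorial:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C @ C.T == I, so the inverse is C.T.
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    c[0] /= np.sqrt(2)
    return c * np.sqrt(2 / n)

def encode_block(residual, q=16):
    # residual -> 2-D DCT -> uniform quantization (the lossy step)
    C = dct_matrix(residual.shape[0])
    coeffs = C @ residual @ C.T
    return np.round(coeffs / q).astype(int)

def decode_block(q_coeffs, q=16):
    # dequantize -> inverse 2-D DCT -> reconstructed residual
    C = dct_matrix(q_coeffs.shape[0])
    return C.T @ (q_coeffs * q) @ C
```

A real codec would follow the quantizer with zig-zag scanning and entropy coding; this sketch covers only the transform/quantization round trip, whose error per coefficient is bounded by q/2.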

MC

Motion estimation and motion compensation are critical components in video compression algorithms. They exploit temporal redundancies between consecutive frames in a video sequence to reduce the amount of data required for efficient storage and transmission. By predicting the motion of objects from one frame to another, we can represent a video more compactly without significantly compromising visual quality.
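A minimal sketch of the simplest motion-estimation strategy, exhaustive block matching with a sum-of-absolute-differences (SAD) criterion (block size and search range are illustrative):

```python
import numpy as np

def block_match(ref, cur, by, bx, bs=8, search=4):
    # Full search: the motion vector is the offset into the reference
    # frame that minimizes SAD against the current block at (by, bx).
    block = cur[by:by + bs, bx:bx + bs]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            sad = np.abs(ref[y:y + bs, x:x + bs] - block).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def compensate(ref, mv, by, bx, bs=8):
    # Motion compensation: copy the matched reference block as prediction.
    dy, dx = mv
    return ref[by + dy:by + dy + bs, bx + dx:bx + dx + bs]
```

Only the (small) prediction residual and the motion vector then need to be coded, which is where the bit-rate saving comes from.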

DenoisingIII

To estimate noise in a single image, especially when the ground truth is not available, you need to make statistical assumptions about the noise and the image content. Here's a full tutorial that outlines multiple practical techniques for noise estimation:
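One practical technique of this kind is Immerkær's fast noise estimator; the sketch below assumes zero-mean additive Gaussian noise on a single grayscale image (a common modeling assumption, stated here explicitly):

```python
import numpy as np

def estimate_noise_sigma(img):
    # Immerkaer's method: a Laplacian-difference mask that annihilates
    # locally linear image structure but passes noise; the mean absolute
    # response is then rescaled into a sigma estimate.
    M = np.array([[ 1, -2,  1],
                  [-2,  4, -2],
                  [ 1, -2,  1]], dtype=float)
    h, w = img.shape
    acc = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            acc += M[i, j] * img[i:i + h - 2, j:j + w - 2]
    # Mask energy is 36, so the response std is 6*sigma; for a Gaussian,
    # E|X| = std * sqrt(2/pi), giving the correction factor below.
    return np.sqrt(np.pi / 2) * np.abs(acc).sum() / (6.0 * (h - 2) * (w - 2))
```

Because the mask cancels linear ramps exactly, smooth gradients in the image content do not bias the estimate; strong texture still can, which is why such estimators are usually paired with the statistical assumptions discussed in the tutorial.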

DenoisingII

In image denoising, we often want to remove noise but preserve edges. Classical filters like Gaussian blur are fast but blur across edges, causing loss of detail.
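The classic edge-preserving alternative is the bilateral filter; a naive (slow but clear) NumPy version, with illustrative parameter values:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    # Each output pixel is a weighted mean of its neighborhood; the weight
    # combines spatial closeness (sigma_s) with intensity similarity
    # (sigma_r), so averaging does not cross strong edges.
    h, w = img.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(img.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rangew = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rangew
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

On a 0/100 step edge with `sigma_r=20`, pixels across the edge get weight exp(-12.5) and contribute almost nothing, so the edge survives; a plain 5x5 box filter at the same pixel would output 40 and smear it.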

DenoisingI

Image denoising is a fundamental problem in image processing and computer vision. Given a noisy image, the goal is to recover a clean version that preserves important details and structures. Traditional denoising methods often rely on local information.
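The simplest local method is a box (mean) filter; a minimal sketch, assuming i.i.d. zero-mean noise, where averaging a 3x3 neighborhood divides the noise variance by 9 in flat regions:

```python
import numpy as np

def mean_filter3(img):
    # 3x3 box average built from shifted sums; edge-replicated padding
    # keeps the output the same size as the input.
    pad = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros(img.shape)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0
```

The variance-reduction argument is exactly the weakness the teaser alludes to: the same averaging that suppresses noise also blurs edges, motivating the edge-aware methods covered later.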

ImageFiltering

High-pass (HPF) and low-pass (LPF) filtering of images, Gaussian kernels, Laplacian pyramids
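These three topics fit together in a few lines; as an illustrative sketch (not this tutorial's code), a separable Gaussian gives the LPF, its complement gives the HPF, and the pair (low, high) is one band of a Laplacian-pyramid-style decomposition:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    # Truncated, normalized 1-D Gaussian (radius ~ 3*sigma by default).
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def separable_blur(img, sigma=1.0):
    # LPF: convolve rows then columns with the same 1-D Gaussian.
    k = gaussian_kernel1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1,
                              img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def laplacian_level(img, sigma=1.0):
    # One band: HPF = image - LPF(image); the pair reconstructs exactly.
    low = separable_blur(img, sigma)
    return low, img - low
```

A full Laplacian pyramid would additionally downsample `low` and recurse; this sketch keeps a single level so the exact-reconstruction property (`low + high == img`) is obvious.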

HDR

Standard cameras and displays work in Standard Dynamic Range (SDR), which captures only a limited range of light intensity (~100 nits). However, real-world scenes span a much larger range (~0.01 to 10,000+ nits). HDR imaging aims to capture, represent, and display this much wider range of luminance.
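A toy sketch of the two core steps, under simplifying assumptions (linear sensor response, a triangular "hat" weight that downweights clipped pixels, and the global Reinhard operator L/(1+L) for tone mapping); function names and parameters are illustrative:

```python
import numpy as np

def merge_exposures(images, times):
    # Radiance estimate: weighted average of (pixel / exposure time),
    # trusting mid-range values most to avoid under/over-exposed pixels.
    acc = np.zeros(images[0].shape)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        z = img.astype(float)
        w = 1.0 - np.abs(z / 255.0 - 0.5) * 2.0  # hat: 0 at 0/255, 1 at mid
        acc += w * z / t
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

def reinhard_tonemap(radiance):
    # Global Reinhard operator: normalize by mean luminance, then
    # L / (1 + L) compresses highlights into [0, 1).
    L = radiance / radiance.mean()
    return L / (1.0 + L)
```

Real pipelines additionally recover the camera response curve (e.g., Debevec-Malik style calibration) before merging; this sketch assumes it is already linear.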