{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "Image_filtering - AY2020-2021.ipynb", "provenance": [], "collapsed_sections": [ "616MZ-HIpcvs", "xgAbWzVhmOjv" ] }, "kernelspec": { "name": "python3", "display_name": "Python 3" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "5ExxG82ZACpe" }, "source": [ "## **Vision and Cognitive Services Lab - Image Filtering**\n", "\n", "
\n",
"\n",
"\n",
"\n",
"OpenCV (Open source Computer Vision - https://opencv.org/) is a famous programming library for developing real-time computer vision applications. \n",
"\n",
"* Cross-platform;\n",
"* Free functions to be used under the open-source BSD license:\n",
" * Pixel-level image manipulation, camera calibration, 3-D reconstruction, feature points detectors, matching algorithms, motion extraction, feature tracking;\n",
"* Support for models developed with various deep learning frameworks (e.g., TensorFlow, PyTorch, Caffe);\n",
"* Combined with machine learning and DNN modules for image and video manipulation;\n",
"* OpenCV Documentation: (https://docs.opencv.org/)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5PUm7wL6KN_Z"
},
"source": [
"     \n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lV_mRT55CONX"
},
"source": [
"## **Load and visualize an image**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "pU301U0_1Cz5",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 535
},
"outputId": "b68ed489-c305-48d7-f3b3-e205704f3a5f"
},
"source": [
"# Import libraries\n",
"import cv2\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import itertools\n",
"\n",
"# Read images into the workspace\n",
"# retval\t=\tcv.imread(\tfilename[, flags]\t)\n",
"lena = cv2.imread('lena.bmp') # BGR image\n",
"lena_rgb = cv2.cvtColor(lena, cv2.COLOR_BGR2RGB) # RGB image\n",
"lena_gs = cv2.cvtColor(lena, cv2.COLOR_BGR2GRAY) # GS image\n",
"# # Convert image to grayscale when importing\n",
"# lena_gs = cv2.imread('lena.bmp', cv2.IMREAD_GRAYSCALE) \n",
"cameraman = cv2.imread('cameraman.tif')\n",
"sp_phantom = cv2.imread('SheppLogan_Phantom.png')\n",
"\n",
"# Plot rgb/grayscale images\n",
"plt.figure(figsize=(14, 7))\n",
"plt.subplot(1,4,1)\n",
"plt.imshow(lena)\n",
"plt.title('Lena (BGR)')\n",
"plt.subplot(1,4,2)\n",
"plt.imshow(lena_rgb)\n",
"plt.title('Lena (RGB)')\n",
"plt.subplot(1,4,3)\n",
"plt.imshow(lena_gs, cmap=plt.cm.gray)\n",
"plt.title('Lena (GRAY)')\n",
"\n",
"plt.figure(figsize=(14, 7))\n",
"plt.subplot(1,4,1)\n",
"plt.imshow(cameraman)\n",
"plt.title('Cameraman')\n",
"plt.subplot(1,4,2)\n",
"plt.imshow(sp_phantom)\n",
"plt.title('SheppLogan Phantom')\n",
"\n",
"# Print images properties\n",
"print('\"Lena\" Properties')\n",
"print(\"Number of Pixels: \" + str(lena_rgb.size))\n",
"print(\"Image (RGB) shape: \" + str(lena_rgb.shape))\n",
"print(\"Image (GRAY) shape: \" + str(lena_gs.shape))\n",
"print(\"Image type: \" + str(type(lena_rgb)))\n",
"\n",
"# Save image shape\n",
"(height, width, channels) = lena_rgb.shape"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"\"Lena\" Properties\n",
"Number of Pixels: 786432\n",
"Image (RGB) shape: (512, 512, 3)\n",
"Image (GRAY) shape: (512, 512)\n",
"Image type: Image noise is an undesired effect produced by image sensors or external factors which may obscure the information. Linear filtering can be used to remove certain types of noise. The following example shows how to remove salt and pepper noise from an image using a median filter (box filter can also be used). Using the median filter, the value of the output pixel is defined by the median of the neighborhood pixels, rather than the mean. For this reason, the median filter is less sensitive than the mean filter to extreme values (outliers) and it does not reduce the sharpness of the image. In this exercise, you will implement the median filter replacing each pixel of the input image with the median of its neighborhood. The median value is computed by sorting all the neighborhood values of the selected pixel in ascending order and then by replacing its value by the pixel value in the middle. Input and output images must have the same spatial size. The kernel size must be an odd number.\n",
"\n",
"\n",
" While in blurring we reduce the edge content, with sharpening we increase the edge content. A sharpening filter can be obtained in two steps: given the smoothed (blurred) image, it subtracts this image from the original one to obtain the \"details\", and then adds the \"details\" to the original image.