CS 443 and 543 – Machine Learning – Project 3 – Deep Learning (out of 150 points)
DUE Wednesday, April 15, 2020

In this assignment you will use neural networks to perform practical deep learning analysis. Specifically, your goal is to develop a step-by-step process for creating, training, and evaluating deep learning models using TensorFlow and Keras. You are highly encouraged to use Google Colab. You should aim to meet the following objectives:

1) Gain an understanding of the deep learning model life cycle. The steps are: a) define the model, b) compile the model, c) fit the model, d) evaluate the model, e) make predictions. (A minimal sketch of this life cycle appears after the model overview below.)
2) Develop deep learning models, including multilayer perceptron models, convolutional neural network models, and recurrent neural network models.
3) Learn how to interpret learning curves and how to save models for later use.
4) Learn techniques to improve the performance of deep learning models, including a) avoiding overfitting with dropout, b) accelerating training with batch normalization, and c) halting training at the right time with early stopping.
5) Learn and understand how autoencoders can be applied to image compression (dimensionality reduction) and image processing.

Useful References (there is a lot of nice boilerplate code here):
1) Keras Documentation: https://keras.io/
2) TensorFlow: https://www.tensorflow.org/
3) TensorFlow 2 Tutorial: Get Started in Deep Learning with tf.keras: https://machinelearningmastery.com/tensorflow-tutorial-deep-learning-with-tf-keras/
4) Google Colab Free GPU Tutorial: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d
5) Building Autoencoders in Keras: https://blog.keras.io/building-autoencoders-in-keras.html
6) How Autoencoders Work: Intro and Use Cases: https://www.kaggle.com/shivamb/how-autoencoders-work-intro-and-usecases
7) Autoencoders with Keras: https://ramhiser.com/post/2018-05-14-autoencoders-with-keras/

Deep Neural Network Models

Multilayer Perceptron (MLP) Models
You are to develop the following MLP models:
a) an MLP model for binary classification
b) an MLP model for multiclass classification
c) an MLP model for regression

Convolutional Neural Networks (CNNs)
d) A convolutional network model for image classification. The most popular use of CNNs is image processing. For this task you should select a suitable dataset (other than MNIST) for an image classification task. Start by loading the images and plotting a few of them. Then fit the model and evaluate it on a test dataset. Finally, make a prediction for a single image.

Recurrent Neural Networks (RNNs)
Recurrent neural networks are designed to operate on sequences of data, which makes them effective for natural language processing problems as well as time series forecasting and speech recognition. One of the most popular types of RNN is the Long Short-Term Memory network, or LSTM. Here you should use an LSTM to deal with sequential data (text documents, audio, or time series).
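As a reference point for objective 1, here is a minimal sketch of the define–compile–fit–evaluate–predict life cycle for an MLP binary classifier. The file name "data.csv", the column layout, and the layer sizes are illustrative assumptions, not part of the assignment.

```python
# Minimal sketch of the tf.keras model life cycle for binary classification.
# Assumptions: a CSV where all columns but the last are numeric features and
# the last column is a 0/1 label; the file path and layer sizes are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

df = pd.read_csv("data.csv")                      # hypothetical dataset
X, y = df.iloc[:, :-1].values, df.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 1) Define the model (Sequential API; He initialization pairs well with ReLU
#    and helps avoid vanishing gradients)
model = keras.Sequential([
    layers.Dense(32, activation="relu", kernel_initializer="he_normal",
                 input_shape=(X.shape[1],)),
    layers.Dense(16, activation="relu", kernel_initializer="he_normal"),
    layers.Dense(1, activation="sigmoid"),        # single sigmoid unit for a binary label
])
model.summary()                                   # text description of the architecture
keras.utils.plot_model(model, "model.png", show_shapes=True)  # architecture plot (needs pydot/graphviz)

# 2) Compile: binary cross-entropy matches the sigmoid output; track accuracy
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 3) Fit (epoch count and batch size are placeholders to tune per dataset)
history = model.fit(X_train, y_train, epochs=50, batch_size=32,
                    validation_split=0.2, verbose=0)

# 4) Evaluate on the held-out test set
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"test accuracy: {acc:.3f}")

# 5) Make a prediction for a single new sample
print(model.predict(X_test[:1]))
```

The same five steps apply to the multiclass and regression MLPs; only the output layer, the loss (for example sparse_categorical_crossentropy or mse), and the metrics change.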
For each of these five models, perform the following tasks:
· Select an appropriate dataset, describe the problem statement, and load the data.
· Perform pre-processing as necessary and split the data into training and test sets.
· Define and build the model (it can be defined with either the Sequential or the Functional API).
  · This involves defining the layers of the model, configuring each layer with a number of nodes and an activation function, and connecting the layers into a cohesive model.
  · You should specify input and output layers as appropriate.
  · You may add as many layers as you feel appropriate.
  · You should select an appropriate activation function, with weight initialization chosen to avoid problems with vanishing gradients.
  · Obtain a text description of the model and a model architecture plot using the appropriate functions.
  · Describe the connections and data flow in your model based on the text description and architecture plot.
· Compile the model.
  · Select an appropriate loss function that you want to optimize, and state why it was selected.
  · Select an algorithm to perform the optimization procedure.
  · Select appropriate performance metrics to keep track of during the model training process.
· Fit the model.
  · Select the training configuration (number of epochs and the batch size).
  · Depending on the complexity of the model, the hardware, and the size of the training dataset, this process can take from seconds to hours to days.
· Evaluate the model.
· Save the model to a file, then load the model from the file.
· Make a prediction.
· Describe and explain your results.
· Plot learning curves to gain insight into the learning dynamics of the model.
  · Describe how well the model is learning.
  · Describe whether the model is underfitting or overfitting the training dataset.
· Are there ways you could improve the model? Please try to run potential improvements and explain your results and outcomes.

To obtain better model performance, you should try the following (a short tf.keras sketch of these techniques follows this list):
· Reduce overfitting with dropout.
· Accelerate training with batch normalization.
· Halt training at the appropriate time with early stopping.
· You may also use the learning curve to obtain additional insights into the learning dynamics of the run and when training was halted.
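As referenced above, a minimal sketch of dropout, batch normalization, and early stopping in tf.keras, plus saving the model and plotting learning curves. The layer sizes, dropout rate, patience value, and file name are illustrative assumptions, and X_train/y_train are assumed to be a training split prepared as in the previous sketch.

```python
# Sketch: Dropout + BatchNormalization inside the architecture, EarlyStopping at fit time.
# X_train / y_train are assumed to exist already; all hyperparameters are placeholders.
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(64, activation="relu", kernel_initializer="he_normal",
                 input_shape=(X_train.shape[1],)),
    layers.BatchNormalization(),   # normalize layer inputs to speed up and stabilize training
    layers.Dropout(0.3),           # randomly drop 30% of units each step to reduce overfitting
    layers.Dense(32, activation="relu", kernel_initializer="he_normal"),
    layers.BatchNormalization(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop when validation loss has not improved for 10 epochs and keep the best weights.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=200, batch_size=32,
                    validation_split=0.2, callbacks=[early_stop], verbose=0)

# Save, reload, and plot the learning curves from the History object.
model.save("model.h5")
reloaded = keras.models.load_model("model.h5")

plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.legend()
plt.show()
```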
Autoencoders

In this part of the project you will gain an understanding of how autoencoders learn a data representation (encoding) for applications such as dimensionality reduction (for example, image compression) and data denoising.

You will first create a simple (single-layer) autoencoder for image compression, as follows:
1) Select an appropriate dataset of images (other than MNIST). Perform data loading and preprocessing.
2) Create an autoencoder model. This will include an encoding function (encoder model) and a decoding function (decoder model). Select an appropriate activation function for the encoding and decoding models, and choose a compression factor so that the encoding actually compresses the data. (A minimal sketch of such a model appears at the end of this section.)
3) Train the model using an appropriate optimizer and loss function. Iterations should use a reasonable number of batches and epochs.
4) To check the encoded images and the reconstructed image quality, randomly sample and plot 10 test images. The plots should show the original image, the encoded image, and the reconstructed image. Describe your results in terms of how the reconstructed images compare to the originals.

Next you will add additional layers to create a deep autoencoder:
5) Add 3 to 5 fully connected layers to the encoding and decoding models.
6) Extract the encoder model to visualize the encoded images. Sample the same test images as for the simple autoencoder. Plot the original image, encoded image, and reconstructed image.
7) Describe and explain your results in terms of how the reconstructed images look compared to those from the single-layer autoencoder.

Now use a convolutional autoencoder, which has CNNs as the encoding and decoding models instead of fully connected networks:
8) You may need to reshape the images to their original resolution for use with the CNNs.
9) Build the convolutional autoencoder using Conv2D and MaxPooling2D layers for the encoder, and Conv2D and UpSampling2D layers for the decoder. Encoded images may require flattening for visualization before upscaling back to the original resolution.
10) To extract the encoder model from the autoencoder, create a new Model with the same input as the autoencoder; its output should be that of the flattening layer.
11) Train the model and evaluate the reconstructed images. Sample the same test images as for the simple and deep autoencoders. Plot the original image, encoded image, and reconstructed image. Describe how the images look after reconstruction compared to the simple and deep autoencoders.

Finally, evaluate how autoencoders may be useful for denoising image data:
12) Add some noise to the training and test data.
13) Build a convolutional autoencoder with more parameters. Use the noisy data as the training and validation data.
14) Plot the original and reconstructed images. Evaluate and describe how the reconstructed images look compared to the originals.
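As a starting point for steps 1–4 (and the encoder extraction used again in step 10), here is a minimal sketch of a single-layer autoencoder in tf.keras. The code size, the flattened-vector input, and the x_train/x_test arrays are illustrative assumptions rather than requirements.

```python
# Minimal single-layer autoencoder sketch. Assumes images are already loaded,
# scaled to [0, 1], and flattened into vectors x_train / x_test of length input_dim.
from tensorflow import keras
from tensorflow.keras import layers

input_dim = x_train.shape[1]        # e.g. 32*32*3 for small RGB images
encoding_dim = input_dim // 24      # compression factor of roughly 24x (placeholder)

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(encoding_dim, activation="relu")(inputs)      # encoder
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)     # decoder

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train the autoencoder to reproduce its own input.
autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
                shuffle=True, validation_data=(x_test, x_test))

# Extract the encoder as its own model to inspect the compressed codes.
encoder = keras.Model(inputs, encoded)
codes = encoder.predict(x_test[:10])
reconstructions = autoencoder.predict(x_test[:10])
```

The deep and convolutional variants in steps 5–11 follow the same pattern, with stacked Dense layers, or Conv2D/MaxPooling2D and Conv2D/UpSampling2D blocks, in place of the single Dense pair.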
GRADING RUBRIC: (_________ out of 150 points)

MLP for Binary Classification ____ out of 25 points
· Select an appropriate dataset, describe the problem statement, and load the data ___ out of 2 points
· Perform pre-processing as necessary and split the data into training and test sets ___ out of 1 point
· Define and build the model (can be defined with either the Sequential or the Functional API)
  · Configure layers with nodes and an activation function ___ out of 1 point
  · Select an appropriate activation function with weight initialization to avoid problems with vanishing gradients ___ out of 1 point
  · Obtain a text description of the model and a model architecture plot using the appropriate functions ___ out of 1 point
  · Describe the connections and data flow in your model based on the text description and architecture plot ___ out of 1 point
· Compile the model
  · Select an appropriate loss function that you want to optimize ___ out of 1 point
  · State why the loss function was selected ___ out of 1 point
  · Select an algorithm to perform the optimization procedure ___ out of 1 point
  · Select appropriate performance metrics to keep track of during the model training process ___ out of 1 point
· Fit the model ___ out of 1 point
  · Select the training configuration (number of epochs and the batch size)
· Evaluate the model ___ out of 2 points
· Save and load the model ___ out of 1 point
· Make a prediction ___ out of 1 point
· Describe and explain your results ___ out of 2 points
· Plot learning curves to gain insight into the learning dynamics of the model
  · Describe how well the model is learning ___ out of 1 point
  · Describe whether the model is underfitting or overfitting the training dataset ___ out of 1 point
· Are there ways you could improve the model? To obtain better model performance, you should try the following:
  · Reduce overfitting with dropout ___ out of 1 point
  · Accelerate training with batch normalization ___ out of 1 point
  · Halt training at the appropriate time with early stopping ___ out of 1 point
  · You may also use the learning curve to obtain additional insights into the learning dynamics of the run and when training was halted
· Explain your results and outcomes ___ out of 2 points

MLP for Multiclass Classification _____ out of 25 points
· Select an appropriate dataset, describe the problem statement, and load the data ___ out of 2 points
· Perform pre-processing as necessary and split the data into training and test sets ___ out of 1 point
· Define and build the model (can be defined with either the Sequential or the Functional API)
  · Configure layers with nodes and an activation function ___ out of 1 point
  · Select an appropriate activation function with weight initialization to avoid problems with vanishing gradients ___ out of 1 point
  · Obtain a text description of the model and a model architecture plot using the appropriate functions ___ out of 1 point
  · Describe the connections and data flow in your model based on the text description and architecture plot ___ out of 1 point
· Compile the model
  · Select an appropriate loss function that you want to optimize ___ out of 1 point
  · State why the loss function was selected ___ out
Rohith answered on May 11, 2021:
Solution notebook: 57451.ipynb (Google Colab, Python 3 kernel, GPU runtime).
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"7f7940897d7d4d63af37fd3b1c2f7f95": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_b182c4a6480f48e9b2bfb1390b0d4815",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_4474bca1d3ab4c47a99221f66b6cfadd",
"IPY_MODEL_20627597e24d4c93bbbeab41455a6574"
]
}
},
"b182c4a6480f48e9b2bfb1390b0d4815": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"4474bca1d3ab4c47a99221f66b6cfadd": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_e7c42f72068141289030859e3722efcb",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 1,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 1,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_8dd2e6b864d3477bad15b833c14f30d9"
}
},
"20627597e24d4c93bbbeab41455a6574": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_c1fdda8ba5ec46118700728af22f1e57",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 9920512/? [00:01<00:00, 7430301.38it/s]",
"_view_count": null,
"_view_module_version":
"1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_2ad6453392014e9ab9a63ed5a1980b2a"
}
},
"e7c42f72068141289030859e3722efcb": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "initial",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"8dd2e6b864d3477bad15b833c14f30d9": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"c1fdda8ba5ec46118700728af22f1e57": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"2ad6453392014e9ab9a63ed5a1980b2a": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"5cba4940615046d7873ab1733a177a93": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_f948efef3ecd4420a099eaab630656ba",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_6e1285db8c5c4c00b8318cb2673f6397",
"IPY_MODEL_d51fdc2439e54c1f923eefed00fa1b32"
]
}
},
"f948efef3ecd4420a099eaab630656ba": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"6e1285db8c5c4c00b8318cb2673f6397": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_41e91d68046f472e8bc8c209014d9e81",
"_dom_classes": [],
"description": " 0%",
"_model_name": "FloatProgressModel",
"bar_style": "info",
"max": 1,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 0,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_92cea4fc6abe4602acaeb9ecdf7e5062"
}
},
"d51fdc2439e54c1f923eefed00fa1b32": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_7395f0be0abf4a508e3a5014196b6d58",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 0/28881 [00:00<?, ?it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_096ef23bca8140ac85cc8d10b841f998"
}
},
"41e91d68046f472e8bc8c209014d9e81": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "initial",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"92cea4fc6abe4602acaeb9ecdf7e5062": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"7395f0be0abf4a508e3a5014196b6d58": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"096ef23bca8140ac85cc8d10b841f998": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"e8d0dc3eb472476b83ff2ece1df2e566": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_d314920243714f47a9e85cee318cf4d1",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_1143403a73a24f2092c7b80ceb452aad",
"IPY_MODEL_5b2c533124ec4fd6a3c642126dc09935"
]
}
},
"d314920243714f47a9e85cee318cf4d1": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"1143403a73a24f2092c7b80ceb452aad": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_2335215e5d084710a76db82e046ace37",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 1,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 1,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_43bd7d1f92a041bbb323f3362c88cb91"
}
},
"5b2c533124ec4fd6a3c642126dc09935": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_a17af93ca1434066bab79fee1ae58283",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 1654784/? [00:01<00:00, 1653308.19it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_18a5f7dc97ba4bdea984913cd7387f32"
}
},
"2335215e5d084710a76db82e046ace37": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "initial",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"43bd7d1f92a041bbb323f3362c88cb91": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"a17af93ca1434066bab79fee1ae58283": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"18a5f7dc97ba4bdea984913cd7387f32": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"a788a921543e4379bb52aa0e679f68a2": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_24365762ed7a45aab8b2e1be6fceb80e",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_9cf1cc17eebf42e1b6d739c038480ccc",
"IPY_MODEL_6ef640764e114c348858025d1b342918"
]
}
},
"24365762ed7a45aab8b2e1be6fceb80e": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"9cf1cc17eebf42e1b6d739c038480ccc": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_cb0fb1fc8b574701a1f30133a95e876f",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 1,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 1,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_68ee0d9558214e66aee690f655f14943"
}
},
"6ef640764e114c348858025d1b342918": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_7727c0a6477846e5823024e9ee4cfd0a",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 8192/? [00:00<00:00, 22287.21it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_664ff6d6b6bd474c89c78022c100d9ca"
}
},
"cb0fb1fc8b574701a1f30133a95e876f": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "initial",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"68ee0d9558214e66aee690f655f14943": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"7727c0a6477846e5823024e9ee4cfd0a": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"664ff6d6b6bd474c89c78022c100d9ca": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
}
}
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "D9bZFu-Et7z_",
"colab_type": "text"
},
"source": [
"# CNN"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OBFFD-YPse3x",
"colab_type": "text"
},
"source": [
"# Cat vs. Dog Image Classification\n",
"\n",
"1. Explore the example data\n",
"2. Build a small convnet from scratch to solve our classification problem\n",
"3. Evaluate training and validation accuracy"
]
},
```
!wget --no-check-certificate \
    https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
    -O /tmp/cats_and_dogs_filtered.zip
```

Output:

```
--2020-05-11 07:45:50--  https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.5.208, 2607:f8b0:4007:800::2010
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.5.208|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 68606236 (65M) [application/zip]
Saving to: ‘/tmp/cats_and_dogs_filtered.zip’

/tmp/cats_and_dogs_ 100%[===================>]  65.43M  84.3MB/s    in 0.8s

2020-05-11 07:45:51 (84.3 MB/s) - ‘/tmp/cats_and_dogs_filtered.zip’ saved [68606236/68606236]
```
```python
import os
import zipfile

local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
```
The contents of the .zip are extracted to the base directory `/tmp/cats_and_dogs_filtered`, which contains `train` and `validation` subdirectories for the training and validation datasets (see the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/validation/check-your-intuition) for a refresher on training, validation, and test sets), which in turn each contain `cats` and `dogs` subdirectories. Let's define each of these directories:
```python
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')

# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')

# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')

# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')

# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
```
```python
train_cat_fnames = os.listdir(train_cats_dir)
print(train_cat_fnames[:10])

train_dog_fnames = os.listdir(train_dogs_dir)
train_dog_fnames.sort()
print(train_dog_fnames[:10])
```

Output:

```
['cat.769.jpg', 'cat.13.jpg', 'cat.596.jpg', 'cat.991.jpg', 'cat.908.jpg', 'cat.816.jpg', 'cat.7.jpg', 'cat.866.jpg', 'cat.383.jpg', 'cat.504.jpg']
['dog.0.jpg', 'dog.1.jpg', 'dog.10.jpg', 'dog.100.jpg', 'dog.101.jpg', 'dog.102.jpg', 'dog.103.jpg', 'dog.104.jpg', 'dog.105.jpg', 'dog.106.jpg']
```
```python
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
```

Output:

```
total training cat images: 1000
total training dog images: 1000
total validation cat images: 500
total validation dog images: 500
```
```python
%matplotlib inline

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Parameters for our graph; we'll output images in a 4x4 configuration
nrows = 4
ncols = 4

# Index for iterating over images
pic_index = 0
```
```python
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)

pic_index += 8
next_cat_pix = [os.path.join(train_cats_dir, fname)
                for fname in train_cat_fnames[pic_index-8:pic_index]]
next_dog_pix = [os.path.join(train_dogs_dir, fname)
                for fname in train_dog_fnames[pic_index-8:pic_index]]

for i, img_path in enumerate(next_cat_pix + next_dog_pix):
    # Set up subplot; subplot indices start at 1
    sp = plt.subplot(nrows, ncols, i + 1)
    sp.axis('Off')  # Don't show axes (or gridlines)

    img = mpimg.imread(img_path)
    plt.imshow(img)

plt.show()
```

Output: a 4x4 grid showing eight sample training cat images and eight sample training dog images.