Wednesday (7/7) Response

(1) From the Preprocess the data section of the script, modify the training image to produce three new images.

img.png img_1.png img_2.png
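A minimal sketch of how three modified images could be produced, assuming the 28x28 images from the tutorial's dataset scaled to [0, 1]; a random array stands in here for `train_images[0]`, and the three transformations (mirror, rotate, invert) are illustrative choices, not the only ones the question allows:

```python
import numpy as np

# Stand-in for one 28x28 training image (in the actual script this would
# be train_images[0] after dividing by 255 in the Preprocess step).
rng = np.random.default_rng(0)
img = rng.random((28, 28))

# Three simple modifications of the training image:
flipped = np.fliplr(img)    # mirror left-to-right
rotated = np.rot90(img)     # rotate 90 degrees counter-clockwise
inverted = 1.0 - img        # invert pixel intensities
```

Each result is still a 28x28 array in [0, 1], so it can be displayed with `plt.imshow(...)` exactly like the original.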

(2) Under the Make predictions section, present the array of predictions for an image from the test set other than the one given in the example script. What does this array represent? How were the Softmax() and argmax() functions applied? Read the DeepAI post "What is the Softmax function?" for a description of the two functions (focus on the softmax formula, calculating softmax, and softmax vs. argmax sections). Does the output from np.argmax() match the label from your test_labels dataset?

  - predictions[50]: array([4.2334945e-05, 2.7757805e-08, 4.8826568e-02, 6.8814859e-10,
                            4.1657707e-01, 3.3460002e-13, 5.3455389e-01, 5.8882500e-13,
                            7.5764532e-08, 3.8225739e-08])
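A short sketch of the two functions in plain numpy, with hypothetical logits standing in for the model's raw output: softmax turns the ten logits into probabilities that sum to 1, and argmax then picks the index of the largest probability. The prediction array quoted above is already a softmax output, so applying `np.argmax()` to it reads off the predicted class directly:

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Hypothetical raw model outputs (logits) for one test image.
logits = np.array([2.0, 1.0, 0.1, -1.0, 3.5, 0.0, 4.0, -2.0, 0.5, 1.5])
probs = softmax(logits)  # ten probabilities that sum to 1

# The prediction array quoted above is already softmax output;
# argmax returns the index of its largest entry, i.e. the predicted class.
predictions_50 = np.array([4.2334945e-05, 2.7757805e-08, 4.8826568e-02,
                           6.8814859e-10, 4.1657707e-01, 3.3460002e-13,
                           5.3455389e-01, 5.8882500e-13, 7.5764532e-08,
                           3.8225739e-08])
predicted_label = np.argmax(predictions_50)
```

`predicted_label` can then be compared against the corresponding entry of `test_labels` to answer the last part of the question.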

(3) Under the Verify predictions section, plot two additional images (other than either of the two given in the example script) and include the graph of their predicted label as well as the image itself.

img_3.png img_4.png
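A sketch of the side-by-side display the question asks for, assuming a 28x28 test image and its ten-element softmax prediction array (both stand-ins here); the layout mirrors the tutorial's `plot_image`/`plot_value_array` helpers, with the bar for the predicted label highlighted:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so the sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical stand-ins: one test image and its softmax prediction array.
rng = np.random.default_rng(1)
image = rng.random((28, 28))
prediction = np.array([0.01, 0.02, 0.05, 0.02, 0.10,
                       0.01, 0.70, 0.01, 0.05, 0.03])

fig, (ax_img, ax_bar) = plt.subplots(1, 2, figsize=(6, 3))
ax_img.imshow(image, cmap=plt.cm.binary)   # the image itself
ax_img.set_xticks([])
ax_img.set_yticks([])
bars = ax_bar.bar(range(10), prediction, color="#777777")
bars[np.argmax(prediction)].set_color("red")  # highlight predicted label
ax_bar.set_ylim([0, 1])
fig.savefig("verify_prediction.png")
```

Running this once per chosen test image yields the pairs of plots shown above.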

(4) Under the Use the trained model section, again select a new image from the test dataset. Produce the predictions for this newly selected image. Does the predicted value match the test label? Although you applied the argmax() function in this second instance, you did not use Softmax() a second time. Why is that so (please be specific)?

img_5.png
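The batching step behind this question can be sketched with numpy alone: Keras models predict on batches, so a single test image must first be wrapped in a batch of one with `np.expand_dims`. The stand-in prediction below is hypothetical; the key point (answering why Softmax() is not applied a second time) is that the tutorial's `probability_model` already ends in a `tf.keras.layers.Softmax()` layer, so its `predict()` output is already a probability vector and only `argmax()` is needed:

```python
import numpy as np

# A single 28x28 test image (stand-in for test_images[i]).
img = np.random.default_rng(2).random((28, 28))

# Keras predicts on batches, so wrap the image in a batch of one.
batch = np.expand_dims(img, 0)  # shape becomes (1, 28, 28)

# probability_model already ends in a Softmax layer, so predict() would
# return probabilities directly; this stand-in mimics that output.
single_prediction = np.array([[0.02, 0.01, 0.03, 0.01, 0.05,
                               0.01, 0.80, 0.01, 0.04, 0.02]])
predicted = np.argmax(single_prediction[0])  # no second Softmax needed
```
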

(5) Produce a plot of 25 handwritten numbers from the data with their labels indicated below each image. Fit the model and report the accuracy on the training dataset; likewise report the accuracy on the test dataset. As in the Verify predictions example above, plot two images and include both the image itself and the graph of its predicted label.

img_6.png
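A sketch of the 5x5 grid, assuming the first 25 images and labels of the dataset (random stand-ins here for `train_images[:25]` and `train_labels[:25]`); each label is written below its image via `set_xlabel`. The accuracy figures the question asks for would come from `model.fit(...)` (training accuracy in the epoch logs) and `model.evaluate(test_images, test_labels)` (test accuracy), which are omitted here to keep the sketch runnable without TensorFlow:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical stand-ins for the first 25 images and their labels.
rng = np.random.default_rng(3)
images = rng.random((25, 28, 28))
labels = rng.integers(0, 10, size=25)

fig = plt.figure(figsize=(10, 10))
for i in range(25):
    ax = fig.add_subplot(5, 5, i + 1)       # 5x5 grid, one image per cell
    ax.imshow(images[i], cmap=plt.cm.binary)
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_xlabel(str(labels[i]))           # label shown below each image
fig.savefig("grid25.png")
```
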

(6) Which of the two models is more accurate? Why do you think this is so?