00:06:22 Brian Stucky: On my way.
00:06:43 Brian Stucky: Sorry, all - that was supposed to go to one participant.
00:10:40 Aaron Szczepanek: Why didn't we get a decrease in size after the convolutions?
00:22:53 Aaron Szczepanek: It has a shape of (1, 224, 224, 3).
00:25:46 Laura Boucheron:
image2 = (image.squeeze() - image.min())
image2 = image2 / image2.max()
plt.figure()
plt.imshow(image2)
plt.axis('off')
plt.show()
00:30:45 Jonathan Shao: Is it better for us to resize the image to 224x224 to use as input, to minimize preprocessing?
00:32:24 Jonathan Shao: It kind of worked - my sunflower got classified as a daisy, assuming there was no sunflower label.
00:32:41 Jonathan Shao: 52.2 percent
00:33:21 WGMeikle: I showed it a picture of me holding up a frame from a bee hive. It got: apiary (83.71%) honeycomb (16.29%) chest (0.00%)
00:34:26 Tavis Anderson: It predicted my tie as a Windsor_tie - which is sort of correct, it's a half Windsor.
00:35:27 Peihua_UMD: My picture is a shrimp: rotisserie (56.45%) pizza (12.76%) pretzel (7.22%)
00:35:41 Scott Tsukuda: Blurry photo of a fish reported as: great_white_shark (27.17%) loggerhead (10.34%) scuba_diver (7.77%)
00:37:47 Sean Kearney (USDA-ARS): Picture of ponderosa pine cones came out as: cardoon (29.98%) sea_urchin (28.12%) buckeye (9.12%)
00:38:10 Sean Kearney (USDA-ARS): Neither do I.
00:38:20 Sean Kearney (USDA-ARS): Looks like an artichoke!
00:38:38 Jonathan Shao: Cards (king, queen, jack, etc.) got classified as refrigerator, sewing machine, and envelope.
00:40:14 Yanbo Huang: tiger (85.27%) tiger_cat (14.36%) zebra (0.25%)
00:40:31 Yanbo Huang: tiger
00:40:59 Tavis Anderson: I used an image of a tumbling gymnast, and it returned: balance_beam (68.91%) horizontal_bar (16.05%) racket (6.62%)
00:41:50 Tavis Anderson: There was no balance beam or horizontal bar - a very reasonable guess!
00:42:13 Nishan Bhattarai: Image of a hamster: 82% hamster, 5% wombat, 3.76% fox_squirrel
00:43:10 Joe Kawash: Various berries are classified as "grocery_store", "pomegranate", and "hip".
00:44:42 Yanbo Huang: I put in the Olympic rings but it came out tiger (85.27%) tiger_cat (14.36%) zebra (0.25%)
00:44:46 Yanbo Huang: hahahha
00:45:02 Joe Kawash: All were images of smaller berries (cranberries, blueberries).
00:45:30 Joe Kawash: So it makes sense that it does not necessarily have a specific class.
00:45:40 Yanbo Huang: whistle (26.09%) rubber_eraser (5.30%) envelope (4.33%)
00:45:42 Yanbo Huang: Sorry.
01:01:04 Jonathan Shao: What happens if you make a mistake and your 101_ObjectCategories folder has 75 folders instead of 101, or vice versa?
01:01:38 Brian Stucky: It will crash when you try to train it.
01:01:46 Brian Stucky: I just helped someone debug this. :-)
01:01:54 Jonathan Shao: Thanks.
01:03:32 Jonathan Shao: Interesting - I trained the network for one epoch and got 71%. Then I changed the epochs to 2 and hit run again, and it started out in the 90s, ending up at 0.9703 for the first epoch and 0.9883 for the second epoch. Does this mean that the network remembers my previous training?
01:06:16 Yanbo Huang: I only ran 1 epoch, but it looks like it will take a long time on my laptop :(
01:07:09 Brian Stucky: @Jonathan - The 71% you saw at the end of epoch 1 is the average accuracy across the whole epoch, I think. When you start again at epoch 2, it picks up where it ended after epoch 1, which was with accuracy at 90%, evidently.
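For context on the shape question at 00:22:53 and the top-3 results pasted above, the classification flow looks roughly like the following. This is a minimal sketch: the chat does not show which pretrained network the lesson used, so VGG16 and the file name 'my_photo.jpg' are stand-ins.

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image as keras_image

model = VGG16(weights='imagenet')

# Load the photo and resize it to the 224x224 input the network expects.
img = keras_image.load_img('my_photo.jpg', target_size=(224, 224))
x = keras_image.img_to_array(img)   # shape (224, 224, 3)
x = np.expand_dims(x, axis=0)       # shape (1, 224, 224, 3), as noted at 00:22:53
x = preprocess_input(x)

# Top-3 ImageNet labels with confidences, in the format people pasted above.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f'{label} ({score * 100:.2f}%)')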
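Brian's answer at 01:07:09 can be confirmed with a toy experiment: Keras keeps a model's weights between fit() calls, so a second run resumes rather than restarts. A minimal sketch with made-up data; build_model, x, and y here are illustrative only, not the lesson's code.

import numpy as np
import tensorflow as tf

# Toy stand-ins for the lesson's model and data, just to show the behavior.
def build_model():
    m = tf.keras.Sequential([tf.keras.layers.Dense(10, activation='softmax')])
    m.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
    return m

x = np.random.rand(100, 8)
y = np.random.randint(0, 10, 100)

model = build_model()
model.fit(x, y, epochs=1)   # epoch 1 reports some average accuracy

# Calling fit() again does NOT reset the weights: training resumes from
# wherever the previous call left off, which is why epoch "1" of the
# second run starts out much higher.
model.fit(x, y, epochs=2)

model = build_model()       # rebuilding gives fresh weights: a true restart
model.fit(x, y, epochs=2)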
01:07:21 Jonathan Shao: What line of code attaches the 101 labels to the last layer?
01:08:41 Jonathan Shao: Got it, thanks.
01:08:49 Jonathan Shao: Alphabetical or random?
01:09:49 Jonathan Shao: So here you no longer need the annotation folder?
01:10:14 Brian Stucky: @Jonathan - correct, we only need the whole-image labels.
01:11:07 Jonathan Shao: I see capitals first and then alphabetical.
01:14:13 Jonathan Shao: Are these already in our conda environment?
01:14:36 Brian Stucky: Let me check...
01:15:57 Brian Stucky: It looks like tf_keras_vis is already in the environment (on Linux, at least).
01:16:42 Brian Stucky: Oh, wait, sorry - you will need to install tf_keras_vis.
01:18:42 Jonathan Shao: It has to be installed in our conda environment, correct - tf_keras_vis - and not just our native system?
01:18:58 Brian Stucky: Correct, @Jonathan.
01:20:19 Jonathan Shao: I have an off-topic question.
01:20:29 Jonathan Shao: I used to use Google's Inception.
01:20:38 Jonathan Shao: However, I have found in the last year
01:20:49 Jonathan Shao: that the network works, but the labels get screwed up.
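Jonathan's "capitals first and then alphabetical" observation at 01:11:07 matches plain string sorting, which is how Keras orders the class subdirectories when it assigns label indices. A small sketch, assuming a flow_from_directory-style loader over the 101_ObjectCategories folder from the exercise:

import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Plain string sorting is what puts the capitalized folder names first:
# in ASCII, 'Z' < 'a', so e.g. 'Faces' sorts before 'accordion'.
print(sorted(os.listdir('101_ObjectCategories'))[:5])

# Keras derives class indices from the subdirectory names in that same
# sorted order, so labels 0..100 follow capitals-first alphabetical order.
gen = ImageDataGenerator().flow_from_directory('101_ObjectCategories')
print(list(gen.class_indices.items())[:5])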
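On installing and using tf_keras_vis (01:14:13 through 01:18:58): the package goes into the activated conda environment, not the system Python, and a typical first visualization is Grad-CAM. A sketch assuming a recent tf-keras-vis release (0.7 or later), with `model` and the preprocessed batch `x` carried over from the earlier classification sketch; the class index 0 is arbitrary.

# Install inside the activated conda environment first:
#   pip install tf-keras-vis
import matplotlib.pyplot as plt
from tf_keras_vis.gradcam import Gradcam
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Replace the softmax with a linear output (standard for Grad-CAM),
# then compute a heatmap for the chosen class on the input batch.
gradcam = Gradcam(model, model_modifier=ReplaceToLinear(), clone=True)
cam = gradcam(CategoricalScore(0), x)

plt.imshow(cam[0], cmap='jet')
plt.axis('off')
plt.show()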