00:19:44 Heather Jones: Good morning. Please see materials at https://kerriegeil.github.io/NMSU-USDA-ARS-AI-Workshops/
00:27:18 Heather Jones: Good morning. Please see materials at https://kerriegeil.github.io/NMSU-USDA-ARS-AI-Workshops/
00:46:55 Yanbo Huang: the list did not pop up for me
00:48:18 Alex Styer (he/him): I had run the cell as written initially. Had to re-run with jedi set to True, which seemed to enable tab completion.
00:49:26 Yanbo Huang: ok
01:02:07 Yanbo Huang: Reran jedi but it still does not work
01:06:24 Aaron Szczepanek: the flatten was interesting.
01:06:34 Aaron Szczepanek: I thought it was just transforming
01:08:05 Aaron Szczepanek: what is happening to get to the second dense layer?
01:08:16 Aaron Szczepanek: yea
01:12:34 Sean Kearney (USDA-ARS): Is there some specific benefit to using two dense layers? What happens if we go straight from 4608 to 10 neurons?
01:12:39 Aaron Szczepanek: how is it choosing what to keep? or how is it combining them?
01:14:12 Jonathan Shao: Does this mean if you changed the 32 to 64 filters then the 128 number would likely be larger?
01:15:29 Aaron Szczepanek: the classifier is using the loss function?
01:15:33 Sean Kearney (USDA-ARS): Thanks! That makes perfect sense.
01:16:12 Jonathan Shao: thanks
01:17:48 Jonathan Shao: Can you tell which of the 4608 features was important?
01:28:38 Sean Kearney (USDA-ARS): If we clone and then set weights and then predict on new data, do we get the exact same results as the original model? Or is there anything else we needed to set besides weights?
01:28:57 Brian Stucky: Yes, should give the exact same results!
01:29:04 Peihua_UMD: -1 -> 599520 parameters; -2 -> 9568 parameters
01:29:17 Peihua_UMD: -2 -> 9568 parameters
01:29:58 Peihua_UMD: -3 to -5, the results are the same
01:32:32 Jonathan Shao: With so many parameters in the dense layer, intuitively how should we think of what a parameter is..
Should we think of it as some property of the image?
01:34:15 Jonathan Shao: thanks
01:53:22 Peihua_UMD: (1, 28, 28, 1) (28, 28, 1) (28, 28)
02:00:50 Peihua_UMD: (1, 26, 26, 32) float32 0.0 0.6767463
02:01:07 Nishan Bhattarai: 1,26,26,32
02:01:45 Sean Kearney (USDA-ARS): (1, 26, 26, 32) float32 0.0 0.7645731
02:01:56 Nishan Bhattarai: min 0, max 0.6134
02:05:14 Jonathan Shao: converge to .45
02:05:49 Jonathan Shao: yes
02:06:02 Jonathan Shao: (0, .45)
02:16:02 Jonathan Shao: I have more dead neurons ~5
02:23:29 Kerrie Geil: I will
02:23:41 Kerrie Geil: Oh, it signed me in
03:21:34 Aaron Szczepanek: i have one with very strong background
03:21:59 Aaron Szczepanek: when not normalized
03:25:13 Aaron Szczepanek: just my images for reference
03:29:24 Aaron Szczepanek: i have a max intensity over 1?
03:29:43 Sean Kearney (USDA-ARS): Do we expect max pooling to improve classification, or is it just to make computation more efficient?
03:30:00 Aaron Szczepanek: oh this is pooling
03:30:07 Aaron Szczepanek: yea
03:30:37 Aaron Szczepanek: oh i had thought because we changed our input image to values 0-1
03:30:43 Aaron Szczepanek: its 1.704
03:37:36 Nishan Bhattarai: could you please show the code for a minute for the max pool layer?
03:38:07 Nishan Bhattarai: thanks
03:51:37 WGMeikle: if the model was confused about an integer, would it show two white bars or shades of gray?
03:56:16 WGMeikle: I used X_test[8], which is a weird 5. Difference between first and second conv layers more striking than with the 7.
03:59:31 Sean Kearney (USDA-ARS): X_test[33] was interesting in my model. Still correctly identified as 4, but with a somewhat strong final probability for 0
04:01:30 Sean Kearney (USDA-ARS): I had 78% it was a 4 and 20% it was a 0
04:03:13 Sean Kearney (USDA-ARS): Did you say 217 for that last one?
04:03:40 Sean Kearney (USDA-ARS): huh.
mine was 98% that it was a 6
04:04:12 WGMeikle: mine is >99% sure about 217 being a 6
04:09:55 Laura Boucheron: https://yosinski.com/deepvis
04:14:37 Aaron Szczepanek: these types of activations are coming from simple 3,3 filters like ours?
04:15:01 Aaron Szczepanek: or do they begin to appear like that after multiple convolutions?
04:15:36 Brian Stucky: He was mostly showing activations from several layers deep into the network.
04:16:22 Aaron Szczepanek: i guess im thinking a bigger kernel
04:16:55 Aaron Szczepanek: ok yea deeper in the convolutional layers
04:17:12 Aaron Szczepanek: cool!
04:17:23 Brian Stucky: Oh, I see. The original AlexNet did also have some larger kernels at the beginning, but later conv layers were 3x3.
04:17:48 Brian Stucky: (The guy in the video said he was looking at AlexNet.)
04:18:03 Aaron Szczepanek: interesting!
04:20:02 Sean Kearney (USDA-ARS): Laura - will we be getting into pixel-wise classification? (e.g., SegNet)
04:20:52 Laura Boucheron: We will talk about YOLO, which can find bounding boxes for objects in images. Unfortunately, actual segmentation networks are a bear to get working on a machine.
04:21:19 Laura Boucheron: I desperately tried to find one to use, but there was nothing that was reasonable to get working.
04:36:34 Aaron Szczepanek: will we be learning automated ways of cropping?
04:37:48 Sean Kearney (USDA-ARS): I'm having issues with the all-in-one command: skimage.transform.resize(test,(1,28,28,1))
04:38:08 Sean Kearney (USDA-ARS): ok!
04:38:32 Aaron Szczepanek: will future algorithms be able to create bounding boxes using the full image?
04:39:08 Sean Kearney (USDA-ARS): oh I think that is the issue. I had the conversion to greyscale afterward!
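As a side note for the 3x3-filter and max-pooling questions above: both operations can be sketched in plain NumPy. This is a simplified, single-channel stand-in (no bias, no activation, no learned weights) for what Keras Conv2D and MaxPooling2D layers do, not the workshop's actual code; the vertical-edge kernel is just an illustrative choice.

```python
import numpy as np

# A single 3x3 "valid" convolution: slide the kernel over the image and
# multiply-and-sum at each position. A 28x28 input shrinks to 26x26,
# matching the (1, 26, 26, 32) activation shapes reported in the chat.
def conv2d_valid(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# 2x2 max pooling with stride 2: keep the largest value in each 2x2 block.
def max_pool_2x2(x):
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.random.rand(28, 28)
edge = np.array([[-1.0, 0.0, 1.0]] * 3)   # a simple vertical-edge kernel
act = conv2d_valid(img, edge)
print(act.shape)                  # (26, 26)
print(max_pool_2x2(act).shape)    # (13, 13)
```

Note that pooling only keeps maxima, so pooled activations can exceed 1 even when the input image is normalized to 0-1, since the convolution sums several pixel-times-weight products.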
04:39:09 Laura Boucheron: I_gray = skimage.color.rgb2gray(I) # convert to grayscale
04:40:27 Aaron Szczepanek: i was thinking of taking a picture of small seeds, cropping to the seed, then identifying a phenotype
04:42:08 Joe Kawash: might be easier to use a sliding window approach, where you can define the background (such as white) and if you can set your object (or number) as being sufficiently within the background, consider the individual cropped.
04:43:04 Aaron Szczepanek: @joe, I was thinking that. Then take the centroid of the blobs
04:44:35 Laura Boucheron: skimage.measure.label
04:44:44 Jonathan Shao: Over many examples, won't the classifier or network figure it out even with the background for 2 phenotypes of seeds on white paper?
04:48:20 Jeremy Edwards: we use 3D printed trays to keep the seeds from touching prior to imaging, but that's certainly not an AI solution to the problem
04:48:34 Laura Boucheron: some other hints: digit 1: (355-505, 2035-2190), digit 2: (425-625, 2900-3100), digit 3: (465-665, 3775-3975), digit 4: (1250-1400, 1140-1290), digit 5: (1270-1460, 1950-2140), digit 6: (1365-1515, 2855-2995), digit 7: (1375-1565, 3705-3895), digit 8: (1890-2090, 1100-1300), digit 9: (1915-2100, 1890-2075)
04:50:16 Sean Kearney (USDA-ARS): The cropping size really makes a difference. I could not get 0 to predict accurately with two other tries, and then it predicted perfectly with your cropping.
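The automated-cropping idea discussed above (one dark object on white paper) can be roughed out in NumPy alone: threshold the image, then take the bounding box of the foreground pixels. This toy sketch assumes a single object and a clean background, and the 0.5 threshold and 10x10 "page" are made up; for several objects or touching blobs, skimage.measure.label (mentioned by Laura) is the right tool.

```python
import numpy as np

# Auto-crop sketch: dark pixels below the threshold are foreground;
# the crop is the tightest box containing them. Single-object assumption.
def bounding_box(img, thresh=0.5):
    mask = img < thresh
    rows = np.where(np.any(mask, axis=1))[0]   # rows containing foreground
    cols = np.where(np.any(mask, axis=0))[0]   # columns containing foreground
    return int(rows[0]), int(rows[-1]) + 1, int(cols[0]), int(cols[-1]) + 1

page = np.ones((10, 10))        # white "paper" (intensities in 0-1)
page[3:6, 4:8] = 0.0            # one dark "seed"
r0, r1, c0, c1 = bounding_box(page)
print((r0, r1, c0, c1))         # (3, 6, 4, 8)
print(page[r0:r1, c0:c1].shape) # (3, 4)
```

The crop could then be resized to 28x28 and reshaped to (1, 28, 28, 1) for the classifier, as in Laura's snippets.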
04:51:17 Peihua_UMD: I am struggling with the wrong plot, where even after np.squeeze(i_n), I still cannot see the right number area
04:51:36 Peihua_UMD: It's a plot of lines with different intensities
04:51:40 Jonathan Shao: How did you figure out the indexes?
04:51:50 Aaron Szczepanek: oh yeah mine was 0.69 sure
04:52:12 Aaron Szczepanek: 69% sure nice
04:52:19 Jonathan Shao: I see, nevermind
04:53:41 Peihua_UMD: i_n = skimage.transform.resize(I_gray[355:505, 2035:2490],(1,28,28,1))
  plt.imshow(np.squeeze(i_n[0]),cmap='gray')
  plt.show()
04:56:10 Peihua_UMD: same, I tried before
04:56:25 Peihua_UMD: plt.imshow(np.squeeze(i_n),cmap='gray')
04:59:28 Peihua_UMD: it's weird
05:01:07 Peihua_UMD: channel first or channel last?
05:01:20 Peihua_UMD: does this cause the problem?
05:02:10 Peihua_UMD: Never mind, we can move on and discuss it later
05:11:23 Aaron Szczepanek: that was the certainty for one digit
05:14:25 Laura Boucheron:
  I_gray = skimage.color.rgb2gray(I) # convert to grayscale
  I_gray = 1-I_gray # invert colors
  I0 = I_gray[295:445,1160:1310] # crop out the digit 0
  I0 = skimage.transform.resize(I0,(28,28)) # resize to 28x28
05:14:41 Laura Boucheron:
  print('Actual 0')
  digit = I0.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
  Y = model1.predict(digit,verbose=1) # predict label
  print(Y)
  y = np.argmax(Y)
  print(y)
  print('')
05:19:49 Sean Kearney (USDA-ARS): Probability for 0 went from 91% to 12%
05:21:01 Peihua_UMD: 13.48%
05:21:26 Peihua_UMD: from 84.6%
05:21:29 Sean Kearney (USDA-ARS): no
05:21:34 Nishan Bhattarai: 88 to 23% for 0..got incorrect
05:21:37 Sean Kearney (USDA-ARS): There were three other numbers with probs ~25%
05:22:11 Sean Kearney (USDA-ARS): For 1 it went from 96% to 0%
05:22:35 Nishan Bhattarai: digit 8 was still 65%..correct too
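The percentages quoted above come from reading the row of class probabilities that model1.predict returns, as in Laura's snippet. A small NumPy sketch of that readout, with invented numbers that mimic the 4-vs-0 confusion Sean described (not real model output):

```python
import numpy as np

# One predict() output row: a probability per digit class 0-9.
# Values are made up for illustration (78% "4", 20% "0").
Y = np.array([[0.20, 0.00, 0.00, 0.00, 0.78, 0.00, 0.00, 0.02, 0.00, 0.00]])

y = int(np.argmax(Y[0]))            # index of the highest probability
print(y, Y[0, y])                   # 4 0.78

ranked = np.argsort(Y[0])[::-1]     # classes from most to least likely
print(ranked[:2])                   # [4 0]
```

Ranking the whole row this way makes "confused" predictions easy to spot: a confident model has one probability near 1, while a confused one spreads mass over two or more classes.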