00:27:46 Timothy: mine took 22 seconds on local machine
00:30:49 Kerrie Geil: source activate
wget https://kerriegeil.github.io/NMSU-USDA-ARS-AI-Workshops/aiworkshop.yml
conda env create --prefix /project/your_project_name/envs/aiworkshop -f aiworkshop.yml
00:33:36 Elizabeth Chin: after loading the model, is there a way to get some kind of summary about it? (like the n of layers etc.)
00:33:54 Elizabeth Chin: oh! ok thanks :D
00:41:59 Jennifer Woodward-Greene: I ran all the above cells without error, however, I am getting no output from the model1.summary() command.
00:42:19 Jennifer Woodward-Greene: Yes
00:42:46 Jennifer Woodward-Greene: copy and paste which code?
00:43:00 Jennifer Woodward-Greene: I ran it all and did not change anything
00:43:17 Jennifer Woodward-Greene: No, and I see the model in the file
00:43:39 Andrew.French: getting error too, same copy and paste
00:43:43 Jennifer Woodward-Greene: O...
00:44:01 Andrew.French: OS error
00:44:40 Andrew.French: same setup from Friday... trying again
00:44:43 Jennifer Woodward-Greene: No love... will restart kernel
00:46:25 Jennifer Woodward-Greene: Neither gave the summary. Still no errors
00:49:07 Andrew.French: download finishes with this message
00:49:11 Andrew.French: Epoch 1/1 60000/60000 [==============================] - 45s 742us/step - loss: 0.1517 - accuracy: 0.9543
00:50:56 Andrew.French: ok but it is at the end of the download... don't want to hold up the workshop... I'll work on this
01:07:32 Jennifer Woodward-Greene: Can you say again what you mean by 'fully connected', or not, layers?
01:09:08 Jennifer Woodward-Greene: yes thanks!
01:11:15 zhanyou xu: what is the formula to calculate the number of parameters?
01:18:14 Jennifer Woodward-Greene: Can the kernel (i.e. 3x5) be non-symmetrical? Is there a pro/con to this in CNN?
01:21:44 Lucas Heintzman: Is transfer learning sensitive to "orientation" of images (i.e. the frozen inputs are in a perpendicular formation, and our data is not, i.e. images from camera are off by a few degrees)?
01:23:53 Lucas Heintzman: Ok thank you.
02:01:52 Andrew.French: is there a range method so we get min and max at same time
02:07:11 zhanyou xu: Can you set up seed number for repeatability here?
02:09:46 Matthew.McEntire: It looks black, but has a very faint horizontal line
02:11:48 Elizabeth Chin: how do you incorporate these visualizations in your workflow? for example, do you use this on images that are misclassified, on random subsets of your data, or before transfer learning?
02:13:09 Andrew.French: works but getting matplotlib deprecation warning passing non-integers
02:14:12 Elizabeth Chin: thank you!
02:14:27 Jennifer Woodward-Greene: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
02:14:35 Andrew.French: only when changing
02:15:00 Maria Laura Cangiano: more bright pixels
02:15:11 Maria Laura Cangiano: I get that warning in both
02:15:16 Jennifer Woodward-Greene: That worked for me
02:15:22 Jennifer Woodward-Greene: yes
02:15:29 Matthew.McEntire: ditto
02:15:35 Andrew.French: warning goes away now
02:18:28 Jennifer Woodward-Greene: two completely black activations are the same in both visualizations... though some of the others changed a lot.
02:19:07 Jennifer Woodward-Greene: Can you remove them...
02:20:55 Jennifer Woodward-Greene: Thanks!
02:27:57 Jennifer Woodward-Greene: Running ok, but the following warning: WARNING:tensorflow:6 out of the last 6 calls to .predict_function at 0x000001BB0DDCDC18> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
02:30:21 Jennifer Woodward-Greene: Then so shall I!
03:54:47 Andrew.French: trying to pull together the process flow, has the backpropagation already been done for the layers we are visualizing
03:56:04 Andrew.French: ok got it
04:04:01 Lucas Heintzman: If you discover that a particular "neuron" is exceptional at picking up your desired detail, is there a means by which to only clone that "cell line" into a new ML system?
04:14:44 Lucas Heintzman: Apologies if this "went down the rabbit hole" too far.
04:18:53 Matthew.McEntire: The dark layer in mine when looking at a sloppy three: output for the 1st conv layer was dark and the 2nd conv layer was a bright 3.
04:20:15 Matthew.McEntire: both the 7 layers were dark and then with the 3 it was as the above
05:05:26 Andrew.French: stuck
05:05:33 Jennifer Woodward-Greene: happily computing...
05:05:38 Timothy: still working on it
05:05:57 Maria Laura Cangiano: good but just at number 6
05:06:06 Andrew.French: reshape not making into tensor
05:06:07 Kerrie Geil: can you paste the ranges for all the numbers in the chat so I can speed up the process?
05:06:17 ARS - Kossi Nouwakpo: I may have missed it but do we need to scale the intensity?
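On zhanyou xu's question (01:11:15) about the formula for the number of parameters: for the layer types discussed in the workshop, the counts reported by model.summary() can be reproduced by hand. A minimal sketch; the layer sizes below are illustrative examples, not taken from the workshop notebook:

```python
# Parameter-count formulas for common Keras layer types.
# Dense:  (n_inputs + 1) * n_units          -- the +1 is the bias of each unit
# Conv2D: (kh * kw * in_channels + 1) * n_filters  -- one bias per filter

def dense_params(n_inputs, n_units):
    return (n_inputs + 1) * n_units

def conv2d_params(kh, kw, in_channels, n_filters):
    return (kh * kw * in_channels + 1) * n_filters

# Illustrative: a Dense(128) layer fed a flattened 28x28 image,
# and a Conv2D(32, (3, 3)) layer on a single-channel (grayscale) input.
print(dense_params(784, 128))      # 100480
print(conv2d_params(3, 3, 1, 32))  # 320
```

Note that pooling and flatten layers contribute zero parameters, which is why they show 0 in the summary table.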
05:06:23 Jerry M: it gets 2, 5, and 8 right
05:06:51 ARS - Kossi Nouwakpo: ok thanks
05:07:19 Jerry M: reversed it, now it only misses 2 and 9
05:07:58 Laura Boucheron:
I0 = I_gray[295:445,1160:1310] # crop out the digit 0
I1 = I_gray[355:505,2035:2190]
I2 = I_gray[425:625,2900:3100]
I3 = I_gray[465:665,3775:3975]
I4 = I_gray[1250:1400,1140:1290]
I5 = I_gray[1270:1460,1950:2140]
I6 = I_gray[1375:1505,2865:2985]
I7 = I_gray[1395:1545,3725:3875]
I8 = I_gray[1890:2090,1100:1300]
I9 = I_gray[1925:2090,1900:2065]
05:08:16 Kerrie Geil: thanks!
05:10:15 ARS - Kossi Nouwakpo: works great
05:19:05 Kerrie Geil: Did anybody else on the HPC get disconnected from JupyterLab in the last 10 minutes or so? I had to ssh in to cancel my job and then log in again with JupyterHub
05:19:36 Elizabeth Chin: yes I got disconnected
05:19:42 Elizabeth Chin: I'm trying to log back in but haven't been able to
05:19:43 Maximilian Feldman: Yup. I'll need to clean up my HPC jobs after
05:22:29 Elizabeth Chin: have you all been able to spawn a new job through JupyterHub? I can't, even though I killed the original job via ssh.
05:23:10 Kerrie Geil: Yes, I got back in successfully right after I killed the previous job
05:24:54 Kerrie Geil: let me check the status of our reserved nodes
05:26:32 Lucas Heintzman: Interesting, I have a predicted 7... but the highest probability is actually a 3?
05:26:43 Maria Laura Cangiano: what command do you use to reshape the images? I'm trying to use .reshape(1, 28, 28, 1) but it is not working
05:28:34 Andrew.French: can you do that in one step, in skimage.transform?
05:29:52 ARS - Kossi Nouwakpo: Seems to work ok on the non-inverted image but better on the inverted one. Quite surprised it even worked on the non-inverted image.
05:31:26 Kerrie Geil: Elizabeth, there are a lot of drain or fail nodes currently but it doesn't look like our reserved nodes are affected. I see 8 cores in use which probably means there are two of us on there currently. Sometimes JupyterHub takes a while (6 minutes or more) to reset itself. Keep trying your login with the reservation. Sorry I can't be more helpful!
05:31:53 Elizabeth Chin: ok thanks!
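On Maria Laura Cangiano's reshape question (05:26:43): .reshape(1, 28, 28, 1) only succeeds when the array already holds exactly 28 x 28 = 784 values, so an arbitrary-sized crop must be resized to 28x28 first. A minimal sketch assuming NumPy; the random arrays stand in for the cropped grayscale digits:

```python
import numpy as np

img = np.random.rand(28, 28)       # already 28x28: reshape succeeds
batch = img.reshape(1, 28, 28, 1)  # (batch, height, width, channels), as model.predict expects
print(batch.shape)                 # (1, 28, 28, 1)

crop = np.random.rand(150, 150)    # arbitrary crop: 22500 elements, not 784
# crop.reshape(1, 28, 28, 1) would raise ValueError here; resize first,
# e.g. skimage.transform.resize(crop, (28, 28)), then reshape as above
```

This also answers the follow-up at 05:28:34 in part: skimage.transform.resize handles the resizing, but adding the batch and channel dimensions is still a separate reshape step.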