00:45:06 Elizabeth Chin: thank you!
00:52:01 Andrew.French: not familiar with this LHS syntax, is this standard python?
00:52:13 Andrew.French: left hand side
00:57:51 Lucas Heintzman: Is there a numerical limit on the dimensionality of a tensor?
01:02:33 Andrew.French: is there an attribute in X_train that says how large it is, i.e. I can ask for more images than present and get an error,
01:02:41 Andrew.French: but would be nice to prevent overflow
01:04:51 Lucas Heintzman: Given that the mnist dataset is "preloaded" for X_test and others, has there been an equal number of each digit ensured for training vs. testing?
01:06:53 Laura Boucheron: for d in range(0,10): print('There are '+str((y_train==d).sum())+' images of digit '+str(d))
01:08:01 Lucas Heintzman: Thank you, this was just another sanity check...
01:13:31 Kerrie Geil: For anyone still having problems accessing the Conda environment inside of your Jupyter notebook: try launching a whole new notebook (File>New>Notebook) and then try selecting the workshop kernel (Kernel>Change Kernel>select aiworkshop). That seemed to work for a couple of people
01:19:46 Andrew.French: why doesn't the label command work?
01:20:09 Andrew.French: just inserting them in plt
01:21:03 Andrew.French: plt.imshow(X_train[10],label=y_train[10])
01:21:40 Andrew.French: no error but not showing anything
01:28:55 Kerrie Geil: don't forget on Monday we will ask everyone to turn on their video for just a minute or so to capture a good thumbnail frame for the website. If you are having problems with your camera, no worries. We'd just like to have most people sharing video for a minute or two on Monday
01:38:59 Kerrie Geil: direct link to the poll https://www.menti.com/t16nttegjf
01:51:37 Elizabeth Chin: what kind of labels should our data have? just classes, or bounding boxes needed like in yesterday's examples?
01:52:32 Elizabeth Chin: ok thanks!
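[Note on the label question above: `imshow` quietly accepts a `label=` keyword (it is an Artist property meant for legends), so the call raises no error but displays nothing. A minimal sketch of putting the digit's label in the figure title instead; the synthetic 28x28 array and label stand in for the workshop's `X_train[10]` / `y_train[10]`:]

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs outside a notebook
import matplotlib.pyplot as plt
import numpy as np

# Stand-ins for X_train[10] and y_train[10] (assumption: real code uses the MNIST arrays)
image = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)
label = 5

fig, ax = plt.subplots()
ax.imshow(image, cmap="gray")    # label= here would be silently ignored by the display
ax.set_title(f"Label: {label}")  # the title is the usual place to show the class
fig.savefig("digit.png")
```

[In a notebook, `plt.title(str(y_train[10]))` right after `plt.imshow(X_train[10])` has the same effect.]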
01:52:46 Jennifer Woodward-Greene: for a classification problem, can you just put them in different folders?
02:00:57 Andrew.French: are the dimension types labeled in keras?
02:01:31 Andrew.French: want to know if you can tag the dimensions for what they are
02:03:30 Timothy: It worked for me. Same as above
02:03:38 Maria Laura Cangiano: no error
02:03:40 Jennifer Woodward-Greene: same
02:15:32 ARS - Kossi Nouwakpo: In case I need to use min and max from the data to normalize, should I normalize training and testing data separately?
02:17:31 ARS - Kossi Nouwakpo: That makes sense. Thanks.
02:18:56 Andrew.French: hmm, getting X_test 0 to 1.5e-5
02:19:36 zhanyou xu: all image data range from 0 to 255. Is normalization necessary?
02:20:40 Suzy Stillman: Andrew, did you run that block of code more than once? Each time you run it, it will divide by 255
02:20:56 Andrew.French: ok
02:22:06 Lucas Heintzman: In each of these examples we have assumed uint8. If data are natively signed, how would you effectively rescale them to 0.0-1.0?
02:35:55 ARS - Kossi Nouwakpo: Did the SVM last time code the word categories to numeric values behind the scenes?
02:37:07 ARS - Kossi Nouwakpo: Thanks
02:39:53 Maria Laura Cangiano: Are the probabilities always going to be 0 or 1?
02:40:54 Maria Laura Cangiano: yes, thank you
03:48:51 Andrew.French: but on Monday you mentioned that the convolution matrix was linear, not non-linear?
03:54:14 Elizabeth Chin: did anyone else get this warning: WARNING:tensorflow:From /anaconda3/envs/aiworkshop/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
03:54:49 Elizabeth Chin: thanks!
03:58:22 ARS - Kossi Nouwakpo: Are convolution kernels randomly tried and optimized? Or is there a set series of kernels that are applied? Also, does the number of convolutions in each layer influence the number of iterations/epochs?
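[Note on the scaling questions above: Andrew's 0-to-1.5e-5 range is what repeated runs of the `/255` cell produce, and signed data can be handled with min-max scaling whose statistics come from the training set only. A NumPy-only sketch; the synthetic arrays stand in for the real `X_train`/`X_test`:]

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Guarding the uint8 -> [0, 1] rescale against re-running the cell ---
X = rng.integers(0, 256, size=(10, 28, 28)).astype("float32")
if X.max() > 1.0:   # only divide on the first run; re-runs are then harmless
    X = X / 255.0

# --- Min-max normalization for signed (non-uint8) data ---
# Compute min/max on the TRAINING set only, then apply to both sets,
# so train and test end up on the same scale.
X_train = rng.normal(0.0, 50.0, size=(100, 28, 28)).astype("float32")
X_test = rng.normal(0.0, 50.0, size=(20, 28, 28)).astype("float32")
lo, hi = X_train.min(), X_train.max()
X_train_n = (X_train - lo) / (hi - lo)
X_test_n = (X_test - lo) / (hi - lo)  # may land slightly outside [0, 1]
```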
04:01:29 ARS - Kossi Nouwakpo: Thanks
04:02:58 Jennifer Woodward-Greene: I am getting ValueError: Input 0 of layer sequential_1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, 28, 28]
04:03:18 Jennifer Woodward-Greene: I reran it again and got the same
04:03:30 Jennifer Woodward-Greene: looking
04:04:31 Jennifer Woodward-Greene: No complaints!
04:04:45 Jennifer Woodward-Greene: yes
04:06:42 Kerrie Geil: losing the edge on all sides
04:06:59 Elizabeth Chin: I get the same error as before, and the model won't fit. Then if I try to re-run the chunk it tells me model1 isn't found. Re-running everything gives the same error.
04:07:51 Lucas Heintzman: Which convolution type is used (mirrored/nearest-neighbor)?
04:19:17 Jennifer Woodward-Greene: to confirm, the 26 is taking care of the edges?
04:20:59 Jennifer Woodward-Greene: and again in each subsequent layer?
04:21:30 Jennifer Woodward-Greene: thanks
04:23:19 Kerrie Geil: so essentially each filter in the second convolve is operating on the whole stack of output from the first layer?
04:34:51 Lucas Heintzman: Does the pooling mask need to be equivalent in size to the initial filter? And as a follow-up, I thought most masking procedures should be odd-shaped to ensure a central pixel?
04:38:47 Kerrie Geil: if we think of the output of the convolve and pool layers as "feature maps", how do we think about the output of the first dense layer?
04:43:51 Jennifer Woodward-Greene: Do you set the threshold for the loss function, and/or can you set a threshold for accuracy?
04:44:23 Andrew.French: how can you back-propagate if you've chopped the negative values so you can't recover them?
04:44:48 Lucas Heintzman: Is this "error" a probability or a binary designation? How is this "error" (+/-) applied to the cell values?
04:47:04 Kerrie Geil: What is the effect of batch size on the final network results?
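[Note on the min_ndim=4 error and the 28-to-26 question above: both come down to shapes. Conv2D expects a channels axis, and a "valid" kxk convolution trims k-1 pixels from each spatial dimension. A NumPy-only sketch (the Keras layer itself is not run here, and the small zero array stands in for the real data):]

```python
import numpy as np

# Keras Conv2D expects (batch, height, width, channels); grayscale MNIST
# loads as (batch, 28, 28), so add a single channel axis first.
X_train = np.zeros((100, 28, 28), dtype="float32")  # stand-in for the real X_train
X_train = X_train.reshape(-1, 28, 28, 1)            # or X_train[..., np.newaxis]

# A kxk convolution with 'valid' padding shrinks each spatial dim by k - 1,
# because border pixels have no full neighborhood.
def valid_out(size, k):
    return size - k + 1

first = valid_out(28, 3)       # 28 -> 26 after the first 3x3 conv
second = valid_out(first, 3)   # and each subsequent conv trims again: 26 -> 24
```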
04:50:20 Jerry M: Are there ways to avoid converging on degenerate solutions? For example, how do you stop this network claiming 11% accuracy by assigning 1 no matter the input image?
04:55:01 Jerry M: yes
04:57:48 Kerrie Geil: maybe you already said this, but how did you choose the 128 for the first dense layer?
04:58:58 Kerrie Geil: so a bit of testing of different values for that might be good when designing a CNN
04:59:19 Kerrie Geil: thanks
05:14:01 Kerrie Geil: Max, if you are on the HPC sometimes this happens during compute, especially if you logged in with JupyterHub using only 2 cores. I usually don't experience the kernel dying if I log in with 4 cores though
05:17:15 Maximilian Feldman: Good to know. Thanks Kerrie!
05:26:36 Elizabeth Chin: when I evaluate on the test data, the output states 313/313 instead of 10000/10000 to the left of the progress bar. Are these numbers supposed to be the n of samples in the test data? I confirmed that X_test and Y_test have 10000 images.
05:26:51 Jennifer Woodward-Greene: same here
05:26:57 Kerrie Geil: It does that on the HPC too and it's related to batch size
05:27:00 Jennifer Woodward-Greene: confirmed shape too
05:27:05 Jennifer Woodward-Greene: I am on pc
05:27:13 Elizabeth Chin: I'm on the HPC
05:27:55 Jennifer Woodward-Greene: same
05:28:55 Amy Hudson (she/her): 157/157 on HPC
05:29:22 Elizabeth Chin: 313/313 on HPC with None
05:29:26 Jennifer Woodward-Greene: I got 157/157
05:29:48 Jennifer Woodward-Greene: None, I got 313/313
05:30:00 Jennifer Woodward-Greene: yes
05:30:12 Amy Hudson (she/her): Y_predict.shape is (10000,10)
05:30:36 Jennifer Woodward-Greene: That worked...
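[Note on the 313/313 and 157/157 readouts above: the progress bar counts batches, not samples. With a batch size of 32 (the default when None is passed in newer Keras), ceil(10000/32) = 313 batches; with 64 it is 157; with batch_size=1 the familiar 10000/10000 appears. A quick arithmetic check:]

```python
import math

# Number of progress-bar steps for a 10,000-image test set at various batch sizes
n_test = 10000
batches = {b: math.ceil(n_test / b) for b in (32, 64, 1)}
# batches[32] -> 313, batches[64] -> 157, batches[1] -> 10000
```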
now n/10000
05:30:51 Jennifer Woodward-Greene: batch_size=1
05:31:09 Jennifer Woodward-Greene: .98
05:39:02 Ren Ortega: I think I missed something, I'm getting a predicted label series of all 1's
05:39:13 Maria Laura Cangiano: mine predicted 4 instead of 8 since I had 0.98 prob of it being a 4
05:39:27 Ren Ortega: Mine is predicting all 1's
05:41:31 Ren Ortega: Haha, yeah I definitely have something wrong then, I have 8000+ incorrectly identified
05:43:08 Jennifer Woodward-Greene: At .98 there were 157 incorrect, and the display of misses shows some pretty bad handwriting... LOL!
05:44:07 Jerry M: Is this algorithm irrotational?
05:50:21 ARS - Kossi Nouwakpo: How do you handle situations where you have a sequence of images over time and we
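[Note on Maria's 4-vs-8 case and Ren's all-1's output above: the predicted label is just the argmax over each row of the 10 class probabilities, so an undertrained network whose rows all peak in the same column predicts one class for everything. A sketch with hand-made rows standing in for `Y_predict = model.predict(X_test)`:]

```python
import numpy as np

# Two fake probability rows standing in for model.predict output (assumption)
Y_predict = np.array([
    [0.00, 0.00, 0.00, 0.00, 0.98, 0.00, 0.00, 0.00, 0.02, 0.00],  # 0.98 on digit 4
    [0.05, 0.90, 0.01, 0.01, 0.01, 0.01, 0.00, 0.00, 0.01, 0.00],  # most mass on digit 1
])
predicted = Y_predict.argmax(axis=1)        # class with the highest probability per row
y_true = np.array([8, 1])                   # suppose the first image was really an 8
n_wrong = int((predicted != y_true).sum())  # counts the 4-vs-8 miss
```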