I have a submission problem  

  By: yc.lee on Feb. 10, 2022, 11:29 p.m.

Hi, thanks for organizing this international challenge!

I have a submission problem in preliminary test phase 1, but my Docker image runs without any problems on my own server. Could you post my submission log, please?

Thank you!

Re: I have a submission problem  

  By: coendevente on Feb. 11, 2022, 9:49 a.m.

Dear yc.lee,

Thank you for your interest in our challenge!

It seems your algorithm takes too long to run. All images within the TIFF stack need to be processed within 10 seconds. Per TIFF stack, an additional 5 minutes will be available to load model weights etc.

Are you sure your algorithm makes use of the GPU? Did you click "Requires GPU" when submitting your Docker? If not, you can still edit that in the algorithm settings under "Containers" in the left menu on your algorithm page and then click the "i"-button next to your latest container.
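If you want to verify this inside the container before submitting, here is a minimal sketch (assuming a TensorFlow-based algorithm; adapt it to whatever framework you use):

```python
# Minimal check that the framework actually sees a GPU inside the container.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)
assert gpus, "No GPU visible - check the 'Requires GPU' setting of your container."
```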

Hope that helps.

Best regards, Coen de Vente

Re: I have a submission problem  

  By: yc.lee on Feb. 12, 2022, 12:47 p.m.

Thanks for the reply!

It works now, thank you. One more question: I would like to confirm that the variable 'input_image_array' in the 'predict' function of 'process.py' refers to **a single image**.

And one more thing for clarification: you defined an ungradable image as 'U'. Does a higher scalar value for 'U' mean the image is more likely to be ungradable for RG? In other words, does 'True' for 'U' mean the model predicted 'Ungradable'?

 Last edited by: yc.lee on Aug. 15, 2023, 12:55 p.m., edited 3 times in total.

Re: I have a submission problem  

  By: coendevente on Feb. 14, 2022, 2:02 p.m.

Dear yc.lee,

Happy to hear your submission is successful now.

Indeed, input_image_array is a single image.

I am not entirely sure what you are trying to ask with your second question. Do you mean whether a higher value for output "O3" corresponds to a lower or higher probability of ungradability? If so, a higher value of "O3" means a higher probability of ungradability. For further clarification, you could also take a look at our evaluation scripts, in particular the functions ungradability_kappa and ungradability_auc.
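For illustration, a minimal sketch of what a per-image predict could look like; the method name matches your process.py, but the model call and the output key names are hypothetical. A higher ungradability score means the image is more likely ungradable:

```python
import numpy as np

def predict(self, *, input_image_array: np.ndarray) -> dict:
    # input_image_array is a single fundus image (H, W, 3), not a whole TIFF stack.
    batch = input_image_array[np.newaxis].astype(np.float32) / 255.0  # add a batch dimension
    rg_likelihood, ungradability_score = self.model(batch, training=False).numpy()[0]
    return {
        # Key names are hypothetical and for illustration only.
        "referable_glaucoma_likelihood": float(rg_likelihood),
        "ungradability_score": float(ungradability_score),  # higher = more likely ungradable
        "ungradability_binary": bool(ungradability_score > 0.5),
    }
```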

Best regards, Coen de Vente

 Last edited by: coendevente on Aug. 15, 2023, 12:55 p.m., edited 1 time in total.

Re: I have a submission problem  

  By: yc.lee on Feb. 16, 2022, 7:09 a.m.

Dear Coen de Vente,

Thank you for your sincere reply.

I have now successfully installed the GPU requirements in my Docker image. I tested it and saw that a single image was predicted within 10 seconds.

But in preliminary phase 2, my algorithm failed... Could you show me my logs one more time? I don't understand the failure...

Thanks!

Re: I have a submission problem  

  By: coendevente on Feb. 16, 2022, 10:56 a.m.

Dear yc.lee,

Below are the logs for your three failed submissions.

test13 2022-02-15T13:41:53+00:00 0%| | 0/234 [00:00<?, ?it/s]2022-02-15 22:41:53.228917: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:53+00:00 2022-02-15 22:41:53.237291: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:53+00:00 2022-02-15 22:41:53.237940: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:53+00:00 2022-02-15 22:41:53.238788: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA 2022-02-15T13:41:53+00:00 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-02-15T13:41:53+00:00 2022-02-15 22:41:53.239443: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:53+00:00 2022-02-15 22:41:53.240081: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:53+00:00 2022-02-15 22:41:53.240661: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:54+00:00 2022-02-15 22:41:54.082353: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:54+00:00 2022-02-15 22:41:54.083026: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:54+00:00 2022-02-15 22:41:54.083628: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-15T13:41:54+00:00 2022-02-15 22:41:54.084201: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13793 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5 2022-02-15T13:42:00+00:00 2022-02-15 22:42:00.969723: I tensorflow/stream_executor/cuda/cuda_dnn.cc:366] Loaded cuDNN version 8204 2022-02-15T13:42:01+00:00 2022-02-15 22:42:01.631010: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-15T13:42:01+00:00 2022-02-15 22:42:01.631453: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-15T13:42:01+00:00 2022-02-15 22:42:01.631473: W tensorflow/stream_executor/gpu/asm_compiler.cc:80] Couldn't get ptxas version string: INTERNAL: Couldn't 
invoke ptxas --version 2022-02-15T13:42:01+00:00 2022-02-15 22:42:01.631918: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-15T13:42:01+00:00 2022-02-15 22:42:01.631986: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] INTERNAL: Failed to launch ptxas 2022-02-15T13:42:01+00:00 Relying on driver to perform ptx compilation. 2022-02-15T13:42:01+00:00 Modify $PATH to customize ptxas location. 2022-02-15T13:42:01+00:00 This message will be only logged once. 2022-02-15T13:42:06+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer 2022-02-15T13:42:06+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay 2022-02-15T13:42:06+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate 2022-02-15T13:42:06+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.momentum 2022-02-15T13:42:06+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter 2022-02-15T13:42:06+00:00 WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details. 2022-02-15T13:42:11+00:00 0%| | 0/234 [00:21<?, ?it/s] 2022-02-15T13:42:11+00:00 Traceback (most recent call last): 2022-02-15T13:42:11+00:00 File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main 2022-02-15T13:42:11+00:00 return _run_code(code, main_globals, None, 2022-02-15T13:42:11+00:00 File "/usr/lib/python3.8/runpy.py", line 87, in _run_code 2022-02-15T13:42:11+00:00 exec(code, run_globals) 2022-02-15T13:42:11+00:00 File "/opt/algorithm/process.py", line 180, in <module> 2022-02-15T13:42:11+00:00 airogs_algorithm().process() 2022-02-15T13:42:11+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/evalutils/evalutils.py", line 183, in process 2022-02-15T13:42:11+00:00 self.process_cases() 2022-02-15T13:42:11+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/evalutils/evalutils.py", line 191, in process_cases 2022-02-15T13:42:11+00:00 self._case_results.append(self.process_case(idx=idx, case=case)) 2022-02-15T13:42:11+00:00 File "/opt/algorithm/process.py", line 93, in process_case 2022-02-15T13:42:11+00:00 results.append(self.predict(input_image_array=input_image_array)) 2022-02-15T13:42:11+00:00 File "/opt/algorithm/process.py", line 134, in predict 2022-02-15T13:42:11+00:00 pred_y = model([disc_img, fundus_img], training=True) 2022-02-15T13:42:11+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler 2022-02-15T13:42:11+00:00 raise e.with_traceback(filtered_tb) from None 2022-02-15T13:42:11+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/keras/engine/input_spec.py", line 263, in assert_input_compatibility 2022-02-15T13:42:11+00:00 raise ValueError(f'Input {input_index} of layer "{layer_name}" is ' 2022-02-15T13:42:11+00:00 ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 608, 608, 1), found shape=(1, 608, 559, 1)

test14 2022-02-16T04:08:41+00:00 2022-02-16 13:08:41.620931: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:41+00:00 2022-02-16 13:08:41.630304: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:41+00:00 2022-02-16 13:08:41.630980: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:41+00:00 2022-02-16 13:08:41.631787: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA 2022-02-16T04:08:41+00:00 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-02-16T04:08:41+00:00 2022-02-16 13:08:41.632459: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:41+00:00 2022-02-16 13:08:41.633066: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:41+00:00 2022-02-16 13:08:41.633638: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:42+00:00 2022-02-16 13:08:42.223436: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:42+00:00 2022-02-16 13:08:42.224148: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:42+00:00 2022-02-16 13:08:42.224758: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T04:08:42+00:00 2022-02-16 13:08:42.225326: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13793 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5 2022-02-16T04:08:56+00:00 0%| | 0/234 [00:00<?, ?it/s]2022-02-16 13:08:56.002094: I tensorflow/stream_executor/cuda/cuda_dnn.cc:366] Loaded cuDNN version 8204 2022-02-16T04:08:56+00:00 2022-02-16 13:08:56.674469: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-16T04:08:56+00:00 2022-02-16 13:08:56.674960: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-16T04:08:56+00:00 2022-02-16 13:08:56.674984: W tensorflow/stream_executor/gpu/asm_compiler.cc:80] Couldn't get ptxas version string: INTERNAL: Couldn't 
invoke ptxas --version 2022-02-16T04:08:56+00:00 2022-02-16 13:08:56.675409: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-16T04:08:56+00:00 2022-02-16 13:08:56.675471: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] INTERNAL: Failed to launch ptxas 2022-02-16T04:08:56+00:00 Relying on driver to perform ptx compilation. 2022-02-16T04:08:56+00:00 Modify $PATH to customize ptxas location. 2022-02-16T04:08:56+00:00 This message will be only logged once. 2022-02-16T04:09:01+00:00 0%| | 0/234 [00:09<?, ?it/s] 2022-02-16T04:09:01+00:00 Traceback (most recent call last): 2022-02-16T04:09:01+00:00 File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main 2022-02-16T04:09:01+00:00 return _run_code(code, main_globals, None, 2022-02-16T04:09:01+00:00 File "/usr/lib/python3.8/runpy.py", line 87, in _run_code 2022-02-16T04:09:01+00:00 exec(code, run_globals) 2022-02-16T04:09:01+00:00 File "/opt/algorithm/process.py", line 185, in <module> 2022-02-16T04:09:01+00:00 airogs_algorithm().process() 2022-02-16T04:09:01+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/evalutils/evalutils.py", line 183, in process 2022-02-16T04:09:01+00:00 self.process_cases() 2022-02-16T04:09:01+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/evalutils/evalutils.py", line 191, in process_cases 2022-02-16T04:09:01+00:00 self._case_results.append(self.process_case(idx=idx, case=case)) 2022-02-16T04:09:01+00:00 File "/opt/algorithm/process.py", line 107, in process_case 2022-02-16T04:09:01+00:00 results.append(self.predict(cls_model=cls_model, seg_model=seg_model, input_image_array=input_image_array)) 2022-02-16T04:09:01+00:00 File "/opt/algorithm/process.py", line 141, in predict 2022-02-16T04:09:01+00:00 pred_y = cls_model([disc_img, fundus_img], training=True) 2022-02-16T04:09:01+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler 2022-02-16T04:09:01+00:00 raise e.with_traceback(filtered_tb) from None 2022-02-16T04:09:01+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/keras/engine/input_spec.py", line 263, in assert_input_compatibility 2022-02-16T04:09:01+00:00 raise ValueError(f'Input {input_index} of layer "{layer_name}" is ' 2022-02-16T04:09:01+00:00 ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 608, 608, 1), found shape=(1, 608, 559, 1) 2022-02-16T04:09:02+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer 2022-02-16T04:09:02+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay 2022-02-16T04:09:02+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate 2022-02-16T04:09:02+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.momentum 2022-02-16T04:09:02+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter 2022-02-16T04:09:02+00:00 WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.

test15 2022-02-16T06:45:24+00:00 2022-02-16 15:45:24.822555: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:24+00:00 2022-02-16 15:45:24.832034: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:24+00:00 2022-02-16 15:45:24.832676: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:24+00:00 2022-02-16 15:45:24.833496: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA 2022-02-16T06:45:24+00:00 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-02-16T06:45:24+00:00 2022-02-16 15:45:24.834132: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:24+00:00 2022-02-16 15:45:24.834740: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:24+00:00 2022-02-16 15:45:24.835338: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:25+00:00 2022-02-16 15:45:25.764199: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:25+00:00 2022-02-16 15:45:25.764899: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:25+00:00 2022-02-16 15:45:25.765494: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-16T06:45:25+00:00 2022-02-16 15:45:25.766054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13793 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5 2022-02-16T06:45:40+00:00 0%| | 0/234 [00:00<?, ?it/s]2022-02-16 15:45:40.081145: I tensorflow/stream_executor/cuda/cuda_dnn.cc:366] Loaded cuDNN version 8204 2022-02-16T06:45:40+00:00 2022-02-16 15:45:40.767710: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-16T06:45:40+00:00 2022-02-16 15:45:40.768169: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-16T06:45:40+00:00 2022-02-16 15:45:40.768192: W tensorflow/stream_executor/gpu/asm_compiler.cc:80] Couldn't get ptxas version string: INTERNAL: Couldn't 
invoke ptxas --version 2022-02-16T06:45:40+00:00 2022-02-16 15:45:40.768679: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-16T06:45:40+00:00 2022-02-16 15:45:40.768753: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] INTERNAL: Failed to launch ptxas 2022-02-16T06:45:40+00:00 Relying on driver to perform ptx compilation. 2022-02-16T06:45:40+00:00 Modify $PATH to customize ptxas location. 2022-02-16T06:45:40+00:00 This message will be only logged once. 2022-02-16T06:45:45+00:00 0%| | 0/234 [00:09<?, ?it/s] 2022-02-16T06:45:45+00:00 Traceback (most recent call last): 2022-02-16T06:45:45+00:00 File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main 2022-02-16T06:45:45+00:00 return _run_code(code, main_globals, None, 2022-02-16T06:45:45+00:00 File "/usr/lib/python3.8/runpy.py", line 87, in _run_code 2022-02-16T06:45:45+00:00 exec(code, run_globals) 2022-02-16T06:45:45+00:00 File "/opt/algorithm/process.py", line 185, in <module> 2022-02-16T06:45:45+00:00 airogs_algorithm().process() 2022-02-16T06:45:45+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/evalutils/evalutils.py", line 183, in process 2022-02-16T06:45:45+00:00 self.process_cases() 2022-02-16T06:45:45+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/evalutils/evalutils.py", line 191, in process_cases 2022-02-16T06:45:45+00:00 self._case_results.append(self.process_case(idx=idx, case=case)) 2022-02-16T06:45:45+00:00 File "/opt/algorithm/process.py", line 107, in process_case 2022-02-16T06:45:45+00:00 results.append(self.predict(cls_model=cls_model, seg_model=seg_model, input_image_array=input_image_array)) 2022-02-16T06:45:45+00:00 File "/opt/algorithm/process.py", line 141, in predict 2022-02-16T06:45:45+00:00 pred_y = cls_model([disc_img, fundus_img], training=True) 2022-02-16T06:45:45+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler 2022-02-16T06:45:45+00:00 raise e.with_traceback(filtered_tb) from None 2022-02-16T06:45:45+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/keras/engine/input_spec.py", line 263, in assert_input_compatibility 2022-02-16T06:45:45+00:00 raise ValueError(f'Input {input_index} of layer "{layer_name}" is ' 2022-02-16T06:45:45+00:00 ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 608, 608, 1), found shape=(1, 608, 559, 1) 2022-02-16T06:45:46+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer 2022-02-16T06:45:46+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay 2022-02-16T06:45:46+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate 2022-02-16T06:45:46+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.momentum 2022-02-16T06:45:46+00:00 WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter 2022-02-16T06:45:46+00:00 WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
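The failure in all three runs is the same: the final ValueError shows that your classification model expects a fixed 608 x 608 input, while the disc crop you pass in is 608 x 559. One way to handle this is to pad (or center-crop) every crop to the expected square size before calling the model. A minimal sketch, assuming the NumPy/TensorFlow setup from your logs (function and variable names are hypothetical):

```python
import numpy as np

def pad_to_square(img: np.ndarray, size: int = 608) -> np.ndarray:
    # Zero-pad an (H, W, C) crop so both spatial dimensions equal `size`,
    # then center-crop in case a dimension is already larger than `size`.
    h, w = img.shape[:2]
    pad_h, pad_w = max(size - h, 0), max(size - w, 0)
    img = np.pad(
        img,
        ((pad_h // 2, pad_h - pad_h // 2),
         (pad_w // 2, pad_w - pad_w // 2),
         (0, 0)),
        mode="constant",
    )
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]
```

Applied to the 608 x 559 crop from the logs, this returns a 608 x 608 array that matches the model's expected input shape.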

Best regards, Coen de Vente

 Last edited by: coendevente on Aug. 15, 2023, 12:55 p.m., edited 2 times in total.

Re: I have a submission problem  

  By: yc.lee on Feb. 19, 2022, 2:09 a.m.

Thanks, that was helpful! But the failures continue... Could you share the logs once more?

Thanks for your kindness.

 Last edited by: yc.lee on Aug. 15, 2023, 12:55 p.m., edited 1 time in total.

Re: I have a submission problem  

  By: coendevente on Feb. 21, 2022, 10:33 a.m.

This is the stderr of your latest submission: 2022-02-21T07:51:31+00:00 2022-02-21 16:51:31.803277: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:31+00:00 2022-02-21 16:51:31.812816: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:31+00:00 2022-02-21 16:51:31.813517: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:31+00:00 2022-02-21 16:51:31.814314: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA 2022-02-21T07:51:31+00:00 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-02-21T07:51:31+00:00 2022-02-21 16:51:31.815002: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:31+00:00 2022-02-21 16:51:31.815640: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:31+00:00 2022-02-21 16:51:31.816231: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:32+00:00 2022-02-21 16:51:32.840297: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:32+00:00 2022-02-21 16:51:32.840911: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:32+00:00 2022-02-21 16:51:32.841455: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2022-02-21T07:51:32+00:00 2022-02-21 16:51:32.841984: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13793 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5 2022-02-21T07:51:36+00:00 0%| | 0/300 [00:00<?, ?it/s]2022-02-21 16:51:36.490114: I tensorflow/stream_executor/cuda/cuda_dnn.cc:366] Loaded cuDNN version 8204 2022-02-21T07:51:37+00:00 2022-02-21 16:51:37.175533: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-21T07:51:37+00:00 2022-02-21 16:51:37.175987: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-21T07:51:37+00:00 2022-02-21 16:51:37.176012: W tensorflow/stream_executor/gpu/asm_compiler.cc:80] Couldn't get ptxas 
version string: INTERNAL: Couldn't invoke ptxas --version 2022-02-21T07:51:37+00:00 2022-02-21 16:51:37.176456: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory 2022-02-21T07:51:37+00:00 2022-02-21 16:51:37.176522: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] INTERNAL: Failed to launch ptxas 2022-02-21T07:51:37+00:00 Relying on driver to perform ptx compilation. 2022-02-21T07:51:37+00:00 Modify $PATH to customize ptxas location. 2022-02-21T07:51:37+00:00 This message will be only logged once. 2022-02-21T07:54:32+00:00 0%| | 1/300 [00:04<21:12, 4.26s/it] 1%| | 2/300 [00:05<13:26, 2.71s/it] 1%| | 3/300 [00:06<09:13, 1.86s/it] 1%|▏ | 4/300 [00:09<10:42, 2.17s/it] 2%|▏ | 5/300 [00:10<08:42, 1.77s/it] 2%|▏ | 6/300 [00:11<07:33, 1.54s/it] 2%|▏ | 7/300 [00:12<06:54, 1.41s/it] 3%|▎ | 8/300 [00:13<06:27, 1.33s/it] 3%|▎ | 9/300 [00:14<06:01, 1.24s/it] 3%|▎ | 10/300 [00:17<07:44, 1.60s/it] 4%|▎ | 11/300 [00:18<06:58, 1.45s/it] 4%|▍ | 12/300 [00:19<06:24, 1.33s/it] 4%|▍ | 13/300 [00:21<06:52, 1.44s/it] 5%|▍ | 14/300 [00:24<09:09, 1.92s/it] 5%|▌ | 15/300 [00:26<10:08, 2.14s/it] 5%|▌ | 16/300 [00:27<08:04, 1.71s/it] 6%|▌ | 17/300 [00:29<08:36, 1.83s/it] 6%|▌ | 18/300 [00:30<07:51, 1.67s/it] 6%|▋ | 19/300 [00:32<07:08, 1.52s/it] 7%|▋ | 20/300 [00:33<06:34, 1.41s/it] 7%|▋ | 21/300 [00:34<05:54, 1.27s/it] 7%|▋ | 22/300 [00:34<05:05, 1.10s/it] 8%|▊ | 23/300 [00:36<06:26, 1.39s/it] 8%|▊ | 24/300 [00:38<06:49, 1.48s/it] 8%|▊ | 25/300 [00:39<06:16, 1.37s/it] 9%|▊ | 26/300 [00:40<05:18, 1.16s/it] 9%|▉ | 27/300 [00:41<05:15, 1.16s/it] 9%|▉ | 28/300 [00:44<07:15, 1.60s/it] 10%|▉ | 29/300 [00:45<06:35, 1.46s/it] 10%|█ | 30/300 [00:46<05:58, 1.33s/it] 10%|█ | 31/300 [00:48<06:59, 1.56s/it] 11%|█ | 32/300 [00:50<07:03, 1.58s/it] 11%|█ | 33/300 [00:50<05:52, 1.32s/it] 11%|█▏ | 34/300 [00:52<06:54, 1.56s/it] 12%|█▏ | 35/300 [00:54<06:19, 1.43s/it] 12%|█▏ | 36/300 [00:55<05:54, 1.34s/it] 12%|█▏ | 37/300 [00:56<05:38, 1.29s/it] 13%|█▎ | 38/300 [00:57<05:53, 1.35s/it] 13%|█▎ | 39/300 [00:58<05:01, 1.16s/it] 13%|█▎ | 40/300 [00:59<04:51, 1.12s/it] 14%|█▎ | 41/300 [01:00<04:53, 1.13s/it] 14%|█▍ | 42/300 [01:03<06:49, 1.59s/it] 14%|█▍ | 43/300 [01:06<08:09, 1.90s/it] 15%|█▍ | 44/300 [01:07<07:21, 1.72s/it] 15%|█▌ | 45/300 [01:09<08:14, 1.94s/it] 15%|█▌ | 46/300 [01:12<09:05, 2.15s/it] 16%|█▌ | 47/300 [01:13<08:13, 1.95s/it] 16%|█▌ | 48/300 [01:14<06:46, 1.61s/it] 16%|█▋ | 49/300 [01:15<06:05, 1.46s/it] 17%|█▋ | 50/300 [01:18<07:32, 1.81s/it] 17%|█▋ | 51/300 [01:19<07:04, 1.71s/it] 17%|█▋ | 52/300 [01:22<07:29, 1.81s/it] 18%|█▊ | 53/300 [01:23<06:48, 1.65s/it] 18%|█▊ | 54/300 [01:24<05:47, 1.41s/it] 18%|█▊ | 55/300 [01:25<05:26, 1.33s/it] 19%|█▊ | 56/300 [01:26<05:10, 1.27s/it] 19%|█▉ | 57/300 [01:27<05:11, 1.28s/it] 19%|█▉ | 58/300 [01:30<06:48, 1.69s/it] 20%|█▉ | 59/300 [01:32<07:25, 1.85s/it] 20%|██ | 60/300 [01:33<06:28, 1.62s/it] 20%|██ | 61/300 [01:36<07:23, 1.86s/it] 21%|██ | 62/300 [01:38<07:33, 1.91s/it] 21%|██ | 63/300 [01:39<06:35, 1.67s/it] 21%|██▏ | 64/300 [01:41<07:33, 1.92s/it] 22%|██▏ | 65/300 [01:42<06:36, 1.69s/it] 22%|██▏ | 66/300 [01:43<05:31, 1.42s/it] 22%|██▏ | 67/300 [01:44<04:40, 1.20s/it] 23%|██▎ | 68/300 [01:45<04:46, 1.24s/it] 23%|██▎ | 69/300 [01:48<06:22, 1.65s/it] 23%|██▎ | 70/300 [01:49<05:37, 1.47s/it] 24%|██▎ | 71/300 [01:51<06:18, 1.65s/it] 24%|██▍ | 72/300 [01:53<06:13, 1.64s/it] 24%|██▍ | 73/300 [01:54<06:13, 1.64s/it] 25%|██▍ | 74/300 [01:56<06:15, 1.66s/it] 25%|██▌ | 75/300 [01:57<05:38, 1.51s/it] 25%|██▌ | 
76/300 [01:58<05:06, 1.37s/it] 26%|██▌ | 77/300 [01:59<04:50, 1.30s/it] 26%|██▌ | 78/300 [02:00<04:38, 1.25s/it] 26%|██▋ | 79/300 [02:02<05:34, 1.52s/it] 27%|██▋ | 80/300 [02:04<05:40, 1.55s/it] 27%|██▋ | 81/300 [02:05<05:24, 1.48s/it] 27%|██▋ | 82/300 [02:06<04:32, 1.25s/it] 28%|██▊ | 83/300 [02:07<04:27, 1.23s/it] 28%|██▊ | 84/300 [02:10<05:56, 1.65s/it] 28%|██▊ | 85/300 [02:11<05:22, 1.50s/it] 29%|██▊ | 86/300 [02:12<04:51, 1.36s/it] 29%|██▉ | 87/300 [02:13<04:33, 1.28s/it] 29%|██▉ | 88/300 [02:14<04:20, 1.23s/it] 30%|██▉ | 89/300 [02:16<04:40, 1.33s/it] 30%|███ | 90/300 [02:19<06:00, 1.72s/it] 30%|███ | 91/300 [02:20<05:16, 1.51s/it] 31%|███ | 92/300 [02:21<04:53, 1.41s/it] 31%|███ | 93/300 [02:22<05:05, 1.47s/it] 31%|███▏ | 94/300 [02:23<04:24, 1.29s/it] 32%|███▏ | 95/300 [02:24<04:07, 1.21s/it] 32%|███▏ | 96/300 [02:26<04:34, 1.34s/it] 32%|███▏ | 97/300 [02:28<05:21, 1.59s/it] 33%|███▎ | 98/300 [02:29<04:26, 1.32s/it] 33%|███▎ | 99/300 [02:30<03:55, 1.17s/it] 33%|███▎ | 100/300 [02:32<05:21, 1.61s/it] 34%|███▎ | 101/300 [02:34<05:25, 1.64s/it] 34%|███▍ | 102/300 [02:35<05:03, 1.53s/it] 34%|███▍ | 103/300 [02:36<04:38, 1.42s/it] 35%|███▍ | 104/300 [02:38<05:17, 1.62s/it] 35%|███▌ | 105/300 [02:39<04:30, 1.39s/it] 35%|███▌ | 106/300 [02:41<05:11, 1.60s/it] 36%|███▌ | 107/300 [02:42<04:31, 1.41s/it] 36%|███▌ | 108/300 [02:44<04:44, 1.48s/it] 36%|███▋ | 109/300 [02:45<04:22, 1.38s/it] 37%|███▋ | 110/300 [02:46<04:18, 1.36s/it] 37%|███▋ | 111/300 [02:47<03:47, 1.21s/it] 37%|███▋ | 112/300 [02:50<04:58, 1.59s/it] 38%|███▊ | 113/300 [02:52<05:24, 1.73s/it] 38%|███▊ | 114/300 [02:54<06:13, 2.01s/it] 38%|███▊ | 115/300 [02:57<06:16, 2.03s/it] 38%|███▊ | 115/300 [02:57<04:45, 1.54s/it] 2022-02-21T07:54:32+00:00 Traceback (most recent call last): 2022-02-21T07:54:32+00:00 File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main 2022-02-21T07:54:32+00:00 return _run_code(code, main_globals, None, 2022-02-21T07:54:32+00:00 File "/usr/lib/python3.8/runpy.py", line 87, in _run_code 2022-02-21T07:54:32+00:00 exec(code, run_globals) 2022-02-21T07:54:32+00:00 File "/opt/algorithm/process.py", line 199, in <module> 2022-02-21T07:54:32+00:00 airogs_algorithm().process() 2022-02-21T07:54:32+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/evalutils/evalutils.py", line 183, in process 2022-02-21T07:54:32+00:00 self.process_cases() 2022-02-21T07:54:32+00:00 File "/home/algorithm/.local/lib/python3.8/site-packages/evalutils/evalutils.py", line 191, in process_cases 2022-02-21T07:54:32+00:00 self._case_results.append(self.process_case(idx=idx, case=case)) 2022-02-21T07:54:32+00:00 File "/opt/algorithm/process.py", line 113, in process_case 2022-02-21T07:54:32+00:00 results.append(self.predict(input_image_array=input_image_array)) 2022-02-21T07:54:32+00:00 File "/opt/algorithm/process.py", line 133, in predict 2022-02-21T07:54:32+00:00 cropping_img.cropping(input_image_array) #original img -> RGB 2022-02-21T07:54:32+00:00 File "/opt/algorithm/cropping_img.py", line 25, in cropping 2022-02-21T07:54:32+00:00 max_obj_label_idx = counts.argsort()[-2] 2022-02-21T07:54:32+00:00 IndexError: index -2 is out of bounds for axis 0 with size 1
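This time the traceback ends in cropping_img.py: 'counts.argsort()[-2]' raises "IndexError: index -2 is out of bounds for axis 0 with size 1", which happens when the labelling step finds only a single label (typically just background), so there is no second-largest component to select. A minimal sketch of a guard for that case, assuming 'counts' comes from something like np.unique(labels, return_counts=True) (names are hypothetical):

```python
import numpy as np

def largest_object_label(labels: np.ndarray):
    # Count pixels per label; the largest count is usually the background.
    values, counts = np.unique(labels, return_counts=True)
    if len(values) < 2:
        # Only background present: no object to crop.
        # Signal a fallback, e.g. use the whole image instead of a crop.
        return None
    # The second-largest component is the biggest foreground object.
    return values[counts.argsort()[-2]]
```

Your predict could then fall back to the uncropped image whenever this returns None.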