
Unable to generate Unet engine file #27

Open
venkat-kittu opened this issue May 26, 2022 · 1 comment
@venkat-kittu

Hi,
I tried to generate a UNet engine on an NVIDIA Jetson Xavier NX using the command below:
trtexec --uff=./models/unet-segmentation.uff --uffInput=input_1,1,512,512 --output=conv2d_19/Sigmoid --batch=2 --workspace=2048 --saveEngine=./trt_models/uff_unet_int8.trt --useDLACore=0 --allowGPUFallback --verbose --int8

The model parses successfully, but during the tactic-timing (autotuning) phase I get the error below:

[05/26/2022-13:11:30] [V] [TRT] --------------- Timing Runner: (Reformat)
[05/26/2022-13:11:31] [V] [TRT] Tactic: 1002 time 4.52559
[05/26/2022-13:11:31] [V] [TRT] Tactic: 0 time 0.43014
[05/26/2022-13:11:31] [V] [TRT] Fastest Tactic: 0 Time: 0.43014
[05/26/2022-13:11:31] [V] [TRT] *************** Autotuning format combination: Int8(1,512,1:4,262144) -> Int8(1,512,262144:32,262144) ***************
[05/26/2022-13:11:31] [V] [TRT] --------------- Timing Runner: {conv2d_1/convolution,conv2d_1/BiasAdd,conv2d_1/Relu,conv2d_2/convolution,conv2d_2/BiasAdd,conv2d_2/Relu,max_pooling2d_1/MaxPool,conv2d_3/convolution,conv2d_3/BiasAdd,conv2d_3/Relu,conv2d_4/convolution,conv2d_4/BiasAdd,conv2d_4/Relu,max_pooling2d_2/MaxPool,conv2d_5/convolution,conv2d_5/BiasAdd,conv2d_5/Relu,conv2d_6/convolution,conv2d_6/BiasAdd,conv2d_6/Relu,max_pooling2d_3/MaxPool,conv2d_7/convolution,conv2d_7/BiasAdd,conv2d_7/Relu,conv2d_8/convolution,conv2d_8/BiasAdd,conv2d_8/Relu,max_pooling2d_4/MaxPool,conv2d_9/convolution,conv2d_9/BiasAdd,conv2d_9/Relu,conv2d_10/convolution,conv2d_10/BiasAdd,conv2d_10/Relu,(Unnamed Layer* 67) [Deconvolution],conv2d_transpose_1/BiasAdd,concatenate_1/concat,conv2d_11/convolution,conv2d_11/BiasAdd,conv2d_11/Relu,conv2d_12/convolution,conv2d_12/BiasAdd,conv2d_12/Relu,(Unnamed Layer* 95) [Deconvolution],conv2d_transpose_2/BiasAdd,concatenate_2/concat,conv2d_13/convolution,conv2d_13/BiasAdd,conv2d_13/Relu,conv2d_14/convolution,conv2d_14/BiasAdd,conv2d_14/Relu,(Unnamed Layer* 123) [Deconvolution],conv2d_transpose_3/BiasAdd,concatenate_3/concat,conv2d_15/convolution,conv2d_15/BiasAdd,conv2d_15/Relu,conv2d_16/convolution,conv2d_16/BiasAdd,conv2d_16/Relu,(Unnamed Layer* 151) [Deconvolution],conv2d_transpose_4/BiasAdd,concatenate_4/concat,conv2d_17/convolution,conv2d_17/BiasAdd,conv2d_17/Relu,conv2d_18/convolution,conv2d_18/BiasAdd,conv2d_18/Relu,conv2d_19/convolution,conv2d_19/BiasAdd} (DLA)
[05/26/2022-13:11:31] [W] [TRT] DLA Node compilation Failed.
[05/26/2022-13:11:31] [V] [TRT] Tactic: 548716326947 skipped. ProcessNode failure.
[05/26/2022-13:11:31] [V] [TRT] Fastest Tactic: -3360065831133338131 Time: 3.40282e+38
[05/26/2022-13:11:31] [V] [TRT] *************** Autotuning format combination: Int8(1,512,262144:32,262144) -> Int8(1,512,262144:32,262144) ***************
[05/26/2022-13:11:31] [V] [TRT] --------------- Timing Runner: {conv2d_1/convolution,conv2d_1/BiasAdd,conv2d_1/Relu,conv2d_2/convolution,conv2d_2/BiasAdd,conv2d_2/Relu,max_pooling2d_1/MaxPool,conv2d_3/convolution,conv2d_3/BiasAdd,conv2d_3/Relu,conv2d_4/convolution,conv2d_4/BiasAdd,conv2d_4/Relu,max_pooling2d_2/MaxPool,conv2d_5/convolution,conv2d_5/BiasAdd,conv2d_5/Relu,conv2d_6/convolution,conv2d_6/BiasAdd,conv2d_6/Relu,max_pooling2d_3/MaxPool,conv2d_7/convolution,conv2d_7/BiasAdd,conv2d_7/Relu,conv2d_8/convolution,conv2d_8/BiasAdd,conv2d_8/Relu,max_pooling2d_4/MaxPool,conv2d_9/convolution,conv2d_9/BiasAdd,conv2d_9/Relu,conv2d_10/convolution,conv2d_10/BiasAdd,conv2d_10/Relu,(Unnamed Layer* 67) [Deconvolution],conv2d_transpose_1/BiasAdd,concatenate_1/concat,conv2d_11/convolution,conv2d_11/BiasAdd,conv2d_11/Relu,conv2d_12/convolution,conv2d_12/BiasAdd,conv2d_12/Relu,(Unnamed Layer* 95) [Deconvolution],conv2d_transpose_2/BiasAdd,concatenate_2/concat,conv2d_13/convolution,conv2d_13/BiasAdd,conv2d_13/Relu,conv2d_14/convolution,conv2d_14/BiasAdd,conv2d_14/Relu,(Unnamed Layer* 123) [Deconvolution],conv2d_transpose_3/BiasAdd,concatenate_3/concat,conv2d_15/convolution,conv2d_15/BiasAdd,conv2d_15/Relu,conv2d_16/convolution,conv2d_16/BiasAdd,conv2d_16/Relu,(Unnamed Layer* 151) [Deconvolution],conv2d_transpose_4/BiasAdd,concatenate_4/concat,conv2d_17/convolution,conv2d_17/BiasAdd,conv2d_17/Relu,conv2d_18/convolution,conv2d_18/BiasAdd,conv2d_18/Relu,conv2d_19/convolution,conv2d_19/BiasAdd} (DLA)
[05/26/2022-13:11:31] [W] [TRT] DLA Node compilation Failed.
[05/26/2022-13:11:31] [V] [TRT] Tactic: 548716326947 skipped. ProcessNode failure.
[05/26/2022-13:11:31] [V] [TRT] Fastest Tactic: -3360065831133338131 Time: 3.40282e+38
[05/26/2022-13:11:31] [E] [TRT] Try increasing the workspace size with IBuilderConfig::setMaxWorkspaceSize() if using IBuilder::buildEngineWithConfig, or IBuilder::setMaxWorkspaceSize() if using IBuilder::buildCudaEngine.
[05/26/2022-13:11:31] [E] [TRT] ../builder/tacticOptimizer.cpp (1715) - TRTInternal Error in computeCosts: 0 (Could not find any implementation for node {conv2d_1/convolution,conv2d_1/BiasAdd,conv2d_1/Relu,conv2d_2/convolution,conv2d_2/BiasAdd,conv2d_2/Relu,max_pooling2d_1/MaxPool,conv2d_3/convolution,conv2d_3/BiasAdd,conv2d_3/Relu,conv2d_4/convolution,conv2d_4/BiasAdd,conv2d_4/Relu,max_pooling2d_2/MaxPool,conv2d_5/convolution,conv2d_5/BiasAdd,conv2d_5/Relu,conv2d_6/convolution,conv2d_6/BiasAdd,conv2d_6/Relu,max_pooling2d_3/MaxPool,conv2d_7/convolution,conv2d_7/BiasAdd,conv2d_7/Relu,conv2d_8/convolution,conv2d_8/BiasAdd,conv2d_8/Relu,max_pooling2d_4/MaxPool,conv2d_9/convolution,conv2d_9/BiasAdd,conv2d_9/Relu,conv2d_10/convolution,conv2d_10/BiasAdd,conv2d_10/Relu,(Unnamed Layer* 67) [Deconvolution],conv2d_transpose_1/BiasAdd,concatenate_1/concat,conv2d_11/convolution,conv2d_11/BiasAdd,conv2d_11/Relu,conv2d_12/convolution,conv2d_12/BiasAdd,conv2d_12/Relu,(Unnamed Layer* 95) [Deconvolution],conv2d_transpose_2/BiasAdd,concatenate_2/concat,conv2d_13/convolution,conv2d_13/BiasAdd,conv2d_13/Relu,conv2d_14/convolution,conv2d_14/BiasAdd,conv2d_14/Relu,(Unnamed Layer* 123) [Deconvolution],conv2d_transpose_3/BiasAdd,concatenate_3/concat,conv2d_15/convolution,conv2d_15/BiasAdd,conv2d_15/Relu,conv2d_16/convolution,conv2d_16/BiasAdd,conv2d_16/Relu,(Unnamed Layer* 151) [Deconvolution],conv2d_transpose_4/BiasAdd,concatenate_4/concat,conv2d_17/convolution,conv2d_17/BiasAdd,conv2d_17/Relu,conv2d_18/convolution,conv2d_18/BiasAdd,conv2d_18/Relu,conv2d_19/convolution,conv2d_19/BiasAdd}.)
[05/26/2022-13:11:31] [V] [TRT] Builder timing cache: created 2 entries, 0 hit(s)
[05/26/2022-13:11:31] [E] [TRT] ../builder/tacticOptimizer.cpp (1715) - TRTInternal Error in computeCosts: 0 ()
[05/26/2022-13:11:31] [E] Engine creation failed
[05/26/2022-13:11:31] [E] Engine set up failed
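The builder error suggests raising the workspace size, and the preceding warnings show that it is the DLA subgraph compilation that fails. Below is a minimal sketch (not from this thread) of two candidate retries: the same build with a larger workspace, and a GPU-only build with the DLA flags dropped. The script only echoes the commands so they can be reviewed before running; paths and flags are taken from the original command, and the larger workspace value is an assumption.

```shell
#!/bin/sh
# Hypothetical retry 1: same DLA/INT8 build, but with a larger
# workspace (4096 MiB instead of 2048), as the error message suggests.
RETRY_WORKSPACE="trtexec --uff=./models/unet-segmentation.uff \
  --uffInput=input_1,1,512,512 --output=conv2d_19/Sigmoid \
  --batch=2 --workspace=4096 \
  --saveEngine=./trt_models/uff_unet_int8.trt \
  --useDLACore=0 --allowGPUFallback --int8"

# Hypothetical retry 2: drop --useDLACore/--allowGPUFallback and build
# for the GPU only, since it is the DLA node compilation that failed.
RETRY_GPU_ONLY="trtexec --uff=./models/unet-segmentation.uff \
  --uffInput=input_1,1,512,512 --output=conv2d_19/Sigmoid \
  --batch=2 --workspace=2048 \
  --saveEngine=./trt_models/uff_unet_int8_gpu.trt --int8"

# Print the commands instead of executing them, so they can be
# inspected (and the paths adjusted) before a long engine build.
echo "$RETRY_WORKSPACE"
echo "$RETRY_GPU_ONLY"
```

Whether the DLA build succeeds still depends on layer support; a GPU-only build sidesteps the DLA compiler entirely at the cost of not using the DLA core.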

@AsawareeBhide
Collaborator

Hi! Could you please confirm whether you downloaded the UNet model following the instructions in this repo?

python3 utils/download_models.py --all --csv_file_path <path-to>/benchmark_csv/nx-benchmarks.csv --save_dir <absolute-path-to-downloaded-models>
