tf-1.10.1 freeze_graph 'list index out of range' #22029
Comments
Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
@tensorflowbutler Updated.
Same issue as #22019. If I use the TensorFlow 1.10 version of freeze_graph.py, it fails with the 'list index out of range' error. If I use a working version such as TensorFlow 1.8, the message below appears instead:
Traceback (most recent call last):
/CC @petewarden, can you take a look?
@petewarden Help ...
Nagging Assignee @petewarden: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
TensorFlow is hard to use, with so many bugs; I wonder why the bugs aren't fixed before new functionality is developed. And the issue tracker on GitHub is useless, since no useful answers are provided. Maybe switching to PyTorch would be better.
System information
Describe the problem
I use the freeze_graph script to freeze a `ckpt` file to a `pb` file. In `tf-1.8.0` everything seems to work well; however, in `tf-1.10.1` I hit a 'list index out of range' error. I wonder why `tf-1.10` was not tested before release???
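For context, a typical TF 1.x freeze looks roughly like the sketch below, using the `freeze_graph` Python API; all paths and the output node name are hypothetical placeholders, not values taken from this issue:

```python
# A minimal sketch of freezing a TF 1.x checkpoint into a single .pb file.
# All paths and node names below are hypothetical placeholders.
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph="model/graph.pbtxt",         # GraphDef from tf.train.write_graph
    input_saver="",                          # no separate SaverDef file
    input_binary=False,                      # graph.pbtxt is in text format
    input_checkpoint="model/model.ckpt",     # checkpoint file prefix
    output_node_names="output/predictions",  # comma-separated output node names
    restore_op_name="save/restore_all",      # legacy argument, kept for compatibility
    filename_tensor_name="save/Const:0",     # legacy argument, kept for compatibility
    output_graph="model/frozen_graph.pb",    # destination for the frozen graph
    clear_devices=True,                      # strip device placements from the graph
    initializer_nodes="",                    # no extra initializer nodes to run
)
```

The same parameters are also exposed as command-line flags (e.g. `--input_graph`, `--input_checkpoint`, `--output_node_names`, `--output_graph`) when running `freeze_graph.py` directly as a script.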