
Got "NOT_IMPLEMENTED" error on OneHot with int32 #1537

Closed
disktnk opened this issue Aug 1, 2019 · 7 comments · Fixed by #1565
disktnk commented Aug 1, 2019

Describe the bug

When calling OneHot with int32 indices, a "NOT_IMPLEMENTED" RuntimeError is raised.

System information

  • OS Platform and Distribution: Ubuntu 18.04
  • ONNX Runtime installed from: binary via PyPI
  • ONNX Runtime version: 0.4.0
  • Python version: 3.7.2
  • Visual Studio version (if applicable):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:

To Reproduce

Based on https://github.com/onnx/onnx/blob/v1.5.0/onnx/backend/test/case/node/onehot.py#L47

    import numpy as np
    import onnx.helper
    # `one_hot` is the reference helper defined in the linked onnx test file
    from onnx.backend.test.case.node.onehot import one_hot

    axisValue = 1
    on_value = 3
    off_value = 1
    output_type = np.float32
    node = onnx.helper.make_node(
        'OneHot',
        inputs=['indices', 'depth', 'values'],
        outputs=['y'],
        axis=axisValue,
    )
    indices = np.array([[1, 9],
                        [2, 4]], dtype=np.int32)  # original np.float32
    depth = np.array([10], dtype=np.float32)
    values = np.array([off_value, on_value], dtype=output_type)
    y = one_hot(indices, depth, axis=axisValue, dtype=output_type)
    y = y * (on_value - off_value) + off_value

Loading the resulting model in onnxruntime then fails with:

        if isinstance(path_or_bytes, str):
            self._sess.load_model(path_or_bytes)
        elif isinstance(path_or_bytes, bytes):
>           self._sess.read_bytes(path_or_bytes)
E           RuntimeError: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for the node OneHot(9)

Expected behavior

No error; OneHot allows int32 indices according to Operators.md.
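For reference, the expected spec behavior can be sketched in plain NumPy (this is a sketch against the Operators.md description, not onnxruntime's implementation; the helper name `one_hot_reference` is mine):

```python
import numpy as np

def one_hot_reference(indices, depth, values, axis=-1):
    """NumPy sketch of OneHot per Operators.md (an assumption, not
    onnxruntime code): indices of any numeric dtype are cast to int64,
    negative indices count back from `depth`, and out-of-range indices
    yield off_value everywhere along the new axis."""
    off_value, on_value = np.asarray(values)
    d = int(np.asarray(depth).reshape(-1)[0])
    idx = np.asarray(indices).astype(np.int64)   # spec-mandated cast
    idx = np.where(idx < 0, idx + d, idx)        # normalize negative indices
    hot = idx[..., None] == np.arange(d)         # new trailing axis of size depth
    out = np.where(hot, on_value, off_value).astype(np.asarray(values).dtype)
    if axis < 0:
        axis += out.ndim
    return np.moveaxis(out, -1, axis)            # move depth axis into place
```

With the inputs from the repro (`indices=[[1, 9], [2, 4]]`, `depth=10`, `off=1`, `on=3`, `axis=1`) this produces a float32 tensor of shape (2, 10, 2).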


disktnk commented Aug 1, 2019

Ah, I missed PR #1317, so I'll wait for the next release, thx!

@disktnk disktnk closed this as completed Aug 1, 2019
disktnk commented Aug 3, 2019

I tested this with version 0.5.0 and got the same error on the OneHot op with int32 type.

Case1:

  • indices: int64
  • depth: int64

--> no error, works well, thanks!

Case2:

  • indices: int32
  • depth: int32

--> got "NOT_IMPLEMENTED" error; I think this combination should be accepted by the OneHot op.

Case3:

  • indices: float32
  • depth: int64,int32

or

  • indices: int64,int32
  • depth: float32

--> also got "NOT_IMPLEMENTED" error, but I'm not sure whether runtimes should implement this condition. The docs say "In case 'indices' is of non-integer type, the values will be casted to int64 before use", so I think these combinations should naturally be acceptable.
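The casting rule from that sentence of the spec can be illustrated in NumPy (an assumption about intended behavior, not onnxruntime code): float32 indices, once cast to int64, select exactly the same one-hot positions as native int64 indices.

```python
import numpy as np

# Case 3 sketch: non-integer indices are cast to int64 before use,
# so float32 indices should behave identically to int64 ones.
idx_f32 = np.array([[1.0, 9.0], [2.0, 4.0]], dtype=np.float32)
idx_i64 = idx_f32.astype(np.int64)   # the cast the spec mandates
depth = 10

hot_f32 = idx_f32.astype(np.int64)[..., None] == np.arange(depth)
hot_i64 = idx_i64[..., None] == np.arange(depth)
assert (hot_f32 == hot_i64).all()    # identical one-hot positions
```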

@disktnk disktnk reopened this Aug 3, 2019
@hariharans29 hariharans29 self-assigned this Aug 5, 2019
hariharans29 commented

Hi @disktnk,

Thanks for bringing this up.

Currently, the OneHot op doesn't support all type combinations.

The PR you are referring to (#1317) supported in_type: int64, out_type: float, depth: int64.

  1. From what I understand from your Python snippet, you are looking for in_type: int32, out_type: float, depth: float. Is this correct?

  2. What other combinations would you like? At this time, I don't think we plan to support all type combinations that are possible as per spec. We would like the type support to be based on intuitive use-cases and request-driven.

Thanks


disktnk commented Aug 6, 2019

Thanks for the reply! Your policy is reasonable and I'm convinced by it:

> We would like the type support to be based on intuitive use-cases and request-driven.

Answers:

  1. Yes, and the snippet is just for reproducing the issue.
  2. We're looking for in_type: int32, depth: int32, out_type: float32.

hariharans29 commented

> Thanks for the reply! Your policy is reasonable and I'm convinced by it.
>
> Answers:
>
> 1. Yes, and the snippet is just for reproducing the issue.
> 2. We're looking for in_type: int32, depth: int32, out_type: float32.

Thanks for the confirmation. So there are 2 combinations requested -

  1. in_type: int32, out_type: float, depth: float
  2. in_type: int32, out_type: float, depth: int32

Will send a PR out.
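Until that PR lands, a host-side workaround (my suggestion, not from the thread) is to cast the int32 inputs up to int64 before feeding the session, since the in_type: int64 / depth: int64 combination from Case 1 already works:

```python
import numpy as np

# Hypothetical feed dict for an onnxruntime.InferenceSession `sess`
# (session construction not shown): cast the int32 inputs to int64 on
# the host, matching the already-supported int64/int64 combination.
indices_i32 = np.array([[1, 9], [2, 4]], dtype=np.int32)
depth_i32 = np.array([10], dtype=np.int32)

feed = {
    "indices": indices_i32.astype(np.int64),
    "depth": depth_i32.astype(np.int64),
    "values": np.array([1, 3], dtype=np.float32),  # [off_value, on_value]
}
# y = sess.run(["y"], feed)[0]
```

This requires the model itself to declare int64 inputs; if the graph is fixed at int32, the cast has to happen inside the graph instead.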


hariharans29 commented Aug 7, 2019

Hi @disktnk - The requested type support should be available if you build from master. Thanks.


disktnk commented Aug 7, 2019

Great!! I'll check it later, thx!
