Use unsigned long for axis of concat operation #345
Comments
@miaobin, thanks for raising this issue. Given the valid value range of axis, there may be an opportunity to unify the axis definition across operations. I suppose a negative axis value might be useful from the perspective of a user of the graph API. The implementation should not be much more complex either way, because internally the implementation knows the rank; converting a negative axis value to a positive one (if the native ML API requires it) could be done simply by adding the rank. WDYT?
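As a rough illustration of that conversion, here is a minimal C++ sketch; the helper name and signature are hypothetical and not taken from any WebNN implementation:

```cpp
#include <cstdint>
#include <optional>

// Maps a possibly-negative axis (counting from the end, as many frameworks
// allow) to the equivalent non-negative axis for a tensor of the given rank.
// Returns nullopt when the axis is out of range.
std::optional<uint32_t> ResolveAxis(int32_t axis, uint32_t rank) {
  const int64_t resolved =
      axis < 0 ? static_cast<int64_t>(axis) + static_cast<int64_t>(rank)
               : static_cast<int64_t>(axis);
  if (resolved < 0 || resolved >= static_cast<int64_t>(rank)) {
    return std::nullopt;
  }
  return static_cast<uint32_t>(resolved);
}
```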
Link to #307: batchNormalization also uses a signed axis definition.
There are a few things to consider in the WebNN context:

- WebNN's users will mostly be frameworks, which can support negative axes themselves and convert to a non-negative axis before calling the backend API.
- Unsigned math is easier and less error-prone for implementations.
- Negative axes are less useful for the static graphs that WebNN supports.
@wacky6, thanks for your implementation-reviewer perspectives, very helpful!
WebNN aims at being a framework backend, so the users of this API will most likely be framework developers. I agree with you that a framework can support negative axes in its own way and do the conversion if the backend API only supports an unsigned integer axis. For example, although TFLite supports negative axes, XNNPACK only supports an unsigned integer axis, and the TFLite XNNPACK delegate does the conversion before calling into XNNPACK. So I guess if WebNN supports an unsigned integer axis, a WebNN framework backend can be implemented in a similar way.
I agree the unsigned math would be much easier and less error-prone. If WebNN supports an unsigned integer axis, then when implementing a backend that supports a signed integer axis, such as MPSGraph, the implementation needs to be cautious about overflow when converting the unsigned integer to a signed integer.
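As an illustration of that caution, here is a hedged sketch of narrowing an unsigned axis into a signed 32-bit type without overflow; the helper is hypothetical and not tied to MPSGraph's actual API:

```cpp
#include <cstdint>
#include <limits>
#include <optional>

// Converts an unsigned axis to a signed 32-bit value, rejecting values that
// would not fit; a real backend integration would surface this as an error.
std::optional<int32_t> ToSignedAxis(uint32_t axis) {
  if (axis > static_cast<uint32_t>(std::numeric_limits<int32_t>::max())) {
    return std::nullopt;  // Would overflow the signed type.
  }
  return static_cast<int32_t>(axis);
}
```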
I agree it is less useful for the static graphs that WebNN supports.
Another point of mine is that this spec should make the axis definition consistent across operations, including batchNormalization, which @BruceDai mentioned in #307.

(Table comparing the axis definitions across operators.)
/cc @wchao1115 for inputs. Thanks!
@huningxin Thanks for the holistic table above comparing operators. Yeah, from the consistency POV, I strongly agree that all the operators sharing similar indices/axes should either all share this special policy of negative numbers or all not share it.

The one case where I would argue in favor of negative axes for generality is if your model supported arbitrary-rank inputs (e.g., although uncommon, ONNX models allow arbitrary-rank 1D/2D/3D... inputs by just leaving the input TensorShapeProto shape blank, and then from-the-end indexing is useful). But again, since WebNN requires statically known shapes ahead of time, there should be no ambiguous cases where negative numbers can't be resolved ahead of time.
@fdwr, great points!
+1. And I think the WebNN test developers, @BruceDai, would be happy about that. With inputs from both @fdwr and @wacky6, I'd drop PR #352 and make another PR that unifies the axis/axes definitions by using an unsigned integer with a consistent value range. I'll invite you to review once it is ready. Thanks!
The PR #359, which uses an unsigned integer axis, is ready for review. Thanks!
We now limit the value of axis to the interval [0, N), where N is the rank of all the inputs. It should be a non-negative value, so we may use `unsigned long` for axis.
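A minimal sketch of the validation this implies; the function and parameter names are illustrative, not from the spec:

```cpp
#include <cstdint>
#include <vector>

// Checks that all concat inputs share the same rank N and that the unsigned
// axis falls in [0, N).
bool IsValidConcatAxis(uint32_t axis,
                       const std::vector<std::vector<uint32_t>>& input_shapes) {
  if (input_shapes.empty()) {
    return false;
  }
  const size_t rank = input_shapes.front().size();
  for (const auto& shape : input_shapes) {
    if (shape.size() != rank) {
      return false;  // Concat requires all inputs to have the same rank.
    }
  }
  return static_cast<size_t>(axis) < rank;
}
```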