
Triu operator which can be executed via nvFuser #1627

Closed
protonu opened this issue Jan 9, 2025 · 3 comments
Labels: enhancement (New feature or request), operators


protonu commented Jan 9, 2025

🚀 Feature

We want to implement a triu operator that can be executed by nvFuser.

Motivation

We want to improve performance via nvFuser execution.

Additional context

We can implement triu along the lines of the current implementation of tril:

def tril(a: TensorLike, /, diagonal: int = 0, *, fill_value: None | Number = None) -> TensorLike:

Alternatively, we can lower it to the triu operation exposed by nvFuser
(NVIDIA/Fuser#3637).
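For illustration, the masking rule behind triu keeps element (i, j) when j - i >= diagonal and replaces everything else with a fill value. Here is a minimal pure-Python sketch of that rule on a list-of-lists matrix; it is not the Thunder or nvFuser implementation (real tensor implementations build the same mask from broadcasted iota/arange tensors), and the parameter names simply mirror the tril signature above:

```python
def triu(a, diagonal=0, fill_value=0):
    """Upper-triangular sketch: keep a[i][j] where j - i >= diagonal,
    otherwise substitute fill_value."""
    return [
        [v if j - i >= diagonal else fill_value for j, v in enumerate(row)]
        for i, row in enumerate(a)
    ]
```

For example, `triu([[1, 2], [3, 4]])` zeroes the element below the main diagonal, and passing `diagonal=1` additionally drops the diagonal itself.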

cc @t-vi

protonu added the enhancement (New feature or request) label Jan 9, 2025

mruberry commented Jan 9, 2025

@beverlylytle — would you be interested in adding triu? It and tril are interesting operations.


protonu commented Jan 10, 2025

I opened PR #1631 to address this issue.


protonu commented Jan 10, 2025

Closed by PR #1631.
Thanks @mruberry @tfogal @beverlylytle @nikitaved !
