
Non-uniform fast Fourier transform (NUFFT) #2698

Closed
jmontalt opened this issue May 11, 2022 · 5 comments

@jmontalt

Describe the feature and the current behavior/state.
I was wondering if TensorFlow Addons might be a good home for a non-uniform fast Fourier transform (NUFFT) operator. This is a special type of fast Fourier transform that can be used for signals sampled non-uniformly (i.e., at points that do not lie on a uniform Cartesian grid).

Although it is much less common than its uniform/Cartesian counterpart (the FFT, implemented in core TensorFlow), the operator is certainly useful for some scientific communities (e.g. medical imaging, astrophysics).
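To make the proposal concrete, here is a minimal NumPy sketch of what a (type-2) NUFFT computes: evaluating the Fourier series of gridded coefficients at arbitrary sample points. This is an illustrative direct O(N·M) sum, not the proposed implementation, which would use a fast interpolation-plus-FFT algorithm to reach O(N log N + M) cost; the function name `ndft` is my own.

```python
import numpy as np

def ndft(grid_values, points):
    """Direct nonuniform DFT: sum_k c_k * exp(-i * k * x_m) at each point x_m.

    grid_values: complex coefficients c_k for k = 0..N-1
    points: arbitrary sample locations x_m, in radians
    """
    n = len(grid_values)
    k = np.arange(n)
    # Direct evaluation; a real NUFFT avoids this O(N*M) cost via an
    # oversampled FFT plus local interpolation (e.g. "spreading" kernels).
    return np.exp(-1j * np.outer(points, k)) @ grid_values

# Sanity check: on the uniform grid x_m = 2*pi*m/N this reduces to the
# ordinary DFT, so it matches np.fft.fft (up to rounding).
rng = np.random.default_rng(0)
c = rng.standard_normal(8) + 1j * rng.standard_normal(8)
uniform = 2 * np.pi * np.arange(8) / 8
assert np.allclose(ndft(c, uniform), np.fft.fft(c))
```

The point of a fast NUFFT library is to deliver the same result as this direct sum, to a user-specified tolerance, at near-FFT cost for points that are *not* on the grid.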

Relevant information

  • Are you willing to contribute it (yes/no): Yes
    If you wish to contribute, then read the requirements for new contributions in CONTRIBUTING.md
  • Are you willing to maintain it going forward? (yes/no): Yes
  • Is there a relevant academic paper? (if so, where): Yes, here. This is the paper for the particular algorithm I propose to incorporate in TF addons, but there are several other papers describing other NUFFT algorithms.
  • Does the relevant academic paper exceed 50 citations? (yes/no): Yes
  • Is there already an implementation in another framework? (if so, where): Yes. Here is the original C implementation (plus wrappers for other languages). Here is an existing TensorFlow port (which I maintain).
  • Was it part of tf.contrib? (if so, where): No.

Which API type would this fall under (layer, metric, optimizer, etc.)
I'm not sure any of the existing submodules is appropriate. The best home might be a new signal submodule that mirrors the one in core TensorFlow.

Who will benefit from this feature?
Scientific communities including medical imaging (MRI, CT) and astrophysics.

Any other info.

  • The proposed operator is complex and involves a large amount of C++/CUDA code (you can have a look here to get an idea). It could increase the size of TF addons wheels by a few MB.
  • The proposed inclusion would support CPU and GPU natively, but I do not have a pure Python implementation nor the bandwidth to write one anytime soon.
  • The CPU implementation depends on FFTW, released under GNU GPL. I'm not sure if there could be any licensing issues with this.
  • I'd like to understand whether there is any interest in having this in TF Addons so I can plan accordingly. If there is, it will probably take a few months until a PR is ready: the current op seems to work well and very efficiently, but there are still some rough edges.
@bhack
Contributor

bhack commented May 11, 2022

Thank you for the proposal, but we don't have the bandwidth to maintain the overhead of large custom ops.

I suggest reopening this issue in the main TF repo.

@bhack bhack closed this as completed May 11, 2022
@jmontalt
Author

To be clear, I'd be happy to help maintain this!

@bhack
Contributor

bhack commented May 11, 2022

I understand, nothing personal, but the codeowner stats don't support a decision to maintain large C++/CUDA components.
Generally we lose codeowners after a few months. So it is OK for relatively small components, but it is hard for a large codebase with custom ops.

The CPU implementation depends on FFTW, released under GNU GPL. I'm not sure if there could be any licensing issues with this.

I think this is a problem here, but also for the main TF repo /cc @yarri-oss

@jmontalt
Author

I understand. Sorry to hear you haven't had a better experience with codeowners.

Core TensorFlow has previously shown limited interest in the feature, which is understandable since only a small subset of the community would have a use for it.

For now it seems it will need to stay in its own package. For anyone reading this who is interested, there's a fully functional NUFFT implementation for TensorFlow here.

@bhack
Contributor

bhack commented May 11, 2022

/cc @seanpmorgan
