[DOC, FIX, RF] Fix typos in documentation and refactor fit_decay/fit_decay_ts. #49

Merged 19 commits on May 15, 2018
2 changes: 1 addition & 1 deletion docs/api.rst
@@ -73,7 +73,7 @@ API
.. _calibration_ref:


:mod:`tedana.workflows`: Component selection
:mod:`tedana.workflows`: Common workflows
--------------------------------------------------

.. automodule:: tedana.workflows
13 changes: 8 additions & 5 deletions tedana/decomposition/eigendecomp.py
@@ -86,15 +86,18 @@ def tedpca(catd, OCcatd, combmode, mask, t2s, t2sG, stabilize,
2. Decompose normalized data using PCA or SVD.
3. Compute :math:`{\\kappa}` and :math:`{\\rho}`:

- :math:`{\\kappa}_c = \\frac{\sum_{v}^V {\\zeta}_{c,v}^p * \
F_{c,v,R_2^*}}{\sum {\\zeta}_{c,v}^p}`
- :math:`{\\rho}_c = \\frac{\sum_{v}^V {\\zeta}_{c,v}^p * \
F_{c,v,S_0}}{\sum {\\zeta}_{c,v}^p}`
.. math::
{\\kappa}_c = \\frac{\sum_{v}^V {\\zeta}_{c,v}^p * \
F_{c,v,R_2^*}}{\sum {\\zeta}_{c,v}^p}

{\\rho}_c = \\frac{\sum_{v}^V {\\zeta}_{c,v}^p * \
F_{c,v,S_0}}{\sum {\\zeta}_{c,v}^p}

4. Select the number of retained components by locating an elbow in
the variance-explained spectrum.
5. Classify components as thermal noise if they meet both of the
following criteria:

- Nonsignificant :math:`{\\kappa}` or :math:`{\\rho}`.
- Nonsignificant :math:`{\\kappa}` and :math:`{\\rho}`.
- Nonsignificant variance explained.
"""
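The weighted averages in step 3 can be sketched as follows. This is a minimal illustration with hypothetical names, not tedana's API: `weights`, `f_r2`, and `f_s0` stand in for a component's :math:`{\\zeta}` voxel loadings and the per-voxel F-statistics of the :math:`R_2^*` and :math:`S_0` models.

```python
import numpy as np

def kappa_rho(weights, f_r2, f_s0, power=2):
    # Hypothetical helper sketching the formulas above:
    # kappa and rho are F-statistics averaged over voxels,
    # weighted by the component loadings raised to power p.
    w = np.abs(weights) ** power            # zeta^p
    kappa = np.sum(w * f_r2) / np.sum(w)    # weighted mean of F_R2*
    rho = np.sum(w * f_s0) / np.sum(w)      # weighted mean of F_S0
    return kappa, rho
```

A component with high kappa and low rho is dominated by TE-dependent (BOLD-like) signal; the reverse suggests TE-independent noise.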

6 changes: 4 additions & 2 deletions tedana/model/combine.py
@@ -37,8 +37,10 @@ def make_optcom(data, t2s, tes, mask, combmode, verbose=True):
-----
1. Estimate voxel- and TE-specific weights based on estimated
:math:`T_2^*`:
- :math:`w(T_2^*)_n = \\frac{TE_n * exp(\\frac{-TE}\
{T_{2(est)}^*})}{\sum TE_n * exp(\\frac{-TE}{T_{2(est)}^*})}`

.. math::
w(T_2^*)_n = \\frac{TE_n * exp(\\frac{-TE}\
{T_{2(est)}^*})}{\sum TE_n * exp(\\frac{-TE}{T_{2(est)}^*})}
2. Perform weighted average per voxel and TR across TEs based on weights
estimated in the previous step.
"""
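The weighting in step 1 can be sketched like this. Names are illustrative only, and the sketch assumes the TE inside the exponential is the same per-echo :math:`TE_n` as the leading factor, which is a reading of the shorthand in the formula above.

```python
import numpy as np

def optcom_weights(tes, t2s):
    # Sketch of w_n = TE_n * exp(-TE_n / T2*), normalized so the
    # per-echo weights for a voxel sum to one.
    tes = np.asarray(tes, dtype=float)
    alpha = tes * np.exp(-tes / t2s)
    return alpha / alpha.sum()

# A voxel's optimally combined value is then the weighted average
# across echoes, e.g. np.dot(optcom_weights(tes, t2s), echo_vals).
```

Note that the weights peak near the echo time closest to the voxel's estimated :math:`T_2^*`, which is what makes the combination "optimal" for BOLD contrast.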
6 changes: 3 additions & 3 deletions tedana/model/fit.py
@@ -250,7 +250,7 @@ def computefeats2(data, mmix, mask, normalize=True):

Returns
-------
data_Z : (S x C) np.ndarray
data_Z : (S x C) :obj:`numpy.ndarray`
Data in component space
"""

@@ -292,7 +292,7 @@ def get_coeffs(data, mask, X, add_const=False):

Returns
-------
betas : (S x C) np.ndarray
betas : (S x C) :obj:`numpy.ndarray`
Array of `S` sample betas for `C` predictors
"""

@@ -413,7 +413,7 @@ def spatclust(img, min_cluster_size, threshold=None, index=None, mask=None):

Returns
-------
clustered : np.ndarray
clustered : :obj:`numpy.ndarray`
Boolean array of clustered (and thresholded) `img` data
"""

108 changes: 39 additions & 69 deletions tedana/model/monoexponential.py
@@ -11,7 +11,7 @@ def fit_decay(data, tes, mask, masksum, start_echo):

Parameters
----------
data : (S x E x T) array_like
data : (S x E [x T]) array_like
Multi-echo data array, where `S` is samples, `E` is echos, and `T` is
time
tes : (E, ) list
@@ -27,34 +27,43 @@

Returns
-------
t2sa : (S x E x T) :obj:`numpy.ndarray`
t2sa : (S x E) :obj:`numpy.ndarray`
Limited T2* map
s0va : (S x E x T) :obj:`numpy.ndarray`
s0va : (S x E) :obj:`numpy.ndarray`
Limited S0 map
t2ss : (S x E x T) :obj:`numpy.ndarray`
t2ss : (S x E) :obj:`numpy.ndarray`
Voxel-wise :math:`T_2^*` estimates from fits using ascending numbers of echoes
s0vs : (S x E x T) :obj:`numpy.ndarray`
s0vs : (S x E) :obj:`numpy.ndarray`
Voxel-wise :math:`S_0` estimates from fits using ascending numbers of echoes
t2saf : (S x E x T) :obj:`numpy.ndarray`
t2saf : (S x E) :obj:`numpy.ndarray`
Full T2* map
s0vaf : (S x E x T) :obj:`numpy.ndarray`
s0vaf : (S x E) :obj:`numpy.ndarray`
Full S0 map

Notes
-----
1. Fit monoexponential decay function to all values for a given voxel
across TRs, per TE, to estimate voxel-wise :math:`S_0` and
:math:`T_2^*`:
- :math:`S(TE) = S_0 * exp(-R_2^* * TE)`
- :math:`T_2^* = 1 / R_2^*`

.. math::
S(TE) = S_0 * exp(-R_2^* * TE)

T_2^* = 1 / R_2^*

2. Replace infinite values in :math:`T_2^*` map with 500 and NaN values
in :math:`S_0` map with 0.
3. Generate limited :math:`T_2^*` and :math:`S_0` maps using, for each
voxel, the fit from the number of echoes retained in the adaptive
mask (``masksum``).
"""
if len(data.shape) == 3:
n_samp, n_echos, n_vols = data.shape
else:
n_samp, n_echos = data.shape
n_vols = 1

n_samp, n_echos, n_vols = data.shape
data = data[mask]
t2ss, s0vs = np.zeros([n_samp, n_echos - 1]), np.zeros([n_samp, n_echos - 1])
t2ss = np.zeros([n_samp, n_echos - 1])
s0vs = np.zeros([n_samp, n_echos - 1])

for echo in range(start_echo, n_echos + 1):
# perform log linear fit of echo times against MR signal
@@ -80,8 +89,8 @@ def fit_decay(data, tes, mask, masksum, start_echo):
fl_ = np.squeeze(fl[..., echo])
fl_[masksum == echo + 2] = True
fl[..., echo] = fl_
t2sa, s0va = utils.unmask(t2ss[fl], masksum > 1), utils.unmask(s0vs[fl], masksum > 1)
# t2sa[masksum > 1], s0va[masksum > 1] = t2ss[fl], s0vs[fl]
t2sa = utils.unmask(t2ss[fl], masksum > 1)
s0va = utils.unmask(s0vs[fl], masksum > 1)

# create full T2* maps with S0 estimation errors
t2saf, s0vaf = t2sa.copy(), s0va.copy()
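The log-linear fit described in the notes can be sketched for a single voxel. This is a simplified, hypothetical helper, not the tedana function; it mirrors the `np.log(np.abs(...) + 1)` offset used in the diff above.

```python
import numpy as np

def loglinear_fit(signal, tes):
    # Fit log(S) = log(S0) - TE / T2* by least squares for one voxel.
    # `signal` and `tes` are 1-D arrays over echoes.
    tes = np.asarray(tes, dtype=float)
    B = np.log(np.abs(signal) + 1)                   # log-signal, offset as above
    X = np.column_stack([np.ones_like(tes), -tes])   # design matrix: [1, -TE]
    beta, _, _, _ = np.linalg.lstsq(X, B, rcond=None)
    s0 = np.exp(beta[0])       # intercept recovers S0
    t2s = 1.0 / beta[1]        # slope is R2* = 1 / T2*
    return s0, t2s
```

The `+ 1` offset guards against `log(0)` at the cost of a small bias that is negligible for typical signal intensities.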
@@ -113,71 +122,32 @@ def fit_decay_ts(data, mask, tes, masksum, start_echo):

Returns
-------
t2sa : (S x E x T) :obj:`numpy.ndarray`
t2sa_ts : (S x E x T) :obj:`numpy.ndarray`
Limited T2* map
s0va : (S x E x T) :obj:`numpy.ndarray`
s0va_ts : (S x E x T) :obj:`numpy.ndarray`
Limited S0 map
t2ss : (S x E x T) :obj:`numpy.ndarray`
???
s0vs : (S x E x T) :obj:`numpy.ndarray`
???
t2saf : (S x E x T) :obj:`numpy.ndarray`
t2saf_ts : (S x E x T) :obj:`numpy.ndarray`
Full T2* map
s0vaf : (S x E x T) :obj:`numpy.ndarray`
s0vaf_ts : (S x E x T) :obj:`numpy.ndarray`
Full S0 map
"""
nx, ny, nz, n_echos, n_trs = data.shape
n_samples, _, n_trs = data.shape
echodata = data[mask]
tes = np.array(tes)

t2sa_ts = np.zeros([nx, ny, nz, n_trs])
s0va_ts = np.zeros([nx, ny, nz, n_trs])
t2saf_ts = np.zeros([nx, ny, nz, n_trs])
s0vaf_ts = np.zeros([nx, ny, nz, n_trs])
t2sa_ts = np.zeros([n_samples, n_trs])
s0va_ts = np.copy(t2sa_ts)
t2saf_ts = np.copy(t2sa_ts)
s0vaf_ts = np.copy(t2sa_ts)

for vol in range(echodata.shape[-1]):
t2ss = np.zeros([nx, ny, nz, n_echos - 1])
s0vs = t2ss.copy()
# Fit monoexponential decay first for first echo only,
# then first two echoes, etc.
for i_echo in range(start_echo, n_echos + 1):
B = np.abs(echodata[:, :i_echo, vol]) + 1
B = np.log(B).transpose()
neg_tes = -1 * tes[:i_echo]

# First row is constant, second is TEs for decay curve
# Independent variables for least-squares model
x = np.array([np.ones(i_echo), neg_tes])
X = np.sort(x)[:, ::-1].transpose()

beta, _, _, _ = np.linalg.lstsq(X, B)
t2s = 1. / beta[1, :].transpose()
s0 = np.exp(beta[0, :]).transpose()

t2s[np.isinf(t2s)] = 500.
s0[np.isnan(s0)] = 0.

t2ss[:, :, :, i_echo-2] = np.squeeze(utils.unmask(t2s, mask))
s0vs[:, :, :, i_echo-2] = np.squeeze(utils.unmask(s0, mask))

# Limited T2* and S0 maps
fl = np.zeros([nx, ny, nz, len(tes)-1], bool)
for i_echo in range(n_echos - 1):
fl_ = np.squeeze(fl[:, :, :, i_echo])
fl_[masksum == i_echo + 2] = True
fl[:, :, :, i_echo] = fl_
t2sa = np.squeeze(utils.unmask(t2ss[fl], masksum > 1))
s0va = np.squeeze(utils.unmask(s0vs[fl], masksum > 1))

# Full T2* maps with S0 estimation errors
t2saf = t2sa.copy()
s0vaf = s0va.copy()
t2saf[masksum == 1] = t2ss[masksum == 1, 0]
s0vaf[masksum == 1] = s0vs[masksum == 1, 0]

t2sa_ts[:, :, :, vol] = t2sa
s0va_ts[:, :, :, vol] = s0va
t2saf_ts[:, :, :, vol] = t2saf
s0vaf_ts[:, :, :, vol] = s0vaf

[t2sa, s0va, _, _, t2saf, s0vaf] = fit_decay(
data, mask, tes, masksum, start_echo)

t2sa_ts[:, vol] = t2sa
s0va_ts[:, vol] = s0va
t2saf_ts[:, vol] = t2saf
s0vaf_ts[:, vol] = s0vaf

return t2sa_ts, s0va_ts, t2saf_ts, s0vaf_ts

Review discussion on this change:

Member: What do you think about converting this so that, instead of calling fit_decay inside of fit_decay_ts, you could throw a timeseries=True flag?

Member: @TomMaullin if you'd like to address this in a different PR, I can merge this one and leave #25 open!

Member: Personally, I think it would be good to leave that to a separate PR! This one looks good to me 😄

Contributor (@TomMaullin, May 16, 2018): Hi @emdupre, @rmarkello. Sorry for the delayed response! Would you still like me to address this, or has it now been taken care of?

Member: Sorry for my delayed response! I went ahead and merged this PR, but #25 is still open with the idea of eventually refactoring so that we could pass a timeseries=True argument or something similar. If you're interested in taking that on, we'd love to have your contribution!!
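With this refactor, fit_decay_ts reduces to calling the single-volume fit once per TR. A minimal sketch of that per-volume pattern, assuming a simplified, hypothetical `fit_one` callable in place of fit_decay:

```python
import numpy as np

def fit_per_volume(data, tes, fit_one):
    # Illustrative pattern only: `data` is (S x E x T) and `fit_one`
    # is a stand-in for fit_decay with a reduced signature, returning
    # per-sample (t2s, s0) arrays for a single (S x E) volume.
    n_samp, _, n_vols = data.shape
    t2s_ts = np.zeros((n_samp, n_vols))
    s0_ts = np.zeros((n_samp, n_vols))
    for vol in range(n_vols):
        t2s_ts[:, vol], s0_ts[:, vol] = fit_one(data[:, :, vol], tes)
    return t2s_ts, s0_ts
```

Factoring the loop this way is also what would make the suggested timeseries=True flag straightforward: the same fitting core serves both the static and the time-series paths.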
6 changes: 3 additions & 3 deletions tedana/utils/utils.py
@@ -314,7 +314,7 @@ def new_nii_like(ref_img, data, affine=None, copy_header=True):

Returns
-------
nii : :obj:`nibabel.nifti.Nifti1Image`
nii : :obj:`nibabel.nifti1.Nifti1Image`
NiftiImage
"""

@@ -345,7 +345,7 @@ def new_gii_like(ref_img, data, copy_header=True, copy_meta=False):

Returns
-------
gii : :obj:`nibabel.gifti.GiftiImage`
gii : :obj:`nibabel.gifti.gifti.GiftiImage`
GiftiImage
"""

@@ -381,7 +381,7 @@ def make_gii_darray(ref_array, data, copy_meta=False):

Returns
-------
gii : :obj:`nibabel.gifti.GiftiDataArray`
gii : :obj:`nibabel.gifti.gifti.GiftiDataArray`
Output data array instance
"""
