
Allow combining runs with missing conditions/trials to modelfit #21

Open
pcklink opened this issue Jan 27, 2020 · 5 comments
pcklink commented Jan 27, 2020

The fMRI (FEAT) analysis currently crashes when you try to include sessions that do not contain trials/events for every specified contrast. It would be convenient if the workflow detected this and automatically excluded either the contrast or the session altogether.

@pcklink pcklink self-assigned this Jan 27, 2020

pcklink commented Jan 27, 2020

@kanishkatks can you provide a log file with the precise error?

pcklink changed the title from "Contrasts with not enough trials should not be evaluated by bids_modelfit_workflow.py" to "Allow combining runs with missing conditions/trials to modelfit" on Apr 15, 2020

pcklink commented Apr 15, 2020

@kanishkatks could you run a 'dummy analysis' of this kind (one good CT run without the central task, and one good run with the central task)? It will throw an error and crash, but the logs will be informative and give us an idea of where to start.

@nhpatscnin
Collaborator

I have attached the slurm output at the following link. For some reason it didn't create a log file where I wanted it to.
https://www.dropbox.com/sh/xwkzh5kyht43v4u/AAB4MGFrppPYo8X9c0u-JzuMa?dl=0


pcklink commented Apr 23, 2020

It seems like this might be possible. FSL should accept 'empty' EV files if you code them as '0 0 0', which should yield an empty result image. For the second level we should then use the COPEs, not the lower-level FEAT directories. All untested....


pcklink commented Jan 19, 2021

Putting this off; it doesn't seem necessary for now. It would only be helpful if one type of trial were being structurally ignored or something similar. Usually it's better to fix this with training. The solution above might still work but is not worth the effort right now.

Projects: None yet
Development: No branches or pull requests
3 participants