[FIX] Clarify data types and requirement levels for all JSON files #605

Merged · 33 commits · Sep 19, 2020
Changes from all commits
4d4dcf4
datatype draft
sappelhoff Sep 14, 2020
1b9ca3d
try smaller margins pdf for better tables
sappelhoff Sep 14, 2020
d08fe4d
specify paper size and margins via tex header
sappelhoff Sep 14, 2020
609e6b9
back to default margins, but valid specification now
sappelhoff Sep 14, 2020
fa17f72
make use of [link][] notation for brevity
sappelhoff Sep 15, 2020
74c1e02
enh tables in modality agnostic files
sappelhoff Sep 15, 2020
d420970
enh tables in common data types
sappelhoff Sep 15, 2020
a0a747d
work on 03-imaging
sappelhoff Sep 15, 2020
3ed71ff
enh genetics tables with req level and datatype
sappelhoff Sep 15, 2020
b34e9d6
enh physio+stim tables
sappelhoff Sep 15, 2020
9999631
enh ieeg, concistency ephys
sappelhoff Sep 15, 2020
cd1def4
fmt fixes, clarify geneticlevel https://github.com/bids-standard/bids…
sappelhoff Sep 15, 2020
5a44332
enh tables in EEG section
sappelhoff Sep 15, 2020
a02e419
backticks
sappelhoff Sep 16, 2020
45a39a4
PowerLineFrequency: number
sappelhoff Sep 16, 2020
9bcc60f
enh task event tables
sappelhoff Sep 16, 2020
04f6443
add links
sappelhoff Sep 16, 2020
36bef00
enh meg tables
sappelhoff Sep 16, 2020
6ee33fa
misc fixes
sappelhoff Sep 16, 2020
51df216
fix reference to by now non-existing section
sappelhoff Sep 16, 2020
71bf05f
get started on MRI tables, but how to proceed?
sappelhoff Sep 16, 2020
de57bd7
continue mri tables as far as possible
sappelhoff Sep 16, 2020
2a36ce6
decrease margins, drop a4 paper in favor of default
sappelhoff Sep 16, 2020
1b3afaa
center header and footer disregarding odd and even pages
sappelhoff Sep 16, 2020
a8cb249
add links, try pdf landscape
sappelhoff Sep 16, 2020
357bfd3
adjust cover abd header for landscape pdf
sappelhoff Sep 16, 2020
0d8cc76
fix: skip broken tables during python script
sappelhoff Sep 16, 2020
13fdd9a
add missing datatypes for mri
sappelhoff Sep 17, 2020
053950e
misc fixes
sappelhoff Sep 17, 2020
f16cfbc
add missing link
sappelhoff Sep 17, 2020
5914cc7
Update src/04-modality-specific-files/04-intracranial-electroencephal…
sappelhoff Sep 17, 2020
e8fe54b
fix pipe
sappelhoff Sep 17, 2020
e87497c
add missing link
sappelhoff Sep 19, 2020
1 change: 1 addition & 0 deletions .gitignore
@@ -2,3 +2,4 @@ site/
.DS_Store
.idea
venvs
pdf_build_src/bids-spec.pdf
6 changes: 2 additions & 4 deletions pdf_build_src/cover.tex
@@ -11,7 +11,7 @@
% LOGO SECTION
%----------------------------------------------------------------------------------------

\includegraphics[width=0.6\textwidth]{images/BIDS_logo.jpg}\\[1cm]
\includegraphics[width=0.6\textwidth]{images/BIDS_logo.jpg}\\[-5cm]

%----------------------------------------------------------------------------------------
% TITLE SECTION
@@ -21,7 +21,5 @@
{ \huge \bfseries Brain Imaging Data Structure Specification}\\[0.4cm] % Title of your document
\HRule \\[1.5cm]

% \textsc{\large v1.2.1}\\[0.5cm]{\large 2019-08-14}\\[2cm]

% \vfill % Fill the rest of the page with whitespace
\textsc{\large v1.2.1}\\[0.5cm]{\large 2019-08-14}\\[2cm]\vfill\end{titlepage}
% \textsc{\large v1.2.1}\\[0.5cm]{\large 2019-08-14}\\[2cm]\vfill\end{titlepage}
7 changes: 4 additions & 3 deletions pdf_build_src/header.tex
@@ -1,6 +1,7 @@
% header file
% DO NOT EDIT THE 4 LINES BELOW THIS LINE (see `add_header` in process_markdowns.py)
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
\chead{Brain Imaging Data Structure v1.2.1 2019-08-14}
\fancyfoot[LE,RO]{\thepage}
\fancyhead[L]{ Brain Imaging Data Structure v1.4.1-dev 2020-09-16 }
% Edit from here below
\fancyfoot[L]{\thepage}
2 changes: 2 additions & 0 deletions pdf_build_src/header_setup.tex
@@ -25,3 +25,5 @@
linewidth=\textwidth,
basewidth=0.5em
}

\usepackage[a4paper,margin=0.75in,landscape]{geometry}
2 changes: 0 additions & 2 deletions pdf_build_src/pandoc_script.py
@@ -36,8 +36,6 @@ def build_pdf(filename):
'--include-in-header=./header_setup.tex',
'-V documentclass=report',
'-V linkcolor:blue',
'-V geometry:a4paper',
'-V geometry:margin=2cm',
'--pdf-engine=xelatex',
'--output={}'.format(filename),
]
19 changes: 13 additions & 6 deletions pdf_build_src/process_markdowns.py
@@ -55,7 +55,7 @@ def copy_images(root_path):
# walk through the src directory to find subdirectories named 'images'
# and copy contents to the 'images' directory in the duplicate src
# directory
for root, dirs, files in os.walk(root_path):
for root, dirs, files in sorted(os.walk(root_path)):
if 'images' in dirs:
subdir_list.append(root)

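The switch from `os.walk(root_path)` to `sorted(os.walk(root_path))` makes the traversal order deterministic: `os.walk` yields its `(root, dirs, files)` tuples in arbitrary filesystem order, while `sorted` orders them by the `root` path string, so builds on different platforms process files identically. A minimal sketch of the effect (the directory names are invented for illustration):

```python
import os
import tempfile

# Create a small tree whose subdirectory names are deliberately
# out of lexical order when created.
base = tempfile.mkdtemp()
for name in ("zeta", "alpha", "midway"):
    os.makedirs(os.path.join(base, name, "images"))

# sorted() orders the (root, dirs, files) tuples by the root path
# (the roots are unique, so comparison never reaches the lists),
# so the walk visits alpha, midway, zeta regardless of platform.
roots = [root for root, dirs, files in sorted(os.walk(base)) if "images" in dirs]
print([os.path.basename(r) for r in roots])  # ['alpha', 'midway', 'zeta']
```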
@@ -86,13 +86,13 @@ def add_header():
header = " ".join([title, version_number, build_date])

# creating a header string with latest version number and date
header_string = (r"\chead{ " + header + " }")
header_string = (r"\fancyhead[L]{ " + header + " }")

with open('header.tex', 'r') as file:
data = file.readlines()

# now change the last but 2nd line, note that you have to add a newline
data[-2] = header_string+'\n'
# insert the header, note that you have to add a newline
data[4] = header_string+'\n'

# re-write header.tex file with new header string
with open('header.tex', 'w') as file:
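The change from `data[-2]` to `data[4]` swaps a fragile relative index for a fixed positional one: counting from the end breaks as soon as lines are appended to `header.tex`, whereas the regenerated header line always sits at the same position near the top. A sketch with a hypothetical in-memory stand-in for `header.tex` (the exact file contents here are illustrative, not the repository's):

```python
# Hypothetical stand-in for header.tex: fixed preamble lines, then the
# header line (index 4) that the build script regenerates on each run.
data = [
    "% header file\n",
    "% DO NOT EDIT THE LINES BELOW (regenerated by the build)\n",
    "\\usepackage{fancyhdr}\n",
    "\\pagestyle{fancy}\n",
    "\\fancyhead[L]{ OLD HEADER }\n",
    "% Edit from here below\n",
    "\\fancyfoot[L]{\\thepage}\n",
]

header_string = r"\fancyhead[L]{ Brain Imaging Data Structure v1.4.1-dev 2020-09-16 }"
# Fixed index: stable no matter what gets appended after the preamble.
# The old data[-2] would now point at "% Edit from here below" instead.
data[4] = header_string + "\n"
print(data[4].strip())
```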
@@ -112,7 +112,7 @@ def remove_internal_links(root_path, link_type):
# regex that matches references sections within the same markdown
primary_pattern = re.compile(r'\[([\w\s.\(\)`*/–]+)\]\(([#\w\-._\w]+)\)')

for root, dirs, files in os.walk(root_path):
for root, dirs, files in sorted(os.walk(root_path)):
for file in files:
if file.endswith(".md"):
with open(os.path.join(root, file), 'r') as markdown:
@@ -179,6 +179,13 @@ def correct_table(table, offset=[0.0, 0.0], debug=False):
if i != 1:
nb_of_chars.append([len(elem) for elem in row])

# sanity check: nb_of_chars is list of list, all nested lists must be of equal length
if not len(set([len(i) for i in nb_of_chars])) == 1:
print('ERROR for current table ... "nb_of_chars" is misaligned, see:\n')
print(nb_of_chars)
print('\nSkipping formatting of this table.\n')
return table

# Convert the list to a numpy array and computes the maximum number of chars for each column
nb_of_chars_arr = np.array(nb_of_chars)
max_chars_in_cols = nb_of_chars_arr.max(axis=0)
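The new guard protects `correct_table` from malformed tables: `nb_of_chars` holds one list of cell widths per row, and a ragged list-of-lists would not convert to the 2-D NumPy array that the `max(axis=0)` call expects. Checking that the set of row lengths collapses to a single value catches the problem early and lets the script skip the table instead of crashing. A standalone sketch of the check (the cell widths are invented):

```python
import numpy as np

# Each inner list holds the character count of every cell in one table row.
well_formed = [[4, 10, 7], [2, 8, 5], [6, 3, 9]]
ragged = [[4, 10, 7], [2, 8], [6, 3, 9]]  # a row lost a cell

def is_aligned(nb_of_chars):
    # All rows must report the same number of columns, i.e. the set of
    # row lengths must contain exactly one value.
    return len(set(len(row) for row in nb_of_chars)) == 1

print(is_aligned(well_formed))  # True
print(is_aligned(ragged))       # False

# Only an aligned list is safe to feed to the column-width computation:
print(np.array(well_formed).max(axis=0))  # [ 6 10  9]
```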
@@ -285,7 +292,7 @@ def correct_tables(root_path, debug=False):
.. [1] https://stackoverflow.com/a/21107911/5201771
"""
exclude_files = ['index.md', '01-contributors.md']
for root, dirs, files in os.walk(root_path):
for root, dirs, files in sorted(os.walk(root_path)):
for file in files:
if file.endswith(".md") and file not in exclude_files:
print('Check tables in {}'.format(os.path.join(root, file)))
17 changes: 10 additions & 7 deletions src/02-common-principles.md
@@ -426,13 +426,13 @@ Note that if a field name included in the data dictionary matches a column name
then that field MUST contain a description of the corresponding column,
using an object containing the following fields:

| Field name | Definition |
| :---------- | :--------------------------------------------------------------------------------------------------------------------------- |
| LongName | OPTIONAL. Long (unabbreviated) name of the column. |
| Description | RECOMMENDED. Description of the column. |
| Levels | RECOMMENDED. For categorical variables: a dictionary of possible values (keys) and their descriptions (values). |
| Units | RECOMMENDED. Measurement units. SI units in CMIXF formatting are RECOMMENDED (see [Units](./02-common-principles.md#units)). |
| TermURL | RECOMMENDED. URL pointing to a formal definition of this type of data in an ontology available on the web. |
| **Key name** | **Requirement level** | **Data type** | **Description** |
| ------------ | --------------------- | ------------------------- | --------------------------------------------------------------------------------------------------------------- |
| LongName | OPTIONAL | [string][] | Long (unabbreviated) name of the column. |
| Description | RECOMMENDED | [string][] | Description of the column. |
| Levels | RECOMMENDED | [object][] of [strings][] | For categorical variables: An object of possible values (keys) and their descriptions (values). |
| Units | RECOMMENDED | [string][] | Measurement units. SI units in CMIXF formatting are RECOMMENDED (see [Units](./02-common-principles.md#units)). |
| TermURL | RECOMMENDED | [string][] | URL pointing to a formal definition of this type of data in an ontology available on the web. |

Please note that while both `Units` and `Levels` are RECOMMENDED, typically only one
of these two fields would be specified for describing a single TSV file column.
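The reworked table doubles as a schema for a single column entry in a JSON data dictionary. A hedged sketch of what such an entry could look like (the column name `handedness` and its levels are invented for illustration), built in Python so the data types line up with the table: strings for `LongName` and `Description`, and an object of strings for `Levels`:

```python
import json

# Hypothetical sidecar entry describing a "handedness" column of a TSV
# file.  Per the table above: Levels is an object mapping possible
# values (keys) to their descriptions (values), and typically only one
# of Levels/Units is given for a single column.
column_entry = {
    "handedness": {
        "LongName": "Participant handedness",           # string
        "Description": "Self-reported dominant hand.",  # string
        "Levels": {                                     # object of strings
            "L": "left-handed",
            "R": "right-handed",
        },
    }
}
print(json.dumps(column_entry, indent=2))
```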
@@ -653,3 +653,6 @@

[dataset-description]: 03-modality-agnostic-files.md#dataset-description
[derived-dataset-description]: 03-modality-agnostic-files.md#derived-dataset-and-pipeline-description
[string]: https://www.w3schools.com/js/js_json_syntax.asp
[strings]: https://www.w3schools.com/js/js_json_syntax.asp
[object]: https://www.json.org/json-en.html