
335 differential precision (chevron) #343

Merged
16 commits merged on Jan 13, 2023

Conversation

BFalquet
Contributor

close #335
This is a proof of concept applied to lbt01:

  • add precision arguments
  • put a blank in the baseline CHG cell

Thank you for your suggestions.

@BFalquet BFalquet requested a review from clarkliming December 16, 2022 11:58
@github-actions
Contributor

github-actions bot commented Dec 16, 2022

Unit Tests Summary

1 file, 19 suites, 1m 5s ⏱️
93 tests: 64 ✔️ passed, 29 💤 skipped, 0 failed
209 runs: 137 ✔️ passed, 72 💤 skipped, 0 failed

Results for commit 8fde04e.

♻️ This comment has been updated with latest results.

R/lbt01.R Outdated
# xn is presumably the decimal suffix derived from the precision argument
# (an assumption; e.g. ".xx" for a precision of 2)
n_fmt <- "xx"
mean_sd_fmt <- sprintf("xx%s (xx%s)", xn, xn)
median_fmt <- sprintf("xx%s", xn)
min_max_fmt <- "xx.x - xx.x"
Contributor

For the min_max format, I think we usually have one decimal point less than the mean/sd precision, e.g. for mean/sd we have xx.xx, for min-max we have xx.x. What do you think @barnett11?

Contributor

Yes, I agree, thanks @clarkliming:
3.1. Measures of location and dispersion (e.g., mean, median, variance, SD, SE, and corresponding confidence intervals) are displayed with 1 more decimal place of precision than the most precise collected data.
3.2. Percentiles (eg, Q1, Q3, 99th percentile) are displayed with 1 more decimal place of precision than the most precise collected data.
3.3. When data are summarized in the same unit as collected, the minimum and the maximum are displayed with the same number of decimal places of precision as the most precise collected data.
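For illustration, a minimal sketch of how rules 3.1 and 3.3 could translate into formatters-style format strings, given the precision of the collected data (the helper name h_format_precision and its exact shape are assumptions, not the code in this PR):

# Hypothetical helper: derive format strings from the precision of the
# collected data, following rules 3.1 and 3.3 quoted above.
h_format_precision <- function(precision) {
  stopifnot(is.numeric(precision), precision >= 0)
  dec <- function(n) if (n > 0) paste0("xx.", strrep("x", n)) else "xx"
  stat <- dec(precision + 1) # mean/sd/median: one extra decimal (rule 3.1)
  raw <- dec(precision)      # min/max: same precision as collected (rule 3.3)
  list(
    mean_sd = sprintf("%s (%s)", stat, stat),
    median = stat,
    min_max = sprintf("%s - %s", raw, raw)
  )
}

h_format_precision(1)
#> $mean_sd
#> [1] "xx.xx (xx.xx)"
#>
#> $median
#> [1] "xx.xx"
#>
#> $min_max
#> [1] "xx.x - xx.x"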

@@ -94,7 +103,50 @@ lbt01_1_lyt <- function(armvar,
varlabels = summaryvars_lbls,
nested = TRUE
) %>%
summarize_colvars() %>%
analyze_colvars(
Contributor

We may need to abstract this function into tern at some point so that other templates can also use it.
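As a rough illustration, the kind of reusable analysis function this could become might look like the sketch below (the name afun_lab_summary, its formats argument, and the row set are assumptions, not tern's API):

library(rtables)

# Hypothetical analysis function for analyze_colvars(): summarise one column
# variable (e.g. AVAL or CHG) with formats driven by the requested precision.
afun_lab_summary <- function(x, formats) {
  in_rows(
    "n" = rcell(sum(!is.na(x)), format = "xx"),
    "Mean (SD)" = rcell(c(mean(x, na.rm = TRUE), sd(x, na.rm = TRUE)),
                        format = formats$mean_sd),
    "Median" = rcell(median(x, na.rm = TRUE), format = formats$median),
    "Min - Max" = rcell(range(x, na.rm = TRUE), format = formats$min_max)
  )
}

The precision-derived formats (e.g. from a helper like h_format_precision above) could then be supplied to the afun, for instance through analyze_colvars()'s extra_args argument.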

@clarkliming
Contributor

@barnett11 could you please help review this output, where the change-from-baseline cell at baseline is cleaned up and precisions are supported?

Contributor

@barnett11 barnett11 left a comment

I'm getting some odd results.
Precision of 2 - is that for the stats or the incoming value? So if I select 2, should I see the mean to 3 dp and Min-Max to 2? I'm not seeing that currently.
image
Also an issue if set to 0 dp:
image
The Min-Max is causing warnings.

Should precision values be based on PARAMCD, as that would be easier to write?

@clarkliming
Contributor

I agree that we can use PARAMCD to input the precisions, so we will need split_rows_by(var = "PARAMCD", labels_var = "PARAM"). This makes specifying the precisions much easier.
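For illustration, a layout skeleton along those lines (a sketch with assumed ADLB variable names and a trivial afun, not chevron's actual lbt01 code):

library(rtables)
library(magrittr)

# Precisions supplied per PARAMCD (hypothetical values)
precision <- c(ALT = 0, CRP = 1, IGA = 2)

lyt <- basic_table() %>%
  split_cols_by("ARM") %>%
  split_rows_by("PARAMCD", labels_var = "PARAM") %>%
  split_rows_by("AVISIT") %>%
  split_cols_by_multivar(c("AVAL", "CHG")) %>%
  # in the template, the afun would look up precision[PARAMCD] (e.g. via the
  # .spl_context argument) to build its formats; a trivial afun is used here
  analyze_colvars(afun = function(x) rcell(mean(x, na.rm = TRUE), format = "xx.xx"))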

@BFalquet BFalquet marked this pull request as draft January 3, 2023 18:25
@BFalquet BFalquet marked this pull request as ready for review January 4, 2023 07:56
@BFalquet
Contributor Author

BFalquet commented Jan 4, 2023

New changes:

  • the precisions are now given per value of "PARAMCD" to make the specification more concise
  • the precision provided corresponds to the number of digits presented for min/max/median; mean and sd automatically get one more digit of precision
  • new helper functions are used to create the formats with an arbitrary precision (see the sketch below)
  • the number of digits is no longer limited to 3
  • tests for the helper functions have been added

Thank you for your review.
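For illustration, the kind of helper that builds a format for an arbitrary precision could look like the sketch below (h_fmt_mean_sd is a hypothetical name, not necessarily one of the helpers added in this PR):

library(formatters)

# Hypothetical helper: build a "mean (SD)" format function for any precision
# via sprintf, instead of the fixed set of "xx.xx"-style format labels.
h_fmt_mean_sd <- function(precision) {
  digits <- precision + 1 # one extra decimal for mean/sd
  sprintf_format(sprintf("%%.%df (%%.%df)", digits, digits))
}

# precision 5 for the collected data -> 6 decimals for mean/sd
format_value(c(1.2345678, 0.9876543), format = h_fmt_mean_sd(5))
#> [1] "1.234568 (0.987654)"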

@github-actions
Contributor

github-actions bot commented Jan 4, 2023

🧪 Test coverage: 89.58%

Code Coverage Summary

Filename                     Stmts    Miss  Cover    Missing
-------------------------  -------  ------  -------  ---------------------------------
R/aet01.R                      330       6  98.18%   38, 240, 253, 316, 322, 554
R/aet02.R                      222       2  99.10%   136, 489
R/aet03.R                       85       0  100.00%
R/aet04.R                      104       6  94.23%   81-86
R/assertions.R                  36       0  100.00%
R/checks.R                      14       0  100.00%
R/chevron_tlg-S4class.R         21       0  100.00%
R/chevron_tlg-S4methods.R       42      16  61.90%   47-55, 90-98, 218-290
R/cmt01a.R                     191       0  100.00%
R/cmt02_pt.R                    52       0  100.00%
R/dmt01.R                       40       0  100.00%
R/dst01.R                      287       0  100.00%
R/dtht01.R                     104       0  100.00%
R/egt01.R                       46       0  100.00%
R/egt02.R                       60       0  100.00%
R/egt03.R                      129      85  34.11%   32-52, 111, 125-232, 291, 305-339
R/ext01.R                       81       8  90.12%   238-241, 245-248
R/gen_args.R                     1       1  0.00%    27
R/lbt01.R                       84       0  100.00%
R/lbt04.R                       52       1  98.08%   127
R/mht01.R                       72       2  97.22%   33-34
R/mng01.R                       93       9  90.32%   113, 117-120, 133-134, 139, 182
R/pdt01.R                       61      38  37.70%   35-52, 108-150
R/utils.R                      138      83  39.86%   70, 83-185
R/vst01.R                       48       0  100.00%
R/vst02.R                      102       3  97.06%   42, 120, 257
TOTAL                         2495     260  89.58%

Results for commit: c60c4208a9da4f5448009825abbbf9a3ab2ab493

Minimum allowed coverage is 80%

♻️ This comment has been updated with latest results

@clarkliming clarkliming mentioned this pull request Jan 10, 2023
@clarkliming
Contributor

I think this output looks great. @barnett11 can you have a look?

Contributor

@barnett11 barnett11 left a comment

I still need to review the precision implementation, but wanted to quickly raise a default behaviour issue. If no values are collected (n=0) then in CENSORED we see the below:
image
but currently in chevron we get warnings and the below,
image
image

@clarkliming
Contributor

clarkliming commented Jan 11, 2023

I still need to review the precision implementation, but wanted to quickly raise default behaviour issue. If no values are collected (n=0) then in CENSORED we see the below, image but currently in chevron we get warnings and the below, image image

I believe this is a common issue, and we can open new issues to track that. What do you think about this issue @shajoezhu ?

clarkliming added a commit that referenced this pull request Jan 12, 2023
close #341 

other todo in code:

1. change from baseline for baseline, tackled in #343 
2. check grades in mapping, tackled in #324
@barnett11
Contributor

barnett11 commented Jan 12, 2023

This is looking really great, thanks @BFalquet and @clarkliming. An observation and a suggestion:
Observation:
I'm not sure if it's an application of the precision, but for my data I have a value of 6.5. If I run it through lbt01_1 with a precision of 0, it is shown in Min/Max as the value 6, but I would expect 7?
image

Suggestion:
CENSORED currently supplies recommended precisions for the majority of standard Labs we observe in our studies - given the variability we see across tests, is this something we could also implement in citril.metadata?

@barnett11
Contributor

barnett11 commented Jan 12, 2023

This is looking really great thanks @BFalquet and @clarkliming . An observation and a suggestion, Observation: I'm not sure if it's an application of the precision, but for my data I have a value of 6.5. If I run it through lbt01_1 with precision of 0 it is being viewed in Min/Max as value 6, but I would expect 7? image

Suggestion: CENSORED currently supplies recommended precisions for the majority of standard Labs we observe in our studies - given the variability we see across tests is this something we could implement also in citril.metadata?

I've just realised this is due to the different rounding behaviour of R and SAS - do we have/want a convention for this?

@clarkliming
Contributor

clarkliming commented Jan 13, 2023

This is looking really great thanks @BFalquet and @clarkliming . An observation and a suggestion, Observation: I'm not sure if it's an application of the precision, but for my data I have a value of 6.5. If I run it through lbt01_1 with precision of 0 it is being viewed in Min/Max as value 6, but I would expect 7? image
Suggestion: CENSORED currently supplies recommended precisions for the majority of standard Labs we observe in our studies - given the variability we see across tests is this something we could implement also in citril.metadata?

I've just realised this is due to the different nature of roundings between R and SAS - do we have/want a convention for this?

Yes, SAS and R have different rounding conventions: R uses banker's rounding (round half to even) while SAS does not. Banker's rounding is the selected standard (see IEEE 754); I think we should not change this.
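To make the difference concrete (round_half_up below is only an illustrative helper mimicking SAS-style rounding, not something chevron or tern provides):

# R follows IEC 60559 / IEEE 754: round half to even ("banker's rounding")
round(6.5) # 6
round(7.5) # 8

# Illustrative round-half-up helper, mimicking SAS's behaviour
round_half_up <- function(x, digits = 0) {
  scale <- 10^digits
  sign(x) * floor(abs(x) * scale + 0.5) / scale
}
round_half_up(6.5) # 7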

Contributor

@clarkliming clarkliming left a comment

I think we can merge this for now and open new issues for the tests; in addition, the spotted issues of rounding and of NA cells when 0 records are available can be tracked in separate issues.

@shajoezhu
Contributor

shajoezhu commented Jan 13, 2023

I still need to review the precision implementation, but wanted to quickly raise default behaviour issue. If no values are collected (n=0) then in CENSORED we see the below, image but currently in chevron we get warnings and the below, image image

I believe this is a common issue, and we can open new issues to track that. What do you think about this issue @shajoezhu ?

I think this will be common; perhaps we can improve this in the tern functions, it seems like a formatting issue?
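For illustration, one possible shape for such a fix (a sketch only; afun_safe is a hypothetical name, not tern's implementation) is to return blank cells when no values are collected:

library(rtables)

# Hypothetical analysis function: emit blanks when no values were collected,
# instead of letting mean()/range() on empty input produce warnings
afun_safe <- function(x) {
  x <- x[!is.na(x)]
  if (length(x) == 0) {
    in_rows(
      "n" = rcell(0L, format = "xx"),
      "Mean (SD)" = rcell(""),
      "Min - Max" = rcell("")
    )
  } else {
    in_rows(
      "n" = rcell(length(x), format = "xx"),
      "Mean (SD)" = rcell(c(mean(x), sd(x)), format = "xx.xx (xx.xx)"),
      "Min - Max" = rcell(range(x), format = "xx.x - xx.x")
    )
  }
}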

…on@main' into 335-differential-precision-chevron@main
@barnett11 barnett11 self-requested a review January 13, 2023 12:02
Contributor

@barnett11 barnett11 left a comment

The precision approach works well and looks good - the issues identified have been moved to separate issues. Thanks @BFalquet!
