
Conclusion #11

Open
unphased opened this issue Apr 13, 2024 · 1 comment
Comments

unphased commented Apr 13, 2024

Hi, I came across this via nodejs/node-v0.x-archive#7161 while trying to learn more about assert.deepStrictEqual. It's a fairly recent development, at least to me, though it seems it was being worked on ten years ago. Thank you for going in depth on this whole topic; I hope I can pick your brain a bit, @loveencounterflow.

I read your whole readme, and it was extremely informative. However, now that I'm trying to collect my thoughts and incorporate what you taught me, it's difficult to draw actionable conclusions. You see, I too maintain a testing library, built around my personal software workflow, and checking for equivalence is obviously a critical, basic piece of a testing interface. At first I used node-deep-equal, because assert isn't available in a browser, but I soon discovered it has some less-than-ideal performance properties, so I'm currently using assert.deepStrictEqual in my library.

  1. Given how deep I already am, it wouldn't be too big of a lift to run your jseq assertion meta-tester/scorer tool. Still, I can't help but feel it would be great if we could automate running it in GitHub Actions (or any other CI) and publish the results, so the whole community can be apprised of the current state of affairs. I might want to contribute that, but I'll probably need to pick your brain on a few things first.
  2. I think there are a lot of missing deep-equality checking routines. There's the aforementioned assert.deepStrictEqual, which you helped bring into existence, and there's the long-standing JSON.stringify(a) === JSON.stringify(b), which I've seen reported (I'll do microbenchmarks on this soon... low priority...) to be STILL faster than assert.deepStrictEqual. I'm quite interested in comparing performance, but correctness definitely comes first.
  3. Would lodash score 100 if we adjusted ("dealt with") the +0 === -0 situation? I'm not entirely clear on what the situation is there.
  4. https://github.com/loveencounterflow/jseq?tab=readme-ov-file#plus-and-minus-points How would you like to reconcile this plus/minus scoring with the pass/fail test-percentage results? Is the determination of these points automated (or even automatable), or would we need to enumerate them manually? Either way, it would be helpful to put them together with the pass/fail numbers in one table, don't you think?
  5. Assuming none of the implementations scores 100, would your recommendation always be to choose the topmost one? You alluded to combining different equality-check implementations to get a 100% correct one, but that combination was not provided. I'm hopeful that assert.deepStrictEqual solves it once and for all, in which case stating that in the readme would be great.
unphased (Author) commented
Thought about it a bit more, and since I am spinning up a framework that is purpose-built for test evaluation and benchmarking, I should just take a page out of your book, bring in your 212 test cases, and then generate the data analysis using my framework. This would be a pretty good test case for my test library, if nothing else.
