
Add SymfonyInsight stars badge, run [SymfonyInsight sensiolabs amo chrome-web-store redmine spigetratings vaadin-directory visualstudiomarketplacerating wordpress] #2971

Merged: 8 commits, Feb 20, 2019

Conversation

@calebcartwright (Member) commented Feb 10, 2019

Closes #1433 and fixes #2928

@calebcartwright calebcartwright added the service-badge New or updated service badge label Feb 10, 2019
@shields-ci commented Feb 10, 2019

Warnings

⚠️ 📚 Remember to ensure any changes to serverSecrets in services/symfony/symfony-test-helpers.js are reflected in the server secrets documentation

⚠️ 📚 Remember to ensure any changes to serverSecrets in services/symfony/symfony-insight-base.js are reflected in the server secrets documentation

Messages
📖 ✨ Thanks for your contribution to Shields, @calebcartwright!

Generated by 🚫 dangerJS against 0ac28e0

@paulmelnikow paulmelnikow temporarily deployed to shields-staging-pr-2971 February 15, 2019 02:56 Inactive
@PyvesB PyvesB self-requested a review February 17, 2019 09:29
@PyvesB (Member) left a comment

Overall this seems like very good work to me; I haven't spotted anything major. 😉

value: 'project not found',
})

createTest(t, '404 project not found grade')
Member:

Do we really need this second "not found" test? Doesn't the previous live: nonexistent project test the exact same code path/wiring with parent classes?

Member Author:

These tests have been in the codebase for a while; they were just moved to a separate file in this PR as part of adding the new Stars badge. As far as my thinking when I originally added these tests:

My intent for this test was for validating that the expected custom message is being set on a 404 response, just like the test below this one validates the custom 401 message (if this test fails it would mean someone changed the custom message).

The intent for the first test was the actual "live" test trying to validate the underlying API interaction (this test could fail for a variety of reasons: our tokens could have become invalidated, the API behavior could have changed, etc.). In the event the requisite Symfony tokens are not available, then yes, the first test falls back to a mocked behavior that is identical to this test, but that's not the intent/purpose of that first test.

There's some similarity here (especially due to the required fallback behavior on the live test), but they aren't identical IMO and they haven't been problematic.

Member (@PyvesB), Feb 19, 2019:

My intent for this test was for validating that the expected custom message is being set on a 404 response, just like the test below this one validates the custom 401 message (if this test fails it would mean someone changed the custom message).

If the expected message is not correctly set on a 404, the first test will already have failed with the exact same exception message. Adding another mock test does not provide any additional value, for instance by helping us spot a bug or a change in the API that we would not otherwise have spotted. Again, both go through the exact same Shields code. The first one may provide additional value by also checking the API behaviour if credentials are specified; it's a superset of this one, in a way.

they haven't been problematic.

They are problematic in the sense that they add to the codebase, which is not exactly known for its conciseness. It's a bit of additional code to maintain and slightly increases the execution time of our test suites. If we start writing mock tests for every single "not found" case where an equivalent live test already exists, this would add up to hundreds of extra tests which would in no way help us make the code more bug-free or more maintainable. It's also problematic in the sense that it isn't consistent with what we usually do, and isn't advocated in our test documentation.

A few unit tests are probably not worth debating for hours, but I feel somewhat strongly about this because I had very similar comments in #2477, and I thought we had reached an understanding as you removed the tests at the time. I'm open to constructive discussions and switching to a vocal call if need be to better share thoughts on a given matter, but if we decide on an option in a PR and then silently do the exact opposite in subsequent ones, I've just got the feeling that we're going round in circles and that I'm wasting my time as a reviewer. 😕

Member Author:

I've just got the feeling that we're going round in circles and that I'm wasting my time as a reviewer. 😕

Oh no, that's not good! Thank you for reviewing these changes. I really do sincerely appreciate you taking the time to look this over in detail, even if we don't see eye to eye on every item.

and I thought we had reached an understanding as you removed the tests at the time

I believe we've got some fundamentally different opinions around testing philosophies, and FWIW I think having a variety of opinions/perspectives/backgrounds/etc. is a good thing!

I'm often willing to make a requested/suggested code change, even if I don't fully agree, especially if it's relatively minor and helps things move forward. That was the case for me in #2477, but I should have been more explicit that even though I made the change, I still held those differing opinions.

That is also the case for me again here. The purpose of this PR is to add a new badge and fix a reported bug, so I don't think it's worth blocking that long term over a test either. I do still respectfully disagree with some of your points, but I will remove the test anyway because I'd rather get those other changes in.

if we decide on an option in a PR but in subsequent ones silently do the exact opposite again

I did not have a duplicitous intent when I originally refactored the SensioLabs service in #2572 (when these tests were added). There were no existing tests for the legacy service, so as part of that refactoring I genuinely attempted to add tests to validate the service, taking into account some of the uncommon circumstances associated with this particular service. There was a lot of dialog on #2572 over a period of about two and a half weeks, so although I wouldn't personally characterize those changes as "silent", I do apologize if it looks and/or feels like there was something sneaky there.

and not advocated in our test documentation

I spent some time today reading through both our doc/guide on writing service tests (linked in the quote) and the relatively new code walkthrough, a few times each, and I actually believe that we've got an opportunity to augment and/or update our doc on writing service tests to provide some clarification around testing strategy.

For example, the totality of the automated tests for a particular service resides within the directory for that service. Aside from a handful of exceptions, we have a set of tests doing a mixture of semi-unit, semi-integration, and semi-functional validation for that service class, all via IcedFrisby.

The service test guide's verbiage does not, IMO, establish any denotative differences between types of tests (everything is a "service" test), which is why I believe there's a degree of subjectivity around the intent of some service tests.

For a rhetorical example: is a mocked test defined in a *.tester.js that's trying to validate custom logic in a service class transform function more of an integration test because it resides in a *.tester file and uses IcedFrisby? or did the author intend for this to serve the purpose of a more traditional unit test, but wrote the test this way as our service test guide prescribes?

In our code walkthrough, and various other threads, it feels like we're starting to establish a more precise definition of a "unit" test in the context of a service class, and when we'd prefer to use that type of test vs. the types of mocked service tests we've used historically.

I believe that making a clear distinction between the different types of tests (how to write each, when to use which, etc.), in the context of services, would be really beneficial. With that level of clarity, if we've established additional testing strategies/conventions around services, for example that we will not write/accept a unit test if there's an integration test that covers that block of service code (something I strongly disagree with, FWIW), then we'd be able to document those sorts of decisions explicitly and unambiguously.

I'd be more than happy to put my thoughts down on paper as a starting point for broader discussion/debate, and then take whatever we collectively decide and update the service test guide/code walkthrough docs accordingly.

I expect there will be a couple general test-related topics where you and I may have to agree to disagree 😄. However, I'm confident we can collectively come to a consensus on what must be done (and/or not done) here for service classes, and document it so that we won't end up going around this circle again!

Member:

Thanks for taking the time to write up an honest and detailed response, I appreciate it. I do agree that we should not block this PR, so let's get it approved and carry on the discussion in a more appropriate place - and make sure our documentation is updated/clarified to better outline our different testing strategies and needs. 👍

@calebcartwright (Member Author):

I've removed a couple of the tests (although I've kept the mocked 404 and 401 tests in place) and made a minor update to address the issue described in #2928

@paulmelnikow paulmelnikow temporarily deployed to shields-staging-pr-2971 February 20, 2019 03:20 Inactive
@PyvesB PyvesB temporarily deployed to shields-staging-pr-2971 February 20, 2019 09:45 Inactive
@shields-deployment:

This pull request was merged to the master branch. This change is now waiting for deployment, which usually happens within a few days. Stay tuned by joining our #ops channel on Discord!

After deployment, changes are copied to the gh-pages branch.

Labels
service-badge New or updated service badge
4 participants