Enable doc tests with new test infra #391

Open

wants to merge 3 commits into master
Conversation

atakavci (Collaborator)

No description provided.

Comment on lines 16 to 24
  public class Bf_tutorial : AbstractNRedisStackTest, IDisposable
  {

- public void run()
+ public Bf_tutorial(EndpointsFixture fixture) : base(fixture) { }
+
+ [SkippableTheory]
+ [MemberData(nameof(EndpointsFixture.Env.StandaloneOnly), MemberType = typeof(EndpointsFixture.Env))]
+ public void run(string endpointId)
  {
- var muxer = ConnectionMultiplexer.Connect("localhost:6379");
- var db = muxer.GetDatabase();
+ var db = GetCleanDatabase(endpointId);
andy-stark-redis (Collaborator)
My main concern is that the user can see this part of the code when they view the whole file (e.g., if you go here and click the little eye widget on any code tab).

It looks like some of this isn't code that we can recommend to users as part of the example. We can filter out [SkippableTheory] and the other attributes when we pull the example into the docs. We might also be able to remove the inherited test classes, etc. However, I'm not sure about the GetCleanDatabase(). If this is the best/only way to handle the tests, then we might have to add something to the docs processing, a bit like:

// REMOVE_START
var db = GetCleanDatabase(endpointId); // <-- Code that actually gets tested.
// REMOVE_END
/* ADD_START <-- New processing command
var muxer = ConnectionMultiplexer.Connect("localhost:6379"); <-- Code the user sees.
var db = muxer.GetDatabase();
ADD_END */

I guess I could implement this fairly easily in the Python script that processes the docs, but it would mean that the code the user sees isn't exactly the same as the code that gets tested in the NRedisStack repo. Maybe that wouldn't be a problem since most of the real code is the same?
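To make the idea concrete, here is a minimal sketch of what that pass could look like in the processing script, assuming a simple line-by-line walk over the example file. The marker names follow the proposal above; the function name and structure are hypothetical, not the script's actual API:

def process_example(lines):
    # Drop lines inside // REMOVE_START .. // REMOVE_END and unwrap the
    # commented-out lines inside /* ADD_START .. ADD_END */ so they become
    # the code the user sees in the docs.
    out = []
    removing = False   # inside a REMOVE block
    adding = False     # inside an ADD block
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("// REMOVE_START"):
            removing = True
        elif stripped.startswith("// REMOVE_END"):
            removing = False
        elif stripped.startswith("/* ADD_START"):
            adding = True
        elif stripped.startswith("ADD_END"):
            adding = False
        elif adding or not removing:
            out.append(line)
    return out

Run over the snippet above, this would drop the GetCleanDatabase(endpointId) line and keep the ConnectionMultiplexer lines, so the published example shows only the user-facing code while the repo still tests the real infra call.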

I'm not sure about the best way to handle this, but there are probably other options. Aside from this, it's great to see improvements to the docs testing :-)

atakavci (Collaborator, Author)

@andy-stark-redis I'm at the same spot as you on this concern.
I guess I just jumped in to enable doc tests with the new infra, but it looks like I was too quick to catch what you've shown.
Running and displaying different code is not ideal, I believe.
So how do other client libs handle this? Let me check.
