Support lower / higher bounds in range loop #56
I think a Substring-like function may be a better fit here. Something along the lines of this demo: https://play.golang.org/p/vl69PmPTXx

The usage would be something like:

```
{{ range Loop 7 }}
{{ Substring (Int 9999999) 2 7 }}
{{ end }}
```

2 would be the minimum, 7 would be the maximum number of characters. Not sure yet how to implement this, just a first thought.
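A rough sketch of what such a `Substring` helper could look like, based on the usage above. The name, signature, and length semantics here are assumptions for illustration, not fakedata's actual API:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Substring is a hypothetical template helper: given a string and a
// minimum/maximum length, it returns a prefix of random length between
// min and max characters (capped at len(s)). Illustrative only.
func Substring(s string, min, max int) string {
	if max > len(s) {
		max = len(s)
	}
	if min > max {
		min = max
	}
	n := min + rand.Intn(max-min+1) // random length in [min, max]
	return s[:n]
}

func main() {
	// e.g. a prefix of "9999999" between 2 and 7 characters long
	fmt.Println(Substring("9999999", 2, 7))
}
```

The output is random by design, so a caller can only rely on the length bounds, not on a specific value.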
@lucapette I think in my case I'm working with generating a JSON object, a key of which points to an … Here's how I think this could look:
Also, it follows the same pattern as …
@gmile Indeed, it does follow the pattern of …
@KevinGimbel oops, I just realized I should have referred to you in my reply, but instead I mentioned Luca's GitHub name. Sorry about that!
@gmile No worries, I subscribe to all issues and read almost everything 😀 I'll see about implementing a demo/prototype if I get the time. :)
@KevinGimbel I've opened #61 with a minimal patch to get this working. I don't know … My main question, though, is how do I add tests for this. I figured the current implementation of …
@gmile thank you! The PR looks good on first sight. Tests in Go are written in a file ending in `_test.go`. For fakedata we use Table Driven Tests, which look like this:

```go
func TestSeparatorFormatter(t *testing.T) {
	tests := []struct {
		name string
		sep  string
		want string
	}{
		{"default", " ", "Grace Hopper example.com"},
		{"csv", ",", "Grace Hopper,example.com"},
		{"tab", "\t", "Grace Hopper\texample.com"},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			f := &fakedata.SeparatorFormatter{Separator: tt.sep}
			if got := f.Format(columns, values); !reflect.DeepEqual(got, tt.want) {
				t.Errorf("SeparatorFormatter.Format() = %v, want %v", got, tt.want)
			}
		})
	}
}
```

For each test we create a slice of structs (`tests`) and then loop over it with

```go
for _, tt := range tests {}
```

In the above test this loop has a body of

```go
f := &fakedata.SeparatorFormatter{Separator: tt.sep}
if got := f.Format(columns, values); !reflect.DeepEqual(got, tt.want) {
	t.Errorf("SeparatorFormatter.Format() = %v, want %v", got, tt.want)
}
```

Which means: we create a `SeparatorFormatter` with the separator from the current test case, format the test data with it, and compare the result to what we wanted. If it's not what we wanted we use `t.Errorf` to fail the test.

I hope this makes sense. For now I'd say you can ignore the tests but make sure the code is formatted correctly with `gofmt`.
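Following the same table-driven pattern, a test for the bounded loop from #61 might look like the sketch below. Here `Loop` is only a stand-in I made up for illustration (a function returning a slice whose length falls in `[min, max]`); since its output length is random, the assertions check bounds rather than exact values:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Loop is a hypothetical stand-in for the bounded loop helper discussed
// in #61: it returns a slice whose length is random in [min, max].
func Loop(min, max int) []int {
	return make([]int, min+rand.Intn(max-min+1))
}

func main() {
	// Table-driven check in the same spirit as TestSeparatorFormatter.
	tests := []struct {
		name     string
		min, max int
	}{
		{"small range", 2, 7},
		{"single value", 5, 5},
	}
	for _, tt := range tests {
		got := len(Loop(tt.min, tt.max))
		if got < tt.min || got > tt.max {
			fmt.Printf("%s: len = %d, want between %d and %d\n", tt.name, got, tt.min, tt.max)
		} else {
			fmt.Printf("%s: ok\n", tt.name)
		}
	}
}
```

In a real `_test.go` file the `fmt.Printf` calls would be `t.Errorf` inside `t.Run`, exactly as in the `SeparatorFormatter` test above.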
@KevinGimbel wow, thanks for such a thorough response! I'll have to carefully read it later today, but after a quick glance it all makes sense. From what I understand, you describe a case where … So that's cool. I'm used to writing tests in Ruby (where mocking would be heavily involved) and these days Elixir (almost no mocking; things are somewhat similar to what you have described). Thanks for all the … I've fixed the PR by pushing the code through `gofmt`.
@gmile You're more than welcome! I'm fairly new to Go myself but I like sharing the knowledge I gained whenever I can. Luca helped me a lot with tests and Table-Driven Tests :)

That's correct. We have a slice which is a set of tests we define. They have a `name`, an input (`sep`), and an expected output (`want`).
These slices of tests are like mocked test data. We supply the function we want to test with a set of inputs. I try to have some that fail and some that pass, so that I can check my function handles correct input as well as wrong input.
@gmile I wrote about the kind of technique we use for integration tests on fakedata, you can read it here: http://lucapette.me/writing-integration-tests-for-a-go-cli-application The problem with testing …

@KevinGimbel thank you so much for your work. You're doing a *stellar* job and …
As an idea, would it be possible to make a pluggable random generator, and inject it as a dependency into all functions that rely on randomness? When building … By a simplified random generator I mean a function that just iterates values within a given type, starting from some "zero" value. So randomness becomes "controlled randomness" during tests. A few examples:

And so on.
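The pluggable-generator idea above could be sketched like this. Everything here is hypothetical (the `Rand` interface, `seqRand`, and `IntBetween` are names I made up to illustrate the dependency-injection seam, not fakedata's code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// Rand is a hypothetical seam for randomness: production code plugs in
// the real generator, tests plug in a deterministic one.
type Rand interface {
	Intn(n int) int
}

// realRand delegates to math/rand ("native randomness").
type realRand struct{}

func (realRand) Intn(n int) int { return rand.Intn(n) }

// seqRand is the "controlled randomness" described above: it just
// iterates values starting from zero, wrapping at n.
type seqRand struct{ next int }

func (s *seqRand) Intn(n int) int {
	v := s.next % n
	s.next++
	return v
}

// IntBetween takes its randomness as a dependency instead of calling
// math/rand directly, so tests can assert on exact values.
func IntBetween(r Rand, min, max int) int {
	return min + r.Intn(max-min+1)
}

func main() {
	fmt.Println(IntBetween(realRand{}, 2, 7)) // unpredictable
	s := &seqRand{}
	fmt.Println(IntBetween(s, 2, 7), IntBetween(s, 2, 7)) // prints "2 3" — deterministic
}
```

With `seqRand` injected, a unit test can assert on exact outputs; with `realRand` the same code behaves as it would in production.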
@gmile that wouldn't really test the only thing we don't test already. So if you look at how the test suite is organized, we do test that things are glued together correctly.

I think it gets a bit philosophical at this point but I don't mind :) The thing is that I'm no big fan of unit-level testing (where unit means testing "one thing with no dependencies") because in reality a green test of this kind can still mean the system doesn't work. My opinion is somewhat unpopular, though, as there are a lot of people advocating for injecting dependencies for the sake of testability. I find the technique helpful at times, especially when dealing with units that make heavy use of time, timezones, and date-related operations. So I'm not saying it's a no-go in general.

My point is that I prefer (and therefore always try that way first) tests that are green only when the system exhibits the correct behaviour and red when not. I do understand no test can give that guarantee, but I've definitely seen that property more often in what we generally call "integration tests". So to go back to …

Sorry for the rather philosophical and pretty long answer!
@lucapette thank you for laying out your vision in long form, I don't mind that at all! This is indeed a philosophical topic.
My vision regarding tests aligns with yours in that integration tests are vital and absolutely essential. I see no replacement for them. I prefer having only a handful of critical integration tests that cover the big happy and negative cases. That is to ensure the main code paths just work, and that the functions those paths are comprised of are glued together correctly.
I absolutely agree. Unit tests alone won't tell you whether the system is working or not. Though I definitely see a place for unit tests, specifically to verify that a function behaves as expected against particular edge-case inputs. In the description of #61 I've listed some of them. Given the above, for example, I'd add 2 integration tests to #61:
I'd probably even go all-in with integration testing, by running a compiled binary and checking stdout using regexps. How crazy is that? ;)

For all other cases I'd have a bunch of unit tests written, to see that a function behaves as expected (either it returns an error or some sort of expected good value). Since asserting on an expected value is hard due to randomness, this is where I believe extracting randomness into some form of interface/adapter would make sense. Unit tests would benefit from a pluggable randomness interface by providing "controlled randomness", while integration tests could continue to run on "native randomness". Even though I do have my own preferences, I don't feel strongly about testing in …
@lucapette what do you think can be done about #61? What tests should I cover it with, if any?
@gmile sorry for getting back to you so late! It was a busy week. I'm enjoying this conversation a lot, and I feel we have a similar way of approaching tests. Your suggestion is indeed exactly what I would cover in the integration tests for #61. The short-term solution would be to use … While reviewing this issue, and your arguments, I started considering that we should indeed make the "unit test level" at least possible (using the pluggable randomness you're talking about) so that, in the long term, we can cover all the aspects of a specific feature. The result would be:
I like that, but I think we shouldn't tie it to #61, so I suggest I take care of creating issues/milestones for the testing infrastructure of … About your question of what to specifically do with #61, I would add the two tests you suggested at the integration level. I think it should be enough to extend this table. If you don't feel comfortable doing it, don't worry about it. You can just say so and I'll merge #61 and take care of the tests on my own! Thank you very much for everything (the conversation was very stimulating, and the contribution in terms of code and food for thought really awesome!)
Right now the following template will always produce a sequence of 5 numbers: …

It'd be great to have the ability to specify lower and higher bounds, so that `{{ range Loop x y }}` would generate a sequence of random length: on each iteration specified by `-l 5`, `{{ range Loop x y }}` would produce a random number between `x` and `y`, inclusive, and loop that many times.
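A minimal sketch of the requested behaviour, wired into Go's `text/template`. The `Loop` implementation and the `Int` stand-in generator are assumptions for illustration, not fakedata's actual code:

```go
package main

import (
	"math/rand"
	"os"
	"text/template"
)

// Loop sketches the requested helper: it returns a slice of random
// length in [min, max], so `{{ range Loop x y }}` iterates between
// x and y times, inclusive. Illustrative only.
func Loop(min, max int) []int {
	return make([]int, min+rand.Intn(max-min+1))
}

func main() {
	funcs := template.FuncMap{
		"Loop": Loop,
		"Int":  func() int { return rand.Intn(100) }, // stand-in generator
	}
	tmpl := template.Must(template.New("demo").Funcs(funcs).Parse(
		"{{ range Loop 2 7 }}{{ Int }}\n{{ end }}"))
	// Prints between 2 and 7 random numbers, one per line.
	tmpl.Execute(os.Stdout, nil)
}
```

Because `range` in `text/template` iterates over the slice `Loop` returns, the random slice length directly controls the number of iterations — which is exactly the semantics the issue asks for.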