
support for unit testing and documentation #128


Closed
PavelVozenilek opened this issue Feb 25, 2016 · 3 comments

Comments

@PavelVozenilek

The language may directly support unit testing:

TEST()
{
  // unnamed test
  ...
}

TEST(some-name)
{
   // named test
  ...
}

Unit tests would be top-level constructs, but could possibly also be placed inside the code to hint at what they test:

if (...) {
    ....
    TEST()
    {
       ...
    }
}

The tests would work in the context of the current source file (i.e. they would use the imports already present).

When compiled in, they could be invoked auto-magically at the start of the main routine, either all unit tests or only those from recently modified source files. Execution order could be randomized. Each test should detect leaks and should not modify global data.

Named tests could be invoked individually.
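
To make the mechanics concrete, here is a rough C sketch (not the proposed language feature itself; the names are made up and the constructor attribute is GCC/Clang-specific) of how self-registering tests could be run at the start of main in randomized order:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef void (*test_fn)(void);

/* fixed-size registry, kept small for brevity */
static test_fn     tests[256];
static const char *test_names[256];  /* kept so a named test could be looked up and run individually */
static int         test_count = 0;

/* TEST(name) defines a test and registers it at load time. */
#define TEST(name)                                                     \
    static void test_##name(void);                                     \
    __attribute__((constructor)) static void register_##name(void) {   \
        test_names[test_count] = #name;                                \
        tests[test_count] = test_##name;                               \
        test_count++;                                                  \
    }                                                                  \
    static void test_##name(void)

static void run_all_tests(void)
{
    srand((unsigned)time(NULL));
    /* Randomize execution order (Fisher-Yates shuffle). */
    for (int i = test_count - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        test_fn t = tests[i]; tests[i] = tests[j]; tests[j] = t;
        const char *n = test_names[i]; test_names[i] = test_names[j]; test_names[j] = n;
    }
    for (int i = 0; i < test_count; i++)
        tests[i]();          /* silent when everything passes */
}

TEST(addition_is_commutative)
{
    if (1 + 2 != 2 + 1) { fprintf(stderr, "FAIL: addition\n"); exit(1); }
}

int main(void)
{
    run_all_tests();         /* invoked "auto-magically" before the real work */
    /* ... the program's normal work would follow ... */
    return 0;
}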


The language could also support easy-to-write examples. I assume there will be a tool generating the documentation which would use these examples.

EXAMPLE()
{
  // code here, if copy-pasted into an empty file, would compile and run
  ...
}

The compiler would make sure that the example compiles as standalone code, has no leaks, and doesn't crash.

For short, incomplete code examples, one could specify which part will be presented in the documentation:

EXAMPLE()
{
  // here is initial code, not shown in docs
 ...

  ------ // this is the delimiter

  // code here is shown in docs
  ...

  ------
  // cleanup code, not shown in docs
  ...
}

An example could be assigned to the previously defined function by default, or assigned explicitly:

EXAMPLE(func1, func2)
{
  // this example will be shown for both functions func1 and func2
  ...
}
@andrewrk (Member)

We already have this for tests; see some of the files in std/ and test/self_hosted.zig.

@andrewrk (Member)

As for examples, I'm thinking they will be handled at the build layer. Zig doesn't have the concept of a build configuration yet, but it's planned. In that file you will specify your examples and their dependencies; then, when generating docs, Zig can automatically find which functions the examples call and link the examples next to those functions in the generated docs.

I'll consider #attribute("example") to go along with #attribute("test").

@andrewrk added the enhancement label on Feb 25, 2016
@PavelVozenilek (Author)

#attribute("test") needs explicitly named function,

TEST()
{
   ...
}

does not require one to invent any name. In test-heavy code that matters.

I once implemented similar functionality in C (a rough sketch follows the list below):

  • Tests from recently modified files were executed automatically at the beginning of every debug run (when the Shift key was held down, all tests were run). When there were no problems, nothing was shown to the user: no console output, no dialogs. This made testing quick and non-annoying.
  • The file/line of the currently running test was written into a text file. If something crashed, I'd know where to start.
  • A wrapper around memory allocation ensured there were no leaks in any test.
  • assert was aware of testing. If it failed, it would also show which test (if any) was running at the moment.
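
A rough, hypothetical sketch of how this kind of setup can be wired together in C (the names, the log-file path, and the allocation wrappers below are illustrative, not the original implementation):

#include <stdio.h>
#include <stdlib.h>

/* Location of the currently running test, so a failure report can say
 * which test was active (and a crash leaves a trace on disk). */
static const char *current_test_file = NULL;
static int         current_test_line = 0;

/* Allocation wrapper: counts live blocks so every test can be checked
 * for leaks when it finishes. */
static long live_allocations = 0;

static void *test_malloc(size_t n) { live_allocations++; return malloc(n); }
static void  test_free(void *p)    { if (p) live_allocations--; free(p); }

/* Test-aware assert: on failure it also reports the running test. */
#define TEST_ASSERT(cond)                                                   \
    do { if (!(cond)) {                                                     \
        fprintf(stderr, "assert failed (%s) while running test at %s:%d\n", \
                #cond, current_test_file, current_test_line);               \
        abort();                                                            \
    } } while (0)

/* Record the location in a text file before running, so a crash points
 * at the right place, then verify the test leaked nothing. */
#define RUN_TEST(fn)                                                        \
    do {                                                                    \
        current_test_file = __FILE__;                                       \
        current_test_line = __LINE__;                                       \
        FILE *log = fopen("last_test.txt", "w");                            \
        if (log) { fprintf(log, "%s:%d %s\n", current_test_file,            \
                           current_test_line, #fn); fclose(log); }          \
        long before = live_allocations;                                     \
        fn();                                                               \
        TEST_ASSERT(live_allocations == before);  /* leak check */          \
    } while (0)

static void buffer_roundtrip(void)
{
    char *buf = test_malloc(16);
    buf[0] = 'x';
    TEST_ASSERT(buf[0] == 'x');
    test_free(buf);            /* forgetting this trips the leak check */
}

int main(void)
{
    RUN_TEST(buffer_roundtrip);
    return 0;                  /* silence on success, as described */
}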
