Add loopback test for SLINK #95
Conversation
This is great! Especially the random traffic generation, since that verifies the correct stream handshake as well as the stream handling in software.

Just one question: what is this (could you provide a link to the corresponding documentation)? `stall_config => new_stall_config(0.05, 1, 10)`

Right now, the simple testbench and the advanced one do the same thing. This is good for understanding VUnit (good for me! 😄), but I think we should actually use the VUnit features to do advanced verification. So I think we should have a new test program.
It means the slave randomly stalls: with probability 0.05 it inserts a stall of between 1 and 10 cycles (see VUnit's stall configuration for AXI-Stream verification components).

This is the array_axis_vcs example from VUnit with the stall probability of the slave set to 0% or 50%. It is exactly the same test; it's also a loopback (but implemented in hardware, instead of on a CPU). You can see that execution time is 9 us without stalls (probability 0%) and 16 us otherwise.
Absolutely agree. That was the purpose of maintaining both a VUnit testbench and a simple one. After this PR is merged, we should use Wishbone and AXI-Lite VCs for testing the external interface and the bridge. Then, we can remove all the interface-related code from the simple testbench.
Thanks for clarifying!
👍 😄
@stnolting Note that this addition doesn't do anything useful yet. We still need a SW test that creates the loopback, and once that is done the verification components need to be connected to the SLINK of the CPU. Right now they are only talking directly to each other.
Yeah, I know, and that's OK for now. We need to agree on a concept for how to actually utilize the verification features provided by this testbench from a CPU point of view. Just some ideas 🤔
I like the idea of creating several simple programs instead of a large one. That allows some of the tests to fail without breaking all of them: we want to be able to know whether a bug affects all the peripherals/features or just a few of them.

BTW, tests on Windows are failing. I think we forgot to update https://github.com/stnolting/neorv32/blob/master/.github/workflows/Windows.yml#L104-L113 in some of the latest PRs. @stnolting, @LarsAsplund, mind having a look at it?
Me too! We could keep everything in … I would also like to keep running the current processor check with the simple testbench and use the new tests to target only the VUnit-based testbench.
I think there is just a flag missing. I will fix that.
Agreed, better to start with several small tests that verify individual features. Once we have that, we can start thinking about stress tests where we run many things at the same time.
I am not sure yet how to separate the tests. Different files? Different folders? Different flags? 🤔
@stnolting, I think the point is to have different tests as different … Optionally, we can run VUnit in parallel. However, VUnit can internally parallelise the tests, so I think we won't need more than one job for now. That is, the two cores in the machine are enough for getting some time reduction.
With regard to this specific issue, the test is a loopback, so you only need to implement a software function that checks whether something was received and pushes it back. The VUnit testbench can take care of finalising the simulation after all the data has been sent, received and checked. Therefore, the software can run forever: first it executes the fixed tests, and then it goes into an infinite loop waiting for AXI-Stream data. That is not the best solution, but it's the simplest one that puts the VUnit VCs to some real use. After that, we can focus on #35 and revisit the software and tests.
👍
Good idea, but the current testbench waits for a final report, which is printed via UART after the main function has returned. So we cannot use an eternal stream-echo loop here. Anyway, I am splitting all tests into separate chunks right now. I'm still not sure how to manage all that, but soon we will have a more flexible test program (hopefully 😉), also for the stream echo.
Note that the number of data elements that the VCs will send (and expect to receive) through the loopback is defined in the testbench. Hence, you don't need an infinite software procedure; you can use a bounded for loop instead.

EDIT: https://github.com/stnolting/neorv32/blob/master/sim/neorv32_tb.vhd#L229
We can set that number from the command line to get a single source. |
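One way that single source could look, sketched with VUnit's Python run-script API. This is a build-script fragment, not the project's actual configuration: the paths, the `num_elements` generic, and the generated header are all hypothetical names for illustration:

```python
# Sketch only: paths, the "num_elements" generic and the generated header
# are placeholders, not the project's actual setup.
from vunit import VUnit

NUM_ELEMENTS = 16  # single source for both the HDL testbench and the software

vu = VUnit.from_argv()
lib = vu.add_library("neorv32")
lib.add_source_files("rtl/core/*.vhd")
lib.add_source_files("sim/neorv32_tb.vhd")

# Pass the count to the testbench as a generic ...
lib.test_bench("neorv32_tb").set_generic("num_elements", NUM_ELEMENTS)

# ... and write the same value to a header consumed by the test software,
# so both sides always agree on the element count.
with open("sw/example/slink_echo/num_elements.h", "w") as f:
    f.write(f"#define NUM_ELEMENTS {NUM_ELEMENTS}\n")

vu.main()
```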
This PR shows how SLINK can be tested from the testbench. It assumes that there is a SW loopback in the CPU which has yet to be implemented. @stnolting I could use some help with that. Once that is done, the verification components need to be connected to the CPU instead of directly to each other as it's done now. See TODO in the testbench.