
Build HTML datasheet and publish to GitHub Pages along with Doxygen output #26

Merged
merged 9 commits into from
May 19, 2021

Conversation

umarcor
Collaborator

@umarcor umarcor commented May 18, 2021

In this PR, several enhancements to the CI plumbing are contributed:

  • The two existing workflows for the PDF datasheet and the Doxygen output are merged into a single workflow. That allows visualizing the dependencies between them (see below).
  • asciidoctor is used for building the datasheet in HTML too. In order to do so, most of the content from neorv32.adoc is moved to content.adoc. A different entrypoint is used for the HTML build (index.adoc).
  • The HTML datasheet is published to GitHub Pages along with the Doxygen output (in subdir sw). That is, the artifacts from the two jobs are downloaded, reorganised and pushed.
  • The shields/badges in the README are updated, and some of them are added to the HTML datasheet.

See umarcor.github.io/neorv32 and umarcor.github.io/neorv32/sw.

@stnolting, in the current state, the PDF datasheet will be uploaded to branch gh-pages (https://github.com/umarcor/neorv32/tree/gh-pages). That might not be desirable, since it is also added to branch master, and it is not good practice to add PDFs to git repositories. Alternatively, a GitHub Release might be used for keeping the latest PDF(s) available. What do you think?

@stnolting
Owner

That looks really awesome! Thank you for your contribution!
I really like the new concept, so I think this can be merged soon 👍 - just give me some time to look through your modifications.

That might not be desirable, since it is also added to branch master, and it is not good practice to add PDFs to git repositories.

I know this is not good practice, but it is quite handy to have them in the repository - at least from my point of view. 😉
I'm still trying to find something like a static URL for accessing the artifacts generated by the current data sheet building workflow...?!

Alternatively, a GitHub Release might be used for keeping the latest PDF(s) available. What do you think?

True. But that would mean a new release for each new typo fix, right? To be honest, I have not worked with automatic releases yet.
However, I like the current "release strategy": packing a lot of modifications together and releasing them as a new major version. Maybe it would be better to have a specific branch for the documentation only - or even an independent repository. Or maybe I am on a completely wrong track there... I am not sure. But I am open to discussion.

@stnolting stnolting added the DOC Improvements or additions to documentation label May 18, 2021
@umarcor
Collaborator Author

umarcor commented May 18, 2021

just give me some time to look through your modifications.

Sure! There are some minor modifications which I did not mention explicitly. For instance, creating subdir 'doc/references' for the PDFs that belong to external projects (wishbone, riscv, etc.). Hence, take your time and ask about whatever you don't get at first.

That might not be desirable, since it is also added to branch master, and it is not good practice to add PDFs to git repositories.

I know this is not good practice, but it is quite handy to have them in the repository - at least from my point of view. 😉

If you want users to get the PDF with git clone or when downloading the zipfile/tarball of a branch, then there is no alternative. However, if you find it acceptable that users need to click one additional link for downloading the PDF, then there are solutions for "having them in the repo/forge, but not in the git repo". See below.

I'm still trying to find something like a static URL for accessing the artifacts generated by the current data sheet building workflow...?!

Unfortunately, GitHub does not provide a static URL for artifacts. You can use the API to get the IDs of the latest CI runs, and then filter the metadata of the latest successful one. Moreover, some people provide scripts for doing so. Yet, I think that asking users to use one of those scripts is an ugly workaround.

Instead, in GHDL, MSYS2 and other projects a dummy GitHub Release is used. In GHDL, a nightly tag was created, and a nightly pre-release was created. Then eine/tip uses the GitHub API for uploading/removing/updating the assets of that release, and for force-pushing the tag to point to the corresponding commit sha. So, https://github.com/ghdl/ghdl/releases/tag/nightly always contains the results of the latest successful CI run. If the CI fails, the content of the release is not updated. You will find that https://github.com/ghdl/ghdl/blob/nightly/.github/workflows/Test.yml#L320-L347 is very similar to the content of this PR. So, only adding the eine/tip step would be needed. That would provide https://github.com/stnolting/neorv32/releases/download/nightly/NEORV32.pdf.
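
For illustration, the extra step could look roughly like this (the input names of eine/tip and the file path are assumptions here, to be checked against the action's documentation):

```yaml
# Hypothetical workflow step: push the freshly built PDF to a rolling 'nightly'
# pre-release, so it stays reachable at a static URL. Input names are assumptions.
- name: Publish PDF to nightly release
  uses: eine/tip@master
  with:
    token: ${{ secrets.GITHUB_TOKEN }}
    tag: nightly
    files: docs/NEORV32.pdf
```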

Alternatively, a GitHub Release might be used for keeping the latest PDF(s) available. What do you think?

True. But that would mean a new release for each new typo fix, right? To be honest, I have not worked with automatic releases yet.
However, I like the current "release strategy": packing a lot of modifications together and releasing them as a new major version. Maybe it would be better to have a specific branch for the documentation only - or even an independent repository. Or maybe I am on a completely wrong track there... I am not sure. But I am open to discussion.

eine/tip supports updating the content of an existing 'nightly' (or 'tip', or any name) release, but it also supports generating regular releases and snapshots (releases without assets). Hence, "regular" workflow artifacts are pushed to 'nightly', but tagged commits are handled as regular releases. The benefit is that you don't need to change the CI for dealing with both cases.

I did not implement this in this PR because I wanted to discuss it with you. If you are ok, I can add it. Moreover, the same applies to deciding when to push the artifacts. Currently, as you see, it tries to always push to GitHub Pages. That fails in PRs for security reasons. Therefore, a condition needs to be added. See, for instance, https://github.com/ghdl/ghdl/blob/nightly/.github/workflows/Test.yml#L321. That condition is related to your desired work and contribution flow. That's why I didn't add it before discussing this topic with you.
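
For illustration, such a guard is usually a step- or job-level condition (a sketch; which event and branch to check depends on the desired contribution flow):

```yaml
# Skip the deployment step in pull requests; only run it for pushes to 'master'.
if: github.event_name == 'push' && github.ref == 'refs/heads/master'
```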

@stnolting
Owner

Sure! There are some minor modifications which I did not mention explicitly. For instance, creating subdir 'doc/references' for the PDFs that belong to external projects (wishbone, riscv, etc.). Hence, take your time and ask about whatever you don't get at first.

Looks good to me! I think this can be merged. Many references will need updating (especially in all the READMEs and in the documentation itself) but that should be no big deal.

If you want users to get the PDF with git clone or when downloading the zipfile/tarball of a branch, then there is no alternative. However, if you find it acceptable that users need to click one additional link for downloading the PDF, then there are solutions for "having them in the repo/forge, but not in the git repo". See below.
Instead, in GHDL, MSYS2 and other projects a dummy GitHub Release is used. In GHDL, a nightly tag was created, and a nightly pre-release was created. Then eine/tip uses the GitHub API for uploading/removing/updating the assets of that release, and for force-pushing the tag to point to the corresponding commit sha. So, https://github.com/ghdl/ghdl/releases/tag/nightly always contains the results of the latest successful CI run. If the CI fails, the content of the release is not updated. You will find that https://github.com/ghdl/ghdl/blob/nightly/.github/workflows/Test.yml#L320-L347 is very similar to the content of this PR. So, only adding the eine/tip step would be needed. That would provide https://github.com/stnolting/neorv32/releases/download/nightly/NEORV32.pdf.

You have convinced me 😉 I think this is a convenient way to go.

@stnolting stnolting merged commit 4b7344e into stnolting:master May 19, 2021
@stnolting
Owner

One additional thought:
How about moving the documentation makefile to docs? I think it would be good to keep the repo's root directory simple and clean. Or is there any specific reason it has to be in root?

@stnolting
Owner

stnolting commented May 19, 2021

There is another thing I have noticed. It seems like the images are not included in the HTML deployment. I have not investigated this further yet... Do you have any idea why this is the case? 🤔

edit

I have fixed that. It was just an include problem.
Now index.adoc uses :imagesdir: https://raw.githubusercontent.com/stnolting/neorv32/master/docs/figures.

@umarcor umarcor deleted the ci-tweaks branch May 20, 2021 11:50
@umarcor
Collaborator Author

umarcor commented May 20, 2021

Many references will need updating (especially in all the READMEs and in the documentation itself) but that should be no big deal.

Oops... I completely overlooked that! I saw that you already fixed most of them. You've been so active these last 48h!

You have convinced me 😉 I think this is a convenient way to go.

I will submit a PR 😄

One additional thought:
How about moving the documentation makefile to docs? I think it would be good to keep the repo's root directory simple and clean. Or is there any specific reason it has to be in root?

I thought about that. There are two motivations for keeping it in the root:

  • Ensuring that everyone will execute it from the same location. If located inside subdir docs, some people might make -C doc and others cd doc; make. Variables such as PWD might not be the same, and that has an effect on the directory that is bound into the container. I don't know what the equivalent of cd $(dirname "$0") is in Makefiles. So, putting it in the root is defensive.

Naturally, it is up to you to have a single makefile in the root, multiple makefiles in subdirs, and/or how to include/import them in each other. I'm willing to adapt to whatever you prefer. In this specific makefile, what I wanted to showcase is building the PDF, building the HTML, and using a container for doing so. I also wanted to decouple it from the CI workflow, so that users can execute these builds locally.

Moreover, there might be better solutions than makefiles for handling simulation and implementation. For simulation, I would expect a VUnit run.py script to be contributed here soon (@LarsAsplund has been working on it in his fork). For synthesis, edalize/fusesoc, pyfpga, tsfpga, etc. might be used. My position is that it's ok to have some duplication of build scripts. That is, it's ok to maintain Makefiles even if VUnit scripts or edalize configuration files are contributed too. On the one hand, we can use CI for ensuring that none of them is broken. On the other hand, each of them targets a different audience which wants to use the core. If the complexity grows too much, it might be sensible to split the specific build solutions somewhere else. Yet, for now we are considering 2-3 solutions only, and all of them are de facto standards in some community. See umarcor.github.io/osvb (precisely umarcor.github.io/osvb/apis/core and umarcor.github.io/osvb/apis/tool) for a more elaborate discussion on this topic.

There is another thing I have noticed. It seems like the images are not included in the HTML deployment. I have not investigated this further yet... Do you have any idea why this is the case? 🤔

edit

I have fixed that. It was just an include problem.
Now index.adoc uses :imagesdir: https://raw.githubusercontent.com/stnolting/neorv32/master/docs/figures.

You were too fast in merging this PR 😆. I expected you to discuss and ask me to make some changes before merging...

Anyway, the missing images are my fault, since I didn't copy them in https://github.com/stnolting/neorv32/blob/master/.github/workflows/Documentation.yml#L60. I'm uploading the index.html file only.

The solution you applied is valid, because it picks the figures from branch master, avoiding uploading them to branch gh-pages too. However, that has two probably undesirable effects:

  • The NEORV32-HTML artifact of the CI jobs is incomplete. Someone cannot get a full copy of the website/datasheet by downloading it. They will need a connection when they browse it, because the images will always be fetched online.
  • Images might go out of sync. If someone downloads a specific artifact or builds the docs of a specific version/commit, the images will not correspond. Images will always be the ones from master.

Therefore, I would recommend that we fix the upload-artifact step so that figures are uploaded together with index.html. We can change the :imagesdir: attribute which you found to figures/. Then, all the modifications you did should remain valid.
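
A sketch of what the fixed step might look like (the paths and action version are assumptions based on the layout discussed above; actions/upload-artifact does accept multiple paths):

```yaml
# Upload index.html together with the figures, so the artifact is self-contained.
- name: Upload HTML datasheet artifact
  uses: actions/upload-artifact@v2
  with:
    name: NEORV32-HTML
    path: |
      docs/index.html
      docs/figures
```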

@stnolting
Owner

  • Ensuring that everyone will execute it from the same location. If located inside subdir docs, some people might make -C doc and others cd doc; make. Variables such as PWD might not be the same, and that has an effect on the directory that is bound into the container. I don't know what the equivalent of cd $(dirname "$0") is in Makefiles. So, putting it in the root is defensive.

True. I totally agree.
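
For reference, GNU Make has a rough equivalent of the shell idiom cd $(dirname "$0"): the last word of MAKEFILE_LIST is the path of the makefile being parsed. A sketch (GNU-specific functions; the asciidoctor-pdf invocation is only illustrative, and recipe lines must be indented with tabs):

```make
# Absolute directory of this Makefile, independent of the caller's working dir.
# 'abspath', 'dir' and 'lastword' are GNU Make functions.
THIS_DIR := $(dir $(abspath $(lastword $(MAKEFILE_LIST))))

pdf:
	cd $(THIS_DIR) && asciidoctor-pdf neorv32.adoc
```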

Naturally, it is up to you to have a single makefile in the root, multiple makefiles in subdirs, and/or how to include/import them in each other. I'm willing to adapt to whatever you prefer. In this specific makefile, what I wanted to showcase is building the PDF, building the HTML, and using a container for doing so. I also wanted to decouple it from the CI workflow, so that users can execute these builds locally.

We are talking here about a single makefile that provides targets for "everything" (building documentation, running simulation, synthesis, ...), right? 🤔

Moreover, there might be better solutions than makefiles for handling simulation and implementation. For simulation, I would expect a VUnit run.py script to be contributed here soon (@LarsAsplund has been working on it in his fork). For synthesis, edalize/fusesoc, pyfpga, tsfpga, etc. might be used. My position is that it's ok to have some duplication of build scripts. That is, it's ok to maintain Makefiles even if VUnit scripts or edalize configuration files are contributed too. On the one hand, we can use CI for ensuring that none of them is broken. On the other hand, each of them targets a different audience which wants to use the core. If the complexity grows too much, it might be sensible to split the specific build solutions somewhere else. Yet, for now we are considering 2-3 solutions only, and all of them are de facto standards in some community. See umarcor.github.io/osvb (precisely umarcor.github.io/osvb/apis/core and umarcor.github.io/osvb/apis/tool) for a more elaborate discussion on this topic.

To be honest, I did not take care of this in the past. Until the latest contributions, there actually were no script-based synthesis setups. So I am looking forward to some discussion here - especially because the open-source synthesis (and even simulation) setups are quite new to me.

You were too fast in merging this PR 😆. I expected you to discuss and ask me to make some changes before merging...

Yeah.. sorry for that - I was way too excited 😄 The HTML-based documentation on GitHub Pages is incredibly handy! I think I will delete all references in the READMEs targeting the pdf documentation and redirect them to GitHub Pages. Downloading the pdf (the artifact from the workflow) should be optional and not the default way to get the documentation (-> #32).

The solution you applied is valid, because it picks the figures from branch master, avoiding uploading them to branch gh-pages too. However, that has two probably undesirable effects:
...

Good point. I did not think about this. We should continue this in #32.

@umarcor
Collaborator Author

umarcor commented May 22, 2021

We are talking here about a single makefile that provides targets for "everything" (building documentation, running simulation, synthesis, ...), right? 🤔

I am proposing a single entrypoint, which can be a Makefile. However, not everything needs to be written in that single file. It's ok to have other Makefiles or scripts in the subdirs, and call them from the top level.

In practice, most HDL-related projects somehow combine Makefiles/shell scripts and Python. The current entrypoints to cocotb are Makefiles, but Python is loaded internally, since that's how the testbenches are written. Edalize and PyFPGA both have Python entrypoints which generate Makefiles from templates. In VUnit, pytest is used for handling multiple Python scripts. In ghdl-cosim, pytest is used for handling shell scripts, Makefiles and Python scripts. See umarcor.github.io/osvb/apis/tool.

Therefore, it is really up to you which language to use for the top level entrypoint. The point is that it needs to be no more than a wrapper.
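
A minimal sketch of such a wrapper (the target and subdirectory names are purely illustrative, and recipe lines must be indented with tabs):

```make
# Top-level entrypoint: each target only delegates to the real logic in a subdir.
.PHONY: doc sim

doc:
	$(MAKE) -C docs

sim:
	$(MAKE) -C sim
```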

To be honest, I did not take care of this in the past. Until the latest contributions, there actually were no script-based synthesis setups. So I am looking forward to some discussion here - especially because the open-source synthesis (and even simulation) setups are quite new to me.

The fact is that there is duplication in this area. It can be done with shell, makefiles, python, etc. and there are multiple libraries/structures you can use for each of the workflows. The challenge is how to avoid duplicating the same metadata for each of them! (thus umarcor.github.io/osvb/apis/core)

Yeah.. sorry for that - I was way too excited 😄

No problem at all! I understand, and fortunately, anything we break can be easily fixed in follow-up PRs 😉

The HTML-based documentation on GitHub Pages is incredibly handy!

It is! I think it's unfortunate that the default template/theme generates everything in a single page. For long documents, it would be interesting to have a sidebar with collapsible items, and to have chapters/sections on different pages (such as a typical sphinx site). However, I really like the asciidoc syntax over rst. I hope more complete HTML templates are made available in the near future.

Other than that, I had not used asciidoctor for generating PDF output. Hence, it was really nice to see that you were using it, and how easily we can generate both outputs from the same source. I would like to explore how to use it through a LaTeX template; but the current solution is good enough for this project!

I think I will delete all references in the READMEs targeting the pdf documentation and redirect them to github pages. Downloading the pdf (the artifact from the workflow) should be optional and not the default way to get the documentation (-> #32).

That sounds nice! I find myself reading most of the documents on screens nowadays, so I agree that the PDF is useful, but not the main source I would use.

Godd point. I did not think about this. We should continue this in #32.

👍🏼

@stnolting
Owner

I am proposing a single entrypoint, which can be a Makefile. However, not everything needs to be written in that single file. It's ok to have other Makefiles or scripts in the subdirs, and call them from the top level.
In practice, most HDL-related projects somehow combine Makefiles/shell scripts and Python. The current entrypoints to cocotb are Makefiles, but Python is loaded internally, since that's how the testbenches are written. Edalize and PyFPGA both have Python entrypoints which generate Makefiles from templates. In VUnit, pytest is used for handling multiple Python scripts. In ghdl-cosim, pytest is used for handling shell scripts, Makefiles and Python scripts. See umarcor.github.io/osvb/apis/tool.
Therefore, it is really up to you which language to use for the top level entrypoint. The point is that it needs to be no more than a wrapper.

Seems reasonable to me. I would prefer the makefile concept (since that's the one I'm most familiar with 😉).
I think I still have to do some more studying of your documentation - right now I feel a little bit overwhelmed at some points 😄

It is! I think it's unfortunate that the default template/theme generates everything in a single page. For long documents, it would be interesting to have a sidebar with collapsible items, and to have chapters/sections on different pages (such as a typical sphinx site). However, I really like the asciidoc syntax over rst. I hope more complete HTML templates are made available in the near future.

Collapsing would be very nice to get a quick overview of everything. Adding a simple table of contents right at the top might be an okay-ish workaround for now.

I am also thinking about cutting down the main README of the repo. From my point of view, the current version provides way too much in-detail information. Having just a short summary with links to the GH pages (for the interested reader to continue) might be a better approach.

Other than that, I had not used asciidoctor for generating PDF output. Hence, it was really nice to see that you were using it, and how easily we can generate both outputs from the same source. I would like to explore how to use it through a LaTeX template; but the current solution is good enough for this project!

I also like that. It is pretty cool to be able to use a custom style sheet for pdf export to give the result a more personal touch 😄

That sounds nice! I find myself reading most of the documents on screens nowadays, so I agree that the PDF is useful, but not the main source I would use.

Yeah, me too. But sometimes I like to be old-school and study some pages of freshly printed data sheets - completely offline ;)

@umarcor
Collaborator Author

umarcor commented May 22, 2021

Seems reasonable to me. I would prefer the makefile concept (since that's the one I'm most familiar with 😉).

Let me be direct: you will need to use Python. I also tried to resist for some years because I was happy with VHDL, C, bash and golang. However, during the last 20-30 years TCL was the scripting language of choice of EDA vendors. Now, and for the following 20 years, Python is the language of choice of the open source EDA communities. On top of that, Blender, FreeCAD, KiCAD, Matlab, TensorFlow/ONNX... all of them provide Python APIs. Resistance is futile 😉

Anyway, I understand that you want to keep using the tools you are familiar with. That is ok. I am not convinced by any specific Python "project management solution" either (each has advantages and disadvantages). Hence, from a didactic point of view, I believe it makes sense to have Makefiles, even if that implies some more verbosity. Yet, I wouldn't complain about adding a tools subdir, in case someone wants to contribute other specific solutions. All the common VHDL and C sources, together with the Makefiles, would still be the main content of the repository and the default solution.

I think I still have to do some more studying of your documentation - right now I feel a little bit overwhelmed at some points 😄

I understand. That documentation gathers my thoughts after 4-5 years of hitting a wall when trying to understand the ecosystem. Therefore, I acknowledge that it is very dense, with lots of references and not many general explanations. Nonetheless, please, do not hesitate to ask, discuss or argue about whatever content you don't see fit.

The following talk by @rodrigomelo9 might be interesting for you as an introduction to FLOSS EDA tooling: http://video.ictp.it/WEB/2021/2021_01_25-smr3562/2021_02_10-11_00-smr3562.mp4 (http://indico.ictp.it/event/9443/session/258/contribution/587/material/slides/). rodrigomelo9/FOSS-for-FPGAs is an updated version of the same presentation. You will find that the example is based on Makefiles, thus very similar to what @tmeissner is contributing in #31, or to the enhancements I want to submit afterwards for using containers in CI.

Collapsing would be very nice to get a quick overview of everything. Adding a simple table of contents right at the top might be an okay-ish workaround for now.

I asked the asciidoctor maintainers, and this seems not to be straightforward. The recommended solution is to use Antora, which is a different (compatible) builder written in JS. Therefore, I think we can stick to the current template for now. Nevertheless, I will keep an eye on that feature.

I am also thinking about cutting down the main README of the repo. From my point of view, the current version provides way too much in-detail information. Having just a short summary with links to the GH pages (for the interested reader to continue) might be a better approach.

I absolutely agree. If you check any of the repos whose documentation I maintain, all of them have a minimalistic README: https://github.com/VUnit/vunit, https://github.com/hdl/containers, https://github.com/umarcor/osvb/, https://github.com/hdl/MINGW-packages, https://github.com/umarcor/hwstudio... I believe that the purpose of the README is to catch the attention, both from users (brief description and relevant links) and from maintainers (CI and broken stuff). Any additional information you put in the README is likely to be duplicated in the documentation, so the maintenance burden is increased.

Moreover, since the documentation is built in CI, some of the information can be generated dynamically. For instance, area/timing reports/results, bitstreams, simulation results, etc. can be gathered from other jobs. Naturally, this is only feasible with the workflows based on free and open source tools. As soon as VUnit is used, an xUnit file might be generated for granular reporting of test results.

Apart from that, I think that some sections of the README should be standalone documents. That is the case of Contribute/Feedback/Questions. Getting Started would also deserve a more visible location. That content is then extended in chapter 6 of the datasheet, named Let’s Get It Started!. From a project level point of view, the "User Guide" or "Getting Started Guide" is not part of the datasheet, but a sibling document. See https://documentation.divio.com/ for very interesting knowledge about how to organise technical documentation.

If we take this to the limit, each of the chapters in the datasheet might be a different document (a separated PDF and a subdir in the HTML site):

  • NEORV32: Project [Understanding]
  • NEORV32: CPU [Information]
  • NEORV32: System on Chip [Information]
  • NEORV32: Software Framework [Information]
  • NEORV32: Debugging [Problem]
  • NEORV32: User Guide [Learning]
  • NEORV32: Contributing (maybe markdown/HTML only, or included in 'Project')

From a user point of view, if I printed the datasheet, I would not bind it all together, because I will want to read multiple of the documents at the same time. E.g., I want to have the CPU and SoC chapters at hand while reading the User Guide or how to Debug a design.

I did not bring this up before because we are already dealing with multiple issues/PRs, and I believe it's better to close some of them before initiating further tasks. Yet, I hope you understand that I found it pertinent to express my vision on this topic.

It is pretty cool to be able to use a custom style sheet for pdf export to give the result a more personal touch 😄

I must say you have a good taste!

@stnolting
Owner

Let me be direct: you will need to use Python. [...] Resistance is futile 😉

Oh damn! I feared that! 😆

[...] Hence, from a didactic point of view, I believe it makes sense to have Makefiles, even if that implies some more verbosity. Yet, I wouldn't complain to adding a tools subdir, [...]

I think "makefile" is the way to go. The tools directory sounds like a good idea - but maybe it is too early for that right now.

Nonetheless, please, do not hesitate to ask, discuss or argue about whatever content you don't see fit.

Thank you very much! :)

The following talk by @rodrigomelo9 might be interesting for you as an introduction to FLOSS EDA tooling: [...]

I will check that out. Or to be honest - I will put it on my todo list. I'm currently a little bit busy arguing with stupid gdb... 😉

I believe that the purpose of the README is to catch the attention, both from users (brief description and relevant links) and from maintainers (CI and broken stuff). Any additional information you put in the README is likely to be duplicated in the documentation, so the maintenance burden is increased.

👍
Right now I am cleaning up the main README - but I am not really happy with the current version. I can't decide on what to keep there. What is important? What should be there as "immediate advertising"? The problem is that I do not want to overwhelm FPGA/RISC-V/VHDL/whatever beginners, but on the other hand I don't want to offend the pros... You know what I mean? 😄

Moreover, since the documentation is built in CI, some of the information can be generated dynamically. For instance, area/timing reports/results, bitstreams, simulation results, etc. can be gathered from other jobs. Naturally, this is only feasible with the workflows based on free and open source tools. As soon as VUnit is used, an xUnit file might be generated for granular reporting of test results.

That would be great. Unfortunately, the metrics generated by "mainstream" EDA tools from vendors like Xilinx and Intel seem to be the most relevant references today...

Apart from that, I think that some sections of the README should be standalone documents.

I also thought about this. But actually, I like having everything in one place. Then again, maybe this has already gotten way too confusing as more and more stuff is added to the documentation.

I did not bring this up before because we are already dealing with multiple issues/PRs, and I believe it's better to close some of them before initiating further tasks. Yet, I hope you understand that I found it pertinent to express my vision on this topic.

Yes. Absolutely. 😆
We should really continue this in a new issue, as I already feel like I'm losing track sometimes. 😅

@umarcor
Collaborator Author

umarcor commented May 24, 2021

I think "makefile" is the way to go. The tools directory sounds like a good idea - but maybe it is too early for that right now.

Agree. No need to create the tools directory until some specific contribution requires it.

I will check that out. Or to be honest - I will put it on my todo list. I'm currently a little bit busy arguing with stupid gdb... 😉

No rush at all! There is too much interesting content in the wild, and days still have 24h only 😆

Right now I am cleaning up the main README - but I am not really happy with the current version. I can't decide on what to keep there. What is important? What should be there as "immediate advertising"? The problem is that I do not want to overwhelm FPGA/RISC-V/VHDL/whatever beginners, but on the other hand I don't want to offend the pros... You know what I mean? 😄

Don't take it too seriously, you are already doing a very good job with the documentation 😉. As interest and usage grow, you will get more feedback from users and that will give you an idea about which areas are more/less relevant for them. As you get to know the different "user profiles", you will also learn how to send each one to the better documentation for them.

Unfortunately, the metrics generated by "mainstream" EDA tools from vendors like Xilinx and Intel seem to be the most relevant references today...

So, yes but no. I agree that the metrics generated for Xilinx, Intel/Altera and Lattice devices are the most used, because the largest portion of the market uses them. However, the metrics generated by open source tools for those devices are not very different. For instance, see the Radiant results from https://github.com/stnolting/neorv32#neorv32-processor and the results with GHDL, Yosys and nextpnr in https://github.com/stnolting/neorv32/runs/2650617473?check_suite_focus=true#step:4:3582 and https://github.com/stnolting/neorv32/runs/2650617473?check_suite_focus=true#step:4:3684. You will find that LUT and FF usage is within a 5% difference. Vendor tools are expected to behave better in corner cases. That's why Radiant allows faster clocks with 97% LUT usage. However, from a general point of view, all are valid.

I created #42 for following the discussion about reorganising the docs.
