Build HTML datasheet and publish to GitHub Pages along with Doxygen output #26
Conversation
That looks really awesome! Thank you for your contribution!
I know this is not good practice, but it is quite handy to have them in the repository - at least from my point of view. 😉
True. But that would mean a new release for each new typo fix, right? To be honest, I have not worked with automatic releases yet.
Sure! There are some minor modifications which I did not mention explicitly. For instance, creating the subdir 'doc/references' for the PDFs that belong to external projects (wishbone, riscv, etc.). Hence, take your time and ask about whatever you don't get at first.
If you want users to get the PDF with
Unfortunately, GitHub does not provide a static URL for artifacts. You can use the API to get the IDs of the latest CI runs, and then filter the metadata of the latest successful one. Moreover, some people provide scripts for doing so. Yet, I think that asking users to use one of those scripts is an ugly workaround. Instead, in GHDL, MSYS2 and other projects a dummy GitHub Release is used.
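For reference, the API-based workaround mentioned above looks roughly like this (a sketch only: the repository and workflow file names are assumptions, jq is used for filtering, and downloading the archives themselves requires an authenticated token):

```sh
# Sketch: query the GitHub API for the latest successful documentation run
# and list its artifacts. Repo and workflow names are assumptions.
REPO='stnolting/neorv32'
RUN_ID="$(curl -s "https://api.github.com/repos/${REPO}/actions/workflows/Documentation.yml/runs?branch=master&status=success&per_page=1" \
  | jq '.workflow_runs[0].id')"
# Names and download URLs of the artifacts of that run:
curl -s "https://api.github.com/repos/${REPO}/actions/runs/${RUN_ID}/artifacts" \
  | jq '.artifacts[] | {name, archive_download_url}'
```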
In GHDL, the eine/tip Action is used. eine/tip supports updating the content of an existing 'nightly' (or 'tip', or any name) release, but it also supports generating regular releases and snapshots (releases without assets). Hence, "regular" workflow artifacts are pushed to 'nightly', while tagged commits are handled as regular releases. The benefit is that you don't need to change the CI to deal with both cases. I did not implement this in this PR because I wanted to discuss it with you first. If you are ok with it, I can add it. Moreover, the same applies to deciding when to push the artifacts. Currently, as you can see, the workflow always tries to push to GitHub Pages. That fails in PRs for security reasons, so a condition needs to be added. See, for instance, https://github.com/ghdl/ghdl/blob/nightly/.github/workflows/Test.yml#L321. That condition is related to your desired work and contribution flow; that's why I didn't add it before discussing this topic with you.
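The shape of such a guard, hedged since the exact expression depends on the desired flow (the condition shown is one possibility, not necessarily the one in GHDL's Test.yml):

```sh
# At the workflow level, the deploy step would typically be guarded with
# something like:
#   if: github.event_name == 'push' && github.repository == 'stnolting/neorv32'
# The same idea expressed inside a 'run:' script, using the default
# environment variables, skips the push when the run was triggered by a PR:
if [ "$GITHUB_EVENT_NAME" != "pull_request" ]; then
  echo "safe to push to gh-pages"
fi
```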
Looks good to me! I think this can be merged. Many references will need updating (especially in all the READMEs and in the documentation itself) but that should be no big deal.
You have convinced me 😉 I think this is a convenient way to go.
One additional thought:
There is another thing I have noticed. It seems like the images are not included in the HTML deployment. I have not investigated this further yet... Do you have any idea why this is the case? 🤔 edit: I have fixed that. It was just an include problem.
Oops... I completely overlooked that! I saw that you already fixed most of them. You've been so active these last 48h!
I will submit a PR 😄
I thought about that. There are two motivations for keeping it in the root:
Naturally, it is up to you whether to have a single makefile in the root or multiple makefiles in subdirs, and how to include/import them in each other. I'm willing to adapt to whatever you prefer. What I wanted to showcase in this specific makefile is building the PDF, building the HTML, and using a container to do so. I also wanted to decouple it from the CI workflow, so that users can execute these targets locally. Moreover, there might be better solutions than makefiles for handling simulation and implementation. For simulation, I would expect a VUnit
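To illustrate the PDF/HTML/container point outside of make, the recipes boil down to something like this (a sketch; the community asciidoctor/docker-asciidoctor image is assumed, and all paths are hypothetical):

```sh
# Build PDF and HTML from the same AsciiDoc sources inside a container,
# so no local Ruby/Asciidoctor installation is needed (paths are assumptions).
docker run --rm -v "$(pwd):/documents" asciidoctor/docker-asciidoctor \
  asciidoctor-pdf docs/datasheet/neorv32.adoc -o docs/NEORV32.pdf
docker run --rm -v "$(pwd):/documents" asciidoctor/docker-asciidoctor \
  asciidoctor docs/datasheet/index.adoc -D docs/html
```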
You were too fast in merging this PR 😆. I expected you to discuss and ask me to do some changes before merging... Anyway, the missing images are my fault, since I didn't copy them in https://github.com/stnolting/neorv32/blob/master/.github/workflows/Documentation.yml#L60. I'm uploading the

The solution you applied is valid, because it picks the figures from branch master, avoiding uploading them to branch gh-pages too. However, that has two probably undesirable effects:
Therefore, I would recommend that we fix the upload-artifact step so that the figures are uploaded together with the HTML.
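For instance, the preparation before the upload-artifact step could stage everything into one directory (a hypothetical sketch; the actual paths in Documentation.yml may differ):

```sh
# Stage the generated HTML and the source figures in one directory, and
# point the upload-artifact 'path:' at 'public', so the images travel
# together with the pages (all paths are assumptions).
mkdir -p public/img
cp docs/datasheet/neorv32.html public/index.html
cp -r docs/figures/. public/img/
```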
True. I totally agree.
We are talking here about a single makefile that provides targets for "everything" (building documentation, running simulation, synthesis, ...), right? 🤔
To be honest, I did not take care of this in the past. Until the latest contributions, there actually were no script-based synthesis setups. So I am looking forward to some discussion here - especially because the open-source synthesis (and even simulation) setups are quite new to me.
Yeah... sorry for that - I was way too excited 😄 The HTML-based documentation on GitHub Pages is incredibly handy! I think I will delete all references in the READMEs targeting the PDF documentation and redirect them to GitHub Pages. Downloading the PDF (the artifact from the workflow) should be optional and not the default way to get the documentation (-> #32).
Good point. I did not think about this. We should continue this in #32.
I am proposing a single entrypoint, which can be a Makefile. However, not everything needs to be written in that single file. It's ok to have other Makefiles or scripts in the subdirs, and to call them from the top level. In practice, most HDL-related projects combine Makefiles/shell scripts and Python somehow. The current entrypoints to cocotb are Makefiles, but Python is loaded internally, since that's how the testbenches are written. Edalize and PyFPGA both have Python entrypoints which generate Makefiles from templates. In VUnit, pytest is used for handling multiple Python scripts. In ghdl-cosim, pytest is used for handling shell scripts, Makefiles and Python scripts. See umarcor.github.io/osvb/apis/tool. Therefore, it is really up to you which language to use for the top-level entrypoint. The point is that it needs to be no more than a wrapper.
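As a toy illustration of the "no more than a wrapper" idea (sketched here in shell, though the same shape works as a Makefile; the subdir names and targets are assumptions, not the actual repo layout):

```sh
#!/bin/sh
# Hypothetical thin top-level entrypoint: every target only delegates
# to a script or Makefile in a subdir.
set -e
case "$1" in
  pdf|html) make -C docs "$1" ;;   # documentation builds
  sim)      make -C sim ;;         # simulation
  *)        echo "usage: $0 {pdf|html|sim}" >&2; exit 1 ;;
esac
```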
The fact is that there is duplication in this area. It can be done with shell, makefiles, python, etc. and there are multiple libraries/structures you can use for each of the workflows. The challenge is how to avoid duplicating the same metadata for each of them! (thus umarcor.github.io/osvb/apis/core)
No problem at all! I understand, and fortunately, anything we break can be easily fixed in follow-up PRs 😉
It is! I think it is unfortunate that the default template/theme generates everything in a single page. For long documents, it would be interesting to have a sidebar with collapsible items, and to have chapters/sections on different pages (such as the typical Sphinx site). However, I really like the asciidoc syntax better than rst. I hope more complete HTML templates are made available in the near future. Other than that, I had not used asciidoctor for generating PDF output before. Hence, it was really nice to see that you were using it, and how easily we can generate both outputs from the same source. I would like to explore how to use it through a LaTeX template, but the current solution is good enough for this project!
That sounds nice! I find myself reading most of the documents on screens nowadays, so I agree that the PDF is useful, but not the main source I would use.
👍🏼
Seems reasonable to me. I would prefer the makefile concept (since that's the one I'm most familiar with 😉).
Collapsing would be very nice to get a quick overview of everything. Adding a simple table of contents right at the top might be an okay-ish workaround for now. I am also thinking about cutting down the main README of the repo. From my point of view, the current version provides way too much in-detail information. Having just a short summary with links to the GH pages (for the interested reader to continue) might be the better approach.
I also like that. It is pretty cool to be able to use a custom style sheet for PDF export to give the result a more personal touch 😄
Yeah, me too. But sometimes I like to be old-school and study some pages of freshly printed data sheets - completely offline ;)
Let me be direct: you will need to use Python. I also tried to resist for some years, because I was happy with VHDL, C, bash and golang. However, during the last 20-30 years Tcl was the scripting language of choice of EDA vendors. Now, and for the following 20 years, Python is the language of choice of the open-source EDA communities. On top of that, Blender, FreeCAD, KiCAD, Matlab, TensorFlow/ONNX... all of them provide Python APIs. Resistance is futile 😉 Anyway, I understand that you want to keep using the tools you are familiar with. That is ok. I am not convinced by any specific Python "project management solution" either (each has advantages and disadvantages). Hence, from a didactic point of view, I believe it makes sense to have Makefiles, even if that implies some more verbosity. Yet, I wouldn't complain about adding a
I understand. That documentation gathers the thoughts after 4-5 years of hitting a wall when trying to understand the ecosystem. Therefore, I acknowledge that it is very dense, with lots of references and not many general explanations. Nonetheless, please do not hesitate to ask, discuss or argue about any content you don't see fit. The following talk by @rodrigomelo9 might be interesting for you as an introduction to FLOSS EDA tooling: http://video.ictp.it/WEB/2021/2021_01_25-smr3562/2021_02_10-11_00-smr3562.mp4 (http://indico.ictp.it/event/9443/session/258/contribution/587/material/slides/). rodrigomelo9/FOSS-for-FPGAs is an updated version of the same presentation. You will find that the example is based on Makefiles, and thus very similar to what @tmeissner is contributing in #31, or to the enhancements I want to submit afterwards for using containers in CI.
I asked the asciidoctor maintainers, and this seems not to be straightforward. The recommended solution is to use Antora, which is a different (compatible) builder written in JS. Therefore, I think we can stick to the current template for now. Nevertheless, I will keep an eye on that feature.
I absolutely agree. If you check any of the repos whose documentation I maintain, all of them have a minimalistic README: https://github.com/VUnit/vunit, https://github.com/hdl/containers, https://github.com/umarcor/osvb/, https://github.com/hdl/MINGW-packages, https://github.com/umarcor/hwstudio... I believe that the purpose of the README is to catch the attention, both of users (brief description and relevant links) and of maintainers (CI and broken stuff). Any additional information you put in the README is likely to be duplicated in the documentation, so the maintenance burden is increased.

Moreover, since the documentation is built in CI, some of the information can be generated dynamically. For instance, area/timing reports/results, bitstreams, simulation results, etc. can be gathered from other jobs. Naturally, this is only feasible with the workflows based on free and open source tools. As soon as VUnit is used, an xUnit file might be generated for granular reporting of test results.

Apart from that, I think that some sections of the README should be standalone documents. That is the case of Contribute/Feedback/Questions. Getting Started would also deserve a more visible location. That content is then extended in chapter 6 of the datasheet, named "Let's Get It Started!". From a project-level point of view, the "User Guide" or "Getting Started Guide" is not part of the datasheet, but a sibling document. See https://documentation.divio.com/ for very interesting knowledge about how to organise technical documentation. If we take this to the limit, each of the chapters in the datasheet might be a different document (a separate PDF and a subdir in the HTML site):
From a user point of view, if I printed the datasheet, I would not bind it all together, because I would want to read several of the documents at the same time. E.g., I want to have the CPU and SoC chapters at hand while reading the User Guide or how to debug a design. I did not bring this up before because we are already dealing with multiple issues/PRs, and I believe it's better to close some of them before initiating further tasks. Yet, I hope you understand that I found it pertinent to express my vision on this topic.
I must say you have good taste!
Oh damn! I feared that! 😆
I think "makefile" is the way to go. The
Thank you very much! :)
I will check that out. Or to be honest - I will put it on my todo list. I'm currently a little bit busy arguing with stupid gdb... 😉
👍
That would be great. Unfortunately, the metrics generated by "mainstream" EDA tools from Xilinx and Intel seem to be the most relevant references today...
I also thought about this. Actually, I like having everything in one place. But maybe this has already gotten way too confusing, as more and more stuff is added to the documentation.
Yes. Absolutely. 😆
Agree. No need to create the
No rush at all! There is too much interesting content in the wild, and days still have 24h only 😆
Don't take it too seriously; you are already doing a very good job with the documentation 😉. As interest and usage grow, you will get more feedback from users, and that will give you an idea about which areas are more or less relevant for them. As you get to know the different "user profiles", you will also learn how to point each one to the documentation that suits them best.
So, yes but no. I agree that the metrics generated for Xilinx, Intel/Altera and Lattice devices are the most used, because the largest portion of the market uses them. However, the metrics generated by open-source tools for those devices are not very different. For instance, compare the Radiant results from https://github.com/stnolting/neorv32#neorv32-processor with the results with GHDL, Yosys and nextpnr in https://github.com/stnolting/neorv32/runs/2650617473?check_suite_focus=true#step:4:3582 and https://github.com/stnolting/neorv32/runs/2650617473?check_suite_focus=true#step:4:3684. You will find that LUT and FF usage is within a 5% difference. Vendor tools are expected to behave better in corner cases; that's why Radiant allows faster clocks at 97% LUT usage. However, from a general point of view, all are valid. I created #42 to follow up on the discussion about reorganising the docs.
In this PR, several enhancements to the CI plumbing are contributed:

- `neorv32.adoc` is moved to `content.adoc`. A different entrypoint is used for the HTML build (`index.adoc`).
- The Doxygen output is published to GitHub Pages along with the datasheet, in a subdir (`sw`). That is, the artifacts from the two jobs are downloaded, reorganised and pushed (see the sketch below).

See umarcor.github.io/neorv32 and umarcor.github.io/neorv32/sw.
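A hedged sketch of that reorganisation step ('datasheet' and 'doxygen' are stand-ins for the actual artifact names, which are not shown here):

```sh
# Combine the downloaded artifacts of the two jobs into the tree that is
# pushed to branch gh-pages; directory names are assumptions.
mkdir -p public/sw
cp -r datasheet/. public/
cp -r doxygen/html/. public/sw/
```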
@stnolting, in the current state, the PDF datasheet will be uploaded to branch gh-pages (https://github.com/umarcor/neorv32/tree/gh-pages). That might not be desirable, since it is also added to branch master, and it is not good practice to add PDFs to git repositories. Alternatively, a GitHub Release might be used to keep the latest PDF(s) available. What do you think?
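If the Release route is chosen, eine/tip (mentioned earlier) can automate it in CI; done manually, the equivalent with the GitHub CLI would be roughly as follows (the tag name and the PDF path are assumptions):

```sh
# Upload (or overwrite) the latest datasheet on a pre-created 'nightly'
# release; --clobber replaces an existing asset with the same name.
gh release upload nightly docs/NEORV32.pdf --clobber
```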