Make sure key crates have crev code reviews #46
Empirically, foundational gamedev crates often seem to be unsound, so this is both important and unlikely to go smoothly (if done right). |
Seems okay. One problem is that the reviews are for particular versions (right?) so we'd need these foundational crates to settle a bit more perhaps. |
I think this would be great for foundational crates and the cores of engines, but less so for the more feature-y stuff that would often be built from scratch at a lot of studios anyway |
After messing with
Correct
One side-effect I hope for is that more widespread |
Okay, how do we define "foundational" crates? Besides, naturally, "crates that ggez uses". In my own
The first type are usually the easiest to review: no malicious
Edit: That said, to start with I'm pretty happy with a crate having just two
So let me propose a core list of, at least, system crates for gamedev. This list is necessarily not complete, of course.
Core "unsafe stuff abstraction" crates:
|
I'm adding a tag that we want a guide for this one. I know that you wrote a guide, Icefox, but it was very long. We should try to come up with a guide that is as terse as possible while still explaining what to do. I couldn't even remember how to use |
Yeah, a good HOWTO is a good idea. Crev actually has a pretty good one, last I checked. Its command line flags have mutated a little the last version or two though, so we should make sure the guides are up to date and such. |
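To give a sense of how terse such a guide could be, here is a rough sketch of the basic cargo-crev workflow. The subcommand names and flags have shifted between versions (as noted above), so treat this as approximate rather than authoritative; the `YOURNAME` URL is a placeholder, and `dpc/crev-proofs` is the crev author's repo, used here just as an example seed for a web of trust.

```shell
# Install the cargo plugin (assumes a working Rust toolchain).
cargo install cargo-crev

# Create a crev ID backed by a fork of the crev-proofs template repo
# (YOURNAME is a placeholder for your own GitHub account).
cargo crev id new --url https://github.com/YOURNAME/crev-proofs

# Trust someone else's proof repository to seed your web of trust.
cargo crev trust https://github.com/dpc/crev-proofs

# Check how much of the current project's dependency tree is covered
# by reviews from IDs you (transitively) trust.
cargo crev crate verify

# Review one dependency: crev unpacks the exact published source,
# then opens an editor for the proof. Publish your proofs repo after.
cargo crev crate review rand
cargo crev repo publish
```

Run inside a Cargo project so `crate verify` has a lockfile to work from.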
I think winit should also get some proper crev reviews, given how widely used it is and how it's pretty much all unsafe code by nature. That's a bit of a difficult project for a single person to tackle though, since it's going to be hard to review code that you can't personally test. |
I would argue that a code review shouldn't generally involve executing the code (other than perhaps running the tests, but I'm assuming those get run by the maintainer) |
I'd argue that it's important to be able to run code when reviewing it for unsafe code in particular, since a) being able to check code with valgrind or similar memory safety tools can give insights a standard review wouldn't |
I kept seeing this thread pop up again and again, and today I decided to investigate this more seriously. As a spoiler, I am still slightly confused about the purpose of this tool :D. It overlaps with cargo-deny and clippy, and I also don't quite understand what having a "web of trusted reviewers", with code reviews being git files, actually does for me as a crate maintainer. Even their page states that:
Ok, that's great. Then it goes on to state:
Isn't cargo-deny (a newer option built specifically to deny/ban and warn you about shady crates) a better fit for that, rather than getting it as a side-effect of a code review tool? Then I went on to the getting started guide and I will show you an example here:
So it seems to keep track of thoroughness / understanding, and these fields are entered by humans. Ok sure, so each crate has a rating of qualityA and qualityB. Now, these things are obviously subjective measures; even the docs encourage people to be truthful. Some info and guidelines that confuse me even more:
This semi-gatekeepy vibe that I get from reading the docs, coupled with the fact that it seems to be one of those devices with 1000 features, has led me to conclude that I should just stop trying to understand it. After I have just
Honestly, cargo-deny is a way better tool for the things that matter, in my opinion. Speaking of trust, I probably trust it as it came out of Embark and I highly regard the people there :). In any case, if anyone can explain the purpose of this thing and how it relates to gamedev, I would super appreciate it. |
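For readers who haven't seen one, the thoroughness/understanding fields discussed above live in a crev "proof": a small YAML-ish document, wrapped in signed BEGIN/END armor lines and committed to the reviewer's proofs git repo. A rough sketch of its shape, with all values illustrative and the id/digest obviously fake:

```yaml
version: -1
date: "2020-01-01T00:00:00Z"
from:
  id-type: crev
  id: FAKE_BASE64_ID
  url: "https://github.com/example/crev-proofs"
package:
  source: "https://crates.io"
  name: arrayvec
  version: 0.5.1
  digest: FAKE_DIGEST
review:
  thoroughness: low        # none / low / medium / high
  understanding: medium    # none / low / medium / high
  rating: positive         # e.g. negative / neutral / positive / strong
comment: "Skimmed the unsafe blocks; nothing alarming."
```

The exact field set depends on the crev version; this is meant only to show that a proof is plain, diffable text rather than anything tool-locked.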
This is not a tool that is specific to just gamedev, but it's a general concern that we should also make sure that gamedev crates are handling properly the same as we're generally obligated to make sure that all gamedev crates follow SemVer and provide good and truthful docs and other crate quality concerns. |
I do get that, and usually the license is kind of make-or-break for a crate if you want to actually ship something that's not just another open source crate. But for example, let me be more specific with the bits I don't understand:
|
|
@bitshifter That is neat. I don't think it makes
@AlexEne I love the numbers because then I can also use numbers to sort my reply:
|
|
GitHub stars was meant as a funny example, but let's take a crate's usage numbers and crev rating. I just want to know if they ever differ :). I don't know how to check that now. For security I'd strongly suggest people go with what cargo-deny does, or tools like AddressSanitizer, or other automated checkers, rather than "some person with a reputation of x took a look at it and said it's good enough" :D. This is not how you assess security-related things. I do see value in automated things like clippy / cargo-deny or other tools like ThreadSanitizer / AddressSanitizer, but to me this is at best a different kind of popularity contest, and I am still not sure of the usefulness. For the people that mentioned they found it useful: what actions did you take based on this tool's output? Did you replace a dependency? Get a confirmation that your dependency is good, etc.? How exactly do you use it? Do you use it to review things / check the quality of the dependencies you have? From what I see, the status on each of the "quality" markers can be low/medium/high. What does this even mean? What does medium mean? It's at best as useful as Uber ratings from what I see. I guess my ultimate point here is: is this really a thing we (the gamedev working group) should encourage people to use (I don't see it promoted by the other WGs)? For example, is this a thing we expect a game studio to use? |
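For comparison with the crev workflow, cargo-deny is driven entirely by a checked-in `deny.toml` and a single CI command (`cargo deny check`). A minimal sketch follows; the section names come from the cargo-deny docs, but the specific policy values are just examples, and the key set has changed somewhat across cargo-deny versions:

```toml
# deny.toml - checked into the repository, enforced in CI
# via `cargo deny check`.

[advisories]
# Fail on crates with known RUSTSEC vulnerabilities.
vulnerability = "deny"
unmaintained = "warn"

[licenses]
# Only allow licenses we can actually ship under.
allow = ["MIT", "Apache-2.0", "Zlib"]

[bans]
# Warn when the same crate appears at multiple versions.
multiple-versions = "warn"

[sources]
# Refuse dependencies from unknown registries or git sources.
unknown-registry = "deny"
unknown-git = "deny"
```

This illustrates the difference in kind: cargo-deny automates mechanical policy checks, while crev records human judgment; they are complements rather than substitutes.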
I don't understand in what way you think there's some sort of popularity contest, and until that's cleared up I don't think any of the rest of your points need to be addressed because I think they all stem from the same confusion. |
I think my questions were clear and targeted, regardless of my first impression of this tool. I will repeat them here: For the people that mentioned they found it useful: what actions did you take based on this tool's output? Did you replace a dependency? Get a confirmation that your dependency is good, etc.? How exactly do you use it? Do you use it to review things / check the quality of the dependencies you have? From what I see, the status on each of the "quality" markers can be low/medium/high. What does this even mean? What does medium mean? It's at best as useful as Uber ratings from what I see. I guess my ultimate point here is: is this really a thing we (the gamedev working group) should encourage people to use (I don't see it promoted by the other WGs)? For example, is this a thing we expect a game studio to use? |
I chose to trust only a very small number of people and so none of the crates that I bothered to check had any existing reviews. Instead I found myself reviewing crates. |
Plenty of common crates are chock full of unsafe if not outright unsound code. Popularity is a poor approximation of quality.
Automatic checks are useful but insufficient. Part of a proper security assessment/review of a crate can consider whether the crate has good miri/asan/fuzz/test coverage, and whether they're employed in a way that's sufficient to catch the kinds of errors I'd expect the crate to run across.
It has triggered manual source code review of various crates for me, resulting in several issues filed and fixed against crates due to unsound APIs. Quality markers are admittedly a bit fuzzy - I define how I'm using them in the readme of my crev-proofs repository, and then further enumerate my exact rationale in the comment section of my review... which I've been told is handy by some of the people who read said reviews, and is also handy when re-reviewing a crate on account of a new release. I've chosen to avoid or replace some dependencies as a result as well. |
I'm not sure how this is related to the gamedev WG, or rather what actions the gamedev WG is supposed to take. Yes, these crates should be dependable and sound, but the same is true for a lot of crates. I think it would be more productive if gamedev collaborated with the Unsafe Code Guidelines and Secure Code (who already perform security audits and publish crev reviews) working groups on improving these crates, rather than trying to take on this work entirely themselves. |
The UCG WG is not about doing code reviews, and they would almost surely not be interested in doing any of this work. The Secure Code WG is actually already listing crev as a tool to use, and one of their members was the one to push people to using |
Clarification
It came up in the wg Discord channel from Alex that there has been some miscommunication about who does this work. The process of doing reviews is not an extra burden for library authors, and absolutely does not place any additional work requirements on them. If they choose to put out a review of their crate that's fine, but if they choose not to participate that's also fine. A review of a crate can be done by anyone at any time, and each person's personal web of which reviews they choose to trust is accounted for by |
IMO However, "what actions did you take off this tool's output" is really a question that hits the heart of the matter, so thanks. Personally I've used its feedback to replace some dependencies with simpler ones ( As for review quality guidelines, I find the crev guidelines pretty pragmatic and understandable. They're in the template for creating a @XAMPPRocky I made this issue here because in my experience there's a decent but not huge assortment of crates that are very heavily used in gamedev and maybe don't get much use outside of it, so we have a real ecosystem to work with that is specialized enough to be useful and small enough to be feasibly tackled. We definitely should be working with other groups too, but I didn't even know the So, maybe this issue should be broader: "how should the gamedev WG make sure key crates are high quality?" |
Thanks, things have been clarified, so thanks all for the input on this. I think that's a great question to tackle; crev is one of the tools that can offer some benefit in this quality space. I am glad people posted the actions they took based on it, as examples of usage are really lacking in the official docs, so this really helps me (and I hope others) in understanding how things should flow: who does the review (I really thought for the longest time that the ask was that I, as the hypothetical crate provider, have to provide reviews for my dependencies and publish them), how that review is used, and what actions people take based off them. In the end, it does have the implicit assumption that people are using this in the first place to assess the quality of something, or that the reviews out there are discoverable (or valuable, and here comes the trust part), so there are still rough parts in this space. E.g. how do I find @Lokathor's reviews, etc.? I know, I probably should go read the docs again, but as a user I do expect things to be easier than they are today, so it still has a lot of rough edges. Is it based on GitHub username? Is it expected for me to ask him what his reviewer id / repo is, etc.? (I really don't remember how this should work, so please let's not dwell on this comment.) It's just meant to be an example of a rough edge that might be unclear to users trying this thing out.
Next steps I propose
I think for the particular issue here, I propose we have a way to track these reviews that we (some people of this WG) have started doing. As I said, for a long time it was unclear to me even who does these reviews. Maybe let's unify them in a repo here? (just an idea) |
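On the discoverability question ("how do I find someone's reviews?"): as far as I can tell, what you need is the URL of that person's proofs repository, which by convention is a plain git repo named crev-proofs under their account. A hedged sketch, assuming the conventional naming (the exact URL would need to be confirmed with the reviewer):

```shell
# Fetch proofs from one specific repo without trusting it yet
# (URL assumed from the crev-proofs naming convention).
cargo crev repo fetch url https://github.com/Lokathor/crev-proofs

# Or fetch and add that ID to your trusted set in one step.
cargo crev trust https://github.com/Lokathor/crev-proofs

# Then re-check your dependency tree against the newly fetched reviews.
cargo crev crate verify
```

So in practice it is URL-based rather than username-based, which is part of the rough edge being described here.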
I think something goal oriented like this is very reasonable. Starting with the very basics: I want to discover, use, and be able to recommend high quality crates. "High quality" here includes safe and sound - no security vulnerabilities, not a crashy mess, and sound APIs, so my code is also safe and sound for the same reasons.
Why use crev?
To find what crates I'm already using, and whether I've audited the code before. To find and share reviews with an admittedly currently very niche audience - other crev users. To Alex's point - crev reviews are not particularly readable, nor easily discovered by the wider audience of Rust users. To this end I've been aiming to duplicate my reviews in a GitHub repository, so you only need a browser to read: MaulingMonkey/rust-reviews/reviews/arrayvec.md is a good example of what I'm striving for. The root index is a work in progress and partially automated - I'm working on improving that too; I need to reimplement categorization. In the past I've had long rambling comments in my crev reviews, but going forward those may instead become links to reviews on GitHub - where they're more easily readable. Crev would still be useful for coordination and verification and stuff.
Why review crates?
There's a bunch of sites out there for discovering crates, like arewegameyet, arewewebyet, areweguiyet, crates.io, lib.rs. But I want something more curated... I want to know the pros/cons of alternatives, their strengths/weaknesses, and opinions as to whether someone other than the author can recommend the crate to me. There's not a lot of that kind of curation going on, so I'm resorting to doing it for myself, for my own benefit. I share my curation list to make it easier to recommend things to others (I don't have to repeat or recall what I liked about a crate).
Why audit code?
There's a lot of unsafe and unsound Rust code out there, including in some of the crates I'm using.
If I'm being particularly paranoid, I'd like to catch malicious code as well. But just not wanting to miss a project deadline because I had to spend two weeks hunting down a heisenbug crash every couple of months is enough reason to want to start auditing code.
What should the WG do, specifically?
So I just listed a whole bunch of personal goals and actions. What needs more than just personal action? Well, from this thread, coordination and discoverability seem like priorities. Coordination could perhaps be done through existing channels like Rust Safety Dance, the Rust Secure Code WG, the RPL Discord's
Discoverability - I don't know of a great way to boost the signal for crev directly. But perhaps when I audit/review a crate and it gets into a state I like, I can suggest cross-linking the review? E.g. arrayvec could add a line to their Readme.md? It feels a bit too self-aggrandizing to just add my reviews to other people's Readmes, but maybe if there's a few of us...? Or established precedent? Could also do https://shields.io/ badges, link crev itself to boost the signal of that specifically, ...
|
I think, thinking about making these crates high quality in terms of
I'm sure that's true; however, I would say that none of the crates you listed above fit that description. I think it would be more helpful to mention the specific crates that you think need improvement (or are missing) and need to be stable for gamedev specifically, rather than a generalist approach. |
I recently learned about the crev tool, which produces crypto-signed code reviews to build some manner of web of trust. The main innovation is that code reviews are just text files in git, and each ID includes a list of other IDs whose reviews it also trusts. While it's a young idea yet, it appeals to me, and since the WG is a place for people to get together and talk about gamedev ecosystem stuff, it seems like a possible place to coordinate efforts to get trusted reviews for a lot of baseline gamedev-related crates.
I'm sleepy and explaining this poorly. Anyway. Does this interest anyone else?