
[REVIEW]: PICOS: A Python interface to conic optimization solvers #3915

Closed
45 of 60 tasks
whedon opened this issue Nov 13, 2021 · 85 comments

@whedon

whedon commented Nov 13, 2021

Submitting author: @Viech (Maximilian Stahlberg)
Repository: https://gitlab.com/picos-api/picos
Version: v2.4
Editor: @melissawm
Reviewers: @marwahaha, @GuillaumeDerval
Archive: 10.5281/zenodo.6053359

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/bd72442e34002c825f672716b2870c98"><img src="https://joss.theoj.org/papers/bd72442e34002c825f672716b2870c98/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/bd72442e34002c825f672716b2870c98/status.svg)](https://joss.theoj.org/papers/bd72442e34002c825f672716b2870c98)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@sfuxy & @marwahaha, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @melissawm know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Review checklist for @sfuxy

✨ Important: Please do not use the Convert to issue functionality when working through this checklist; instead, please open any new issues associated with your review in the software repository associated with the submission. ✨

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@Viech) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @marwahaha

✨ Important: Please do not use the Convert to issue functionality when working through this checklist; instead, please open any new issues associated with your review in the software repository associated with the submission. ✨

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@Viech) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @GuillaumeDerval

✨ Important: Please do not use the Convert to issue functionality when working through this checklist; instead, please open any new issues associated with your review in the software repository associated with the submission. ✨

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@Viech) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon

whedon commented Nov 13, 2021

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @sfuxy, @marwahaha it looks like you're currently assigned to review this paper 🎉.

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

@whedon

whedon commented Nov 13, 2021

PDF failed to compile for issue #3915 with the following error:

 Can't find any papers to compile :-(

@whedon

whedon commented Nov 13, 2021

Software report (experimental):

github.com/AlDanial/cloc v 1.88  T=0.97 s (188.5 files/s, 59811.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                         152          10394          13823          27256
reStructuredText                22           1063           1364           2899
YAML                             3             35             72            300
DOS Batch                        1             23              1            166
make                             1             28              6            145
Bourne Shell                     3             32             58             75
-------------------------------------------------------------------------------
SUM:                           182          11575          15324          30841
-------------------------------------------------------------------------------


Statistical information for the repository 'd936f91e597d826bb4579fb6' was
gathered on 2021/11/13.
The following historical commit information, by author, was found:

Author                     Commits    Insertions      Deletions    % of changes
Arno Ulbricht                    2           111              7            0.04
Guillaume                       51         31019          53135           25.67
Guillaume Sagnol               157         39135          15114           16.55
Guillaume Sagnol (th            15           949            169            0.34
Jin H. Jung                      1             6              5            0.00
Leonard Kleinhans                1             4              4            0.00
Malte Renken                     1             2              2            0.00
Maximilian Stahlberg           977         93761          64502           48.27
Peter Wittek                    14           286             58            0.10
Petur Helgi                      1             7              2            0.00
Ray Ganardi                      1             1              1            0.00
Seeland                          5           171             20            0.06
Sergio Callegari                 1             5              3            0.00
Shaun Ren                        4            28             18            0.01
Xavier Valcarce                  3            41              7            0.01
bzffourn                         4           188             28            0.07
bzfsagno                        67         23885           4966            8.80
bzfulbri                         1             2              0            0.00
gsagnol                          1             6              2            0.00
[email protected]             1            87              0            0.03
ziofil                           3            46             36            0.03
Élie Gouzien                     1            17              3            0.01

Below are the number of rows from each author that have survived and are still
intact in the current revision:

Author                     Rows      Stability          Age       % in comments
Guillaume                    12            0.0         74.5                0.00
Guillaume Sagnol           2180            5.6         20.0                7.98
Maximilian Stahlberg      49151           52.4         22.2               16.50
Peter Wittek                  2            0.7         67.3                0.00
bzfulbri                    125         6250.0         69.8               27.20
Élie Gouzien                  3           17.6         32.3                0.00

@melissawm

@whedon generate pdf from branch joss

@whedon

whedon commented Nov 13, 2021

Attempting PDF compilation from custom branch joss. Reticulating splines etc...

@whedon

whedon commented Nov 13, 2021

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@melissawm

@Viech @sfuxy @marwahaha this is the review thread for the paper. All of our communications will happen here from now on.

Both reviewers have checklists at the top of this thread with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#3915 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use Whedon (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@melissawm) if you have any questions/concerns.

@mjstahlberg

Notifying my co-author @gsagnol.

@whedon

whedon commented Nov 27, 2021

👋 @marwahaha, please update us on how your review is going (this is an automated reminder).

@whedon

whedon commented Nov 27, 2021

👋 @sfuxy, please update us on how your review is going (this is an automated reminder).

@melissawm

Hello, folks! Just as a reminder, if you have any questions or need to update the status of the review, please do so here. Thanks!

@marwahaha

marwahaha commented Dec 16, 2021

Very nice library!

Some comments below:

We can now use our variables to create affine expressions, which are stored as instances of ComplexAffineExpression or of its subclass AffineExpression. In fact, our variables already have the full affine expression interface to work with! Let’s transpose them using the suffix .T:

  1. (TUTORIAL) Why does taking the transpose cause an "affine expression"? Can you define this term somewhere?
  2. (EXAMPLES) It was hard to try out an example. Even though several of them are listed, I just wanted to try one out (the MaxCut relaxation) and could not figure out how to run it without finding all the code blocks and pasting them in from other sections. Could each example be self-contained?
  3. (DOCS) Who exactly is your target audience?
  4. (DOCS) Not all methods are documented. I don't think it's necessary to document all of them, but each "main" method should be documented.
  5. (PAPER) What do you mean by handling "distributionally robust optimization models to deal with uncertainty in the data"?

Aside from these comments, I approve this submission.

@mjstahlberg

Hello @marwahaha,

thank you for your review and the kind words!

I'll answer your points now and will try to find time to work on the changes next week.


  1. (TUTORIAL) Why does taking the transpose cause an "affine expression"? Can you define this term somewhere?

Let me first explain what happens here in detail:

PICOS variables are a special kind of affine expression. Internally, any (multidimensional) variable object is associated with a numeric vector whose entries correspond to the scalar variables handled by solvers, so that the numeric result reported by the solver can be written directly to those vectors. Every entry of a PICOS matrix expression (including matrix variables) is thus stored as a linear combination of any number of such scalar variables plus a constant term; in other words, each entry is an affine expression of those variables. Therefore the lower left entry of a two-by-two variable object X would be stored as e.g. 1*x[1] + 0 while that of X.T is stored as e.g. 1*x[2] + 0, where x is some vectorization of the mathematical object represented by X.

Now, the main difference between variable objects and other expressions is that you can directly assign a numeric value to a variable. For other expressions, including general affine ones, such an assignment would already amount to solving an optimization problem. Note that recent versions of PICOS do allow you to assign values to arbitrary affine expressions, as this only amounts to solving a system of linear equations, but other expressions can only be valued by valuing the variables that appear in them.

Now, this is of course a bit much to include in the tutorial, so what part do you think is important to mention? Is it sufficient to explain that "affine = linear + constant offset" or should the reader be made aware of any of the internals?


  2. (EXAMPLES) It was hard to try out an example. Even though several of them are listed, I just wanted to try one out (the MaxCut relaxation) and could not figure out how to run it without finding all the code blocks and pasting them in from other sections. Could each example be self-contained?

I admit that most of our examples are a bit clunky and not suited for a quick test. Since their code listings are long, I would not want to present them without some intertext or display of intermediate outputs. However, I can introduce a "Simple examples" section at the top where every code block can be copy-pasted individually into a console to reproduce it. Would that be sufficient for you?


  3. (DOCS) Who exactly is your target audience?

That is not easy to delimit: apart from quantum information, we don't really address a particular application or scientific field but rather allow an entire class of numerical algorithms to be implemented. These are of practical interest to fields as diverse as operations research, finance (e.g. portfolio optimization), physics, math (computing counterexamples), machine learning, and all sorts of engineering (I've personally used PICOS to optimize a Minecraft farm!).

In the paper we wrote:

PICOS was designed to be used by both application developers and researchers as well as instructors teaching courses on mixed integer, convex, conic, or robust optimization.

I can certainly put something like that also in the online introduction, although this is of course a rather vague answer to your question.


  4. (DOCS) Not all methods are documented. I don't think it's necessary to document all of them, but each "main" method should be documented.

The API Reference documents every public class, method, and function, including mathematical details and additional examples (see Ellipsoid for an example of both). In particular, this page lists everything available in the picos namespace.

Having an application example for every kind of task that PICOS can handle would be great but is not feasible for us, as we maintain the software alongside our current research. What particular methods do you feel are missing from the tutorial or examples?


  5. (PAPER) What do you mean by handling "distributionally robust optimization models to deal with uncertainty in the data"?

(Distributionally) robust optimization deals with problems where a cautious decision is required that accounts for uncertainty in the data that you are analyzing. For instance, in finance you could look at the past performance of assets and assume that they will behave similarly in the future, modeled in terms of an estimated probability distribution over their returns. Now if you use plain optimization (more precisely, stochastic programming) to find an investment strategy, you are in a mathematically strict sense guaranteed to be disappointed by the results, as the method has an intrinsic bias towards overly positive estimates. This is known as the optimizer's curse, and DRO is a method to avoid it.

We mention this in the paper as one feature where we are currently ahead of the competition. As the competition is pretty tough and all of our tools have been around for a decade, such a we-have-it-but-you-don't feature will always be somewhat niche and its purpose not immediately clear to a general audience. In a paper as short as this one, I can hardly include an explanation of these two models (RO and DRO are related but different) beyond of what there already is, that is mentioning that they deal with uncertainty and giving a reference (which happens to be about PICOS).

Instead, I propose adding an RO example to the documentation. Note that concise DRO examples are hard to craft (some of the techniques are from 2020 and don't even have a known application yet) and would overstrain the automatic validation that has only open source solvers at its disposal.

@marwahaha

  1. I think it is sufficient to explain that "affine = linear + constant offset".
  2. Including simple examples works, possibly with a statement saying that these examples are a walkthrough. It was most frustrating when I copy-pasted a cell and needed to import packages myself.
  3. It would be nice to include a short statement saying that PICOS is for any practitioner of numerical optimization (in the docs). The paper is fine.
  4. Sorry. I meant "docstring" here. It might be helpful to add docstrings to a few of the primary methods.
  5. Adding an RO example to the docs is a good idea; is the advantage that you handle RO or DRO?

@mjstahlberg

Thank you for the clarification!


  1. I think it is sufficient to explain that "affine = linear + constant offset".

I added the following to the Tutorial:

The fundamental building blocks of optimization models are affine (matrix) expressions. Each entry of such an expression is simply a linear combination of any number of scalar variables plus a constant offset. The variable objects that we have defined above are special cases of affine expressions that refer to themselves via an identity transformation.
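To illustrate the point for readers of this thread (a minimal sketch, assuming the current PICOS 2.x API; this is not the tutorial listing itself):

import picos

X = picos.RealVariable("X", (2, 2))  # a 2x2 real matrix variable

# The variable already offers the affine expression interface, so linear
# operations and constant offsets again yield affine expressions:
print(type(X.T))          # transposition gives an affine expression
print(type(2*X + X.T))    # so do linear combinations of variables...
print(type(X[0, 0] + 1))  # ...and adding a constant offset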


2. Including simple examples works, possibly with a statement saying that these examples are a walkthrough. It was most frustrating when I copy-pasted a cell and needed to import packages myself.

I've added a Quick examples section with three new short examples (least norm, robust optimization, integer programming) that are fully contained in independent listings, including all imports and a shebang line.

I've further added the following to the start of the Examples page:

The quick examples are all self-contained and can be copy-pasted to a source file or Python console to reproduce them. Most of the remaining examples have a tutorial character and are presented in multiple dependent code sections.

(All examples are CI/CD-validated, so even those in multiple blocks can be reproduced by executing them in order.)
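To give a rough idea of the format, here is a minimal self-contained sketch in the spirit of those quick examples (assuming the PICOS 2.x API and an open source solver such as CVXOPT; the data and the exact listing are made up for this comment, they are not the documentation's example):

#!/usr/bin/env python3
# Find the least norm solution of an underdetermined linear system A*x = b.

import numpy as np
import picos

A = picos.Constant("A", np.random.rand(2, 5))  # a wide matrix
b = picos.Constant("b", np.random.rand(2))
x = picos.RealVariable("x", 5)

P = picos.Problem()
P.minimize = abs(x)   # Euclidean norm of the affine expression x
P += A*x == b         # subject to the linear system
P.solve(solver="cvxopt")

print("Least norm solution:", x.value)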


3. It would be nice to include a short statement saying that PICOS is for any practitioner of numerical optimization (in the docs). The paper is fine.

I've changed the first sentence of the README (Introduction in the docs) to:

PICOS is a user friendly Python API to several conic and integer programming solvers, designed to be used by both application developers and researchers as well as instructors teaching courses on mathematical optimization.


4. Sorry. I meant "docstring" here. It might be helpful to add docstrings to a few of the primary methods.

All public (non-underscore) functions, methods, and properties, and every __init__ method, already have a docstring. This, as well as the docstrings' PEP 257 compliance, is checked by the CI/CD pipeline.

Thus, you must be referring to special/magic methods (e.g. __add__, __le__) that are in a sense both public and private: they do begin with an underscore, and the user would indeed not call them directly, but might still be interested in their documentation. Indeed, the special methods used for mathematical operations were only defined for affine expression types, and they appeared nowhere in the online documentation.

I have now moved the docstrings of these "mathematical" special methods to the Expression base class, so that they are defined (and thus can be read using help()) for every expression type. I've further included them in the online documentation, although there they show up only when

  1. you are looking at the Expression base class or
  2. you are looking at a subclass that actually implements the operation.

There is not much I can do about this but I also find it sensible as it introduces little redundancy. Note that with every expression you can follow the link at the top to its base class until you arrive at a parent class where the method is documented.

Apart from expressions, I've now documented relevant special methods for Objective and Valuable (and Problem.__iadd__).
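As a side note on the help() part, here is a generic Python sketch (hypothetical toy classes, not the PICOS source) of the inheritance behaviour this relies on: a docstring placed on the base class's special method is picked up by inspect.getdoc() (and hence help()) even for subclasses that override the method without repeating the docstring (Python 3.5+):

import inspect

class Expression:
    def __add__(self, other):
        """Denote addition of this expression with another one or a constant."""
        raise NotImplementedError

class AffineExpression(Expression):
    def __add__(self, other):  # overriding implementation without its own docstring
        return NotImplemented

print(inspect.getdoc(AffineExpression.__add__))  # falls back to the base class docstring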


5. Adding an RO example to the docs is a good idea; is the advantage that you handle RO or DRO?

Pyomo has two third-party RO extensions that I know of (PyROS, ROmodel); CVXPY does not seem to have one yet. For DRO I think we are currently the only general purpose interface with direct support, though there are certainly specialized Python interfaces for both techniques that have a narrower scope than PICOS/Pyomo/CVXPY.

@marwahaha

This all looks great to me! Thanks.

@mjstahlberg

Thank you for taking the time to review!

@mjstahlberg

Dear @sfuxy, could you try to complete your review this month? There is currently at least one person looking to cite PICOS in a paper.

@mjstahlberg

(I've sent them an email.)

@melissawm

Thanks, @Viech - @sfuxy gentle ping on this :)

@mjstahlberg

I did not get a response by mail either. As this has now been open for over two months, could you assign another reviewer @melissawm?

@melissawm

I think that sounds reasonable @Viech - do you have suggestions?

@mjstahlberg

Sure @melissawm, there are a few candidates in the list who have Python as a preferred language and seem to be particularly interested in mathematical optimization or OR. Ordered by newest list entry first:

  1. GuillaumeDerval
  2. Viech 😉
  3. jdreo
  4. g4brielvs
  5. torressa

I suggest asking in this order, skipping anyone who may be actively involved in the software. At this point it would be nice if the reviewer could start soon.

@melissawm

@whedon set v2.4 as version

@whedon

whedon commented Feb 16, 2022

OK. v2.4 is the version.

@melissawm

@whedon recommend accept

@whedon

whedon commented Feb 16, 2022

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@whedon commands

@melissawm

@whedon recommend-accept

whedon added the recommend-accept label Feb 16, 2022
@whedon

whedon commented Feb 16, 2022

Attempting dry run of processing paper acceptance...

@whedon

whedon commented Feb 16, 2022

PDF failed to compile for issue #3915 with the following error:

 Can't find any papers to compile :-(

@melissawm

@whedon recommend-accept from branch joss

@whedon

whedon commented Feb 16, 2022

Attempting dry run of processing paper acceptance...

@whedon

whedon commented Feb 16, 2022

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1080/23307706.2017.1397554 is OK
- 10.1007/s12532-011-0026-8 is OK
- 10.5281/zenodo.6077420 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented Feb 16, 2022

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#2968

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#2968, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true from branch joss 

@mjstahlberg

Now that the links on the sidebar are supposed to work, I checked them and found that your URL http://mtm.ufsc.br/~melissa is broken. Sorry for not noticing earlier!

@melissawm

@whedon recommend-accept from branch joss

@whedon

whedon commented Feb 16, 2022

Attempting dry run of processing paper acceptance...

@whedon

whedon commented Feb 16, 2022

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1080/23307706.2017.1397554 is OK
- 10.1007/s12532-011-0026-8 is OK
- 10.5281/zenodo.6077420 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented Feb 16, 2022

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#2969

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#2969, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true from branch joss 

@melissawm

Should be fixed now - thanks @Viech !

@mjstahlberg

LGTM. Thank you!

@danielskatz

@whedon accept deposit=true from branch joss

draft looks good to me!

whedon added the accepted and published labels Feb 16, 2022
@whedon

whedon commented Feb 16, 2022

Doing it live! Attempting automated processing of paper acceptance...

@whedon

whedon commented Feb 16, 2022

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon

whedon commented Feb 16, 2022

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.03915 joss-papers#2970
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.03915
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@danielskatz

Congratulations to @Viech (Maximilian Stahlberg) and co-author!!

And thanks to @marwahaha and @GuillaumeDerval for reviewing, and @melissawm for editing!
We couldn't do this without you!

@whedon

whedon commented Feb 16, 2022

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.03915/status.svg)](https://doi.org/10.21105/joss.03915)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.03915">
  <img src="https://joss.theoj.org/papers/10.21105/joss.03915/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.03915/status.svg
   :target: https://doi.org/10.21105/joss.03915

This is how it will look in your documentation:

DOI

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@mjstahlberg

mjstahlberg commented Feb 16, 2022

Awesome! Thank you very much @melissawm, @marwahaha and @GuillaumeDerval for your time and effort! Also to @danielskatz for proofreading and pushing the button! ❤️
