Merge remote-tracking branch 'origin/master' into r-updates
jbedo committed May 17, 2022
2 parents f07c969 + 343bf78 commit e052be0
Showing 3,528 changed files with 133,205 additions and 149,817 deletions.
5 changes: 5 additions & 0 deletions .editorconfig
@@ -55,6 +55,11 @@ trim_trailing_whitespace = unset
[*.lock]
indent_size = unset

# trailing whitespace is an actual syntax element of classic Markdown/
# CommonMark to enforce a line break
[*.md]
trim_trailing_whitespace = unset

[eggs.nix]
trim_trailing_whitespace = unset

4 changes: 2 additions & 2 deletions .github/CODEOWNERS
@@ -192,8 +192,8 @@
/nixos/tests/knot.nix @mweinelt

# Dhall
/pkgs/development/dhall-modules @Gabriel439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriel439 @Profpatsch @ehmry
/pkgs/development/dhall-modules @Gabriella439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriella439 @Profpatsch @ehmry

# Idris
/pkgs/development/idris-modules @Infinisil
34 changes: 34 additions & 0 deletions .github/ISSUE_TEMPLATE/build_failure.md
@@ -0,0 +1,34 @@
---
name: Build failure
about: Create a report to help us improve
title: ''
labels: '0.kind: build failure'
assignees: ''

---

### Steps To Reproduce
Steps to reproduce the behavior:
1. build *X*

### Build log
```
log here if short otherwise a link to a gist
```

### Additional context
Add any other context about the problem here.

### Notify maintainers
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->

### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.

```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```
11 changes: 6 additions & 5 deletions .github/workflows/update-terraform-providers.yml
@@ -25,14 +25,15 @@ jobs:
git commit -m "${{ steps.setup.outputs.title }}" providers.json
popd
- name: create PR
uses: peter-evans/create-pull-request@v3
uses: peter-evans/create-pull-request@v4
with:
body: |
Automatic update of terraform providers.
Automatic update by [update-terraform-providers](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/update-terraform-providers.yml) action.
Created by [update-terraform-providers](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/update-terraform-providers.yml) action.
Check that all providers build with `@ofborg build terraform-full`
Check that all providers build with:
```
@ofborg build terraform-full
```
branch: terraform-providers-update
delete-branch: false
labels: "2.status: work-in-progress"
22 changes: 11 additions & 11 deletions doc/builders/fetchers.chapter.md
@@ -10,7 +10,7 @@ For those who develop and maintain fetchers, a similar problem arises with chang

## `fetchurl` and `fetchzip` {#fetchurl}

Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of `fetchurl` is provided below.

```nix
{ stdenv, fetchurl }:
@@ -24,9 +24,9 @@ stdenv.mkDerivation {
}
```

The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip`, on the other hand, will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
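
For comparison, a minimal `fetchzip` call looks much the same; the URL below is a placeholder and `lib.fakeSha256` stands in for the real hash:

```nix
{ lib, fetchzip }:

fetchzip {
  # hypothetical release tarball; fetchzip unpacks it into the store
  url = "https://example.org/releases/hello-1.0.tar.gz";
  sha256 = lib.fakeSha256; # replace with the real hash after the first build attempt
}
```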

`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
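
As an illustration, a `fetchpatch` call is typically placed in a derivation's `patches` list; the URL here is a placeholder:

```nix
{ lib, fetchpatch }:

fetchpatch {
  # hypothetical upstream fix; the patch is normalized before the checksum is computed
  url = "https://github.com/owner/repo/commit/0123456789abcdef.patch";
  sha256 = lib.fakeSha256; # replace with the real hash
}
```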

Most other fetchers return a directory rather than a single file.

@@ -38,9 +38,9 @@ Used with Subversion. Expects `url` to a Subversion directory, `rev`, and `sha25

Used with Git. Expects `url` to a Git repo, `rev`, and `sha256`. `rev` in this case can be the full git commit id (SHA1 hash) or a tag name like `refs/tags/v1.0`.

Additionally the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposing to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout.
Additionally, the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true`, which means that the `.git` directory of the clone won't be removed after checkout.
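
A sketch of a `fetchgit` call using some of these optional arguments (URL, revision, and hash are placeholders):

```nix
{ lib, fetchgit }:

fetchgit {
  url = "https://example.org/some/repo.git";
  rev = "refs/tags/v1.0";
  sha256 = lib.fakeSha256;  # replace with the real hash
  fetchSubmodules = true;   # also fetch the repository's submodules
  # deepClone = true;       # clone the full history; implies leaveDotGit = true
}
```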

If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from server, see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more infomation:
If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from the server; see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more information:

```nix
{ stdenv, fetchgit }:
@@ -78,29 +78,29 @@ A number of fetcher functions wrap part of `fetchurl` and `fetchzip`. They are m

## `fetchFromGitHub` {#fetchfromgithub}

`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available but `sha256` is currently preferred.
`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g. `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are available, but `sha256` is currently preferred.
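
A minimal sketch of a `fetchFromGitHub` call, with placeholder values:

```nix
{ lib, fetchFromGitHub }:

fetchFromGitHub {
  owner = "octocat";       # GitHub user or organization
  repo = "hello-world";    # repository name
  rev = "v1.0";            # commit hash or tag
  sha256 = lib.fakeSha256; # hash of the extracted directory
}
```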

`fetchFromGitHub` uses `fetchzip` to download the source archive generated by GitHub for the specified revision. If `leaveDotGit`, `deepClone` or `fetchSubmodules` are set to `true`, `fetchFromGitHub` will use `fetchgit` instead. Refer to its section for documentation of these options.

## `fetchFromGitLab` {#fetchfromgitlab}

This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.
This is used with GitLab repositories. The arguments expected are very similar to `fetchFromGitHub` above.

## `fetchFromGitiles` {#fetchfromgitiles}

This is used with Gitiles repositories. The arguments expected are similar to fetchgit.
This is used with Gitiles repositories. The arguments expected are similar to `fetchgit`.

## `fetchFromBitbucket` {#fetchfrombitbucket}

This is used with Bitbucket repositories. The arguments expected are very similar to `fetchFromGitHub` above.

## `fetchFromSavannah` {#fetchfromsavannah}

This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.
This is used with Savannah repositories. The arguments expected are very similar to `fetchFromGitHub` above.

## `fetchFromRepoOrCz` {#fetchfromrepoorcz}

This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.
This is used with repo.or.cz repositories. The arguments expected are very similar to `fetchFromGitHub` above.

## `fetchFromSourcehut` {#fetchfromsourcehut}

@@ -111,4 +111,4 @@ or "hg"), `domain` and `fetchSubmodules`.

If `fetchSubmodules` is `true`, `fetchFromSourcehut` uses `fetchgit`
or `fetchhg` with `fetchSubmodules` or `fetchSubrepos` set to `true`,
respectively. Otherwise the fetcher uses `fetchzip`.
respectively. Otherwise, the fetcher uses `fetchzip`.
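
As a sketch, a `fetchFromSourcehut` call might look as follows (all values are placeholders):

```nix
{ lib, fetchFromSourcehut }:

fetchFromSourcehut {
  owner = "~someuser";     # sourcehut owners are written with a leading "~"
  repo = "somerepo";
  rev = "v1.0";
  sha256 = lib.fakeSha256; # replace with the real hash
}
```
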
16 changes: 8 additions & 8 deletions doc/builders/images/dockertools.section.md
@@ -58,7 +58,7 @@ After the new layer has been created, its closure (to which `contents`, `config`

At the end of the process, only one new single layer will be produced and added to the resulting image.

The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage), it would be `redis/latest`.

It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
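
For example, using the `redis` image from `pkgs.dockerTools.examples` in `nix repl` (a sketch; the attribute path is an assumption):

```ShellSession
nix-repl> :l <nixpkgs>
nix-repl> dockerTools.examples.redis.buildArgs
```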

@@ -87,15 +87,15 @@ pkgs.dockerTools.buildImage {
}
```

and now the Docker CLI will display a reasonable date and sort the images as expected:
Now the Docker CLI will display a reasonable date and sort the images as expected:

```ShellSession
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
```

however, the produced images will not be binary reproducible.
However, the produced images will not be binary reproducible.

## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}

@@ -119,13 +119,13 @@ Create a Docker image with many of the store paths being on their own layer to i

`contents` _optional_

: Top level paths in the container. Either a single derivation, or a list of derivations.
: Top-level paths in the container. Either a single derivation, or a list of derivations.

*Default:* `[]`

`config` _optional_

: Run-time configuration of the container. A full list of the options are available at in the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
: Run-time configuration of the container. A full list of the options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).

*Default:* `{}`

@@ -195,9 +195,9 @@ pkgs.dockerTools.buildLayeredImage {

Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images.

Modern Docker installations support up to 128 layers, however older versions support as few as 42.
Modern Docker installations support up to 128 layers, but older versions support as few as 42.

If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However it will be impossible to extend the image further.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further.

The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the Image configuration.
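
A sketch of setting `maxLayers` for an image that will not serve as a base for further builds (name, tag, and contents are placeholders):

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  tag = "latest";
  contents = [ pkgs.hello ];
  # safe only because no other image will be layered on top of this one
  maxLayers = 128;
}
```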

@@ -213,7 +213,7 @@ The image produced by running the output script can be piped directly into `dock
$(nix-build) | docker load
```

Alternatively, the image be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:
Alternatively, the image can be piped via `gzip` into `skopeo`, e.g., to copy it into a registry:

```ShellSession
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
6 changes: 3 additions & 3 deletions doc/builders/images/ocitools.section.md
@@ -1,10 +1,10 @@
# pkgs.ociTools {#sec-pkgs-ociTools}

`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that it makes no assumptions about the container runner you choose to use to run the created container.
`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that, it makes no assumptions about the container runner you choose to use to run the created container.

## buildContainer {#ssec-pkgs-ociTools-buildContainer}

This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory.The nix store of the container will contain all referenced dependencies of the given command.
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory. The nix store of the container will contain all referenced dependencies of the given command.

The parameters of `buildContainer` with an example value are described below:

@@ -30,7 +30,7 @@ buildContainer {
}
```

- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container
- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container.

- `mounts` specifies additional mount points chosen by the user. By default, only a minimal set of necessary filesystems is mounted into the container (e.g. procfs, cgroupfs).

2 changes: 1 addition & 1 deletion doc/builders/images/snaptools.section.md
@@ -33,7 +33,7 @@ in snapTools.makeSnap {

## Build a Graphical Snap {#ssec-pkgs-snapTools-build-a-snap-firefox}

Graphical programs require many more integrations with the host. This example uses Firefox as an example, because it is one of the most complicated programs we could package.
Graphical programs require many more integrations with the host. This example uses Firefox because it is one of the most complicated programs we could package.

``` {#ex-snapTools-buildSnap-firefox .nix}
let
10 changes: 5 additions & 5 deletions doc/builders/packages/citrix.section.md
@@ -4,13 +4,13 @@ The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a

## Basic usage {#sec-citrix-base}

The tarball archive needs to be downloaded manually as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store the package can be built and installed with Nix.
The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) need to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.

## Citrix Selfservice {#sec-citrix-selfservice}
## Citrix Self-service {#sec-citrix-selfservice}

The [selfservice](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with at least citrix_workspace_20_06_0 and later versions.
The [self-service](https://support.citrix.com/article/CTX200337) is an application for managing Citrix desktops and applications. Please note that this feature only works with `citrix_workspace_20_06_0` and later versions.

In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that you can configure the `selfservice` like this:
In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that, you can configure the `selfservice` like this:

```ShellSession
$ storebrowse -C ~/Downloads/receiverconfig.cr
@@ -19,7 +19,7 @@ $ selfservice

## Custom certificates {#sec-citrix-custom-certs}

The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/); however, this directory is a store path in `nixpkgs`. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:

```nix
with import <nixpkgs> { config.allowUnfree = true; };