
[Fleet] Verify integrations using package signatures from EPR and public key bundled with Kibana #133822

Closed
21 of 22 tasks
kpollich opened this issue Jun 7, 2022 · 34 comments · Fixed by #137239
Assignees
Labels
QA:Validated (Issue has been validated by QA), Team:Fleet (Team label for Observability Data Collection Fleet team)

Comments

@kpollich
Member

kpollich commented Jun 7, 2022

Summary

We want to provide Kibana users with confidence that the integrations they're installing have not been corrupted or otherwise tampered with. To facilitate this, we're adding support for signing packages: elastic/package-registry#728.

In the integrations UI, we'll need to add elements that indicate whether an integration is verified or not, as well as a service to handle verifying integrations using a public key bundled with Kibana.

Implementation

  • Add a build/bootstrap step to fetch Elastic's GPG key from https://artifacts.elastic.co/GPG-KEY-elasticsearch and store it on disk
    • Create a utility to grab this public key as needed in the Fleet codebase
  • Add support for providing an alternative public key file path via kibana.yml in cases where a new public key should be used
  • Create a service method for verifying an integration using the bundled public key and a .zip.sig file published in Package Storage v2, e.g. https://package-storage.elastic.co/artifacts/packages/${packageName}-${packageVersion}.zip.sig (a rough sketch of this follows the list below)
  • Display a badge for each unverified integration on the "Browse integrations" grid
    • Deferred due to constraints around downloading an integration's .zip archive in order to verify it
  • Display a callout for an unverified integration on the "Integration details" page
  • Display a callout + badges for unverified integrations on the "Installed integrations" grid, including a link to docs explaining the verification process
  • Ensure that any integrations installed prior to these changes are not immediately flagged as unverified (SO Migration?)
  • Address licensing of openpgp NPM module
    • Ensure LGPL is added to the license checker utility
    • Ensure the notice generator utility generates the proper LGPL licensing/dependency notice for the openpgp package
    • Display a modal asking the user if they wish to force install a package when a verification error is returned
    • Add a documentation link when one is available
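
As a rough illustration of the service method item above, verification could be wrapped in a small helper built on openpgp (the same calls used in the POC later in this thread). The function name and result shape here are placeholders, not the final Fleet API:

import * as openpgp from 'openpgp';

// Verify a package .zip buffer against its detached .zip.sig signature using
// the public key bundled with (or configured for) Kibana. Names are illustrative.
export async function verifyPackageArchiveSignature({
  archiveBuffer,      // Buffer containing the package .zip
  armoredSignature,   // contents of the .zip.sig file
  armoredPublicKey,   // contents of GPG-KEY-elasticsearch
}) {
  const publicKey = await openpgp.readKey({ armoredKey: armoredPublicKey });
  const message = await openpgp.createMessage({ binary: archiveBuffer });
  const signature = await openpgp.readSignature({ armoredSignature });

  const { signatures } = await openpgp.verify({
    verificationKeys: publicKey,
    signature,
    message,
  });

  try {
    await signatures[0].verified; // rejects if the signature does not match
    return { isVerified: true, keyId: signatures[0].keyID.toHex() };
  } catch (error) {
    return { isVerified: false, error };
  }
}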

Designs

(Design mockups attached as images in the original issue.)

Open questions

Answered questions

References

@kpollich added the Team:Fleet label on Jun 7, 2022
@elasticmachine
Contributor

Pinging @elastic/fleet (Team:Fleet)

@akshay-saraswat

akshay-saraswat commented Jun 7, 2022

How can Fleet verify each integration on the "browse integrations" grid without downloading a .zip file for every integration in existence?

True. Maybe we could download each version once and keep it in the cache until we get a different version of the package from EPR. The first pass will consume a lot of bandwidth and compute, but we will have to verify packages somehow before a user tries to install them.

For integrations installed prior to 8.4.0, will we be showing that the installed integration is unverified until they're updated? If I upgrade Kibana to 8.4, do all of my installed integrations get flagged as unverified?

No. That would create chaos and confusion, because users may not be aware that we have released such a feature. Anything installed after the 8.4 upgrade should be flagged as unverified if it fails validation.

@kpollich
Member Author

kpollich commented Jun 7, 2022

I found some fairly helpful documentation that addresses some of my questions:

https://github.com/elastic/elastic-package/blob/main/docs/howto/use_package_storage_v2.md

This doc details some of the usage of the Package Storage v2 API, which we'll need to leverage here in order to fetch signature files. Reading through it answered two of my questions:

What does the actual code process look like to verify a package? Download package -> Run some cryptographic operation (what specifically?) using private key -> Verify output matches what's published under signature_path returned from EPR?

I think the verification process will look something like this:

  1. Download an integration's .zip archive from https://package-storage.elastic.co/artifacts/packages/${packageName}-${packageVersion}.zip
  2. Fetch the integration's signature file from https://package-storage.elastic.co/artifacts/packages/${packageName}-${packageVersion}.zip.sig
  3. Verify the .zip file's signature using the GPG-KEY-elasticsearch public key hosted at https://artifacts.elastic.co/GPG-KEY-elasticsearch - or potentially this will be bundled with Kibana? Still unclear to me.

Verification will look something like

$ gpg --no-default-keyring  --keyring /path/to/pubkey.gpg --verify ${packageName}.zip.sig ${packageName}.zip

We'll need to use a JS implementation of GPG e.g. https://www.npmjs.com/package/openpgp#create-and-verify-detached-signatures. I don't think this is something that Kibana has currently, just from a cursory look around at dependencies and a quick grep. @joshdover might know better.

When can we expect the signature_path field mentioned in #126101 (comment) to be published in EPR? e.g. https://epr.elastic.co/package/fleet_server/1.1.1/

It won't be deployed to the legacy EPR API - it exists only in package storage v2.

@akshay-saraswat

akshay-saraswat commented Jun 7, 2022

The design doc mentions a setting in Integrations UI for updating the public key bundled with Kibana in cloud environments. Is this setting in scope here?

For SaaS, no, we don't need that. For ECE, yes, we need this capability: customers likely won't be bothered by an outdated public key for their stack, but as soon as they upgrade their EPR, every tile would become yellow for them.

Having said that, it's not as urgent as the signature validation, because we are not going to restrict anyone from installing unverified packages and I don't think we will rotate our key pair anytime soon. So it doesn't have to happen now, and we can add it in the next phase if its scope turns out to be large.

@kpollich
Member Author

kpollich commented Jun 8, 2022

I wrote a POC script to verify package signatures using a JS-based PGP implementation called openpgpjs, which seems like the most "industry standard" implementation: https://github.com/openpgpjs/openpgpjs

I roughly followed their docs and came up with this:

import * as openpgp from "openpgp";
import fetch from "node-fetch";

const ELASTIC_GPG_KEY_URL =
  "https://artifacts.elastic.co/GPG-KEY-elasticsearch";

const EXAMPLE_PACKAGE = "elastic_agent-1.3.1";
const BASE_PACKAGE_STORAGE_URL =
  "https://package-storage.elastic.co/artifacts/packages";

const EXAMPLE_PACKAGE_ZIP_URL = `${BASE_PACKAGE_STORAGE_URL}/${EXAMPLE_PACKAGE}.zip`;
const EXAMPLE_PACKAGE_SIG_URL = `${EXAMPLE_PACKAGE_ZIP_URL}.sig`;

// Collect a readable stream into a single Buffer so openpgp can treat the
// .zip archive as binary message data.
export function streamToBuffer(stream) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on("data", (chunk) => chunks.push(Buffer.from(chunk)));
    stream.on("end", () => resolve(Buffer.concat(chunks)));
    stream.on("error", reject);
  });
}

try {
  // Elastic's public signing key, in ASCII-armored form.
  const key = await fetch(ELASTIC_GPG_KEY_URL).then((res) => res.text());
  const publicKey = await openpgp.readKey({ armoredKey: key });

  // The package archive (binary) and its detached signature (armored text).
  const zip = await fetch(EXAMPLE_PACKAGE_ZIP_URL).then((res) =>
    streamToBuffer(res.body)
  );

  const signature = await fetch(EXAMPLE_PACKAGE_SIG_URL).then((res) =>
    res.text()
  );

  const message = await openpgp.createMessage({
    binary: zip,
  });
  const parsedSignature = await openpgp.readSignature({
    armoredSignature: signature,
  });

  const verificationResult = await openpgp.verify({
    verificationKeys: publicKey,
    signature: parsedSignature,
    message: message,
  });

  const { verified, keyID } = verificationResult.signatures[0];

  // `verified` is a promise that rejects if the signature does not match.
  await verified;

  console.log("Signed by key id", keyID.toHex());
} catch (e) {
  console.error("Unable to verify signature");
  console.error(e);
}

(Screenshot of the script output showing the signing key ID.)

I'm assuming the Key ID I get here corresponds in some way to the one listed on https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html, based on the matching digits.

I've never really used or looked at GPG stuff, so this is all pretty foreign to me. Hopefully I'm on the right track. Maybe one of the folks who worked on the package signature implementation could help sanity-check this?

@joshdover
Contributor

  • How can Fleet verify each integration on the "browse integrations" grid without downloading a .zip file for every integration in existence?

True. Maybe we could download each version once and keep it in the cache until we get a different version of the package from EPR. The first pass will consume a lot of bandwidth and compute, but we will have to verify packages somehow before a user tries to install them.

I don't think downloading every package will ever be practical, especially as we scale to 1,000+ integrations. Some packages are already 100MB+, and having every Kibana instance do this is going to consume a large amount of bandwidth, both for the customers and for our registry CDN. Instead, I think we should only verify these packages on install, or maybe do it lazily when the user opens the integration details view for a single integration.

If the package fails verification during install, we should give the user an opportunity to continue anyway or abort.

Admins should probably have the ability to not allow unverified packages at all, but maybe we can add this as a follow-up feature. How we expose this setting is unclear to me, because we'd only want cluster admins to be able to modify it, so I think kibana.yml would be the best location, though it's not the best UX. We may also be able to add an RBAC sub-privilege for "Integration Settings" so that admins can modify this from the UI without giving users who need to install packages the ability to modify this setting.

still unclear whether the public key will be bundled with Kibana's source

IMO it'd be OK to have a flow, similar to how we download bundled packages, that packages the latest public key from artifacts.elastic.co/GPG-KEY-elasticsearch into a file that Kibana can read. Ideally this is integrated as a yarn kbn bootstrap step instead of a build-only step so development has the same experience as production.
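
For illustration, a bootstrap step along those lines could be as small as the sketch below; the destination path (config/GPG-KEY-elasticsearch) is only an assumption borrowed from later comments in this thread, not a decided location:

import { writeFile } from 'fs/promises';
import fetch from 'node-fetch';

const ELASTIC_GPG_KEY_URL = 'https://artifacts.elastic.co/GPG-KEY-elasticsearch';

// Download the armored public key and write it to disk so Kibana can read it
// at runtime, in both development (bootstrap) and production (build) flows.
export async function fetchElasticPublicKey(destination = 'config/GPG-KEY-elasticsearch') {
  const response = await fetch(ELASTIC_GPG_KEY_URL);
  if (!response.ok) {
    throw new Error(`Failed to download public key: ${response.status} ${response.statusText}`);
  }
  await writeFile(destination, await response.text(), 'utf8');
  return destination;
}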

The design doc mentions a setting in Integrations UI for updating the public key bundled with Kibana in cloud environments. Is this setting in scope here?

For SaaS, no, we don't need that. For ECE, yes, we need this capability: customers likely won't be bothered by an outdated public key for their stack, but as soon as they upgrade their EPR, every tile would become yellow for them.

Having said that, it's not as urgent as the signature validation, because we are not going to restrict anyone from installing unverified packages and I don't think we will rotate our key pair anytime soon. So it doesn't have to happen now, and we can add it in the next phase if its scope turns out to be large.

Could we start with a kibana.yml setting that allows pointing to an alternative key file on disk? We can add UI for it later.

@joshdover
Contributor

@elastic/kibana-security Any feedback on our plan to use openpgpjs to verify zip signatures? #133822 (comment)

@hop-dev
Contributor

hop-dev commented Jun 8, 2022

@joshdover I agree that we should only verify on install in the first instance.

Presumably it will be very rare that there is a verification issue, and we will be wasting a lot of bandwidth even by lazily verifying the package on the package details page.

Could we return an error on the backend if the verification fails, then add a URL param that allows users to ignore validation errors? The UI could then give the user the choice to proceed:

sequenceDiagram
    autonumber
    participant User
    participant UI
    participant Backend
    User->>UI: Install button clicked
    UI->>Backend: POST /epm/packages/mypkg-1
    Note right of Backend: Download package
    Note right of Backend: Verify package
    Note right of Backend: Verification fails
    Backend->>UI: 400 Package Verification Failed
    UI->>User: Display error modal
    Note right of User: This package is unverified <br/> continue | cancel
    User->>UI: Continue button clicked
    UI->>Backend: POST /epm/packages/mypkg-1?allowUnverified
    Note right of Backend: Download package
    Note right of Backend: Verify package
    Note right of Backend: Verification fails
    Note right of Backend: Install continues
    Backend->>UI: 200 Installation successful
    UI->>User: Installation complete
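
A hedged sketch of what the backend side of that flow might look like; the route shape, the allowUnverified query parameter name from the diagram, and the helper functions are all illustrative, not the final Fleet implementation:

// Hypothetical install handler: fail the request when verification fails,
// unless the client explicitly asked to continue anyway.
async function installPackageHandler(request, response) {
  const { pkgKey } = request.params;
  const allowUnverified = 'allowUnverified' in request.query;

  const { archiveBuffer, signature } = await downloadPackage(pkgKey); // hypothetical helper
  const { isVerified } = await verifyPackageSignature(archiveBuffer, signature); // hypothetical helper

  if (!isVerified && !allowUnverified) {
    // First attempt: surface the failure so the UI can show the confirmation modal.
    return response.status(400).json({ message: 'Package Verification Failed' });
  }

  // Verification passed, or the user chose to continue anyway.
  await installPackage(pkgKey, archiveBuffer); // hypothetical helper
  return response.status(200).json({ message: 'Installation successful' });
}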


@kpollich
Member Author

kpollich commented Jun 8, 2022

@elastic/ecosystem - Does anyone have any feedback on our understanding of how we'll verify package signatures in Kibana here? I wrote a quick POC using https://openpgpjs.org/, but we've got some open questions about specifics, e.g.

What GPG / PGP implementation for JavaScript can we use here? https://openpgpjs.org/ is the JS equivalent to https://github.com/ProtonMail/gopenpgp, which we're using in the EPR codebase, but it's licensed as LGPLv3, which isn't completely acceptable under Elastic's licensing standards.

Does using https://package-storage.elastic.co/ for .sig files mean we should also switch our zip download logic to use https://package-storage.elastic.co/ instead of https://epr.elastic.co/?

Also, it would be great to get feedback on the concerns raised about verifying all integrations being an expensive, tough-to-scale operation.

@mtojek
Contributor

mtojek commented Jun 8, 2022

Hi Kyle, let me respond to your questions.

IMO it'd be ok to have a flow similar to how we download bundled packages to package the latest public key from artifacts.elastic.co/GPG-KEY-elasticsearch into a file that Kibana can read. Ideally this is integrated as a yarn kbn bootstrap step instead of a build-only step so development has the same experience as production.

As we're controlling the package-storage, we can upload and manage Elastic keys there. On the other hand, I'd rather use the same source of keys everywhere, so if there is any place recommended by the Infra team, I would go for that. We haven't received any recommendations around this or around key invalidation.

What GPG / PGP implementation for JavaScript can we use here? https://openpgpjs.org/ is the JS equivalent to https://github.com/ProtonMail/gopenpgp, which we're using in the EPR codebase, but it's licensed as LGPLv3, which isn't completely acceptable under Elastic's licensing standards.

In our case this is MIT, but if you prefer, we can change the library/implementation. It shouldn't hurt us that much at this moment.

Does using https://package-storage.elastic.co/ for .sig files mean we should also switch our zip download logic to use https://package-storage.elastic.co/ instead of https://epr.elastic.co/?

We use https://package-storage.elastic.co/ for storing artifacts; those will also be accessible via https://epr.elastic.co/, including signatures. EPR will stream content to the CDN. Maybe we can introduce another property, download_cdn, alongside download? If Kibana supports this property, it will consume resources directly from https://package-storage.elastic.co/.

To be honest, I thought that Kibana would still be reaching out to EPR, and EPR would stream content from https://package-storage.elastic.co/. I'm not sure Kibana should be aware of https://package-storage.elastic.co/ (hardcoded).

@kpollich
Member Author

kpollich commented Jun 8, 2022

As we're controlling the package-storage, we can upload and manage Elastic keys there. On the other hand, I'd rather use the same source of keys everywhere, so if there is any place recommended by the Infra team, I would go for that. We haven't received any recommendations around this or around key invalidation.

Happy to fetch a key from package storage instead. I think we're both blocked by Infra's recommendation here then. Our logic will be to fetch the key from wherever -> store it in something like a git-ignored config/GPG-KEY-elasticsearch file in Kibana. The source doesn't necessarily matter too much to us as long as it's consistent with other usages.

In our case this is MIT, but if you prefer, we can change the library/implementation. It shouldn't hurt us that much at this moment.

I opened https://github.com/elastic/open-source/issues/298 to hopefully get approval from legal to use the JS OpenPGP package. It'd be great to use this package because then both Kibana and Package Storage are using PGP implementations maintained by ProtonMail. The sync on the same package maintainer across languages feels like a solid implementation choice.

To be honest, I thought that Kibana would still be reaching out to EPR, and EPR would stream content from https://package-storage.elastic.co/. I'm not sure Kibana should be aware of https://package-storage.elastic.co/ (hardcoded).

This would be preferred on our end too, so I'm happy to move forward with that assumption. However, I'm not seeing signatures available via EPR today, e.g. https://epr.elastic.co/epr/elastic_agent/elastic_agent-1.3.1.zip.sig. Is this work still pending, and if so, where can we track it?

Thanks a ton for your response, @mtojek - really cleared a few things up here 👍

@mtojek
Contributor

mtojek commented Jun 9, 2022

Happy to fetch a key from package storage instead. I think we're both blocked by Infra's recommendation here then. Our logic will be to fetch the key from wherever -> store it in something like a git-ignored config/GPG-KEY-elasticsearch file in Kibana. The source doesn't necessarily matter too much to us as long as it's consistent with other usages.

Yes, my main concern is the lack of a procedure in case of key invalidation, but we have an issue open to focus on this problem.

This would be preferred on our end too, so I'm happy to move forward with that assumption. However, I'm not seeing signatures available via EPR today, e.g. https://epr.elastic.co/epr/elastic_agent/elastic_agent-1.3.1.zip.sig. Is this work still pending, and if so, where can we track it?

This is because signatures are valid only for .zip packages, and the current setup of the Package Storage operates on unpacked packages (like in the Git repository). Once we switch to the bucket storage indexer (connects to GCP buckets), you will see all signatures. Do you need something for Fleet development purposes now? I guess that we can arrange it.

@kpollich
Member Author

kpollich commented Jun 9, 2022

Do you need something for Fleet development purposes now? I guess that we can arrange it.

It'd be great to be able to fetch signatures and the proper package .zip archives from https://epr.elastic.co/ while we develop this feature in 8.4. Is switching over to the bucket storage indexer in EPR something that's planned in the near future?

@hop-dev
Contributor

hop-dev commented Jun 9, 2022

Do you need something for Fleet development purposes now? I guess that we can arrange it.

Agreed with Kyle above - something as close to how it will be in production would be great, even if it's just the ability to set up an EPR locally with the new structure for dev.

@hop-dev
Contributor

hop-dev commented Jun 9, 2022

@kpollich regarding:

Add a build/bootstrap step to fetch Elastic's GPG key from https://artifacts.elastic.co/GPG-KEY-elasticsearch and store it on disk

Here is my breakdown of the pros/cons of build vs bootstrap:

Build step:

  • con: if key changes then we have the incorrect key bundled with old versions
  • pro: on prem/airgapped customers will not have to download the key separately to have package verification
  • pro: saves bandwidth/time on startup

Bootstrap step:

  • con: on-prem users would have to use kibana.yml to configure package verification
  • pro: key will always be the latest
  • pro: we could later extend this step to check if the key has been invalidated?

I think there could also be a hybrid approach where it is bundled but we also check for an updated key. I am not sure which is better from a security standpoint, e.g. could requests be intercepted or bundles be tampered with?

@kpollich
Member Author

kpollich commented Jun 9, 2022

con: if key changes then we have the incorrect key bundled with old versions

I think this is okay as long as we allow for overriding the bundled key via kibana.yml. For example if we support an option like xpack.fleet.packageVerificationPublicKey, there will always be an escape hatch to update the bundled public key even on an outdated version of Kibana. This should only be necessary in case of some kind of keypair compromise in which Elastic needs to regenerate the private/public key due to a leak.

I think there could also be a hybrid approach where it is bundled but we also check for an updated key

I'm in agreement here. I think what we want to do is something like

check for xpack.fleet.packageVerificationPublicKey (a file path)
if above path exists
  read configured file path
else
  read default file path (public key bundled with kibana source - maybe something like <kibana>/config/GPG-KEY-elasticsearch)

The important thing about generating this during bootstrap is that we need to allow package verification to run in development as well. I don't think bootstrap is relevant for production builds - only development.
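
A minimal sketch of that lookup, assuming the xpack.fleet.packageVerificationPublicKey setting (a file path) and a bundled default at <kibana>/config/GPG-KEY-elasticsearch as discussed above; both names come from this thread, not from shipped code:

import { readFile } from 'fs/promises';
import path from 'path';

// Prefer the key file configured in kibana.yml; fall back to the key bundled
// with the Kibana source at config/GPG-KEY-elasticsearch.
export async function readPackageVerificationKey(fleetConfig, kibanaRoot) {
  const keyPath =
    fleetConfig.packageVerificationPublicKey ??
    path.join(kibanaRoot, 'config', 'GPG-KEY-elasticsearch');
  return readFile(keyPath, 'utf8');
}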

@joshdover
Contributor

This is because signatures are valid only for .zip packages, and the current setup of the Package Storage operates on unpacked packages (like in the Git repository). Once we switch to the bucket storage indexer (connects to GCP buckets), you will see all signatures. Do you need something for Fleet development purposes now? I guess that we can arrange it.

This is a pretty important detail - should we even keep spending time here right now if the registry doesn't support what we need? My understanding is that storage v2 is not necessarily shipping in 8.4, which is what I think @mtojek is talking about.

@hop-dev
Contributor

hop-dev commented Jun 13, 2022

@mtojek is there an issue for this "storage v2" work we can link to?:

Once we switch to the bucket storage indexer (connects to GCP buckets)

And is Josh right that it won't be in 8.4?

@jlind23
Contributor

jlind23 commented Jun 13, 2022

@hop-dev This is the meta issue we use to track the work:
https://github.com/elastic/ingest-dev/issues/1040

It should be done in 8.4 but may slip a little. I'll let @mtojek answer as soon as he is back.

@hop-dev
Contributor

hop-dev commented Jun 14, 2022

@dborodyansky will you have some availability in the next couple of weeks to create a design for the following scenario (diagram above)?

  • user installs package
  • an error is returned from the server informing us that package verification has failed
  • user is able to elect to ignore this and install anyway (or abort/cancel)

let me know if there is any more info you would need 👍

@joshdover
Contributor

Yesterday, @akshay-saraswat and @kpollich agreed to sync up and figure out how much progress the UI team can make before this is available in the production registry. Until then, I think we can defer on any additional engineering effort here.

@mtojek
Contributor

mtojek commented Jun 14, 2022

I'd rather keep work on both sides (Package Storage, Fleet) loosely coupled, as the contract is pretty clear. There are many steps before we deploy this in production:

  • finish implementation (we're here),
  • build a legacy Docker image of Package Storage for customers and Elastic Cloud,
  • work with Infra to deploy required changes on the Kubernetes cluster and adjust DNS configuration,
  • adjust all Jenkins pipelines (Integrations, APM, Endpoint) responsible for building packages.

Many unforeseen blockers might occur before we deploy the improved EPR to production.

If possible, I would keep signature verification optional in Fleet - enabled if signature_path is present in the /package response. Once the new EPR is stable (I guess that could be in the following release), we can make the verification mandatory.

PS. We have one person working on this stuff at the moment, alternating with the "input package" work, which has a higher priority.

@kpollich
Member Author

kpollich commented Jun 14, 2022

I'd rather keep work on both sides (Package Storage, Fleet) loosely coupled, as the contract is pretty clear. There are many steps before we deploy this in production:

  • finish implementation (we're here),
  • build a legacy Docker image of Package Storage for customers and Elastic Cloud,
  • work with Infra to deploy required changes on the Kubernetes cluster and adjust DNS configuration,
  • adjust all Jenkins pipelines (Integrations, APM, Endpoint) responsible for building packages.

Many unforeseen blockers might occur before we deploy the improved EPR to production.

If possible, I would keep signature verification optional in Fleet - enabled if signature_path is present in the /package response. Once the new EPR is stable (I guess that could be in the following release), we can make the verification mandatory.

PS. We have one person working on this stuff at the moment, alternating with the "input package" work, which has a higher priority.

Thanks for clarifying the state of the project, @mtojek. I am in agreement with @joshdover here then, and we should defer most engineering work on the Fleet UI side until we can retrieve package signatures from EPR.

If possible, I would keep signature verification optional in Fleet - enabled if signature_path is present in the /package response. Once the new EPR is stable (I guess that could be in the following release), we can make the verification mandatory.

I think this approach is valid, but we can't realistically develop the signature verification process if signature_path isn't present in the /package endpoint. @hop-dev and I synced up yesterday and determined we'd put verification behind a feature flag once we start development here, so it could be opted out of via kibana.yml.


@akshay-saraswat to address what engineering work can be done on the Fleet UI side of things here in the meantime:

  • We could implement a service to handle package verification via openpgp even though it will be unused in the actual package fetch/install pipeline
  • We could implement UI components like callouts, badges, etc. in an isolated Storybook environment

Other than these two things I'd really rather defer on "wiring up" the signature verification process to the package registry until https://epr.elastic.co is serving package signatures at least in a staging/snapshot environment we can use during development. I think it'd be fine to use Marcin's suggestion of "only apply verification if the signature_path field exists" so we're only doing verification in that staging/snapshot environment as a precaution.

cc @jlind23 as well.
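
To make Marcin's suggestion concrete, the conditional could look roughly like the sketch below; the signature_path field name comes from the EPR /package response discussed earlier, while the config flag and helper functions here are purely illustrative:

// Only attempt verification when the feature is enabled and EPR actually
// advertises a signature for the package; otherwise report "unknown".
export async function maybeVerifyPackage({ packageInfo, archiveBuffer, config, kibanaRoot }) {
  if (!config.packageVerificationEnabled || !packageInfo.signature_path) {
    return { verificationStatus: 'unknown' };
  }

  const armoredSignature = await fetchSignature(packageInfo.signature_path); // hypothetical helper
  const armoredPublicKey = await readPackageVerificationKey(config, kibanaRoot); // see earlier sketch

  const { isVerified } = await verifyPackageArchiveSignature({
    archiveBuffer,
    armoredSignature,
    armoredPublicKey,
  });

  return { verificationStatus: isVerified ? 'verified' : 'unverified' };
}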

@akshay-saraswat

We discussed this today in the weekly meeting. We'll capture the final plan next week when Marcin is back from PTO.

My 2 cents

The Fleet UI work can be divided into the following three items:

  1. Retrieve packages and signatures from EPR.
  2. Verify package sanity.
  3. Alter UI to warn users of potential risks.

From the perspective of the threat assessment report recommendations and the enterprise-readiness requirements, the priority for this work is pretty high. We must aim to deliver it as soon as possible. I agree with what @kpollich mentioned above. IMO, we should start working on the last two items while the dependencies for #1 are handled by the ecosystem team during 8.4.

@kpollich
Member Author

@akshay-saraswat - I'm glad we've got some time on the calendar now to align here, but I wanted to address your comment before we sync up next week.

I captured how I'm understanding the state of the work and what's blocking what in this quick sketch:

(Sketch attached as an image in the original issue.)

  1. Retrieve packages and signatures from EPR.

We can't do this until EPR is wired up to the new Package Storage v2 GCP buckets, so I'm considering Fleet UI blocked here.

  2. Verify package sanity.

We could technically implement a service/module that takes in a package .zip buffer and a signature and returns a "valid" status and implement unit tests for it, but it would sit unused until we can actually fetch these package .zips and signatures from EPR. So, while we can do some level of isolated implementation here, we won't be able to actually develop the "sanity verification" work against real packages.

  3. Alter UI to warn users of potential risks.

Similar to number 2 above, we could deliver these UIs in an "isolated" development pattern using mock data or Storybook, but we won't be able to develop against real package data until number 1 is addressed.

Completely agree on the priority here. Looking forward to chatting further next week and clearing up dependencies here.

@jlind23
Contributor

jlind23 commented Jun 20, 2022

Hi folks,
This is the summary of today's meeting with @kpollich, @mtojek, and @akshay-saraswat.

  1. To unblock local development, @mtojek will provide a Docker image of the package registry that includes all packages and their signatures.
  2. @kpollich and @hop-dev will make sure that if there is no signature for a package, Fleet silently ignores it for now and doesn't flag the package.

As soon as the Docker image is ready, @mtojek will provide an update here.

@mtojek
Contributor

mtojek commented Jun 20, 2022

@jlind23 @kpollich

Here is a freshly baked Docker image:

docker run -p 8080:8080 docker.elastic.co/observability-ci/package-registry/distribution:PR-4631

Remember to docker-auth before :)

@mtojek
Contributor

mtojek commented Jul 5, 2022

FYI, we started publishing fresher Package Storage distributions:

docker.elastic.co/package-registry/distribution:lite-v2-experimental - subset of packages, rather small distribution
docker.elastic.co/package-registry/distribution:production-v2-experimental - full distribution with all packages

@hop-dev
Contributor

hop-dev commented Aug 2, 2022

@amolnater-qasource I've written a manual testing guide here

Here are the test files needed
Signature Testing.ZIP

@amolnater-qasource

Hi @hop-dev
Thank you for sharing testing guidelines and files.

We will be revalidating this at our end and will be sharing results here.
Thanks!

@amolnater-qasource

Hi @hop-dev
As per the shared information, we have revalidated this feature on an 8.4 BC2 Kibana self-managed environment and found it working fine.

We had the following observations:

  • On installing an integration, we observed a confirmation pop-up: "Install unverified integration".
  • On confirming the installation, we observed an "Integrations not verified" warning callout on the Installed integrations and Integration overview pages.

Screenshots: (attached in the original comment)

Other observations:

  • Could you please confirm whether the Unverified label should be available under the Browse Integrations page? (Screenshot attached in the original comment.)

Build details:
BUILD: 55166
COMMIT: 9e9e0d6

Please let us know if we are missing anything here.
Thanks

@hop-dev
Contributor

hop-dev commented Aug 10, 2022

Could you please confirm whether the Unverified label should be available under the Browse Integrations page?

Unverified labels shouldn't show in the browse integrations view ✅

@amolnater-qasource

Hi Team
We have created 02 test cases for this feature under our Fleet Test suite (links in the original comment).

Please let us know if we are missing any scenario here.
Thanks

@amolnater-qasource

Hi Team

We have executed 02 test cases for this feature under our Fleet Test run (link in the original comment).

We have observed no issues while revalidating this feature on an 8.4 BC5 self-managed environment.

Build details:

Version: 8.4 BC5 Self-managed 
BUILD: 55374
COMMIT: f12954223a8ad66bbbf77becc4f0557ffd1c92c3
ARTIFACT LINK: https://staging.elastic.co/8.4.0-f8287a32/summary-8.4.0.html 

As testing is complete for this feature, we are marking it as QA:Validated.

Thanks
