ref(azure): start using new azure storage BlobServiceClient
Signed-off-by: Cryptophobia <[email protected]>
Cryptophobia committed Oct 10, 2020
1 parent 819bebf commit bcaab90
Showing 5 changed files with 17 additions and 11 deletions.
1 change: 0 additions & 1 deletion README.md
@@ -3,7 +3,6 @@
 
 Hephy - A Fork of Deis Workflow
 
-[![Build Status](https://ci.deis.io/job/postgres/badge/icon)](https://ci.deis.io/job/postgres)
 [![Docker Repository on Quay](https://quay.io/repository/deis/postgres/status "Docker Repository on Quay")](https://quay.io/repository/deis/postgres)
 
 Deis (pronounced DAY-iss) Workflow is an open source Platform as a Service (PaaS) that adds a developer-friendly layer to any [Kubernetes](http://kubernetes.io) cluster, making it easy to deploy and manage applications on your own servers.
5 changes: 2 additions & 3 deletions charts/database/Chart.yaml
@@ -1,10 +1,9 @@
 name: database
 home: https://github.com/teamhephy/postgres
 version: <Will be populated by the ci before publishing the chart>
-description: A PostgreSQL database used by Deis Workflow.
+description: A PostgreSQL database used by Hephy Workflow.
 keywords:
 - database
 - postgres
 maintainers:
-- name: Deis Team
-  email: [email protected]
+- email: [email protected]
2 changes: 1 addition & 1 deletion contrib/ci/test-minio.sh
@@ -46,7 +46,7 @@ MINIO_JOB=$(docker run -d \
   -v "${CURRENT_DIR}"/tmp/aws-admin:/var/run/secrets/deis/minio/admin \
   -v "${CURRENT_DIR}"/tmp/aws-user:/var/run/secrets/deis/minio/user \
   -v "${CURRENT_DIR}"/tmp/k8s:/var/run/secrets/kubernetes.io/serviceaccount \
-  quay.io/deisci/minio:canary boot server /home/minio/)
+  hephy/minio:latest boot server /home/minio/)
 
 # boot postgres, linking the minio container and setting DEIS_MINIO_SERVICE_HOST and DEIS_MINIO_SERVICE_PORT
 PG_CMD="docker run -d --link ${MINIO_JOB}:minio -e PGCTLTIMEOUT=1200 \
18 changes: 12 additions & 6 deletions rootfs/bin/create_bucket
@@ -11,14 +11,16 @@ from boto.s3.connection import S3Connection, OrdinaryCallingFormat
 from oauth2client.service_account import ServiceAccountCredentials
 from gcloud.storage.client import Client
 from gcloud import exceptions
-from azure.storage.blob import BlobService
+from azure.storage.blob import BlobServiceClient
+
 
 def bucket_exists(conn, name):
     bucket = conn.lookup(name)
     if not bucket:
         return False
     return True
 
+
 bucket_name = os.getenv('BUCKET_NAME')
 region = os.getenv('S3_REGION')
 
@@ -37,15 +39,17 @@ if os.getenv('DATABASE_STORAGE') == "s3":
     # TODO(bacongobbler): deprecate this once we drop support for v2.8.0 and lower
     except S3CreateError as err:
         if region != 'us-east-1':
-            print('Failed to create bucket in {}. We are now assuming that the bucket was created in us-east-1.'.format(region))
+            print(
+                'Failed to create bucket in {}. We are now assuming that the bucket was created in us-east-1.'.format(region))
             with open(os.path.join(os.environ['WALE_ENVDIR'], "WALE_S3_ENDPOINT"), "w+") as file:
                 file.write('https+path://s3.amazonaws.com:443')
         else:
             raise
 
 elif os.getenv('DATABASE_STORAGE') == "gcs":
     scopes = ['https://www.googleapis.com/auth/devstorage.full_control']
-    credentials = ServiceAccountCredentials.from_json_keyfile_name(os.getenv('GOOGLE_APPLICATION_CREDENTIALS'), scopes=scopes)
+    credentials = ServiceAccountCredentials.from_json_keyfile_name(
+        os.getenv('GOOGLE_APPLICATION_CREDENTIALS'), scopes=scopes)
     with open(os.getenv('GOOGLE_APPLICATION_CREDENTIALS')) as data_file:
         data = json.load(data_file)
     conn = Client(credentials=credentials, project=data['project_id'])
@@ -59,10 +63,12 @@ elif os.getenv('DATABASE_STORAGE') == "gcs":
     if not exists:
         conn.create_bucket(bucket_name)
 
+# WIP: Currently broken and needs to be updated using the new BlobServiceClient
 elif os.getenv('DATABASE_STORAGE') == "azure":
-    conn = BlobService(account_name=os.getenv('WABS_ACCOUNT_NAME'), account_key=os.getenv('WABS_ACCESS_KEY'))
-    # It doesn't throw an exception if the container exists by default (https://github.com/Azure/azure-storage-python/blob/master/azure/storage/blob/baseblobservice.py#L504).
-    conn.create_container(bucket_name)
+    connection_string = os.getenv("AZURE_STORAGE_CONNECTION_STRING")
+    azure_blob_service_client = BlobServiceClient.from_connection_string(connection_string)
+
+    azure_blob_service_client.create_container(bucket_name)
 
 elif os.getenv('DATABASE_STORAGE') == "swift":
     conn = swiftclient.Connection(
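A note on the "WIP: Currently broken" marker in the azure branch: with the v12 `azure-storage-blob` SDK, `BlobServiceClient.create_container` raises `ResourceExistsError` when the container is already present, whereas the old `BlobService.create_container` succeeded silently by default, so the new code still needs an existence guard. Separately, the switch to `from_connection_string` means the branch now consumes a semicolon-delimited `Key=Value` connection string instead of separate account-name/key variables. The sketch below is an illustration of that string's shape only, not the SDK's actual parser; the parser function and sample values are hypothetical:

```python
def parse_connection_string(conn_str):
    # Illustrative parser (not the SDK's): split on ';' into Key=Value pairs.
    # Splitting on only the FIRST '=' per segment matters, because AccountKey
    # is base64-encoded and may legitimately end in '=' padding.
    settings = {}
    for segment in conn_str.strip(";").split(";"):
        key, _, value = segment.partition("=")
        settings[key] = value
    return settings


# Hypothetical sample connection string, in the documented Azure format.
example = ("DefaultEndpointsProtocol=https;"
           "AccountName=hephy;"
           "AccountKey=c2VjcmV0a2V5==;"
           "EndpointSuffix=core.windows.net")

settings = parse_connection_string(example)
# settings["AccountName"] == "hephy"
# settings["AccountKey"] == "c2VjcmV0a2V5=="  (trailing '=' padding preserved)
```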
2 changes: 2 additions & 0 deletions rootfs/docker-entrypoint-initdb.d/001_setup_envdir.sh
@@ -45,9 +45,11 @@ elif [ "$DATABASE_STORAGE" == "gcs" ]; then
   echo $GOOGLE_APPLICATION_CREDENTIALS > GOOGLE_APPLICATION_CREDENTIALS
   echo $BUCKET_NAME > BUCKET_NAME
 elif [ "$DATABASE_STORAGE" == "azure" ]; then
+  AZURE_STORAGE_CONNECTION_STRING=$(cat /var/run/secrets/deis/objectstore/creds/azure-storage-conn-string)
   WABS_ACCOUNT_NAME=$(cat /var/run/secrets/deis/objectstore/creds/accountname)
   WABS_ACCESS_KEY=$(cat /var/run/secrets/deis/objectstore/creds/accountkey)
   BUCKET_NAME=$(cat /var/run/secrets/deis/objectstore/creds/database-container)
+  echo $AZURE_STORAGE_CONNECTION_STRING > AZURE_STORAGE_CONNECTION_STRING
   echo $WABS_ACCOUNT_NAME > WABS_ACCOUNT_NAME
   echo $WABS_ACCESS_KEY > WABS_ACCESS_KEY
   echo "wabs://$BUCKET_NAME" > WALE_WABS_PREFIX
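The entrypoint change above follows the envdir convention: each setting is written to a file named after the variable inside the WAL-E environment directory, and the backup tooling is later run through `envdir` so those files become environment variables again. A minimal sketch of that round trip, assuming only the convention itself (the directory and variable values here are made up for illustration):

```python
import os
import tempfile


def write_envdir(envdir, settings):
    # One file per variable, file name == variable name, contents == value.
    for name, value in settings.items():
        with open(os.path.join(envdir, name), "w") as f:
            f.write(value)


def read_envdir(envdir):
    # What `envdir` effectively does before exec'ing the child process.
    return {name: open(os.path.join(envdir, name)).read()
            for name in sorted(os.listdir(envdir))}


envdir = tempfile.mkdtemp()
write_envdir(envdir, {
    "AZURE_STORAGE_CONNECTION_STRING": "DefaultEndpointsProtocol=https;AccountName=hephy",
    "WALE_WABS_PREFIX": "wabs://dbwal",
})
# read_envdir(envdir)["WALE_WABS_PREFIX"] == "wabs://dbwal"
```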
