Releases: googleapis/google-cloud-java
0.1.7
Features
Datastore
`gcloud-java-datastore` now uses Cloud Datastore v1beta3. You can read more about the updates in Datastore v1beta3 here. Note that to use this new API, you may have to re-enable the Google Cloud Datastore API in the Developers Console. The following API changes are coupled with this update.

- Entity-related changes:
  - Entities are indexed by default, and `indexed` has been changed to `excludeFromIndexes`.
  - Properties of type `EntityValue` and type `ListValue` can now be indexed. Moreover, indexing and querying properties inside of entity values is now supported. Values inside entity values are indexed by default.
  - `LatLng` and `LatLngValue`, representing the new property type for latitude & longitude, are added.
  - The getter for a value's `meaning` has been made package scope instead of public, as it is a deprecated field.
- Read/write-related changes:
  - Force writes have been removed. Since force writes were the only existing option in batch and transaction options, the `BatchOption` and `TransactionOption` classes are now removed.
  - `ReadOption` is added to allow users to specify eventual consistency on Datastore reads. This can be a useful optimization when strongly consistent results for `get`/`fetch` or ancestor queries aren't necessary (see the sketch after this list).
- Query-related changes:
  - `QueryResults.cursorAfter()` is updated to point to the position after the last consumed result. In v1beta2, `cursorAfter` was only updated after all results were consumed.
  - `groupBy` is replaced by `distinctOn`.
  - The `Projection` class in `StructuredQuery` is replaced with a string representing the property name. Aggregation functions are removed.
  - There are changes in GQL syntax:
    - In the synthetic literal KEY, DATASET is now PROJECT.
    - The BLOBKEY synthetic literal is removed.
    - The FIRST aggregator is removed.
    - The GROUP BY clause is replaced with DISTINCT ON.
    - Fully-qualified property names are now supported.
  - Query filters on timestamps prior to the epoch are now supported.
- Other miscellaneous changes:
  - The "userinfo.email" authentication scope is no longer required. This means you don't need to enable that permission when creating new instances on Google Compute Engine to use `gcloud-java-datastore`.
  - The default value for namespace is now an empty string rather than null.
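A minimal sketch of the new eventually consistent read, assuming the standard key-factory helpers; the "Task" kind and key name are placeholders:

```java
import com.google.gcloud.datastore.Datastore;
import com.google.gcloud.datastore.DatastoreOptions;
import com.google.gcloud.datastore.Entity;
import com.google.gcloud.datastore.Key;
import com.google.gcloud.datastore.ReadOption;

Datastore datastore = DatastoreOptions.defaultInstance().service();
// "Task" and "sampleTask" are placeholder kind/key names
Key key = datastore.newKeyFactory().kind("Task").newKey("sampleTask");
// Opt into eventual consistency for this lookup; omitting the option keeps
// the default strongly consistent read
Entity entity = datastore.get(key, ReadOption.eventualConsistency());
```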
Fixes
General
- In `gcloud-java-bigquery`, `gcloud-java-dns`, and `gcloud-java-storage`, the field `id()` has been renamed to `generatedId()` for classes that are assigned IDs by the service.
Datastore
- Issue #548 (internal errors when trying to load large numbers of entities without setting a limit) is fixed. The workaround mentioned in that issue is no longer necessary.
0.1.6
Features
DNS
- `gcloud-java-dns`, a new client library to interact with Google Cloud DNS, is released and is in alpha. See the docs for more information and samples.
Resource Manager
- Project-level IAM (Identity and Access Management) functionality is now available. See docs and example code here.
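A minimal sketch of working with a project's IAM policy; the `getPolicy`/`replacePolicy` method names and the resourcemanager `Policy` class are assumptions based on this era of the library, and "some-project-id" is a placeholder:

```java
import com.google.gcloud.resourcemanager.Policy;
import com.google.gcloud.resourcemanager.ResourceManager;
import com.google.gcloud.resourcemanager.ResourceManagerOptions;

ResourceManager resourceManager = ResourceManagerOptions.defaultInstance().service();
// Read the project's current IAM policy, then (after modifying it) write it back
Policy policy = resourceManager.getPolicy("some-project-id");
resourceManager.replacePolicy("some-project-id", policy);
```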
Fixes
BigQuery
- `startPageToken` is now called `pageToken` (#774) and `maxResults` is now called `pageSize` (#745) to be consistent with page-based listing methods in other `gcloud-java` modules.
Storage
- Default content type, once a required field for bucket creation and copying/composing blobs, is now removed (#288, #762).
- A new boolean `overrideInfo` is added to copy requests to denote whether metadata should be overridden (#762).
- `startPageToken` is now called `pageToken` (#774) and `maxResults` is now called `pageSize` (#745) to be consistent with page-based listing methods in other `gcloud-java` modules.
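A minimal sketch of the renamed paging options applied to blob listing, assuming the rename covers `BlobListOption`; the bucket name is a placeholder:

```java
import com.google.gcloud.Page;
import com.google.gcloud.storage.Blob;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

Storage storage = StorageOptions.defaultInstance().service();
// pageSize replaces the old maxResults option
Page<Blob> page = storage.list("my-bucket", Storage.BlobListOption.pageSize(100));
for (Blob blob : page.values()) {
  System.out.println(blob.name());
}
// Fetch the next page using the page token carried by the current page
Page<Blob> nextPage = page.nextPage();
```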
0.1.5
Features
Storage
- Add `versions(boolean versions)` option to `BlobListOption` to enable/disable versioned `Blob` listing. If enabled, all versions of an object are returned as distinct results (#688).
- `BlobTargetOption` and `BlobWriteOption` classes are added to `Bucket` to allow setting options for `create` methods (#705).
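A minimal sketch of versioned listing; the bucket name is a placeholder:

```java
import com.google.gcloud.storage.Blob;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

Storage storage = StorageOptions.defaultInstance().service();
// versions(true) returns every generation of each object as a distinct result
for (Blob blob : storage.list("my-bucket", Storage.BlobListOption.versions(true)).values()) {
  System.out.println(blob.name());
}
```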
Fixes
BigQuery
- Fix pagination when listing tables and datasets with selected fields (#668).
Core
- Fix authentication issue when using revoked Cloud SDK credentials with local test helpers. The `NoAuthCredentials` class is added, with the `AuthCredentials.noAuth()` method, to be used when testing services against local emulators (#719).
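A minimal sketch of pointing the Storage client at a local emulator with the new no-auth credentials; the host and project ID are placeholders:

```java
import com.google.gcloud.AuthCredentials;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

// Target a local emulator; no real credentials are attached to requests
Storage storage = StorageOptions.builder()
    .projectId("test-project")
    .host("http://localhost:8080")
    .authCredentials(AuthCredentials.noAuth())
    .build()
    .service();
```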
Storage
- Fix pagination when listing blobs and buckets with selected fields (#668).
- Fix wrong usage of `Storage.BlobTargetOption` and `Storage.BlobWriteOption` in `Bucket`'s `create` methods. New classes (`Bucket.BlobTargetOption` and `Bucket.BlobWriteOption`) are added to provide options to `Bucket.create` (#705).
- Fix "Failed to parse Content-Range header" error when `BlobWriteChannel` writes a blob whose size is a multiple of the chunk size used (#725).
- Fix NPE when using `BlobReadChannel` to read a blob whose size is a multiple of the chunk/buffer size (#725).
0.1.4
Features
BigQuery
- The `JobInfo` and `TableInfo` class hierarchies are flattened (#584, #600). Instead, `JobInfo` contains a field `JobConfiguration`, which is subclassed to provide configurations for different types of jobs. Likewise, `TableInfo` contains a new field `TableDefinition`, which is subclassed to provide table settings depending on the table type.
- Functional classes (`Job`, `Table`, `Dataset`) now extend their associated metadata classes (`JobInfo`, `TableInfo`, `DatasetInfo`) (#530, #609). The `BigQuery` service methods now return functional objects instead of the metadata objects.
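As a rough illustration of the flattened hierarchy, here is a minimal sketch of creating a query job; the `QueryJobConfiguration` subclass name and its `builder` factory are assumptions based on this era of the library, and the query text is a placeholder:

```java
import com.google.gcloud.bigquery.BigQuery;
import com.google.gcloud.bigquery.BigQueryOptions;
import com.google.gcloud.bigquery.Job;
import com.google.gcloud.bigquery.JobInfo;
import com.google.gcloud.bigquery.QueryJobConfiguration;

BigQuery bigquery = BigQueryOptions.defaultInstance().service();
// The job type now lives in a JobConfiguration subclass rather than a
// JobInfo subclass
QueryJobConfiguration configuration =
    QueryJobConfiguration.builder("SELECT name FROM dataset.table").build();
// create() returns the functional Job, which extends JobInfo
Job job = bigquery.create(JobInfo.of(configuration));
```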
Datastore
- Setting list properties containing values of a single type is more concise (#640, #648). For example, to set a list of string values as a property on an entity, you'd previously have to type:

  ```java
  someEntity.set("someStringListProperty", StringValue.of("a"), StringValue.of("b"), StringValue.of("c"));
  ```

  Now you can set the property using:

  ```java
  someEntity.set("someStringListProperty", "a", "b", "c");
  ```

- There is now a more concise way to get the parent of an entity key (#640, #648):

  ```java
  Key parentOfCompleteKey = someKey.parent();
  ```

- The consistency setting (defaults to 0.9 both before and after this change) can be set in `LocalGcdHelper` (#639, #648).
- You no longer have to cast or use the unknown type when getting a `ListValue` from an entity (#648). Now you can use something like the following to get a list of double values:

  ```java
  List<DoubleValue> doublesList = someEntity.get("myDoublesList");
  ```
ResourceManager
- Paging for the `ResourceManager` `list` method is now supported (#651) (see the sketch after this list).
- `Project` is now a subclass of `ProjectInfo` (#530). The `ResourceManager` service methods now return `Project` instead of `ProjectInfo`.
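A minimal sketch of paging through projects, modeled on the listing example in the 0.1.1 notes below:

```java
import com.google.gcloud.resourcemanager.Project;
import com.google.gcloud.resourcemanager.ResourceManager;
import com.google.gcloud.resourcemanager.ResourceManagerOptions;

import java.util.Iterator;

ResourceManager resourceManager = ResourceManagerOptions.defaultInstance().service();
// iterateAll() transparently fetches subsequent pages as you iterate
Iterator<Project> projects = resourceManager.list().iterateAll();
while (projects.hasNext()) {
  System.out.println(projects.next().projectId());
}
```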
Storage
- Functional classes (`Bucket`, `Blob`) now extend their associated metadata classes (`BucketInfo`, `BlobInfo`) (#530, #603, #614). The `Storage` service methods now return functional objects instead of metadata objects.
Fixes
BigQuery
0.1.3
Features
BigQuery
- Resumable uploads via write channel are now supported (#540). An example of uploading a CSV file in chunks of CHUNK_SIZE bytes:

  ```java
  try (FileChannel fileChannel = FileChannel.open(Paths.get("/path/to/your/file"))) {
    TableId tableId = TableId.of("YourDataset", "YourTable");
    LoadConfiguration configuration = LoadConfiguration.of(tableId, FormatOptions.of("CSV"));
    WriteChannel writeChannel = bigquery.writer(configuration);
    long position = 0;
    long written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
    while (written > 0) {
      position += written;
      written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
    }
    writeChannel.close();
  }
  ```
- `defaultDataset(String dataset)` (in `QueryJobInfo` and `QueryRequest`) can be used to specify a default dataset (#567).
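A minimal sketch of the new option on a query request; the `QueryRequest.builder` factory is an assumption based on this era of the library, and the query and dataset names are placeholders:

```java
import com.google.gcloud.bigquery.QueryRequest;

// Unqualified table names in the query are resolved against the default dataset
QueryRequest request = QueryRequest.builder("SELECT name FROM my_table")
    .defaultDataset("my_dataset")
    .build();
```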
Storage
- The name of the method to submit a batch request has changed from `apply` to `submit` (#562).
Fixes
BigQuery
- `hashCode` and `equals` are now overridden in subclasses of `BaseTableInfo` (#565, #573).
- `jobComplete` is renamed to `jobCompleted` in `QueryResult` (#567).
Datastore
-
The precondition check that cursors are UTF-8 strings has been removed (#578).
- `EntityQuery`, `KeyQuery`, and `ProjectionEntityQuery` classes have been introduced (#585). This enables users to modify projections and group-by clauses for projection entity queries after using `toBuilder()`. For example, this now works:

  ```java
  ProjectionEntityQuery query = Query.projectionEntityQueryBuilder()
      .kind("Person")
      .projection(Projection.property("name"))
      .build();
  ProjectionEntityQuery newQuery =
      query.toBuilder().projection(Projection.property("favorite_food")).build();
  ```
0.1.2
Features
Core
- By default, requests are now retried (#547). For example:

  ```java
  // Use the default retry strategy
  Storage storageWithRetries = StorageOptions.defaultInstance().service();

  // Don't use retries
  Storage storageWithoutRetries = StorageOptions.builder()
      .retryParams(RetryParams.noRetries())
      .build()
      .service();
  ```
BigQuery
- Functional classes for datasets, jobs, and tables are added (#516).
- Query plans are now supported (#523).
- Template suffix is now supported (#514).
Fixes
Datastore
- `QueryResults.cursorAfter()` is now set when all results from a query have been exhausted (#549). When running large queries, users may see Datastore-internal errors with code 500 due to a Datastore issue. This issue will be fixed in the next version of Datastore. Until then, users can set a limit on their query and use the cursor to get more results in subsequent queries. Here is an example:

  ```java
  int limit = 100;
  StructuredQuery<Entity> query = Query.entityQueryBuilder()
      .kind("user")
      .limit(limit)
      .build();
  while (true) {
    QueryResults<Entity> results = datastore.run(query);
    int resultCount = 0;
    while (results.hasNext()) {
      Entity result = results.next(); // consume all results
      // do something with the result
      resultCount++;
    }
    if (resultCount < limit) {
      break;
    }
    query = query.toBuilder().startCursor(results.cursorAfter()).build();
  }
  ```
- `load` is renamed to `get` in functional classes (#535).
0.1.1
Features
BigQuery
- Introduce support for Google Cloud BigQuery (#503): create datasets and tables, manage jobs, insert and list table data. See BigQueryExample for a complete example or API Documentation for `gcloud-java-bigquery` javadoc.

  ```java
  import com.google.gcloud.bigquery.BaseTableInfo;
  import com.google.gcloud.bigquery.BigQuery;
  import com.google.gcloud.bigquery.BigQueryOptions;
  import com.google.gcloud.bigquery.Field;
  import com.google.gcloud.bigquery.JobStatus;
  import com.google.gcloud.bigquery.LoadJobInfo;
  import com.google.gcloud.bigquery.Schema;
  import com.google.gcloud.bigquery.TableId;
  import com.google.gcloud.bigquery.TableInfo;

  BigQuery bigquery = BigQueryOptions.defaultInstance().service();
  TableId tableId = TableId.of("dataset", "table");
  BaseTableInfo info = bigquery.getTable(tableId);
  if (info == null) {
    System.out.println("Creating table " + tableId);
    Field integerField = Field.of("fieldName", Field.Type.integer());
    bigquery.create(TableInfo.of(tableId, Schema.of(integerField)));
  } else {
    System.out.println("Loading data into table " + tableId);
    LoadJobInfo loadJob = LoadJobInfo.of(tableId, "gs://bucket/path");
    loadJob = bigquery.create(loadJob);
    while (loadJob.status().state() != JobStatus.State.DONE) {
      Thread.sleep(1000L);
      loadJob = bigquery.getJob(loadJob.jobId());
    }
    if (loadJob.status().error() != null) {
      System.out.println("Job completed with errors");
    } else {
      System.out.println("Job succeeded");
    }
  }
  ```
Resource Manager
- Introduce support for Google Cloud Resource Manager (#495): get a list of all projects associated with an account, create/update/delete projects, and undelete projects that you didn't mean to delete. See ResourceManagerExample for a complete example or API Documentation for `gcloud-java-resourcemanager` javadoc.

  ```java
  import com.google.gcloud.resourcemanager.ProjectInfo;
  import com.google.gcloud.resourcemanager.ResourceManager;
  import com.google.gcloud.resourcemanager.ResourceManagerOptions;

  import java.util.Iterator;

  ResourceManager resourceManager = ResourceManagerOptions.defaultInstance().service();

  // Replace "some-project-id" with an existing project's ID
  ProjectInfo myProject = resourceManager.get("some-project-id");
  ProjectInfo newProjectInfo = resourceManager.replace(myProject.toBuilder()
      .addLabel("launch-status", "in-development").build());
  System.out.println("Updated the labels of project " + newProjectInfo.projectId()
      + " to be " + newProjectInfo.labels());

  // List all the projects you have permission to view.
  Iterator<ProjectInfo> projectIterator = resourceManager.list().iterateAll();
  System.out.println("Projects I can view:");
  while (projectIterator.hasNext()) {
    System.out.println(projectIterator.next().projectId());
  }
  ```
Storage
- Remove the `RemoteGcsHelper.create(String, String)` method (#494).
Fixes
Datastore
- HTTP transport is now specified in `DefaultDatastoreRpc` (#448).
0.1.0
Features
Core
- The project ID set in the Google Cloud SDK now supersedes the project ID set by Compute Engine (#337). The project ID is determined by iterating through the following list in order, stopping when a valid project ID is found (see the sketch below).

  Before:
  1. Project ID supplied when building the service options
  2. Project ID specified by the environment variable `GCLOUD_PROJECT`
  3. App Engine project ID
  4. Compute Engine project ID
  5. Google Cloud SDK project ID

  After:
  1. Project ID supplied when building the service options
  2. Project ID specified by the environment variable `GCLOUD_PROJECT`
  3. App Engine project ID
  4. Google Cloud SDK project ID
  5. Compute Engine project ID
- The explicit `AuthCredentials.noCredentials` option was removed.
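A minimal sketch of the highest-precedence option, supplying the project ID explicitly when building service options (using the builder pattern shown in the 0.1.2 notes above); the project ID is a placeholder:

```java
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.StorageOptions;

// An explicitly supplied project ID wins over GCLOUD_PROJECT, App Engine,
// the Google Cloud SDK, and Compute Engine detection
Storage storage = StorageOptions.builder()
    .projectId("my-explicit-project")
    .build()
    .service();
```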
Storage
- The testing helper class `RemoteGcsHelper` now uses the `GOOGLE_APPLICATION_CREDENTIALS` and `GCLOUD_PROJECT` environment variables to set credentials and project (#335, #339).

  Before:

  ```
  export GCLOUD_TESTS_PROJECT_ID="MY_PROJECT_ID"
  export GCLOUD_TESTS_KEY=/path/to/my/key.json
  ```

  After:

  ```
  export GCLOUD_PROJECT="MY_PROJECT_ID"
  export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my/key.json
  ```
- `BlobReadChannel` throws a `StorageException` if a blob is updated during a read (#359, #390).
- `generation` is moved from `BlobInfo` to `BlobId`, and `generationMatch` and `generationNotMatch` methods are added to `BlobSourceOption` and `BlobGetOption` (#363, #366).

  Before:

  ```java
  BlobInfo myBlobInfo = someAlreadyExistingBlobInfo.toBuilder().generation(1L).build();
  ```

  After:

  ```java
  BlobId myBlobId = BlobId.of("bucketName", "idName", 1L);
  ```
- `Blob`'s batch delete method now returns false for blobs that were not found (#380).
Fixes
Core
- An exception is no longer thrown when reading the default project ID in the App Engine environment (#378).
- `SocketTimeoutException`s are now retried (#410, #414).
Datastore
- A `SocketException` is no longer thrown when creating the Datastore service object from within the App Engine production environment (#411).
exception is no longer thrown when creating the Datastore service object from within the App Engine production environment (#411).