ENH: Add table_schema parameter for user-defined BigQuery schema #46
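For orientation, here is a sketch of how the new parameter is meant to be called. This is not code from the PR: the table and project names are placeholders, the import path assumes the `pandas_gbq.gbq` module layout used by the tests below, and the schema format mirrors the BigQuery field descriptors they exercise.

```python
# Sketch of the feature under review: to_gbq() accepting an explicit
# BigQuery schema via table_schema instead of inferring one from the
# DataFrame dtypes. 'my_dataset.my_table' and 'my-project-id' are
# placeholders.
import pandas as pd
from pandas_gbq import gbq

df = pd.DataFrame({'A': [1.5], 'B': [2.5], 'C': ['x'],
                   'D': pd.to_datetime(['2017-01-01'])})

table_schema = [{'name': 'A', 'type': 'FLOAT'},
                {'name': 'B', 'type': 'FLOAT'},
                {'name': 'C', 'type': 'STRING'},
                {'name': 'D', 'type': 'TIMESTAMP'}]

gbq.to_gbq(df, 'my_dataset.my_table', 'my-project-id',
           table_schema=table_schema)
```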
```diff
@@ -1422,6 +1422,48 @@ def test_schema_is_subset_fails_if_not_subset(self):
         assert self.sut.schema_is_subset(
             dataset, table_name, tested_schema) is False

+    def test_upload_data_with_valid_user_schema(self):
+        # Issue #46; tests scenarios with user-provided
+        # schemas
+        df = tm.makeMixedDataFrame()
+        test_id = "15"
+        test_schema = [{'name': 'A', 'type': 'FLOAT'},
+                       {'name': 'B', 'type': 'FLOAT'},
+                       {'name': 'C', 'type': 'STRING'},
+                       {'name': 'D', 'type': 'TIMESTAMP'}]
+        destination_table = self.destination_table + test_id
+        gbq.to_gbq(df, destination_table, _get_project_id(),
+                   private_key=_get_private_key_path(),
+                   table_schema=test_schema)
+        dataset, table = destination_table.split('.')
+        assert self.table.verify_schema(dataset, table,
+                                        dict(fields=test_schema))
+
+    def test_upload_data_with_invalid_user_schema_raises_error(self):
+        df = tm.makeMixedDataFrame()
+        test_id = "16"
+        test_schema = [{'name': 'A', 'type': 'FLOAT'},
+                       {'name': 'B', 'type': 'FLOAT'},
+                       {'name': 'C', 'type': 'FLOAT'},
+                       {'name': 'D', 'type': 'FLOAT'}]
+        destination_table = self.destination_table + test_id
+        with tm.assertRaises(gbq.StreamingInsertError):
+            gbq.to_gbq(df, destination_table, _get_project_id(),
+                       private_key=_get_private_key_path(),
+                       table_schema=test_schema)
+
+    def test_upload_data_with_missing_schema_fields_raises_error(self):
+        df = tm.makeMixedDataFrame()
+        test_id = "16"
+        test_schema = [{'name': 'A', 'type': 'FLOAT'},
+                       {'name': 'B', 'type': 'FLOAT'},
+                       {'name': 'C', 'type': 'FLOAT'}]
+        destination_table = self.destination_table + test_id
+        with tm.assertRaises(gbq.StreamingInsertError):
+            gbq.to_gbq(df, destination_table, _get_project_id(),
+                       private_key=_get_private_key_path(),
+                       table_schema=test_schema)
+
     def test_list_dataset(self):
         dataset_id = self.dataset_prefix + "1"
         assert dataset_id in self.dataset.datasets()
```

Review comments:

- On `df = tm.makeMixedDataFrame()` in `test_upload_data_with_valid_user_schema`:
  "add the issue number as a comment"
- On `test_schema` in the same test:
  "There's a chance this might fail with version 0.29.0 of …"
  "Generated schemas do not include the …"
- On `df = tm.makeMixedDataFrame()` in `test_upload_data_with_invalid_user_schema_raises_error`:
  "can you also test with missing keys in the schema?"
  Reply: "Test added."
- On `with tm.assertRaises(gbq.StreamingInsertError):`:
  "StreamingInsertError was removed in 0.3.0."
  Reply: "Replaced with the generic error. Errors need a better hierarchy in this module though."
- "assume validation is now up to BQ. can you test this though?"
  Reply: "Current implementation will throw StreamingInsertError after a chunk is done (see tests), along with printing the error trace from the BQ API. Which is OK."
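Since `StreamingInsertError` was removed in pandas-gbq 0.3.0 (per the thread above), callers on newer versions would catch the generic error instead. A minimal sketch, assuming a schema mismatch surfaces as `gbq.GenericGBQException`; the table and project names are placeholders, and this is not the PR's own code.

```python
# Sketch: catching the post-0.3.0 generic error when a user-supplied
# schema does not match the DataFrame. Assumes the rejected upload
# surfaces as GenericGBQException; names below are placeholders.
import pandas as pd
from pandas_gbq import gbq

df = pd.DataFrame({'A': [1.0, 2.0]})
bad_schema = [{'name': 'A', 'type': 'TIMESTAMP'}]  # wrong type on purpose

try:
    gbq.to_gbq(df, 'my_dataset.my_table', 'my-project-id',
               table_schema=bad_schema)
except gbq.GenericGBQException as exc:
    # The BigQuery API error trace is wrapped in the generic exception.
    print('upload rejected:', exc)
```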