forked from tensorflow/tensorflow
recent changes squash #25
Merged
Conversation
PiperOrigin-RevId: 213027176
PiperOrigin-RevId: 213028338
PiperOrigin-RevId: 213034078
PiperOrigin-RevId: 213037039
PiperOrigin-RevId: 213040362
Also added some experimental C APIs to facilitate the use of eager C APIs in the S4TF compiler. PiperOrigin-RevId: 213041780
PiperOrigin-RevId: 213049674
…nt ever since xla::DotGeneral was added. PiperOrigin-RevId: 213052269
PiperOrigin-RevId: 213053512
…adability.
- Logic change: Moved getting the metric name and function out of the training/eval loops in eager mode.
- Moved setting metric attributes on the model out of the function which calls metric functions.
PiperOrigin-RevId: 213060143
…ehavior of Optimizer.compute_gradients(). PiperOrigin-RevId: 213060585
PiperOrigin-RevId: 213062112
Previously, tf.Variable arguments to a defun-d Python function were made captured inputs. This change makes it possible to parameterize functions on DT_RESOURCE inputs. PiperOrigin-RevId: 213064739
Mixing index types doesn't work well with the latest Eigen. PiperOrigin-RevId: 213067224
It breaks; should be s/input_shape/inputs_shape. PiperOrigin-RevId: 213070141
… defuns PiperOrigin-RevId: 213074939
update to newest master
PiperOrigin-RevId: 213100589
…p in lieu of a new num_cores_per_replica. PiperOrigin-RevId: 213111326
PiperOrigin-RevId: 213128841
PiperOrigin-RevId: 213161736
I need these to write readable unit tests for TF graph transformations. All of my use cases will live inside tensorflow/compiler so putting it in tensorflow/compiler/jit for now; but we can move these out if other users are interested. In the future we may want to auto-generate type safe versions of these from the op registrations like we generate C++ wrappers today. PiperOrigin-RevId: 213186810
…n shape. PiperOrigin-RevId: 213191899
…ild_link_issue PiperOrigin-RevId: 213208519
phawkins@ suggested these in cr/212715067 but I accidentally made the changes in another client. PiperOrigin-RevId: 213208811
PiperOrigin-RevId: 213210253
PiperOrigin-RevId: 213212445
PiperOrigin-RevId: 213214616
PiperOrigin-RevId: 213886813
PiperOrigin-RevId: 213890403
…OS & environment configurations to a separate test target, and disables running them on Windows. PiperOrigin-RevId: 213895372
This CL splits the functionality in XlaLaunch into two separate operations:
- XlaCompile, responsible for compiling a TF function into a LocalExecutable
- XlaRun, responsible for executing a LocalExecutable created by XlaCompile

This CL is a stepping stone towards implementing lazy compilation for TF/XLA. The XlaCompile op is spec'ed to return a boolean indicating whether the compilation was successful. Right now that boolean is always set to true by XlaCompile and its value is otherwise ignored, but in the future it will be used to indicate whether the TF function was compiled or not, and thus whether we should execute XlaRun or just directly call the TF function.

XlaLaunch still exists, and will be created by create_xla_launch_op.cc. In the future we may consider removing it altogether. build_xla_launch_ops.cc, now renamed to build_xla_ops.cc, creates an XlaCompile/XlaRun pair instead of XlaLaunch.

This CL is organized as follows:
- jit/ops/xla_ops.cc gets two new XLA-specific operations, XlaCompile and XlaRun, described above. XlaRun redundantly takes the must-be-constant inputs to the TensorFlow cluster to keep the implementation simple (simple in the sense of similar to XlaLaunch), but I will remove this in a subsequent cleanup CL.
- jit/kernels/xla_ops.cc implements XlaCompile and XlaRun in a fairly straightforward manner. XlaCompile compiles the TF function, puts it in a process-global storage, XlaExecutableClosureStore, and produces an int64 key. XlaRun uses the key to read out the LocalExecutable and execute it. I'm not sure if XlaExecutableClosureStore should be a resource like XlaCompilationCache; I did not immediately see any reason to make it so.
- There are changes to the various _device files to register XlaCompile and XlaRun for the XLA_* devices.
- Finally, I had to fix some tests that were expecting XlaLaunch in the execution timeline.

PiperOrigin-RevId: 213895405
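The compile/run split described above can be illustrated with a small, self-contained sketch. This is not TensorFlow source: the class and function names below are stand-ins for XlaExecutableClosureStore, XlaCompile, and XlaRun, and the "compilation" is a no-op, but the key-passing protocol is the same: the compile step stores an executable in a process-global store and returns an integer key plus a success flag, and the run step consumes the key to fetch and execute it.

```python
# Illustrative sketch (not TensorFlow source) of the XlaCompile/XlaRun
# protocol: a process-global store maps an integer key, produced by the
# "compile" step, to an executable that the "run" step looks up.
import itertools
import threading

class ExecutableClosureStore:
    """Toy stand-in for XlaExecutableClosureStore: int key -> executable."""
    def __init__(self):
        self._lock = threading.Lock()
        self._next_key = itertools.count()
        self._closures = {}

    def produce(self, executable):
        with self._lock:
            key = next(self._next_key)
            self._closures[key] = executable
            return key

    def consume(self, key):
        with self._lock:
            return self._closures.pop(key)

store = ExecutableClosureStore()

def compile_op(fn):
    # "XlaCompile": compile fn (here: a no-op), store the result, and
    # return a key plus a flag indicating whether compilation succeeded
    # (always True for now, mirroring the CL description).
    return store.produce(fn), True

def run_op(key, *args):
    # "XlaRun": fetch the executable by key and execute it.
    return store.consume(key)(*args)

key, ok = compile_op(lambda x: x * 2)
print(run_op(key, 21))  # -> 42
```

The store hands out each key exactly once and `consume` removes the entry, which is why a fresh key must be produced per compile/run pair in this sketch.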
PiperOrigin-RevId: 213896057
depthwise convolution instead of a full convolution now that it exists in XLA. PiperOrigin-RevId: 213896333
PiperOrigin-RevId: 213906379
…refactoring the API for exposing tunable parameters, and removing `model::Node` from the public API. PiperOrigin-RevId: 213907565
PiperOrigin-RevId: 213908983
…d tensors. PiperOrigin-RevId: 213912507
PiperOrigin-RevId: 213912651
PiperOrigin-RevId: 213913013
PiperOrigin-RevId: 213915666
PiperOrigin-RevId: 213917881
PiperOrigin-RevId: 213917946
…ction into the number of shards used. This is a variant of threadpool::parallelFor PiperOrigin-RevId: 213920649
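A parallelFor-style helper of the kind mentioned above can be sketched in a few lines. This is a hedged illustration, not the TensorFlow threadpool API: the function name and signature are invented for the example, but the idea is the same: split the range [0, total) into contiguous shards and hand each shard to a worker.

```python
# Sketch of a parallelFor-style helper: split [0, total) into shards and
# run work_fn(start, end) on each shard in a thread pool. Names here are
# illustrative, not the TensorFlow threadpool::parallelFor API.
from concurrent.futures import ThreadPoolExecutor

def shard_parallel_for(total, num_shards, work_fn):
    """Call work_fn(start, end) for each contiguous shard of [0, total)."""
    shard_size = (total + num_shards - 1) // num_shards  # ceil division
    ranges = [(i, min(i + shard_size, total))
              for i in range(0, total, shard_size)]
    with ThreadPoolExecutor(max_workers=num_shards) as pool:
        return list(pool.map(lambda r: work_fn(*r), ranges))

# Example: sum the squares of 0..99 across 4 shards.
partial = shard_parallel_for(100, 4, lambda s, e: sum(i * i for i in range(s, e)))
print(sum(partial))  # -> 328350
```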
… in python3 threading.local cannot be pickled. PiperOrigin-RevId: 213928766
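The Python 3 behavior this commit works around is easy to reproduce: attempting to pickle a `threading.local` instance raises a `TypeError`.

```python
# Minimal demonstration: in Python 3, threading.local cannot be pickled.
import pickle
import threading

local_data = threading.local()
local_data.value = 42

try:
    pickle.dumps(local_data)
except TypeError as e:
    print("cannot pickle:", e)
```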
allowing callers to know if we up-converted a SessionBundle to SavedModel format. PiperOrigin-RevId: 213937542
… dependency. PiperOrigin-RevId: 213942340
self.test_session() has been deprecated in 9962eb5 as its name confuses readers of the test. Moving to cached_session() instead which is more explicit about:
- the fact that the session may be reused.
- the session is not closed even when doing a "with self.test_session()" statement.
PiperOrigin-RevId: 213944355
self.test_session() has been deprecated in 9962eb5 as its name confuses readers of the test. Moving to cached_session() instead which is more explicit about:
- the fact that the session may be reused.
- the session is not closed even when doing a "with self.test_session()" statement.
PiperOrigin-RevId: 213944932
Inspired by: https://stackoverflow.com/questions/52428939/eager-mode-optimizers/ PiperOrigin-RevId: 213948133
PiperOrigin-RevId: 213948394
PiperOrigin-RevId: 213952786
This was blocked by an LLVM bug, which was fixed in r342542. PiperOrigin-RevId: 213953743
PiperOrigin-RevId: 213955428
Given a class

    @attr.s()
    class SampleAttr(object):
      field_1 = attr.ib()
      field_2 = attr.ib()

we will be able to run

    obj = SampleAttr(tensor_1, tensor_2)
    session.run(obj)  # equivalent to session.run([obj.field_1, obj.field_2])

Please note, this does not need nest flatten support (which is only relevant to the feed_dict argument). Also, the information in __attrs_attrs__ is provided for extensions (as per the docs: http://www.attrs.org/en/stable/extending.html#extending-metadata) like this and is not an "implementation detail". PiperOrigin-RevId: 213963978
…umber of circular references. Replace unnecessary OrderedDict with a regular dict. PiperOrigin-RevId: 213982097
…, logical core) indexing scheme for cores. Previously the DeviceAssignment class mixed both a general concept (a mapping from (replica, logical core) to physical TPU core) and a specific instantiation of that concept, by imposing a particular 3D grid structure on the logical core numbers. This was excessive: while the physical core numbers have a particular structure, there is no need to impose any particular structure on the logical core numbers. This change simplifies the DeviceAssignment scheme, changing it so logical cores within a replica are numbered sequentially without any particular semantics. PiperOrigin-RevId: 213984629
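The simplified scheme in the commit above amounts to a plain mapping with no grid structure on the logical side. The sketch below is illustrative only (the function name and data layout are invented, not the DeviceAssignment API): a device assignment maps (replica, logical_core) to a physical core id, with logical cores numbered sequentially within each replica.

```python
# Illustrative sketch: a device assignment as a flat mapping from
# (replica, logical_core) to a physical core id, where logical cores are
# numbered sequentially within each replica and carry no grid semantics.
def make_device_assignment(physical_cores_per_replica):
    """physical_cores_per_replica: list (per replica) of physical core ids."""
    return {
        (replica, logical): physical
        for replica, cores in enumerate(physical_cores_per_replica)
        for logical, physical in enumerate(cores)
    }

# Two replicas, two logical cores each, mapped to arbitrary physical cores.
assignment = make_device_assignment([[3, 7], [1, 5]])
print(assignment[(0, 1)])  # -> 7
print(assignment[(1, 0)])  # -> 1
```

Only the physical core ids carry hardware structure here; the logical numbering is just an index.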